<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Peter Jausovec</title>
    <description>The latest articles on DEV Community by Peter Jausovec (@peterj).</description>
    <link>https://dev.to/peterj</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F20200%2F9b232cf8-fe79-40a8-9404-feed28f98686.jpg</url>
      <title>DEV Community: Peter Jausovec</title>
      <link>https://dev.to/peterj</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/peterj"/>
    <language>en</language>
    <item>
      <title>Kubernetes CLI (kubectl) tips you didn't know about</title>
      <dc:creator>Peter Jausovec</dc:creator>
      <pubDate>Tue, 28 Jun 2022 16:45:55 +0000</pubDate>
      <link>https://dev.to/peterj/kubernetes-cli-kubectl-tips-you-didnt-know-about-3fde</link>
      <guid>https://dev.to/peterj/kubernetes-cli-kubectl-tips-you-didnt-know-about-3fde</guid>
      <description>&lt;p&gt;&lt;a href="https://twitter.com/ahmetb" rel="noopener noreferrer"&gt;AhmetB&lt;/a&gt; started an excellent thread yesterday where he asked people to share a Kubernetes CLI (kubectl) tip that they think a lot of users don't know about.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1523741824141041664-800" src="https://platform.twitter.com/embed/Tweet.html?id=1523741824141041664"&gt;
&lt;/iframe&gt;&lt;/p&gt;

&lt;p&gt;There were so many excellent tips that I decided to collect the most interesting ones in a single post. The first thing I typically do on a new machine is set an alias for &lt;code&gt;kubectl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're working with Kubernetes, you'll be typing &lt;code&gt;kubectl&lt;/code&gt; a lot, so why not make it shorter?&lt;/p&gt;

&lt;p&gt;Enjoy the tips below and &lt;a href="https://twitter.com/learn_cnative" rel="noopener noreferrer"&gt;let us know&lt;/a&gt; if you have any other tips you want to share. Here are all the tips in no particular order.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Set up load-based horizontal pod autoscaling on your Kubernetes resources
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/pixie_run" rel="noopener noreferrer"&gt;@pixie_run&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl autoscale deployment foo &lt;span class="nt"&gt;--min&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 &lt;span class="nt"&gt;--max&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Create a new job from a cronjob
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/mauilion" rel="noopener noreferrer"&gt;@mauilion&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create job &lt;span class="nt"&gt;--from&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cronjob/&amp;lt;name of cronjob&amp;gt; &amp;lt;name of this run&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Enumerate permissions for a given service account
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/mauilion" rel="noopener noreferrer"&gt;@mauilion&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; &amp;lt;namespace&amp;gt; auth can-i &lt;span class="nt"&gt;--list&lt;/span&gt; &lt;span class="nt"&gt;--as&lt;/span&gt; system:serviceaccount:&amp;lt;namespace&amp;gt;:&amp;lt;service account name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
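&lt;p&gt;As a concrete example - the namespace and account name below are only illustrative - you could check what the default service account in the default namespace is allowed to do:&lt;/p&gt;

```shell
# List everything the "default" service account in the "default" namespace may do
kubectl -n default auth can-i --list --as system:serviceaccount:default:default
```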



&lt;h2&gt;
  
  
  4. Annotate resources
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/hikhvar" rel="noopener noreferrer"&gt;@hikhvar&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# To add annotation&lt;/span&gt;
kubectl annotate &amp;lt;resource-type&amp;gt;/&amp;lt;resource-name&amp;gt; &lt;span class="nv"&gt;foo&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;bar
&lt;span class="c"&gt;# To remove annotation&lt;/span&gt;
kubectl annotate &amp;lt;resource-type&amp;gt;/&amp;lt;resource-name&amp;gt; foo-
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Get a list of endpoints across all namespaces
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/ivnrcki" rel="noopener noreferrer"&gt;@ivnrcki&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get ep &lt;span class="nt"&gt;-A&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  6. Get a list of events sorted by lastTimestamp
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/ReesPozzi" rel="noopener noreferrer"&gt;@ReesPozzi&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get events &lt;span class="nt"&gt;--sort-by&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;".lastTimestamp"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  7. Watch all warnings across the namespaces
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/LachlanEvenson" rel="noopener noreferrer"&gt;@LachlanEvenson&lt;/a&gt; and &lt;a href="https://twitter.com/jpetazzo" rel="noopener noreferrer"&gt;@jpetazzo&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get events &lt;span class="nt"&gt;-w&lt;/span&gt; &lt;span class="nt"&gt;--field-selector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Warning &lt;span class="nt"&gt;-A&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  8. Add the &lt;code&gt;EVENT&lt;/code&gt; column to the list of watched pods
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/jpetazzo" rel="noopener noreferrer"&gt;Jérôme Petazzoni&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;--watch&lt;/span&gt; &lt;span class="nt"&gt;--output-watch-events&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  9. Get raw JSON for the various APIs
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/cheddarmint" rel="noopener noreferrer"&gt;@cheddarmint&lt;/a&gt; and &lt;a href="https://twitter.com/mauilion" rel="noopener noreferrer"&gt;@mauilion&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get &lt;span class="nt"&gt;--raw&lt;/span&gt; /apis/apps/v1

&lt;span class="c"&gt;# Get metrics&lt;/span&gt;
kubectl get &lt;span class="nt"&gt;--raw&lt;/span&gt; /metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  10. Wait for specific pods to be ready
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/csanchez" rel="noopener noreferrer"&gt;@csanchez&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ready pod &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;foo&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;bar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
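&lt;p&gt;By default, &lt;code&gt;kubectl wait&lt;/code&gt; gives up after 30 seconds. If your pods take longer to become ready, you can extend that - the label and timeout below are just examples:&lt;/p&gt;

```shell
# Wait up to 2 minutes for all pods labeled foo=bar to become ready
kubectl wait --for=condition=ready pod -l foo=bar --timeout=120s
```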



&lt;h2&gt;
  
  
  11. Explain the various resources
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/pratikbin" rel="noopener noreferrer"&gt;@pratikbin&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl explain pod.spec
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  12. Get all resources that match a selector
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/jpetazzo" rel="noopener noreferrer"&gt;@jpetazzo&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get deployments,replicasets,pods,services &lt;span class="nt"&gt;--selector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;hello&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;yourecute
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  13. Forward a port from a service to a local port
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/victortrac" rel="noopener noreferrer"&gt;@victortrac&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward svc/&amp;lt;service-name&amp;gt; &amp;lt;local-port&amp;gt;:&amp;lt;remote-port&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
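&lt;p&gt;For example, to reach port 80 of a service on local port 8080 (the service name here is hypothetical):&lt;/p&gt;

```shell
# Forward local port 8080 to port 80 of the service named "my-service"
kubectl port-forward svc/my-service 8080:80
```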





&lt;h2&gt;
  
  
  14. List the environment variables for a resource
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/oldmanuk" rel="noopener noreferrer"&gt;@oldmanuk&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;set env&lt;/span&gt; &amp;lt;resource&amp;gt;/&amp;lt;resource-name&amp;gt; &lt;span class="nt"&gt;--list&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  15. Get a list of pods and the node they run on
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/damnlamb" rel="noopener noreferrer"&gt;@damnlamb&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get po &lt;span class="nt"&gt;-o&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;custom-columns&lt;span class="o"&gt;=&lt;/span&gt;NODE:.spec.nodeName,NAME:.metadata.name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
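&lt;p&gt;You can add more columns from any field in the pod spec or status. For example, to also show the pod phase:&lt;/p&gt;

```shell
# Show the node, pod name, and current phase (Running, Pending, ...)
kubectl get po -o custom-columns=NODE:.spec.nodeName,NAME:.metadata.name,STATUS:.status.phase
```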



&lt;h2&gt;
  
  
  16. Create a starter YAML manifest for deployment (also works for other resources)
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/ll_Cool__Josh" rel="noopener noreferrer"&gt;@ll_Cool__Josh&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create deploy nginx-deployment &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;--dry-run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;client &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
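&lt;p&gt;A common follow-up is to redirect the generated manifest to a file, tweak it, and then apply it:&lt;/p&gt;

```shell
# Generate a starter manifest and save it to a file
kubectl create deploy nginx-deployment --image=nginx --dry-run=client -o yaml > deployment.yaml

# Edit deployment.yaml as needed, then apply it
kubectl apply -f deployment.yaml
```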



&lt;h2&gt;
  
  
  17. Get a list of pods sorted by memory usage
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/NehmiHungry" rel="noopener noreferrer"&gt;@NehmiHungry&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl top pods &lt;span class="nt"&gt;-A&lt;/span&gt; &lt;span class="nt"&gt;--sort-by&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'memory'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  18. Get a list of pods that have a specific label value set
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/shrayk" rel="noopener noreferrer"&gt;@shrayk&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s1"&gt;'app in (foo,bar)'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  19. Get the pod logs before the last restart
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/_elvir_kuric_" rel="noopener noreferrer"&gt;@_elvir_kuric_&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs &amp;lt;pod-name&amp;gt; &lt;span class="nt"&gt;--previous&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  20. Copy a file from a pod to a local file
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/oshi1136" rel="noopener noreferrer"&gt;@oshi1136&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;cp&lt;/span&gt; &amp;lt;namespace&amp;gt;/&amp;lt;pod&amp;gt;:&amp;lt;file_path&amp;gt; &amp;lt;local_file_path&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  21. Delete a pod immediately
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/Rodrigo_Loza_L" rel="noopener noreferrer"&gt;@Rodrigo_Loza_L&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete pod &amp;lt;pod-name&amp;gt; &lt;span class="nt"&gt;--now&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  22. Show the logs from pods with specific labels
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/oldmanuk" rel="noopener noreferrer"&gt;@oldmanuk&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;xyz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  23. Get more information about the resources (aka wide-view)
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/hrvojekov" rel="noopener noreferrer"&gt;@hrvojekov&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get &amp;lt;resource&amp;gt; &lt;span class="nt"&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  24. Output and apply the patch
&lt;/h2&gt;

&lt;p&gt;By &lt;a href="https://twitter.com/gipszlyakab" rel="noopener noreferrer"&gt;@gipszlyakab&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Edit a resource and get the patch&lt;/span&gt;
kubectl edit &amp;lt;resource&amp;gt;/&amp;lt;name&amp;gt; &lt;span class="nt"&gt;--output-patch&lt;/span&gt;

&lt;span class="c"&gt;# Use the output from the command above to apply the patch&lt;/span&gt;
kubectl patch &amp;lt;resource&amp;gt;/&amp;lt;name&amp;gt; &lt;span class="nt"&gt;--patch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;output_from_previous_command&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
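&lt;p&gt;Put together, the workflow might look like this (the deployment name and patch are illustrative):&lt;/p&gt;

```shell
# Edit the deployment interactively and print the patch that was generated
kubectl edit deployment/foo --output-patch

# Apply an equivalent patch directly, e.g. one produced by the command above
kubectl patch deployment/foo --patch='{"spec":{"replicas":3}}'
```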



</description>
      <category>kubernetes</category>
      <category>beginners</category>
      <category>discuss</category>
      <category>productivity</category>
    </item>
    <item>
      <title>How to configure Firebase emulators with Next.js?</title>
      <dc:creator>Peter Jausovec</dc:creator>
      <pubDate>Fri, 04 Mar 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/peterj/how-to-configure-firebase-emulators-with-nextjs-6bi</link>
      <guid>https://dev.to/peterj/how-to-configure-firebase-emulators-with-nextjs-6bi</guid>
      <description>&lt;p&gt;I've been working with &lt;a href="https://nextjs.org/"&gt;Next.js&lt;/a&gt; and &lt;a href="https://firebase.google.com/"&gt;Firebase&lt;/a&gt; and using the Firebase emulators for local development. Firebase has a generous free tier, so unless you're doing something wrong or your project is growing, you probably won't hit those limits any time soon. Regardless, setting up the emulators and running against them instead of a deployed Firebase project simplifies development and testing. It also allows you to iterate faster (i.e., you don't have to re-deploy the functions N times a day), you can quickly run tests to make sure you don't break any basic functionality (or even security rules), and you can quickly seed the database with different documents.&lt;/p&gt;

&lt;p&gt;My project is in a single repo that contains the functions (&lt;code&gt;backend&lt;/code&gt; folder) and the Next.js frontend (&lt;code&gt;web&lt;/code&gt; folder). Here's a simplified version of the repo structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="c"&gt;.
&lt;/span&gt;&lt;span class="go"&gt;├── backend
│   └── functions
└── web
    ├── node_modules
    ├── public
    └── src
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;backend&lt;/code&gt; folder contains the functions, and it's where you initialize the Firebase project (i.e., run the &lt;code&gt;firebase init&lt;/code&gt; command). The &lt;code&gt;web&lt;/code&gt; folder contains the Next.js frontend.&lt;/p&gt;

&lt;h2&gt;
  
  
  Launching the Firebase emulators
&lt;/h2&gt;

&lt;p&gt;Before you launch the emulators, you'll have to configure (and download) them first. The easiest way to do that is to run &lt;code&gt;firebase init&lt;/code&gt;, go through the prompts and select the emulators you want to use.&lt;/p&gt;

&lt;p&gt;You should end up with something similar in the &lt;code&gt;firebase.json&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"emulators"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"auth"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"port"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;9099&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"functions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"port"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5001&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"firestore"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"port"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"database"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"port"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;9000&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"hosting"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"port"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5000&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"pubsub"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"port"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8085&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"storage"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"port"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;9199&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ui"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"enabled"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Launching the emulators is straightforward - I typically launch them in a separate terminal window with the &lt;code&gt;firebase emulators:start&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;firebase emulators:start
&lt;span class="go"&gt;i  emulators: Starting emulators: auth, functions, firestore, database, hosting, pubsub
, storage
&lt;/span&gt;&lt;span class="c"&gt;...
&lt;/span&gt;&lt;span class="go"&gt;┌─────────────────────────────────────────────────────────────┐
│ ✔  All emulators ready! It is now safe to connect your app. │
│ i  View Emulator UI at http://localhost:4000                │
└─────────────────────────────────────────────────────────────┘

┌────────────────┬──────────────────────────────────┬─────────────────────────────────┐
│ Emulator       │ Host:Port                        │ View in Emulator UI             │
├────────────────┼──────────────────────────────────┼─────────────────────────────────┤
│ Authentication │ localhost:9099                   │ http://localhost:4000/auth      │
├────────────────┼──────────────────────────────────┼─────────────────────────────────┤
│ Functions      │ localhost:5001                   │ http://localhost:4000/functions │
├────────────────┼──────────────────────────────────┼─────────────────────────────────┤
│ Firestore      │ localhost:8080                   │ http://localhost:4000/firestore │
├────────────────┼──────────────────────────────────┼─────────────────────────────────┤
│ Database       │ localhost:9000                   │ http://localhost:4000/database  │
├────────────────┼──────────────────────────────────┼─────────────────────────────────┤
│ Hosting        │ Failed to initialize (see above) │                                 │
├────────────────┼──────────────────────────────────┼─────────────────────────────────┤
│ Pub/Sub        │ localhost:8085                   │ n/a                             │
├────────────────┼──────────────────────────────────┼─────────────────────────────────┤
│ Storage        │ localhost:9199                   │ http://localhost:4000/storage   │
└────────────────┴──────────────────────────────────┴─────────────────────────────────┘
  Emulator Hub running at localhost:4400
  Other reserved ports: 4500
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If all goes well, the emulators will be running, and you can open the emulator UI on &lt;code&gt;localhost:4000&lt;/code&gt; - it should look similar to the figure below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A4e7VpaF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4xlirq8uhsbkph8nrg4h.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A4e7VpaF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4xlirq8uhsbkph8nrg4h.jpg" alt="Firebase Emulator Suite" width="860" height="1440"&gt;&lt;/a&gt;&lt;/p&gt;

Firebase Emulator Suite




&lt;p&gt;You can click the "Go to emulator" links to get to the specific emulator UI. The UI looks similar to the actual Firebase UI. For example, in the Firestore emulator, you can create and delete collections and documents, in the Authentication emulator, you can add users, and so on.&lt;/p&gt;

&lt;p&gt;With the emulators running, we can now connect to them from the frontend application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Connecting to the emulators from Next.js
&lt;/h2&gt;

&lt;p&gt;Whether you're using Next.js or not, the way you connect to the local emulators is the same. The only difference is whether you're connecting from the frontend (using the &lt;a href="https://github.com/firebase/firebase-js-sdk"&gt;&lt;code&gt;firebase&lt;/code&gt; library&lt;/a&gt;) or the backend (using the &lt;a href="https://github.com/firebase/firebase-admin-node"&gt;&lt;code&gt;firebase-admin&lt;/code&gt; library&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Since each service in the Firebase suite has its own emulator, you have to connect to each one individually. This flexibility is great because, theoretically, you could use the "production" auth service and connect to the Firestore or Storage emulator, or any other combination of emulator and non-emulator services. However, in most cases, if you're developing locally, you'd connect to all the service emulators and not mix them up.&lt;/p&gt;

&lt;p&gt;With the modular Firebase library, each one of the modules exposes a &lt;code&gt;connectXYZEmulator&lt;/code&gt; function. I am also using the &lt;a href="https://github.com/FirebaseExtended/reactfire"&gt;reactfire library&lt;/a&gt; that implements providers for each service - that's where I decide whether to connect to the emulator or not.&lt;/p&gt;

&lt;p&gt;For example, in my &lt;code&gt;FirebaseAuthProvider&lt;/code&gt; function, I do something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;useFirebaseApp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;reactfire&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;connectAuthEmulator&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;getAuth&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;firebase/auth&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;shouldConnectAuthEmulator&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// You could do any logic here to decide whether to connect to the emulator or not&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NODE_ENV&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;development&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;getAuthEmulatorHost&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// You'd read this from config/environment variable/etc.&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;localhost:9099&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;FirebaseAuthProvider&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;children&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useFirebaseApp&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;auth&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getAuth&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;shouldConnectAuthEmulator&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;connectAuthEmulator&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;getAuthEmulatorHost&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The connect function takes the Firebase service, the host, and any emulator-specific options. And that's all! That one call to &lt;code&gt;connectAuthEmulator&lt;/code&gt; connects you to the local Authentication emulator. You can connect to the other emulators the same way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Connecting from the backend using &lt;code&gt;firebase-admin&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;firebase-admin&lt;/code&gt; library is meant to be used for accessing Firebase from server environments (as opposed to frontend and client browsers).&lt;/p&gt;

&lt;p&gt;In the front-end scenario, exposing the API key, app ID, auth domain, and other configuration values is not an issue. In a typical scenario, you'd register or log in the user first and then use their token to allow or deny different types of access to Firebase.&lt;/p&gt;

&lt;p&gt;However, when using the &lt;code&gt;firebase-admin&lt;/code&gt; library in a backend scenario, you have to provide a private key to connect to Firebase. You should never share this private key, as it grants administrative access to your Firebase project. Hence, only use the Firebase admin library from trusted environments and never from the client (frontend).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Read more on how to set up Firebase on a server &lt;a href="https://firebase.google.com/docs/admin/setup/"&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When using the admin version of the Firebase library, you won't find the &lt;code&gt;connectXYZEmulator&lt;/code&gt; functions. Instead, you'll have to set environment variables that point to the emulators.&lt;/p&gt;

&lt;p&gt;If we continue with the auth example above, you connect to the emulator by setting the &lt;code&gt;FIREBASE_AUTH_EMULATOR_HOST&lt;/code&gt; environment variable to &lt;code&gt;localhost:9099&lt;/code&gt;. Similarly, you'd set &lt;code&gt;FIREBASE_STORAGE_EMULATOR_HOST&lt;/code&gt; for Storage, &lt;code&gt;FIREBASE_FUNCTIONS_EMULATOR_HOST&lt;/code&gt; for Functions, and so on. You can find more details in the &lt;a href="https://firebase.google.com/docs/emulator-suite/connect_and_prototype"&gt;documentation here&lt;/a&gt;.&lt;/p&gt;
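&lt;p&gt;As a minimal sketch, you could export these variables before starting your backend process. The port values below are the emulator defaults; yours may differ depending on your &lt;code&gt;firebase.json&lt;/code&gt; configuration:&lt;/p&gt;

```shell
# Point firebase-admin at the local emulators instead of production.
# These ports are the Firebase emulator defaults; adjust them to match
# the emulators section of your firebase.json.
export FIREBASE_AUTH_EMULATOR_HOST="localhost:9099"
export FIREBASE_STORAGE_EMULATOR_HOST="localhost:9199"

# Start your backend as usual; firebase-admin picks the variables up
# automatically when it initializes.
echo "Auth emulator: ${FIREBASE_AUTH_EMULATOR_HOST}"
```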

</description>
      <category>firebase</category>
      <category>typescript</category>
      <category>testing</category>
    </item>
    <item>
      <title>Monitoring containers with cAdvisor</title>
      <dc:creator>Peter Jausovec</dc:creator>
      <pubDate>Wed, 25 Aug 2021 20:43:25 +0000</pubDate>
      <link>https://dev.to/peterj/monitoring-containers-with-cadvisor-50bd</link>
      <guid>https://dev.to/peterj/monitoring-containers-with-cadvisor-50bd</guid>
      <description>&lt;p&gt;Monitoring with cAdvisor allows you to gather information about individual Docker containers running on your host - be it a virtual machine, Kubernetes cluster, or any other host capable of running containers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/google/cadvisor" rel="noopener noreferrer"&gt;cAdvisor&lt;/a&gt; (short for "Container Advisor") is a daemon that collects the data about the resource usage and performance of your containers.&lt;/p&gt;

&lt;p&gt;In addition to container usage metrics, cAdvisor can also collect metrics from your applications. If your applications already emit metrics, you can configure cAdvisor to scrape the endpoint and specify which metrics you want to extract.&lt;/p&gt;

&lt;p&gt;cAdvisor also features a built-in UI and allows you to export the collected data to different storage driver plugins.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fassets%2Fposts%2Fimg%2Fcadvisor-ui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fassets%2Fposts%2Fimg%2Fcadvisor-ui.png" title="cAdvisor UI" alt="cAdvisor UI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, you can export the collected data to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/etsy/statsd" rel="noopener noreferrer"&gt;StatsD&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.elastic.co/" rel="noopener noreferrer"&gt;ElasticSearch&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/bigquery/" rel="noopener noreferrer"&gt;BigQuery&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://influxdb.com/" rel="noopener noreferrer"&gt;InfluxDB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://kafka.apache.org/" rel="noopener noreferrer"&gt;Kafka&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;or to standard out&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The easiest way to get started and see the data that gets collected is to run the cAdvisor Docker image locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  --privileged \
  --device=/dev/kmsg \
  gcr.io/cadvisor/cadvisor:v0.37.5

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;The latest version of cAdvisor at the time of writing this was v0.37.5. Make sure you're always using the latest bits.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In case you're wondering about all those volumes: these are the host directories cAdvisor needs mounted (read-only) so it can analyze the container data stored in them.&lt;/p&gt;

&lt;p&gt;Once cAdvisor is running, it collects data about all containers running on the same host. Note that there are &lt;a href="https://github.com/google/cadvisor/blob/master/docs/runtime_options.md" rel="noopener noreferrer"&gt;options you can set&lt;/a&gt; to limit which containers get monitored.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running cAdvisor in Kubernetes
&lt;/h2&gt;

&lt;p&gt;cAdvisor is integrated with the kubelet binary, and it exposes the metrics on the &lt;code&gt;/metrics/cadvisor&lt;/code&gt; endpoint.&lt;/p&gt;

&lt;p&gt;Therefore, we don't need to explicitly install cAdvisor on the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Here's an example of how we can use &lt;code&gt;kubectl&lt;/code&gt; to retrieve node and Pod metrics through the Kubernetes Metrics API (which aggregates the kubelet's cAdvisor data):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes/[node-name]
{
  "kind": "NodeMetrics",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "name": "[node-name]",
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/[node-name]]",
    "creationTimestamp": "2021-08-26T22:12:26Z"
  },
  "timestamp": "2021-08-26T22:11:53Z",
  "window": "30s",
  "usage": {
    "cpu": "39840075n",
    "memory": "487200Ki"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
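&lt;p&gt;Note the units in the output: the &lt;code&gt;n&lt;/code&gt; suffix on the CPU value means nanocores (billionths of a core), and &lt;code&gt;Ki&lt;/code&gt; means kibibytes. To sanity-check a reading, you can convert nanocores to the more familiar millicores with a quick one-liner:&lt;/p&gt;

```shell
# Convert 39840075 nanocores to millicores
# (1 millicore = 1,000,000 nanocores)
echo "39840075" | awk '{ printf "%.1fm\n", $1 / 1000000 }'
# prints 39.8m
```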



&lt;p&gt;Similarly, we can use the following URL &lt;code&gt;/apis/metrics.k8s.io/v1beta1/namespaces/&amp;lt;NAMESPACE&amp;gt;/pods/&amp;lt;POD_NAME&amp;gt;&lt;/code&gt; to get the metrics about a specific pod.&lt;/p&gt;
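&lt;p&gt;Since the URL follows the same pattern for every pod, it's easy to script. The namespace and pod name below are just placeholders; substitute values from your cluster:&lt;/p&gt;

```shell
# Build the raw Metrics API path for a given pod (placeholder values).
NAMESPACE="default"
POD_NAME="httpbin-74fb669cc6-xs74p"   # replace with your pod's name
METRICS_PATH="/apis/metrics.k8s.io/v1beta1/namespaces/${NAMESPACE}/pods/${POD_NAME}"

# The actual query would then be:
echo "kubectl get --raw ${METRICS_PATH}"
```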

&lt;p&gt;Let's create an &lt;code&gt;httpbin&lt;/code&gt; deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To retrieve the metrics from the httpbin pod, run the command below (make sure you replace the pod name with the name of your pod running in your cluster):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods/httpbin-74fb669cc6-xs74p
{
  "kind": "PodMetrics",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "name": "httpbin-74fb669cc6-xs74p",
    "namespace": "default",
    "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/httpbin-74fb669cc6-xs74p",
    "creationTimestamp": "2021-08-26T22:15:40Z"
  },
  "timestamp": "2021-08-26T22:15:16Z",
  "window": "30s",
  "containers": [
    {
      "name": "httpbin",
      "usage": {
        "cpu": "316267n",
        "memory": "38496Ki"
      }
    }
  ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Connecting cAdvisor to Prometheus and Grafana
&lt;/h2&gt;

&lt;p&gt;By default, cAdvisor exposes the &lt;a href="https://prometheus.io" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt; metrics on the &lt;code&gt;/metrics&lt;/code&gt; endpoint.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# HELP cadvisor_version_info A metric with a constant '1' value labeled by kernel version, OS version, docker version, cadvisor version &amp;amp; cadvisor revision.
# TYPE cadvisor_version_info gauge
cadvisor_version_info{cadvisorRevision="de117632",cadvisorVersion="v0.39.0",dockerVersion="20.10.3",kernelVersion="5.4.104+",osVersion="Alpine Linux v3.12"} 1
# HELP container_blkio_device_usage_total Blkio Device bytes usage
# TYPE container_blkio_device_usage_total counter
container_blkio_device_usage_total{container_env_ARG1="",container_env_ARG2="",container_env_CADVISOR_HEALTHCHECK_URL="",container_env_DEFAULT_HTTP_BACKEND_PORT="",container_env_DEFAULT_HTTP_BACKEND_PORT_80_TCP="",container_env_DEFAULT_HTTP_BACKEND_PORT_80_TCP_ADDR="",container_env_DEFAULT_HTTP_BACKEND_PORT_80_TCP_PORT="",container_env_DEFAULT_HTTP_BACKEND_PORT_80_TCP_PROTO="",
...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because metrics are already in Prometheus format and cAdvisor exports them automatically on a well-known endpoint, we don't need to change the existing cAdvisor deployment. Instead, we can install and configure Prometheus to scrape the metrics from the &lt;code&gt;/metrics&lt;/code&gt; endpoint.&lt;/p&gt;
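&lt;p&gt;For context, if you were running cAdvisor standalone (e.g., the Docker container from earlier) with a plain Prometheus instance instead of the operator, the scrape configuration would be a single extra job. This is only a minimal sketch; the job name and target are examples, and Prometheus scrapes &lt;code&gt;/metrics&lt;/code&gt; by default:&lt;/p&gt;

```shell
# Write a minimal Prometheus scrape config for a standalone cAdvisor
# listening on localhost:8080 (job name and target are example values).
printf '%s\n' \
  'scrape_configs:' \
  '  - job_name: cadvisor' \
  '    static_configs:' \
  "      - targets: ['localhost:8080']" > prometheus.yml

grep 'job_name' prometheus.yml
```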

&lt;h3&gt;
  
  
  Installing Prometheus on Kubernetes
&lt;/h3&gt;

&lt;p&gt;I'll use the &lt;a href="https://prometheus-operator.dev/" rel="noopener noreferrer"&gt;Prometheus Operator&lt;/a&gt; to install Prometheus on Kubernetes. We'll install the complete monitoring bundle, including Prometheus, Grafana, and Alertmanager.&lt;/p&gt;

&lt;p&gt;Start by cloning the &lt;code&gt;kube-prometheus&lt;/code&gt; repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/prometheus-operator/kube-prometheus.git

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, go to the &lt;code&gt;kube-prometheus&lt;/code&gt; folder and deploy the CRDs first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f manifests/setup

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait a bit for the CRDs to be registered, and then create the deployments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f manifests/

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you've deployed everything (you can run &lt;code&gt;kubectl get pod -A&lt;/code&gt; to check all pods are up and running), you can open the Prometheus UI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward svc/prometheus-k8s 9090 -n monitoring

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you open &lt;code&gt;http://localhost:9090&lt;/code&gt;, you can now query for any metrics collected by cAdvisor - e.g., metrics starting with &lt;code&gt;container_*&lt;/code&gt;, as shown in the figure below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fassets%2Fposts%2Fimg%2Fprom_container_metrics.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fassets%2Fposts%2Fimg%2Fprom_container_metrics.png" title="Prometheus" alt="Prometheus metrics from cAdvisor"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Grafana dashboards
&lt;/h3&gt;

&lt;p&gt;Grafana gets installed as part of the &lt;code&gt;kube-prometheus&lt;/code&gt; operator. We can open the Grafana UI by forwarding a local port (5000 in this case) to the Grafana service port 3000:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward svc/grafana 5000:3000 -n monitoring

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you open Grafana on &lt;code&gt;http://localhost:5000&lt;/code&gt; you'll notice there's already a set of pre-created dashboards that came with the &lt;code&gt;kube-prometheus&lt;/code&gt; operator.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fassets%2Fposts%2Fimg%2Fgrafana-dash.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fassets%2Fposts%2Fimg%2Fgrafana-dash.png" title="Grafana" alt="Grafana dashboards"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The dashboards show information about Kubernetes resources - memory usage, CPU usage, quotas, and so on. These metrics come from the &lt;a href="https://github.com/prometheus/node_exporter" rel="noopener noreferrer"&gt;node-exporter&lt;/a&gt; component.&lt;/p&gt;

&lt;p&gt;The node-exporter exports the hardware and OS metrics to Prometheus while cAdvisor collects the metrics about containers.&lt;/p&gt;

&lt;p&gt;To get the cAdvisor metrics pulled into Grafana, we'll install the &lt;a href="https://grafana.com/grafana/dashboards/315/revisions" rel="noopener noreferrer"&gt;Kubernetes cluster monitoring (via Prometheus)&lt;/a&gt; dashboard from Grafana.&lt;/p&gt;

&lt;p&gt;Installing a dashboard is straightforward.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In Grafana, go to the "+" button on the sidebar.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Import&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fassets%2Fposts%2Fimg%2Fgrafana-import.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fassets%2Fposts%2Fimg%2Fgrafana-import.png" title="Grafana import dashboard" alt="Grafana import"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Paste the dashboard ID (315 in our case) into the ID text field.&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;Load&lt;/strong&gt; button.&lt;/li&gt;
&lt;li&gt;From the Prometheus drop-down list, select "prometheus".&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;Import&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fassets%2Fposts%2Fimg%2Fgrafana-import-screen.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fassets%2Fposts%2Fimg%2Fgrafana-import-screen.png" title="Grafana import screen" alt="Grafana import screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When Grafana imports the dashboard, it will automatically open it. The dashboard features high-level metrics about the total CPU and memory usage and detailed metrics about each specific container.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fassets%2Fposts%2Fimg%2Fcadvisor-dashboard.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fassets%2Fposts%2Fimg%2Fcadvisor-dashboard.png" title="cAdvisor dashboard" alt="aAdvisor dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next?
&lt;/h2&gt;

&lt;p&gt;As the next step, you should familiarize yourself with the graphs and data displayed in Grafana and learn how to read them. Find which metrics and dashboards are valuable to you and your system.&lt;/p&gt;

&lt;p&gt;Once you've decided that, you might want to set up alerting. The Prometheus operator includes &lt;a href="https://prometheus.io/docs/alerting/latest/alertmanager/" rel="noopener noreferrer"&gt;Alertmanager&lt;/a&gt;, which you can configure to send alerts when specific metrics fall outside defined thresholds. For example, you could configure the system to send a notification to &lt;a href="https://www.pagerduty.com/" rel="noopener noreferrer"&gt;PagerDuty&lt;/a&gt; whenever the cluster memory or CPU usage exceeds a certain threshold.&lt;/p&gt;
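&lt;p&gt;As a hedged sketch of what such a rule could look like with the Prometheus operator, here's a hypothetical &lt;code&gt;PrometheusRule&lt;/code&gt; that fires when a container's working-set memory exceeds a threshold. The rule name, namespace, threshold, and labels are illustrative examples, not values from this post:&lt;/p&gt;

```shell
# Generate an example PrometheusRule manifest (illustrative values only);
# you would then apply it with: kubectl apply -f container-memory-alert.yaml
printf '%s\n' \
  'apiVersion: monitoring.coreos.com/v1' \
  'kind: PrometheusRule' \
  'metadata:' \
  '  name: container-memory-alert' \
  '  namespace: monitoring' \
  'spec:' \
  '  groups:' \
  '    - name: container-resources' \
  '      rules:' \
  '        - alert: ContainerHighMemory' \
  '          expr: container_memory_working_set_bytes > 500000000' \
  '          for: 5m' \
  '          labels:' \
  '            severity: warning' > container-memory-alert.yaml

grep 'alert:' container-memory-alert.yaml
```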

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/google/cadvisor" rel="noopener noreferrer"&gt;cAdvisor&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://prometheus-operator.dev/" rel="noopener noreferrer"&gt;Prometheus operator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/prometheus/node_exporter" rel="noopener noreferrer"&gt;Node exporter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://prometheus.io/docs/alerting/latest/alertmanager/" rel="noopener noreferrer"&gt;Alert manager&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://grafana.com" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.pagerduty.com/" rel="noopener noreferrer"&gt;PagerDuty&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>monitoring</category>
      <category>containers</category>
    </item>
    <item>
      <title>🎥 Kubernetes Services, Ingress, Jobs and CronJobs</title>
      <dc:creator>Peter Jausovec</dc:creator>
      <pubDate>Thu, 17 Jun 2021 03:37:22 +0000</pubDate>
      <link>https://dev.to/peterj/kubernetes-services-ingress-jobs-and-cronjobs-3p1c</link>
      <guid>https://dev.to/peterj/kubernetes-services-ingress-jobs-and-cronjobs-3p1c</guid>
      <description>&lt;p&gt;How can you access workloads inside the Kubernetes cluster? Should you use a NodePort or LoadBalancer service type? How about if you want to expose multiple applications through a single load balancer? In this session, you'll learn how to do all that. We'll deploy an Ambassador ingress controller and show how to expose multiple applications through the load balancer. Finally, we'll look into Jobs and CronJobs and show how to use them to run tasks on schedule. &lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ef6UJ5Pa3Dw"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  What's covered in the video?
&lt;/h3&gt;

&lt;p&gt;00:00 - Introduction&lt;br&gt;
01:00 - Agenda&lt;br&gt;
02:54 - Kubernetes Services&lt;br&gt;
10:24 - Talking to Pods using services (demo/lab)&lt;br&gt;
23:37 - Service types&lt;br&gt;
24:45 - ClusterIP service type&lt;br&gt;
25:56 - NodePort service type&lt;br&gt;
28:00 - LoadBalancer service type&lt;br&gt;
29:00 - ExternalName service type&lt;br&gt;
30:03 - Service types (demo/lab)&lt;br&gt;
40:40 - Ingress introduction (exposing multiple services)&lt;br&gt;
50:50 - Ingress (demo)&lt;br&gt;
54:33 - Deploying Ambassador ingress controller (demo)&lt;br&gt;
01:00:50 - Single service ingress (demo)&lt;br&gt;
01:02:59 - Path-based routing with Ingress (demo)&lt;br&gt;
01:08:13 - Using a hostname instead of an IP address (demo)&lt;br&gt;
01:15:31 - Name-based ingress(multiple hosts) (demo)&lt;br&gt;
01:19:00 - Kubernetes Jobs&lt;br&gt;
01:26:39 - Kubernetes CronJobs&lt;br&gt;
01:32:40 - Jobs and CronJobs (demo) &lt;/p&gt;

&lt;p&gt;You can find all the information on how to join the next live Kubernetes session at &lt;a href="//live.startkubernetes.com"&gt;live.startkubernetes.com&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>beginners</category>
      <category>devops</category>
    </item>
    <item>
      <title>Start Kubernetes Live Stream: Pods, ReplicaSets, and Deployments </title>
      <dc:creator>Peter Jausovec</dc:creator>
      <pubDate>Mon, 31 May 2021 22:58:17 +0000</pubDate>
      <link>https://dev.to/peterj/start-kubernetes-live-stream-pods-replicasets-and-deployments-4cgb</link>
      <guid>https://dev.to/peterj/start-kubernetes-live-stream-pods-replicasets-and-deployments-4cgb</guid>
      <description>&lt;p&gt;I did my first YouTube live stream this weekend. I took a couple of days to edit the video - not because there were a lot of edits, but because it takes 8+ hours for YouTube to process the changes :)&lt;/p&gt;

&lt;p&gt;In the first video I am talking about Kubernetes Pods, ReplicaSets, and Deployments.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/tsAH_Vv8GOI"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Here are the topics covered in the video:&lt;/p&gt;

&lt;p&gt;00:00 Introduction&lt;br&gt;
05:22 Container orchestration&lt;br&gt;
08:32 Kubernetes introduction&lt;br&gt;
10:21 Kubernetes architecture&lt;br&gt;
18:37 Kubernetes resources&lt;br&gt;
22:40 Anatomy of a resource&lt;br&gt;
26:56 Labels&lt;br&gt;
27:27 Selectors&lt;br&gt;
29:19 Annotations&lt;br&gt;
31:46 Namespaces&lt;br&gt;
34:53 Pods&lt;br&gt;
47:17 Creating a cluster in GCP&lt;br&gt;
01:02:04 Pods Demo&lt;br&gt;
01:09:40 ReplicaSet&lt;br&gt;
01:16:40 ReplicaSet Demo&lt;br&gt;
01:34:14 Deployments&lt;br&gt;
01:37:11 Deployment strategies&lt;br&gt;
01:44:57 Deployments Demo&lt;br&gt;
01:53:50 Deployment strategies Demo&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>beginners</category>
      <category>devops</category>
    </item>
    <item>
      <title>Attach multiple VirtualServices to Istio Gateway</title>
      <dc:creator>Peter Jausovec</dc:creator>
      <pubDate>Mon, 23 Nov 2020 20:43:25 +0000</pubDate>
      <link>https://dev.to/peterj/attach-multiple-virtualservices-to-istio-gateway-7do</link>
      <guid>https://dev.to/peterj/attach-multiple-virtualservices-to-istio-gateway-7do</guid>
      <description>&lt;h2&gt;
  
  
  What will I learn?
&lt;/h2&gt;

&lt;p&gt;In this post, you'll learn how to expose multiple Kubernetes services running inside your cluster using Istio's Gateway and VirtualService resources.&lt;/p&gt;

&lt;p&gt;The idea for this post came from a comment on the &lt;a href="https://www.youtube.com/watch?v=ssqDgcEvdZ0"&gt;Istio Gateway video&lt;/a&gt; I recorded last year.&lt;/p&gt;

&lt;p&gt;&lt;a href="///static/9d2ba36e97e25aca97361b7eae28e10a/ade6e/yt-comment.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GYpd7Ej5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://learncloudnative.com/static/9d2ba36e97e25aca97361b7eae28e10a/ade6e/yt-comment.png" alt="YouTube Comment" title="YouTube comment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The question was: "Is it possible to route multiple VirtualServices using the same Gateway resource?"&lt;/p&gt;

&lt;p&gt;The answer is &lt;strong&gt;YES&lt;/strong&gt;. You can use the Gateway resource and bind multiple VirtualServices to it, exposing them outside of the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;p&gt;The key to understanding how to do that is in the &lt;code&gt;hosts&lt;/code&gt; fields in the Gateway and VirtualService resources.&lt;/p&gt;

&lt;p&gt;When you attach a VirtualService to a Gateway (using the &lt;code&gt;gateway&lt;/code&gt; field), only traffic for the hosts defined in the Gateway resource will be allowed to reach the VirtualService.&lt;/p&gt;

&lt;p&gt;Let's look at a Gateway resource example that defines two hosts: &lt;code&gt;red.example.com&lt;/code&gt; and &lt;code&gt;green.example.com&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.istio.io/v1alpha3&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Gateway&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;istio&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingressgateway&lt;/span&gt;
  &lt;span class="na"&gt;servers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
        &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTP&lt;/span&gt;
      &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;red.example.com'&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;green.example.com'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the &lt;code&gt;hosts&lt;/code&gt; field, you can define one or more hosts you want to expose with the gateway. In this example, we are specifying the host with an FQDN name (e.g., &lt;code&gt;red.example.com&lt;/code&gt;). We could optionally include a wildcard character (e.g. &lt;code&gt;my-namespace/*&lt;/code&gt;) to select all VirtualService hosts from &lt;code&gt;my-namespace&lt;/code&gt;. You can think of the list of hosts in the Gateway resource as a filter. For example, with the above definition, you are filtering the hosts down to &lt;code&gt;red.example.com&lt;/code&gt; and &lt;code&gt;green.example.com&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In addition to the Gateway, we also have two VirtualServices:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.istio.io/v1alpha3&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;VirtualService&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;red&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;red.example.com'&lt;/span&gt;
  &lt;span class="na"&gt;gateways&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;gateway&lt;/span&gt;
  &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;route&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;red.default.svc.cluster.local&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;span class="s"&gt;--------&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.istio.io/v1alpha3&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;VirtualService&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;green&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;green.example.com'&lt;/span&gt;
  &lt;span class="na"&gt;gateways&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;gateway&lt;/span&gt;
  &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;route&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;green.default.svc.cluster.local&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that both VirtualServices are attached to the Gateway, allowing us to 'expose' the services (destinations) through the Gateway.&lt;/p&gt;

&lt;p&gt;However, attaching the Gateway is not enough. We also need to specify the &lt;code&gt;hosts&lt;/code&gt; in the VirtualService. The Gateway uses the values in the &lt;code&gt;hosts&lt;/code&gt; field to match incoming traffic.&lt;/p&gt;

&lt;p&gt;Let's take &lt;code&gt;red.example.com&lt;/code&gt; as an example and make the following request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Host: red.example.com"&lt;/span&gt; http://&lt;span class="nv"&gt;$GATEWAY_URL&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The request hits the ingress gateway (because we defined the host in the &lt;code&gt;hosts&lt;/code&gt; field of the Gateway resource). Then, because a VirtualService with a matching host is attached to the gateway, the traffic makes it to the destination (&lt;code&gt;red.default.svc.cluster.local&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;If we send a request to &lt;code&gt;blue.example.com&lt;/code&gt;, we get back a 404. That's because we didn't specify that host name in the Gateway's &lt;code&gt;hosts&lt;/code&gt; field. Even if we deployed a VirtualService attached to the gateway with &lt;code&gt;blue.example.com&lt;/code&gt; in its &lt;code&gt;hosts&lt;/code&gt; field, we'd still get back a 404.&lt;/p&gt;
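&lt;p&gt;To see this for yourself, you could write out a hypothetical &lt;code&gt;blue&lt;/code&gt; VirtualService like the one below (the destination service name is made up) and apply it; without &lt;code&gt;blue.example.com&lt;/code&gt; in the Gateway's &lt;code&gt;hosts&lt;/code&gt; list, requests to that host would still return a 404:&lt;/p&gt;

```shell
# Generate a hypothetical 'blue' VirtualService manifest; applying it
# alone does NOT expose blue.example.com, because the Gateway's hosts
# list doesn't include that host.
printf '%s\n' \
  'apiVersion: networking.istio.io/v1alpha3' \
  'kind: VirtualService' \
  'metadata:' \
  '  name: blue' \
  'spec:' \
  '  hosts:' \
  '    - blue.example.com' \
  '  gateways:' \
  '    - gateway' \
  '  http:' \
  '    - route:' \
  '        - destination:' \
  '            host: blue.default.svc.cluster.local' \
  '            port:' \
  '              number: 80' > blue-vs.yaml

grep 'blue.example.com' blue-vs.yaml
```

You would apply it with `kubectl apply -f blue-vs.yaml` and then repeat the earlier `curl` with `Host: blue.example.com` to observe the 404.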

&lt;h2&gt;
  
  
  Try it out
&lt;/h2&gt;

&lt;p&gt;To try this out on your cluster, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install &lt;a href="https://istio.io"&gt;Istio&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Label the &lt;code&gt;default&lt;/code&gt; namespace for sidecar injection.&lt;/li&gt;
&lt;li&gt;Deploy the Green and Red applications:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/peterj/color-app/main/examples/green.yaml

   kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/peterj/color-app/main/examples/red.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;Create the Gateway and VirtualServices:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.istio.io/v1alpha3&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Gateway&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;istio&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingressgateway&lt;/span&gt;
  &lt;span class="na"&gt;servers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
        &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTP&lt;/span&gt;
      &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;red.example.com'&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;green.example.com'&lt;/span&gt;
&lt;span class="s"&gt;--------&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.istio.io/v1alpha3&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;VirtualService&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;red&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;red.example.com'&lt;/span&gt;
  &lt;span class="na"&gt;gateways&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;gateway&lt;/span&gt;
  &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;route&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;red.default.svc.cluster.local&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;span class="s"&gt;--------&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.istio.io/v1alpha3&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;VirtualService&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;green&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;green.example.com'&lt;/span&gt;
  &lt;span class="na"&gt;gateways&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;gateway&lt;/span&gt;
  &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;route&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;green.default.svc.cluster.local&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With everything deployed, we can try to make a request to the GATEWAY_URL.&lt;/p&gt;

&lt;p&gt;You can use &lt;code&gt;kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'&lt;/code&gt; to get the GATEWAY_URL.&lt;/p&gt;

&lt;p&gt;If you want to try this out in a browser, make sure you install an extension that allows you to modify the Host header. Alternatively, if you have access to an actual domain name, you can set the GATEWAY_URL as an A record in your domain registrar's settings and use it directly.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You can refer to &lt;a href="https://dev.to/blog/2018-09-28-expose_kubernetes_service_on_your_own_custom_domain/"&gt;Expose a Kubernetes service on your own custom domain&lt;/a&gt; article to learn how to do that.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's make a request with Host set to &lt;code&gt;green.example.com&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Host: green.example.com"&lt;/span&gt; http://&lt;span class="nv"&gt;$GATEWAY_URL&lt;/span&gt;
&amp;lt;&lt;span class="nb"&gt;link &lt;/span&gt;&lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/css/style.css"&lt;/span&gt; &lt;span class="nv"&gt;rel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"stylesheet"&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"text/css"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&amp;lt;&lt;span class="nb"&gt;link &lt;/span&gt;&lt;span class="nv"&gt;rel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"preconnect"&lt;/span&gt; &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://fonts.gstatic.com"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&amp;lt;&lt;span class="nb"&gt;link &lt;/span&gt;&lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://fonts.googleapis.com/css2?family=Montserrat:wght@500&amp;amp;display=swap"&lt;/span&gt; &lt;span class="nv"&gt;rel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"stylesheet"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;

&amp;lt;div &lt;span class="nv"&gt;class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"main"&lt;/span&gt; &lt;span class="nv"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"background-color:#10b981; color:#FFFFFF"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;h1&amp;gt;GREEN&amp;lt;/h1&amp;gt;
&amp;lt;/div&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You get a similar response if you use &lt;code&gt;red.example.com&lt;/code&gt; as your host.&lt;/p&gt;

</description>
      <category>istio</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>Exploring Kubernetes Volumes</title>
      <dc:creator>Peter Jausovec</dc:creator>
      <pubDate>Wed, 11 Nov 2020 20:43:25 +0000</pubDate>
      <link>https://dev.to/peterj/exploring-kubernetes-volumes-1nla</link>
      <guid>https://dev.to/peterj/exploring-kubernetes-volumes-1nla</guid>
<description>&lt;p&gt;Running stateful workloads inside Kubernetes is different from running stateless services. The reason is that containers and Pods can get created and destroyed at any time. If any of the cluster nodes go down or a new node appears, Kubernetes needs to reschedule the Pods.&lt;/p&gt;

&lt;p&gt;If you ran a stateful workload or a database in the same way you are running a stateless service, all of your data would be gone the first time your Pod restarts.&lt;/p&gt;

&lt;p&gt;Therefore you need to store the data outside of the container. Storing the data outside ensures that nothing happens to it when the container restarts.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Volumes&lt;/strong&gt; abstraction in Kubernetes solves the problem of storing data outside of containers. A Volume lives as long as the Pod lives. If any of the containers within the Pod get restarted, the Volume preserves the data. However, once you delete the Pod, the Volume gets deleted as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="///static/bd77805968e552ae0f670fc6b1800bd4/895f3/volumes-1.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0dxCxZHZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://learncloudnative.com/static/bd77805968e552ae0f670fc6b1800bd4/914ae/volumes-1.png" alt="Volumes in a Pod" title="Volumes in a Pod"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Volume is just a folder that may or may not have any data in it. The folder is accessible to all containers in a Pod. How the folder gets created, and which storage backs it, is determined by the volume type.&lt;/p&gt;

&lt;p&gt;The most basic volume type is an empty directory (&lt;code&gt;emptyDir&lt;/code&gt;). When you create a Volume with the &lt;code&gt;emptyDir&lt;/code&gt; type, Kubernetes creates it when it assigns a Pod to a node. The Volume exists for as long as the Pod is running. As the name suggests, it is initially empty, but the containers can write and read from the Volume. Once you delete the Pod, Kubernetes deletes the Volume as well.&lt;/p&gt;

&lt;p&gt;There are two parts to using the Volumes. The first one is the Volume definition. You can define the volumes in the Pod spec by specifying the volume name and the type (&lt;code&gt;emptyDir&lt;/code&gt; in our case). The second part is mounting the Volume inside of the containers using the &lt;code&gt;volumeMounts&lt;/code&gt; key. In each Pod you can use multiple different Volumes at the same time.&lt;/p&gt;

&lt;p&gt;Inside the volume mount, we refer to the Volume by name (&lt;code&gt;pod-storage&lt;/code&gt;) and specify the path we want to mount the Volume under (&lt;code&gt;/data/&lt;/code&gt;).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Check out &lt;a href="https://www.learncloudnative.com/blog/2020-05-26-getting-started-with-kubernetes-part-1/"&gt;Getting started with Kubernetes&lt;/a&gt; to set up your cluster and run through the examples in this post.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: empty-dir-pod
spec:
  containers:
    - name: alpine
      image: alpine
      args:
        - sleep
        - "120"
      volumeMounts:
        - name: pod-storage
          mountPath: /data/
  volumes:
    - name: pod-storage
      emptyDir: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the above YAML in &lt;code&gt;empty-dir-pod.yaml&lt;/code&gt; and run &lt;code&gt;kubectl apply -f empty-dir-pod.yaml&lt;/code&gt; to create the Pod.&lt;/p&gt;

&lt;p&gt;Next, we are going to use the &lt;code&gt;kubectl exec&lt;/code&gt; command to get a terminal inside the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl exec -it empty-dir-pod -- /bin/sh
/ # ls
bin dev home media opt root sbin sys usr
data etc lib mnt proc run srv tmp var
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you run &lt;code&gt;ls&lt;/code&gt; inside the container, you will notice the &lt;code&gt;data&lt;/code&gt; folder. The &lt;code&gt;data&lt;/code&gt; folder is mounted from the &lt;code&gt;pod-storage&lt;/code&gt; Volume defined in the YAML.&lt;/p&gt;

&lt;p&gt;Let's create a dummy file inside the &lt;code&gt;data&lt;/code&gt; folder and wait for the container to restart (after 2 minutes) to prove that the data inside the &lt;code&gt;data&lt;/code&gt; folder stays around.&lt;/p&gt;

&lt;p&gt;From inside the container create a &lt;code&gt;hello.txt&lt;/code&gt; file under the &lt;code&gt;data&lt;/code&gt; folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "hello" &amp;gt;&amp;gt; data/hello.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can type &lt;code&gt;exit&lt;/code&gt; to exit the container. If you wait for 2 minutes, the container will automatically restart. To watch the container restart, run the &lt;code&gt;kubectl get po -w&lt;/code&gt; command from a separate terminal window.&lt;/p&gt;

&lt;p&gt;Once the container restarts, you can check that the file &lt;code&gt;data/hello.txt&lt;/code&gt; is still in the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl exec -it empty-dir-pod -- /bin/sh
/ # ls data/hello.txt
data/hello.txt
/ # cat data/hello.txt
hello
/ #
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Kubernetes stores the data on the host under the &lt;code&gt;/var/lib/kubelet/pods&lt;/code&gt; folder. That folder contains a folder for each Pod ID, and inside each of those is a &lt;code&gt;volumes&lt;/code&gt; folder. For example, here's how you can get the Pod ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get po empty-dir-pod -o yaml | grep uid
  uid: 683533c0-34e1-4888-9b5f-4745bb6edced
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Armed with the Pod ID, you can run &lt;code&gt;minikube ssh&lt;/code&gt; to get a terminal inside the host Minikube uses to run Kubernetes. Once inside the host, you can find the &lt;code&gt;hello.txt&lt;/code&gt; in the following folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo cat /var/lib/kubelet/pods/683533c0-34e1-4888-9b5f-4745bb6edced/volumes/kubernetes.io~empty-dir/pod-storage/hello.txt
hello
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are using &lt;a href="https://www.docker.com/products/docker-desktop"&gt;Docker Desktop&lt;/a&gt;, you can run a privileged container and use &lt;code&gt;nsenter&lt;/code&gt; to run a shell inside all namespaces of the process with ID 1:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
/ #
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you get the terminal, the process is the same - navigate to the &lt;code&gt;/var/lib/kubelet/pods&lt;/code&gt; folder and find the &lt;code&gt;hello.txt&lt;/code&gt; just like you would if you're using Minikube.&lt;/p&gt;

&lt;p&gt;Kubernetes supports a large variety of other volume types. Some of the types are generic, such as &lt;code&gt;emptyDir&lt;/code&gt; or &lt;code&gt;hostPath&lt;/code&gt; (used for mounting folders from the nodes' filesystem). Other types are either used for &lt;strong&gt;cloud-provider storage&lt;/strong&gt; (such as &lt;code&gt;azureFile&lt;/code&gt;, &lt;code&gt;awsElasticBlockStore&lt;/code&gt;, or &lt;code&gt;gcePersistentDisk&lt;/code&gt;), &lt;strong&gt;network storage&lt;/strong&gt; (&lt;code&gt;cephfs&lt;/code&gt;, &lt;code&gt;cinder&lt;/code&gt;, &lt;code&gt;csi&lt;/code&gt;, &lt;code&gt;flocker&lt;/code&gt;, ...), or for mounting Kubernetes resources into the Pods (&lt;code&gt;configMap&lt;/code&gt;, &lt;code&gt;secret&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Lastly, another particular kind of Volume is the Persistent Volume, together with the Persistent Volume Claim.&lt;/p&gt;

&lt;p&gt;The lack of the word "persistent" when talking about other volumes can be misleading. If you are using any cloud-provider storage volume types (&lt;code&gt;azureFile&lt;/code&gt; or &lt;code&gt;awsElasticBlockStore&lt;/code&gt;), the data will still be persisted. The persistent volume and persistent volume claims are just a way to abstract how Kubernetes provisions the storage.&lt;/p&gt;

&lt;p&gt;For the full and up-to-date list of all volume types, check the &lt;a href="https://kubernetes.io/docs/concepts/storage/volumes"&gt;Kubernetes Docs&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>beginners</category>
      <category>devops</category>
    </item>
    <item>
      <title>Send a Slack message when Docker images are updated</title>
      <dc:creator>Peter Jausovec</dc:creator>
      <pubDate>Wed, 14 Oct 2020 20:43:25 +0000</pubDate>
      <link>https://dev.to/peterj/send-a-slack-message-when-docker-images-are-updated-3f4c</link>
      <guid>https://dev.to/peterj/send-a-slack-message-when-docker-images-are-updated-3f4c</guid>
      <description>&lt;p&gt;The Kubernetes labs that are part of the &lt;a href="https://startkubernetes.com" rel="noopener noreferrer"&gt;Start Kubernetes course&lt;/a&gt; run as multiple Docker images inside a Kubernetes cluster. I wanted a way to notify the users when I push new Docker images to the registry. That way, they can restart the Pods and get the updated images.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://azure.microsoft.com/en-us/services/container-registry/" rel="noopener noreferrer"&gt;Azure container registry&lt;/a&gt; I use to host the images allows me to create &lt;strong&gt;webhooks&lt;/strong&gt;. Whenever you push or delete an image, the container registry sends a JSON payload to a URL of my choosing.&lt;/p&gt;

&lt;p&gt;For the course, I am using a Slack workspace. Slack also has support for apps. You can create and add apps to Slack workspaces and give them permission to post messages, for example. One of the Slack apps' features is the ability to use &lt;strong&gt;&lt;a href="https://api.slack.com/messaging/webhooks" rel="noopener noreferrer"&gt;incoming webhooks&lt;/a&gt;&lt;/strong&gt; to post messages to a Slack channel.&lt;/p&gt;

&lt;p&gt;For example, you can configure a channel for the incoming webhook and then use a POST request like the one below to send a message to that channel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X POST -H 'Content-type: application/json' --data '{"text":"Hello, World!"}' https://hooks.slack.com/services/[somethingsomething]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's a diagram that shows what I wanted to achieve:&lt;/p&gt;

&lt;p&gt;&lt;a href="/static/4c686d9161251836407cc99a05a27773/b2cef/registry-slack.png"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fstatic%2F4c686d9161251836407cc99a05a27773%2Fb2cef%2Fregistry-slack.png" title="Registry webhook to Slack" alt="Registry webhook to Slack"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The issue I ran into quickly was that I couldn't control the payload that the container registry sends. You can only configure the URL and the headers you want to send. The container registry sends a payload that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "id": "673aeeaa-6493-41d3-bcdd-68242942bcb0",
    "timestamp": "2020-10-14T00:15:02.82330594Z",
    "action": "push",
    "target": {
        "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
        "size": 1778,
        "digest": "sha256:a71f5e4bf56c05a3d6264b8ef4d3bb4c90b4b0af579fedb6ccb68ea59f597435",
        "length": 1778,
        "repository": "startkubernetes/sk-web",
        "tag": "1"
    },
    "request": {
        "id": "adbc2757-8d80-49ac-af5f-ec30e5147bdf",
        "host": "myregistry.azurecr.io",
        "method": "PUT",
        "useragent": "docker/19.03.13+azure go/go1.13.15 git-commit/bd33bbf0497b2327516dc799a5e541b720822a4c kernel/5.4.0-1026-azure os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.13+azure \\(linux\\))"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, Slack expects a different-looking payload. In its simplest form, the Slack message payload looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "text": "This is my message"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Slack also has a &lt;a href="https://app.slack.com/block-kit-builder/T018ZCB8C0Z#%7B%22blocks%22:%5B%5D%7D" rel="noopener noreferrer"&gt;Block Kit Builder&lt;/a&gt; that allows you to build more complex messages.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In any case, I needed an intermediary that does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accepts the container registry payload&lt;/li&gt;
&lt;li&gt;Extracts the information (repository, tag, and a timestamp)&lt;/li&gt;
&lt;li&gt;Creates a payload that Slack understands&lt;/li&gt;
&lt;li&gt;Sends the payload to the Slack incoming webhook URL.&lt;/li&gt;
&lt;/ul&gt;
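&lt;p&gt;The extraction step can be sketched with &lt;code&gt;jq&lt;/code&gt; on the command line (the field names follow the sample registry payload shown later in this post; the exact message format is just an assumption):&lt;/p&gt;

```shell
# Turn a (trimmed) registry webhook payload into a Slack-style message.
payload='{"timestamp":"2020-10-14T00:15:02Z","action":"push","target":{"repository":"startkubernetes/sk-web","tag":"1"}}'

echo "$payload" | jq -c '{text: "Pushed image \(.target.repository):\(.target.tag) (\(.timestamp))"}'
# {"text":"Pushed image startkubernetes/sk-web:1 (2020-10-14T00:15:02Z)"}
```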

&lt;p&gt;&lt;a href="/static/0b94fb68ad126cd6ea355d24b1b6317e/31aff/registry-fn-slack.png"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fstatic%2F0b94fb68ad126cd6ea355d24b1b6317e%2F31aff%2Fregistry-fn-slack.png" title="Registry to function to Slack" alt="Registry to Function to Slack"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That seemed like a perfect use case for a &lt;strong&gt;serverless function&lt;/strong&gt;. The function has an endpoint, and I can use that in the container registry webhook. The webhook sends the container registry payload to my function. In the function, I can write any code I want, modify the payload, and then send it to the Slack webhook.&lt;/p&gt;

&lt;p&gt;To create the function, I downloaded the &lt;a href="https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions" rel="noopener noreferrer"&gt;Azure Functions VS Code extension&lt;/a&gt;, logged in, and I was able to do everything from my editor.&lt;/p&gt;

&lt;p&gt;Here's what the function looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const axios = require('axios');
const slackUrl =
  'https://hooks.slack.com/services/[somethingsomething]';

module.exports = async function (context, req) {
  let message = JSON.stringify(req.body);

  if (
    req.body.hasOwnProperty('target') &amp;amp;&amp;amp;
    req.body.hasOwnProperty('timestamp')
  ) {
    const {
      target: { repository, tag },
      timestamp,
    } = req.body;
    message = `Pushed image ${repository}:${tag} (${timestamp})`;
  } else {
    context.res = {
      status: 500,
      body: req.body,
    };
    return;
  }

  return axios
    .post(
      slackUrl,
      { text: message },
      {
        headers: {
          'content-type': 'application/json',
        },
      }
    )
    .then((response) =&amp;gt; {
      context.res = { body: response.data };
    })
    .catch((err) =&amp;gt; {
      context.res = {
        status: 500,
        body: err,
      };
      context.done();
    });
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I deployed the function, then updated the container registry webhook to send the payloads to the function URL, and that was it. The container registry webhook has the option of sending a Ping to the endpoint. That way, you can test out the connection. Similarly, if I wanted to test the function-to-Slack connection, I could either send a POST request directly to Slack or manually invoke the function.&lt;/p&gt;

&lt;p&gt;Every time I make changes to the labs or any code and merge a PR to the main branch, the GitHub action builds the images and pushes them to the registry. Then the registry takes over, sends the payload to the function, and the function posts the message to a Slack channel.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Kubernetes Network Policy</title>
      <dc:creator>Peter Jausovec</dc:creator>
      <pubDate>Wed, 07 Oct 2020 20:43:25 +0000</pubDate>
      <link>https://dev.to/peterj/kubernetes-network-policy-11di</link>
      <guid>https://dev.to/peterj/kubernetes-network-policy-11di</guid>
      <description>&lt;h2&gt;
  
  
  Network Policy
&lt;/h2&gt;

&lt;p&gt;Using the NetworkPolicy resource, you can control the traffic flow for your applications in the cluster, at the IP address level or port level (OSI layer 3 or 4).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The Open Systems Interconnection model (OSI model) is a conceptual model that characterizes and standardizes the communication functions, regardless of the underlying technology. For more information, see the &lt;a href="https://en.wikipedia.org/wiki/OSI_model" rel="noopener noreferrer"&gt;OSI model&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With the NetworkPolicy you can define how your Pod can communicate with various network entities in the cluster. There are three parts to defining a NetworkPolicy:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select the Pods the policy applies to. You can do that using labels. For example, using &lt;code&gt;app=hello&lt;/code&gt; applies the policy to all Pods with that label.&lt;/li&gt;
&lt;li&gt;Decide if the policy applies for incoming (ingress) traffic, outgoing (egress) traffic, or both.&lt;/li&gt;
&lt;li&gt;Define the ingress or egress rules by specifying IP blocks, ports, Pod selectors, or namespace selectors.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here is a sample NetworkPolicy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: hello
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          owner: ricky 
    - podSelector:
        matchLabels:
          version: v2
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break down the above YAML. The &lt;code&gt;podSelector&lt;/code&gt; tells us that the policy applies to all Pods in the &lt;code&gt;default&lt;/code&gt; namespace that have the &lt;code&gt;app: hello&lt;/code&gt; label set. We are defining policy for both ingress and egress traffic.&lt;/p&gt;

&lt;p&gt;Calls to the Pods the policy applies to can be made from any IP within the CIDR block &lt;code&gt;172.17.0.0/16&lt;/code&gt; (that's 65536 IP addresses, from 172.17.0.0 to 172.17.255.255), except for Pods whose IP falls within the CIDR block 172.17.1.0/24 (256 IP addresses, from 172.17.1.0 to 172.17.1.255), and only to the port &lt;code&gt;8080&lt;/code&gt;. Additionally, the calls can come from any Pod in a namespace labeled &lt;code&gt;owner: ricky&lt;/code&gt; and from any Pod in the &lt;code&gt;default&lt;/code&gt; namespace labeled &lt;code&gt;version: v2&lt;/code&gt;.&lt;/p&gt;
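&lt;p&gt;The address counts in the parentheses follow directly from the prefix lengths: a /N block contains 2^(32-N) addresses. A quick check:&lt;/p&gt;

```shell
# Number of addresses in a CIDR /N block: 2^(32 - N)
awk 'BEGIN { print 2^(32-16) }'   # /16: 65536 addresses (172.17.0.0/16)
awk 'BEGIN { print 2^(32-24) }'   # /24: 256 addresses (172.17.1.0/24)
```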

&lt;p&gt;&lt;a href="/static/93ebff475c9f429af31f85755f7c919b/20982/network-policy-1.png"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fstatic%2F93ebff475c9f429af31f85755f7c919b%2F20982%2Fnetwork-policy-1.png" title="Ingress Network Policy" alt="Ingress Network Policy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The egress policy specifies that Pods with the label &lt;code&gt;app: hello&lt;/code&gt; in the &lt;code&gt;default&lt;/code&gt; namespace can make calls to any IP within 10.0.0.0/24 (256 IP addresses, from 10.0.0.0 to 10.0.0.255), but only to the port &lt;code&gt;5000&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="/static/7d3395cfa9c43452a8ec5fd15810cc49/487bb/network-policy-2.png"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fstatic%2F7d3395cfa9c43452a8ec5fd15810cc49%2F487bb%2Fnetwork-policy-2.png" title="Egress Network Policy" alt="Egress Network Policy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Pod and namespace selectors support both &lt;strong&gt;and&lt;/strong&gt; and &lt;strong&gt;or&lt;/strong&gt; semantics. Let's consider the following snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  ...
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          user: ricky
      podSelector:
        matchLabels:
          app: website
  ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above snippet, with a single element in the &lt;code&gt;from&lt;/code&gt; array, includes all Pods with the label &lt;code&gt;app: website&lt;/code&gt; from namespaces labeled &lt;code&gt;user: ricky&lt;/code&gt;. This is the equivalent of the &lt;strong&gt;and&lt;/strong&gt; operator.&lt;/p&gt;

&lt;p&gt;If you change the &lt;code&gt;podSelector&lt;/code&gt; to be a separate element in the &lt;code&gt;from&lt;/code&gt; array by adding &lt;code&gt;-&lt;/code&gt;, you are using the &lt;strong&gt;or&lt;/strong&gt; operator.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  ...
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          user: ricky
    - podSelector:
        matchLabels:
          app: website
  ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above snippet includes all Pods labeled &lt;code&gt;app: website&lt;/code&gt; or all Pods from the namespace with the label &lt;code&gt;user: ricky&lt;/code&gt;.&lt;/p&gt;
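&lt;p&gt;The difference between the two snippets can be sketched as two predicates (an illustration of the selector semantics only, not how the policy engine actually evaluates rules):&lt;/p&gt;

```shell
# AND semantics: a single 'from' element combining namespaceSelector + podSelector.
and_rule() {
  if [ "$1" = "user=ricky" ]; then
    if [ "$2" = "app=website" ]; then
      echo allow
      return
    fi
  fi
  echo deny
}

# OR semantics: namespaceSelector and podSelector as separate 'from' elements.
or_rule() {
  if [ "$1" = "user=ricky" ]; then echo allow; return; fi
  if [ "$2" = "app=website" ]; then echo allow; return; fi
  echo deny
}

and_rule "user=ricky" "app=other"    # deny: the pod label does not match
or_rule  "user=ricky" "app=other"    # allow: the namespace label matches
```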

&lt;h2&gt;
  
  
  Install Cilium
&lt;/h2&gt;

&lt;p&gt;Network policies are implemented (and rules enforced) through network plugins. If you don't install a network plugin, the policies won't have any effect.&lt;/p&gt;

&lt;p&gt;I will use the &lt;a href="https://docs.cilium.io/en/stable/gettingstarted/minikube/#install-cilium" rel="noopener noreferrer"&gt;Cilium plugin&lt;/a&gt; and install it on top of Minikube. You could also use a different plugin, such as &lt;a href="https://www.projectcalico.org/" rel="noopener noreferrer"&gt;Calico&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you already have Minikube running, you will have to stop and delete the cluster (or create a new one). You will have to start Minikube with the &lt;code&gt;cni&lt;/code&gt; network plugin for Cilium to work correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ minikube start --network-plugin=cni
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once Minikube starts, you can install Cilium.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/1.8.3/install/kubernetes/quick-install.yaml
all/kubernetes/quick-install.yaml
serviceaccount/cilium created
serviceaccount/cilium-operator created
configmap/cilium-config created
clusterrole.rbac.authorization.k8s.io/cilium created
clusterrole.rbac.authorization.k8s.io/cilium-operator created
clusterrolebinding.rbac.authorization.k8s.io/cilium created
clusterrolebinding.rbac.authorization.k8s.io/cilium-operator created
daemonset.apps/cilium created
deployment.apps/cilium-operator created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cilium is installed in the &lt;code&gt;kube-system&lt;/code&gt; namespace, so you can run &lt;code&gt;kubectl get po -n kube-system&lt;/code&gt; and wait until the Cilium Pods are up and running.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example
&lt;/h2&gt;

&lt;p&gt;Let's look at an example that demonstrates how to disable egress traffic from the Pods.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: no-egress-pod
  labels:
    app.kubernetes.io/name: hello
spec:
  containers:
    - name: container
      image: radial/busyboxplus:curl
      command: ["sh", "-c", "sleep 3600"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the above YAML to &lt;code&gt;no-egress-pod.yaml&lt;/code&gt; and create the Pod using &lt;code&gt;kubectl apply -f no-egress-pod.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Once the Pod is running, let's try calling &lt;code&gt;google.com&lt;/code&gt; using &lt;code&gt;curl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl exec -it no-egress-pod -- curl -I -L google.com
HTTP/1.1 301 Moved Permanently
Location: http://www.google.com/
Content-Type: text/html; charset=UTF-8
Date: Thu, 24 Sep 2020 16:30:59 GMT
Expires: Sat, 24 Oct 2020 16:30:59 GMT
Cache-Control: public, max-age=2592000
Server: gws
Content-Length: 219
X-XSS-Protection: 0
X-Frame-Options: SAMEORIGIN

HTTP/1.1 200 OK
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The call completes successfully. Let's define a network policy that will prevent egress for Pods with the label &lt;code&gt;app.kubernetes.io/name: hello&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-egress
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: hello
  policyTypes:
    - Egress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you run the same command this time, &lt;code&gt;curl&lt;/code&gt; won't be able to resolve the host:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl exec -it no-egress-pod -- curl -I -L google.com
curl: (6) Couldn't resolve host 'google.com'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
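&lt;p&gt;The resolution error (rather than a timeout) shows that the policy blocks DNS traffic as well. If you wanted to deny general egress but keep name resolution working, you could add an egress rule that only allows port 53. This is just a sketch, and the policy name is arbitrary:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-only
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: hello
  policyTypes:
    - Egress
  egress:
    # Allow DNS queries only; all other egress stays denied
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With this policy, &lt;code&gt;curl&lt;/code&gt; can resolve &lt;code&gt;google.com&lt;/code&gt;, but the HTTP connection itself is still denied.&lt;/p&gt;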



&lt;p&gt;Try running &lt;code&gt;kubectl edit pod no-egress-pod&lt;/code&gt; and change the label value to &lt;code&gt;hello123&lt;/code&gt;. Save the changes and then re-run the curl command. This time, the command works fine because we changed the Pod label, and the network policy does not apply to it anymore.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Network Policies
&lt;/h2&gt;

&lt;p&gt;Let's look at a couple of scenarios and corresponding network policies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deny all egress traffic
&lt;/h3&gt;

&lt;p&gt;Denies all egress traffic from the Pods in the namespace, so Pods cannot make any outgoing requests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
spec:
  podSelector: {}
  policyTypes:
    - Egress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deny all ingress traffic
&lt;/h3&gt;

&lt;p&gt;Denies all ingress traffic, so Pods cannot receive any requests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Allow ingress traffic to specific Pods
&lt;/h3&gt;

&lt;p&gt;Allows ingress traffic to specific Pods, identified by the label &lt;code&gt;app: my-app&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: pods-allow-all
spec:
  podSelector:
    matchLabels:
      app: my-app
  ingress:
    - {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deny ingress to specific Pods
&lt;/h3&gt;

&lt;p&gt;Denies ingress to specific Pods, identified by the label &lt;code&gt;app: my-app&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: pods-deny-all
spec:
  podSelector:
    matchLabels:
      app: my-app
  ingress: []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Restrict traffic to specific Pods
&lt;/h3&gt;

&lt;p&gt;Allows traffic from certain Pods only: traffic from Pods labeled &lt;code&gt;app: customers&lt;/code&gt; is allowed to the frontend Pods (&lt;code&gt;role: frontend&lt;/code&gt;) of the same app (&lt;code&gt;app: customers&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: frontend-allow
spec:
  podSelector:
    matchLabels:
      app: customers
      role: frontend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: customers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deny all traffic to and within a namespace
&lt;/h3&gt;

&lt;p&gt;Denies all incoming traffic (no ingress rules defined) to all Pods (empty &lt;code&gt;podSelector&lt;/code&gt;) in the &lt;code&gt;prod&lt;/code&gt; namespace. Any calls from outside of the &lt;code&gt;prod&lt;/code&gt; namespace will be blocked, as well as any calls between Pods in the same namespace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: prod-deny-all
  namespace: prod
spec:
  podSelector: {}
  ingress: []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deny all traffic from other namespaces
&lt;/h3&gt;

&lt;p&gt;Denies all traffic from other namespaces to the Pods in the &lt;code&gt;prod&lt;/code&gt; namespace. It matches all Pods (empty &lt;code&gt;podSelector&lt;/code&gt;) in the &lt;code&gt;prod&lt;/code&gt; namespace and allows ingress from all Pods in the &lt;code&gt;prod&lt;/code&gt; namespace, as the ingress &lt;code&gt;podSelector&lt;/code&gt; is empty as well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-other-namespaces
  namespace: prod
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deny all egress traffic for specific Pods
&lt;/h3&gt;

&lt;p&gt;Prevents Pods labeled &lt;code&gt;app: api&lt;/code&gt; from making any outgoing calls.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-deny-egress
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Egress
  egress: []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>Sidecar Container Pattern</title>
      <dc:creator>Peter Jausovec</dc:creator>
      <pubDate>Mon, 05 Oct 2020 03:30:25 +0000</pubDate>
      <link>https://dev.to/peterj/sidecar-container-pattern-314</link>
      <guid>https://dev.to/peterj/sidecar-container-pattern-314</guid>
      <description>&lt;p&gt;The sidecar container aims to add or augment an existing container's functionality without changing the container. In comparison to the init container, we discussed previously, the sidecar container starts and runs simultaneously as your application container. The sidecar is just a second container you have in your container list, and the startup order is not guaranteed.&lt;/p&gt;

&lt;p&gt;Probably one of the most popular implementations of the sidecar container is in Istio service mesh. The sidecar container (an Envoy proxy) is running next to the application container and intercepting inbound and outbound requests. In this scenario, the sidecar adds the functionality to the existing container and allows the operator to do traffic routing, failure injection, and other features.&lt;/p&gt;

&lt;p&gt;&lt;a href="/static/40533f36597741a1858f63c898189394/7422e/sidecar-log-collector.png"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fstatic%2F40533f36597741a1858f63c898189394%2F7422e%2Fsidecar-log-collector.png" title="Sidecar Pattern" alt="Sidecar Pattern"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A simpler idea might be a sidecar container (&lt;code&gt;log-collector&lt;/code&gt;) that collects and stores the application container's logs. That way, as an application developer, you don't need to worry about collecting and storing logs. You only need to write logs to a location (a volume shared between the containers) where the sidecar container can collect them and send them on for further processing or archiving.&lt;/p&gt;
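&lt;p&gt;A minimal sketch of that log-collector setup could look like the Pod below. Note that the image names and the log path are assumptions for illustration; in practice, the second container would run a real collector such as Fluentd:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-collector
spec:
  containers:
    # The application only writes its logs to the shared volume
    - name: app
      image: my-app:1.0.0 # hypothetical application image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    # The sidecar reads the logs and ships them for processing
    - name: log-collector
      image: my-log-collector:1.0.0 # hypothetical collector image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: logs
      emptyDir: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;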

&lt;p&gt;If we continue with the example we used for the init container, we could create a sidecar container that periodically runs &lt;code&gt;git pull&lt;/code&gt; to update the repository. For this to work, we will keep the init container to do the initial clone and add a sidecar container that periodically (every 60 seconds, for example) checks for and pulls the repository changes.&lt;/p&gt;

&lt;p&gt;To try this out, make sure you fork the &lt;a href="https://github.com/peterj/simple-http-page.git" rel="noopener noreferrer"&gt;original GitHub repository&lt;/a&gt; and use your fork in the YAML below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: website
spec:
  initContainers:
    - name: clone-repo
      image: alpine/git
      command:
        - git
        - clone
        - --progress
        - https://github.com/peterj/simple-http-page.git
        - /usr/share/nginx/html
      volumeMounts:
        - name: web
          mountPath: "/usr/share/nginx/html"
  containers:
    - name: nginx
      image: nginx
      ports:
        - name: http
          containerPort: 80
      volumeMounts:
        - name: web
          mountPath: "/usr/share/nginx/html"
    - name: refresh
      image: alpine/git
      command:
        - sh
        - -c
        - watch -n 60 git pull
      workingDir: /usr/share/nginx/html
      volumeMounts:
        - name: web
          mountPath: "/usr/share/nginx/html"
  volumes:
    - name: web
      emptyDir: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We added a container called &lt;code&gt;refresh&lt;/code&gt; to the YAML above. It uses the &lt;code&gt;alpine/git&lt;/code&gt; image, the same image as the init container, and runs the &lt;code&gt;watch -n 60 git pull&lt;/code&gt; command.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The &lt;code&gt;watch&lt;/code&gt; command periodically executes a command. In our case, it executes &lt;code&gt;git pull&lt;/code&gt; command and updates the local repository every 60 seconds.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Another field we haven't mentioned before is &lt;code&gt;workingDir&lt;/code&gt;. This field sets the working directory for the container. We are setting it to &lt;code&gt;/usr/share/nginx/html&lt;/code&gt;, as that's where the init container originally cloned the repo.&lt;/p&gt;

&lt;p&gt;Save the above YAML to &lt;code&gt;sidecar-container.yaml&lt;/code&gt; and create the Pod using &lt;code&gt;kubectl apply -f sidecar-container.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you run &lt;code&gt;kubectl get pods&lt;/code&gt; once the init container has executed, you will notice the &lt;code&gt;READY&lt;/code&gt; column now shows &lt;code&gt;2/2&lt;/code&gt;. These numbers tell you right away that this Pod has a total of two containers, and both of them are ready:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get po
NAME READY STATUS RESTARTS AGE
website 2/2 Running 0 3m39s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you set up the port forward to the Pod using &lt;code&gt;kubectl port-forward pod/website 8000:80&lt;/code&gt; command and open the browser to &lt;code&gt;http://localhost:8000&lt;/code&gt;, you will see the same webpage as before.&lt;/p&gt;

&lt;p&gt;We can open a separate terminal window and watch the logs from the &lt;code&gt;refresh&lt;/code&gt; container inside the &lt;code&gt;website&lt;/code&gt; Pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl logs website -c refresh -f

Every 60.0s: git pull
Already up to date.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;watch&lt;/code&gt; command is running, and the response from the last &lt;code&gt;git pull&lt;/code&gt; command was &lt;code&gt;Already up to date&lt;/code&gt;. Let's make a change to the &lt;code&gt;index.html&lt;/code&gt; in the repository you forked.&lt;/p&gt;

&lt;p&gt;I added a &lt;code&gt;&amp;lt;div&amp;gt;&lt;/code&gt; element, and here's what the updated &lt;code&gt;index.html&lt;/code&gt; file looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;html&amp;gt;
  &amp;lt;head&amp;gt;
    &amp;lt;title&amp;gt;Hello from Simple-http-page&amp;lt;/title&amp;gt;
  &amp;lt;/head&amp;gt;
  &amp;lt;body&amp;gt;
    &amp;lt;h1&amp;gt;Welcome to simple-http-page&amp;lt;/h1&amp;gt;
    &amp;lt;div&amp;gt;Hello!&amp;lt;/div&amp;gt;
  &amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, you need to stage this change and commit it to the &lt;code&gt;master&lt;/code&gt; branch. The easiest way to do that is from GitHub's web interface. Open the &lt;code&gt;index.html&lt;/code&gt; on GitHub (I am opening &lt;code&gt;https://github.com/peterj/simple-http-page/blob/master/index.html&lt;/code&gt;, but you should replace my username &lt;code&gt;peterj&lt;/code&gt; with your username or the organization you forked the repo to) and click the pencil icon to edit the file (see the figure below).&lt;/p&gt;

&lt;p&gt;&lt;a href="/static/14cf9e922652848ce30ec989c791d60d/fcda8/github-edit-file.png"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fstatic%2F14cf9e922652848ce30ec989c791d60d%2Ffcda8%2Fgithub-edit-file.png" title="Edit file on Github" alt="Edit  raw `index.html` endraw  on Github"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Make the change to the &lt;code&gt;index.html&lt;/code&gt; file and click the &lt;strong&gt;Commit changes&lt;/strong&gt; button to commit them to the branch. Next, watch the output from the &lt;code&gt;refresh&lt;/code&gt; container, and you should see the output like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Every 60.0s: git pull

From https://github.com/peterj/simple-http-page
   f804d4c..ad75286 master -&amp;gt; origin/master
Updating f804d4c..ad75286
Fast-forward
 index.html | 1 +
 1 file changed, 1 insertion(+)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above output indicates changes to the repository. Git pulls the updated file to the shared volume. Finally, refresh your browser where you have &lt;code&gt;http://localhost:8000&lt;/code&gt; opened, and you will notice the changes on the page:&lt;/p&gt;

&lt;p&gt;&lt;a href="/static/0717d881ca11c08ed90f8c06f21c038d/0b533/webpage-sidecar.png"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fstatic%2F0717d881ca11c08ed90f8c06f21c038d%2F0b533%2Fwebpage-sidecar.png" title="Updated index.html page" alt="Updated  raw `index.html` endraw  page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can make more changes, and each time, the page will get updated within 60 seconds. You can delete the Pod by running &lt;code&gt;kubectl delete po website&lt;/code&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>patterns</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Ambassador Container Pattern</title>
      <dc:creator>Peter Jausovec</dc:creator>
      <pubDate>Sat, 03 Oct 2020 20:43:25 +0000</pubDate>
      <link>https://dev.to/peterj/ambassador-container-pattern-5pp</link>
      <guid>https://dev.to/peterj/ambassador-container-pattern-5pp</guid>
      <description>&lt;p&gt;The ambassador container pattern aims to hide the primary container's complexity and provide a unified interface through which the primary container can access services outside of the Pod.&lt;/p&gt;

&lt;p&gt;&lt;a href="///static/6b85afac5d45b65c9fad81b6f6364267/3c024/ambassador-pattern.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JqpkbGQJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://learncloudnative.com/static/6b85afac5d45b65c9fad81b6f6364267/3c024/ambassador-pattern.png" alt="Ambassador Pattern" title="Ambassador Pattern"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These outside or external services might present different interfaces and have other APIs. Instead of writing the code inside the main container that can deal with these external services' multiple interfaces and APIs, you implement it in the ambassador container. The ambassador container knows how to talk to and interpret responses from different endpoints and pass them to the main container. The main container only needs to know how to talk to the ambassador container. You can then re-use the ambassador container with any other container that needs to talk to these services while maintaining the same internal interface.&lt;/p&gt;

&lt;p&gt;Another example would be where your main containers need to make calls to a protected API. You could design your ambassador container to handle the authentication with the protected API. Your main container will make calls to the ambassador container. The ambassador will attach any needed authentication information to the request and make an authenticated request to the external service.&lt;/p&gt;

&lt;p&gt;&lt;a href="///static/e5302ccd36a459c74d22b0c4063e8703/e9c9b/tmdb-calls.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--riWxypxN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://learncloudnative.com/static/e5302ccd36a459c74d22b0c4063e8703/e9c9b/tmdb-calls.png" alt="Calls through ambassador to TMDB" title="Calls through ambassador to TMDB"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To demonstrate how the ambassador pattern works, we will use &lt;a href="https://www.themoviedb.org/"&gt;The Movie DB (TMDB)&lt;/a&gt;. Head over to the website and register (it's free) to get an API key.&lt;/p&gt;

&lt;p&gt;The Movie DB website offers a REST API where you can get information about the movies. We have implemented an ambassador container that listens on path &lt;code&gt;/movies&lt;/code&gt;, and whenever it receives a request, it will make an authenticated request to the API of The Movie DB.&lt;/p&gt;

&lt;p&gt;Here's the snippet from the code of the ambassador container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func TheMovieDBServer(w http.ResponseWriter, r *http.Request) {
    apiKey := os.Getenv("API_KEY")
    resp, err := http.Get(fmt.Sprintf("https://api.themoviedb.org/3/discover/movie?api_key=%s", apiKey))
    // ...
    // Return the response
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We read the &lt;code&gt;API_KEY&lt;/code&gt; environment variable and then make a GET request to the URL. Note that if you request the URL without the API key, you'll get the following error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl https://api.themoviedb.org/3/discover/movie
{"status_code":7,"status_message":"Invalid API key: You must be granted a valid key.","success":false}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I have pushed the ambassador's Docker image to &lt;code&gt;startkubernetes/ambassador:0.1.0&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Just like with the sidecar container, the ambassador container is just another container that's running in the Pod. We will test the ambassador container by calling &lt;code&gt;curl&lt;/code&gt; from the main container.&lt;/p&gt;

&lt;p&gt;Here's what the YAML file looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: themoviedb
spec:
  containers:
    - name: main
      image: radial/busyboxplus:curl
      args:
        - sleep
        - "600"
    - name: ambassador
      image: startkubernetes/ambassador:0.1.0
      env:
        - name: API_KEY
          valueFrom:
            secretKeyRef:
              name: themoviedb
              key: apikey
      ports:
        - name: http
          containerPort: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before we can create the Pod, we need to create a Secret with the API key. Let's do that first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create secret generic themoviedb --from-literal=apikey=&amp;lt;INSERT YOUR API KEY HERE&amp;gt;
secret/themoviedb created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now store the Pod YAML in &lt;code&gt;ambassador-container.yaml&lt;/code&gt; file and create it with &lt;code&gt;kubectl apply -f ambassador-container.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;When Kubernetes creates the Pod (you can use &lt;code&gt;kubectl get po&lt;/code&gt; to check the status), you can use &lt;code&gt;exec&lt;/code&gt; to run &lt;code&gt;curl&lt;/code&gt; inside the &lt;code&gt;main&lt;/code&gt; container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl exec -it themoviedb -c main -- curl localhost:8080/movies

{"page":1,"total_results":10000,"total_pages":500,"results":[{"popularity":2068.491,"vote_count":
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since containers within the same Pod share the network, we can make a request against &lt;code&gt;localhost:8080&lt;/code&gt;, which corresponds to the port on the ambassador container.&lt;/p&gt;

&lt;p&gt;You could imagine running an application or a web server in the main container and, instead of making requests to &lt;code&gt;api.themoviedb.org&lt;/code&gt; directly, making requests to the ambassador container.&lt;/p&gt;

&lt;p&gt;Similarly, if you had any other service that needed access to &lt;code&gt;api.themoviedb.org&lt;/code&gt;, you could reuse the ambassador container in that Pod as well.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>patterns</category>
      <category>devops</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Start Kubernetes for Beginners</title>
      <dc:creator>Peter Jausovec</dc:creator>
      <pubDate>Thu, 01 Oct 2020 20:43:25 +0000</pubDate>
      <link>https://dev.to/peterj/start-kubernetes-for-beginners-34ne</link>
      <guid>https://dev.to/peterj/start-kubernetes-for-beginners-34ne</guid>
      <description>&lt;p&gt;I am excited to announce my latest course on Kubernetes called &lt;strong&gt;Start Kubernetes&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this course?
&lt;/h2&gt;

&lt;p&gt;I started working with Kubernetes years ago, and I remember the confusion when I first heard about Pods and Services and how things work in Kubernetes. I was using Marathon/Mesos at that point, so the concepts were a bit familiar, but still foreign. I noticed I was able to grasp the concepts quicker through practical exercises and examples.&lt;/p&gt;

&lt;p&gt;Talking about complexity, even today, if you want to deploy and run an application inside Kubernetes, you have to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Deployment representing your application&lt;/li&gt;
&lt;li&gt;Design the Pods, set the resource limits, requests, and security settings&lt;/li&gt;
&lt;li&gt;Create a Service, so that you can access the application&lt;/li&gt;
&lt;li&gt;Define any Configuration and Secrets your application will use &lt;/li&gt;
&lt;li&gt;Configure or reconfigure Ingress&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's on top of being familiar with how Pods, Deployments, Services, Ingress, and other Kubernetes resources work. Of course, there are other knobs and buttons you can turn and push to fine-tune each one of these resources. No one can deny there is a lot of complexity and new terminology.&lt;/p&gt;

&lt;p&gt;I have created this course to try and &lt;strong&gt;flatten the learning curve of Kubernetes&lt;/strong&gt;: explain the individual concepts and resources, show how everything fits together, and then bring it all together with practical examples.&lt;/p&gt;

&lt;h2&gt;
  
  
  Course overview
&lt;/h2&gt;

&lt;p&gt;There are three parts to the course: the e-book, the video course, and the practical exercises/labs.&lt;/p&gt;

&lt;h3&gt;
  
  
  The book
&lt;/h3&gt;

&lt;p&gt;This whole course started as a long &lt;a href="https://www.learncloudnative.com/blog/2020-05-26-getting-started-with-kubernetes-part-1/" rel="noopener noreferrer"&gt;Kubernetes article&lt;/a&gt; and it ended up as a &lt;strong&gt;230+ page book&lt;/strong&gt; where I cover the following topics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Kubernetes Resources&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An overview of Pods, ReplicaSets, Deployments, Services, Ingress, Namespaces, Jobs, CronJobs, and things like labels, selectors, and annotations&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Configuration&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How to configure your Kubernetes applications with ConfigMaps and how to store and use Secrets&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Stateful Workloads&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Introduction to Volumes, Persistent Volumes, and Persistent Volume Claims, and how to run stateful workloads in Kubernetes using StatefulSets&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Organizing Containers&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Init, sidecar, ambassador, and adapter container patterns with practical examples&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Application Health&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Explanation of liveness, startup, and readiness probes&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Security in Kubernetes&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Services accounts, using RBAC, security contexts, and Pod security policies&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Scaling and Resources&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Autoscaling Pods using HorizontalPodAutoscaler, defining resource requests and limits; using affinity, taints, and tolerations&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Extending Kubernetes&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using custom resource definitions (CRDs), implementing controllers/operators&lt;/p&gt;

&lt;h3&gt;
  
  
  Videos
&lt;/h3&gt;

&lt;p&gt;I enjoy writing, but I also enjoy making videos, so I decided to turn the book into a video course. If you're a visual learner like I am, you will enjoy this. I have recorded &lt;strong&gt;23 videos&lt;/strong&gt; that walk you through different Kubernetes topics, starting with basic resources, such as Pods and Deployments, and progressing to role-based access control and creating your own custom resources and controllers.&lt;/p&gt;

&lt;p&gt;The full list of videos is below. You can also check out the first four videos on my &lt;a href="https://www.youtube.com/watch?v=B1_jgR3zuvA" rel="noopener noreferrer"&gt;YouTube channel&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;About Pods (3:37)&lt;/li&gt;
&lt;li&gt;Creating Pods (5:16)&lt;/li&gt;
&lt;li&gt;About ReplicaSets (2:38)&lt;/li&gt;
&lt;li&gt;Creating ReplicaSets (12:20)&lt;/li&gt;
&lt;li&gt;Deployments (10:32)&lt;/li&gt;
&lt;li&gt;Deployment Strategies (11:03)&lt;/li&gt;
&lt;li&gt;Services (17:35)&lt;/li&gt;
&lt;li&gt;Service Types (9:20)&lt;/li&gt;
&lt;li&gt;About Ingress (3:13)&lt;/li&gt;
&lt;li&gt;Creating Ingress (18:30)&lt;/li&gt;
&lt;li&gt;Jobs (9:38)&lt;/li&gt;
&lt;li&gt;CronJob (6:11)&lt;/li&gt;
&lt;li&gt;About Configuration (2:10)&lt;/li&gt;
&lt;li&gt;ConfigMaps (17:23)&lt;/li&gt;
&lt;li&gt;Secrets (10:15)&lt;/li&gt;
&lt;li&gt;Volumes (6:24)&lt;/li&gt;
&lt;li&gt;About PV and PVC (2:30)&lt;/li&gt;
&lt;li&gt;Creating PV and PVC (6:40)&lt;/li&gt;
&lt;li&gt;Running MongoDB with StatefulSet (9:26)&lt;/li&gt;
&lt;li&gt;Service Accounts (10:07)&lt;/li&gt;
&lt;li&gt;Role-Based Access Control (RBAC) (9:56)&lt;/li&gt;
&lt;li&gt;Resource Requests, Limits, and Quotas (13:42)&lt;/li&gt;
&lt;li&gt;CRDs and Controllers (18:21)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Practical exercises
&lt;/h3&gt;

&lt;p&gt;Reading and watching can become tedious, so I have created 41 practical exercises. You will install the Start Kubernetes Labs on your Kubernetes cluster, and go through the exercises from your browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="/static/34676741c6d38eaf8a59c5100910f9aa/c367c/startk8s-exercises.png"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fstatic%2F34676741c6d38eaf8a59c5100910f9aa%2F914ae%2Fstartk8s-exercises.png" title="Start Kubernetes Exercises" alt="Start Kubernetes Exercises"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have grouped tasks/exercises into seven categories, and each category has from 3 to 10 tasks. Tasks have a detailed description of what needs to be solved, and you can use the built-in terminal on the webpage to solve the exercises. From the terminal, you have access to the Kubernetes cluster via the CLI.&lt;/p&gt;

&lt;p&gt;&lt;a href="/static/8b26d8ba4872af0e37a7895ded91abec/2a08f/startk8s-mash.png"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flearncloudnative.com%2Fstatic%2F8b26d8ba4872af0e37a7895ded91abec%2F914ae%2Fstartk8s-mash.png" title="Start Kubernetes Exercises" alt="Start Kubernetes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can solve the exercises in order or jump from one to another, however you feel most comfortable learning. There are no timers, scores, or leaderboards. If you're stuck, you can hop on the Slack channel, explain the issue you're running into, and we will solve it together. The point of these exercises is to learn and get better at Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gumroad.com/l/kubernetes/y17xld5?utm_source=devto" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flynsmv2lepbot4fhs8mn.png" alt="How to get it"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
