<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rastko Vukašinović</title>
    <description>The latest articles on DEV Community by Rastko Vukašinović (@metaphorical).</description>
    <link>https://dev.to/metaphorical</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F209814%2Fccbeebf4-758c-4e56-8d26-407c082bdac6.jpeg</url>
      <title>DEV Community: Rastko Vukašinović</title>
      <link>https://dev.to/metaphorical</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/metaphorical"/>
    <language>en</language>
    <item>
      <title>Internal and external connectivity in Kubernetes space</title>
      <dc:creator>Rastko Vukašinović</dc:creator>
      <pubDate>Sun, 11 Aug 2019 22:54:37 +0000</pubDate>
      <link>https://dev.to/metaphorical/internal-and-external-connectivity-in-kubernetes-space-1mj9</link>
      <guid>https://dev.to/metaphorical/internal-and-external-connectivity-in-kubernetes-space-1mj9</guid>
      <description>&lt;h1&gt;
  
  
  Services and networking — from ClusterIP to Ingress
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2AyvSj52Q7s3V70LaT59QoCA%402x.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2AyvSj52Q7s3V70LaT59QoCA%402x.jpeg" alt="intro"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you make your way through all the stages of your app’s development and (inevitably) get to consider using Kubernetes, it is time to understand how your app components connect to each other and to the outside world when deployed to Kubernetes.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Knowledge you will get from this article also covers “services &amp;amp; networking” part of &lt;a href="https://www.cncf.io/certification/ckad/" rel="noopener noreferrer"&gt;CKAD exam&lt;/a&gt;, which currently takes 13% of the certification exam curriculum.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Services
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes services&lt;/strong&gt; provide networking between different components within the cluster and with the outside world (the open internet, other applications, networks etc.).&lt;/p&gt;

&lt;p&gt;There are different kinds of services, and here we’ll cover some:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;NodePort&lt;/em&gt; — service that exposes a Pod through a port on the node&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;ClusterIP&lt;/em&gt; — service that creates a virtual IP within the cluster so that different components inside the cluster can talk to each other.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;LoadBalancer&lt;/em&gt; — provisions an external load balancer in front of a set of nodes in the Kubernetes cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  NodePort
&lt;/h3&gt;

&lt;p&gt;A NodePort service maps (exposes) a port on the Pod to a port on the Node. There are actually 3 ports involved in the process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;targetPort&lt;/em&gt; — port on the Pod (where your app listens). This parameter is optional; if not present, the value of &lt;em&gt;port&lt;/em&gt; is used.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;port&lt;/em&gt; — port on the service itself (usually the same as the pod port)&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;nodePort&lt;/em&gt; — port on the node that is used to access the web server externally, exposing the deployment (set of pods) outside of the k8s node. By default, a nodePort must be in the range between 30000 and 32767. This parameter is optional; if not present, a random available port in the valid range is assigned.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Creating the NodePort service
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;k8s-nodeport-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
 &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6379&lt;/span&gt;
   &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6379&lt;/span&gt;
   &lt;span class="na"&gt;nodePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30666&lt;/span&gt;
 &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
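&lt;p&gt;A quick way to try this out (a sketch; the node IP below is a hypothetical example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# create the service from the definition above
kubectl create -f k8s-nodeport-service.yaml

# check the service and the assigned node port
kubectl get svc k8s-nodeport-service

# reach redis from outside the cluster through any node's IP
redis-cli -h 192.168.1.10 -p 30666 ping
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;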



&lt;h4&gt;
  
  
  Linking pods to a service
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Selectors&lt;/strong&gt; are the way to link a service to a certain set of pods.&lt;br&gt;
Once a set of pods gets selected based on the selector (in almost all cases, pods from the same deployment), &lt;strong&gt;the service starts sending traffic to all of them in a random manner, effectively acting as a load balancer&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If the selected pods are distributed across nodes, the service spans those nodes so it can link all the pods. In that multi-node case, the service exposes the same port on every node.&lt;/p&gt;
&lt;/blockquote&gt;
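&lt;p&gt;For the NodePort service above to select anything, the pods need a matching label. A minimal sketch of a matching pod definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
 name: redis-pod
 labels:
  db: redis   # matches the service selector
spec:
 containers:
 - name: redis
   image: redis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;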
&lt;h3&gt;
  
  
  ClusterIP
&lt;/h3&gt;

&lt;p&gt;When an application consists of multiple tiers deployed to different sets of pods, a way to establish communication between the tiers inside the cluster is necessary.&lt;/p&gt;

&lt;p&gt;For example, we have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;5 pods of API number 1&lt;/li&gt;
&lt;li&gt;2 pods of API number 2&lt;/li&gt;
&lt;li&gt;1 pod of redis&lt;/li&gt;
&lt;li&gt;10 pods of frontend app&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of the above mentioned 18 pods has its own distinct IP address, but communicating that way would be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unstable, since pods can die and be recreated with a new IP at any time.&lt;/li&gt;
&lt;li&gt;Inefficient, since we would have to load-balance within the integration part of each app.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;A ClusterIP service provides a unified interface to access each group of pods — it gives a group of pods a single internal name/IP.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api-1&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
 &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9090&lt;/span&gt;
   &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9090&lt;/span&gt;
 &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api-1&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;ClusterIP&lt;/strong&gt; is the default type of service, so if the service type is not specified, k8s assumes ClusterIP.&lt;/p&gt;

&lt;p&gt;When this service gets created, other applications within the cluster can access it through the service IP or the service name.&lt;/p&gt;
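&lt;p&gt;For example, any pod in the same namespace could reach the api-1 service by name (a sketch, assuming the default namespace):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# the short name works within the same namespace
curl http://api-1:9090

# the fully qualified cluster DNS name works from any namespace
curl http://api-1.default.svc.cluster.local:9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;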

&lt;h3&gt;
  
  
  LoadBalancer
&lt;/h3&gt;

&lt;p&gt;In short, the LoadBalancer type of service provisions an external load balancer in the cloud — depending on provider support.&lt;/p&gt;

&lt;p&gt;The deployed load balancer will act like a NodePort, but with more advanced load balancing features; it also acts as an additional proxy in front of the NodePort, giving you a new IP and a standard web port mapping (30666 &amp;gt; 80). As you can see, its features position it as the main way to expose a service directly to the outside world.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2ANcCpQOB5yFdb7rx-DkWdgw%402x.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2ANcCpQOB5yFdb7rx-DkWdgw%402x.jpeg" alt="Load balancer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The main downside of this approach is that every service you expose needs its own load balancer, which can, after a while, have a significant impact on complexity and price.&lt;/p&gt;

&lt;p&gt;Let’s briefly review the possibilities:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;lb1&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  
 &lt;span class="na"&gt;externalTrafficPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Local&lt;/span&gt;  
 &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoadBalancer&lt;/span&gt;  
 &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;80&lt;/span&gt;    
   &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;80&lt;/span&gt;    
   &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;    
   &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;  
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;443&lt;/span&gt;    
   &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;443&lt;/span&gt;    
   &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;    
   &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https&lt;/span&gt;  
 &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;    
   &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;lb1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above creates an external load balancer and provisions all the networking needed for it to balance traffic to the nodes.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note from k8s docs&lt;/strong&gt;: With the new functionality, the external traffic will not be equally load balanced across pods, but rather equally balanced at the node level (because GCE/AWS and other external LB implementations do not have the ability for specifying the weight per node, they balance equally across all target nodes, disregarding the number of pods on each node).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you want to use AWS ELB as the external load balancer, you need to add the following annotations to the load balancer service metadata:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;service.beta.kubernetes.io/aws-load-balancer-backend-protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"tcp"&lt;/span&gt;
 &lt;span class="na"&gt;service.beta.kubernetes.io/aws-load-balancer-proxy-protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"*"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
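&lt;p&gt;In context, the annotations go under the service metadata. A sketch based on the lb1 service above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
 name: lb1
 annotations:
  service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
  service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
 type: LoadBalancer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;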



&lt;h2&gt;
  
  
  Ingress
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Real life cluster setup
&lt;/h3&gt;

&lt;p&gt;When we get into the space of managing more than one web server with multiple different sets of pods, the above mentioned services turn out to be quite complex to manage in most real life cases.&lt;/p&gt;

&lt;p&gt;Let’s review the example we had before — 2 APIs, redis and a frontend — and imagine that the APIs have more consumers than just the frontend service, so they need to be exposed to the open internet.&lt;/p&gt;

&lt;p&gt;Requirements are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;frontend lives on &lt;em&gt;&lt;a href="http://www.example.com" rel="noopener noreferrer"&gt;www.example.com&lt;/a&gt;&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;API 1 is search api at &lt;em&gt;&lt;a href="http://www.example.com/api/search" rel="noopener noreferrer"&gt;www.example.com/api/search&lt;/a&gt;&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;API 2 is general (everything else) api that lives on &lt;em&gt;&lt;a href="http://www.example.com/api" rel="noopener noreferrer"&gt;www.example.com/api&lt;/a&gt;&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The setup needed using the above services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ClusterIP&lt;/strong&gt; service to make components easily accessible to each other within the cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NodePort&lt;/strong&gt; service to expose some of the services outside of the node, or maybe&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LoadBalancer&lt;/strong&gt; service if in the cloud, or&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;proxy server&lt;/strong&gt; like nginx, to connect and route everything properly (30xxx ports to port 80, different services to paths on the proxy etc.)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Deciding where to implement SSL and maintaining it across services&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  So
&lt;/h3&gt;

&lt;p&gt;ClusterIP is necessary, we know it has to be there — it is the only one handling internal networking, and it is as simple as it can be.&lt;br&gt;
External traffic, however, is a different story: we have to set up at least one service per component, plus one or more supplementary services (load balancers and proxies), to achieve the requirements.&lt;/p&gt;

&lt;blockquote&gt;
&lt;h2&gt;
  
  
  Number of configs / definitions to be maintained skyrockets, entropy rises, infrastructure setup drowns in complexity…
&lt;/h2&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  Solution
&lt;/h3&gt;

&lt;p&gt;A Kubernetes cluster has &lt;strong&gt;ingress&lt;/strong&gt; as a solution to the above complexity. Ingress is, essentially, a layer 7 load balancer.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Layer 7&lt;/strong&gt; load balancer is the name for a type of load balancer that covers layers 5, 6 and 7 of networking, which are &lt;strong&gt;session&lt;/strong&gt;, &lt;strong&gt;presentation&lt;/strong&gt; and &lt;strong&gt;application&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Ingress can provide load balancing, SSL termination and name-based virtual hosting.&lt;/p&gt;

&lt;p&gt;It covers HTTP and HTTPS.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For anything other than HTTP and HTTPS, the service will have to be published differently, through a special ingress setup or via a NodePort or LoadBalancer, but that is now a single-place, one-time configuration.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h4&gt;
  
  
  Ingress setup
&lt;/h4&gt;

&lt;p&gt;In order to set up ingress we need two components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ingress controller&lt;/strong&gt; — component that manages ingress based on provided rules&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ingress resources&lt;/strong&gt; — Ingress HTTP rules&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Ingress controller
&lt;/h4&gt;

&lt;p&gt;There are a few options you can choose from, among them nginx, GCE (Google Cloud) and Istio. Only two are officially supported by k8s for now — nginx and GCE.&lt;/p&gt;

&lt;p&gt;We are going to go with &lt;strong&gt;nginx&lt;/strong&gt; as the ingress controller solution. For this we, of course, need a new deployment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;extensions/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-ingress-controller&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
 &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-ingress&lt;/span&gt;
 &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-ingress&lt;/span&gt;
  &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-ingress-controller&lt;/span&gt;
     &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quay.io/kubernetes-ingress-controller/nginx-ingress-controller&lt;/span&gt;
     &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/nginx-ingress-controller&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--configmap=$(POD_NAMESPACE)/nginx-configuration&lt;/span&gt;
     &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;POD_NAME&lt;/span&gt;
       &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;fieldRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;fieldPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metadata.name&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;POD_NAMESPACE&lt;/span&gt;
       &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;fieldRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;fieldPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metadata.namespace&lt;/span&gt;
     &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
       &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https&lt;/span&gt;
       &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;443&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy a ConfigMap so that ingress parameters are easier to control:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-configuration&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
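&lt;p&gt;Parameters then go under &lt;em&gt;data&lt;/em&gt;, for example capping the maximum request body size (a sketch; &lt;em&gt;proxy-body-size&lt;/em&gt; is one of the nginx ingress controller's ConfigMap keys):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
data:
  proxy-body-size: "10m"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;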



&lt;p&gt;Now, with the basic deployment in place and a ConfigMap to make it easier to control the parameters of the ingress, we need to set up the service that exposes the ingress to the open internet (or some other, smaller network).&lt;/p&gt;

&lt;p&gt;For this we set up a NodePort service with a proxy/load balancer on top (bare-metal/on-prem example), or a LoadBalancer service (cloud example).&lt;/p&gt;

&lt;p&gt;In both cases, there is a need for a Layer 4 and a Layer 7 load balancer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NodePort and possibly custom load balancer on top as L4 and Ingress as L7.&lt;/li&gt;
&lt;li&gt;LoadBalancer as L4 and Ingress as L7.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Layer 4 load balancer&lt;/em&gt; — directs traffic from the network layer based on IP addresses and TCP ports; also referred to as a transport layer load balancer.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A NodePort service for the ingress, to illustrate the above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-ingress&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
 &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
   &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
   &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
   &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;443&lt;/span&gt;
   &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;443&lt;/span&gt;
   &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
   &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https&lt;/span&gt;
 &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-ingress&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This NodePort service gets deployed to each node running the ingress deployment, and the load balancer then distributes traffic between the nodes.&lt;/p&gt;

&lt;p&gt;What separates an ingress controller from a regular proxy or load balancer is additional underlying functionality: it monitors the cluster for ingress resources and adjusts nginx accordingly. In order for the ingress controller to be able to do this, a service account with the right permissions is needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-ingress-serviceaccount&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;The above service account needs specific permissions on the cluster and namespace in order for ingress to operate correctly; for the particulars of permission setup on an &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noopener noreferrer"&gt;RBAC&lt;/a&gt;-enabled cluster, &lt;a href="https://kubernetes.github.io/ingress-nginx/deploy/rbac/" rel="noopener noreferrer"&gt;look at this document in the official nginx ingress docs&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
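&lt;p&gt;As a rough sketch, binding a ClusterRole to the service account looks like this (the role name here is illustrative; the actual role and its rules are in the docs linked above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
 name: nginx-ingress-clusterrole-binding
roleRef:
 apiGroup: rbac.authorization.k8s.io
 kind: ClusterRole
 name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;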

&lt;p&gt;When we have all the permissions set up, we are ready to start working on our application’s ingress setup.&lt;/p&gt;

&lt;h4&gt;
  
  
  Ingress resources
&lt;/h4&gt;

&lt;p&gt;Ingress resources configuration lets you fine-tune (or fine-route) incoming traffic.&lt;/p&gt;

&lt;p&gt;Let’s first take a simple API example. Assuming that we have just one set of pods deployed and exposed through a service named simple-api-service on port 8080, we can create &lt;em&gt;simple-api-ingress.yaml&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;extensions/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;simple-api-ingress&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;simple-api-service&lt;/span&gt;
  &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we run &lt;strong&gt;kubectl create -f simple-api-ingress.yaml&lt;/strong&gt;, we set up an ingress that routes all incoming traffic to simple-api-service.&lt;/p&gt;

&lt;h4&gt;
  
  
  Rules
&lt;/h4&gt;

&lt;p&gt;Rules provide configuration for routing incoming traffic based on certain conditions, for example routing to different services within the cluster based on subdomain or path.&lt;/p&gt;

&lt;p&gt;Let us now get back to the initial example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;frontend lives on &lt;strong&gt;&lt;a href="http://www.example.com" rel="noopener noreferrer"&gt;www.example.com&lt;/a&gt;&lt;/strong&gt; and everything &lt;strong&gt;not /api&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;api 1 is search api at &lt;strong&gt;&lt;a href="http://www.example.com/api/search" rel="noopener noreferrer"&gt;www.example.com/api/search&lt;/a&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;api 2 is general (everything else) api that lives on &lt;strong&gt;&lt;a href="http://www.example.com/api" rel="noopener noreferrer"&gt;www.example.com/api&lt;/a&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since everything is on the same domain, we can handle it all through one rule:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;extensions/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;proper-api-ingress&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/api/search&lt;/span&gt;
      &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;search-api-service&lt;/span&gt;
       &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8081&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/api&lt;/span&gt;
      &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api-service&lt;/span&gt;
       &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
      &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend-service&lt;/span&gt;
       &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8082&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is also a &lt;strong&gt;default&lt;/strong&gt; backend that serves default pages (like 404s), and it can be deployed separately. In this case we will not need it, since the frontend will handle 404s.&lt;/p&gt;
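&lt;p&gt;As a sketch, a default backend can also be declared directly in the Ingress spec; the service name and port below are hypothetical, not part of the setup above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;spec:
  # requests matching no rule fall through to this service
  backend:
    serviceName: default-http-backend
    servicePort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;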

&lt;p&gt;You can read more at &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/concepts/services-networking/ingress/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F2000%2F1%2AkwzSoapGCmltYvOKW45-Yw%402x.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F2000%2F1%2AkwzSoapGCmltYvOKW45-Yw%402x.jpeg" alt="full setup"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Bonus — More rules, subdomains and routing
&lt;/h4&gt;

&lt;p&gt;And what if we changed the example to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;frontend lives on &lt;strong&gt;app.example.com&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;API 1 is the search API at &lt;strong&gt;api.example.com/search&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;API 2 is the general (everything else) API that lives on &lt;strong&gt;api.example.com&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is also possible, by introducing a new element, the &lt;strong&gt;host&lt;/strong&gt;, into the rule definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;extensions/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;proper-api-ingress&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;-host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api.example.com&lt;/span&gt;
  &lt;span class="s"&gt;http&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;-path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/search&lt;/span&gt;
    &lt;span class="s"&gt;backend&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;search-api-service&lt;/span&gt;
     &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8081&lt;/span&gt;
   &lt;span class="na"&gt;-path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
    &lt;span class="s"&gt;backend&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api-service&lt;/span&gt;
     &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
 &lt;span class="na"&gt;-host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app.example.com&lt;/span&gt;
  &lt;span class="s"&gt;http&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;-path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
    &lt;span class="s"&gt;backend&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend-service&lt;/span&gt;
     &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8082&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note (out of scope)&lt;/strong&gt;: You can notice from the last illustration that there are multiple ingress pods, which implies that ingress can scale, and it can. Ingress can be scaled like any other deployment, and you can also have it autoscale based on internal or external metrics (an external metric, like the number of requests handled, is probably the best choice).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note 2 (out of scope)&lt;/strong&gt;: Ingress can, in some cases, be deployed as a DaemonSet, to ensure scale and distribution across the nodes.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Wrap
&lt;/h4&gt;

&lt;p&gt;This was a first pass through the structure and usage of the k8s services and networking capabilities that we need in order to organize communication inside and outside of the cluster.&lt;/p&gt;

&lt;p&gt;As always, I tried to provide a to-the-point, battle-tested guide to reality… What is written above should give you enough knowledge to deploy an ingress and set up a basic set of rules to route traffic to your app, and give you context for further fine-tuning of your setup.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important piece of advice:&lt;/strong&gt; Make sure to keep all of these setups as code, in files in your repo. Infrastructure as code is an essential part of making your application reliable.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>kubernetes</category>
      <category>services</category>
      <category>nginx</category>
    </item>
    <item>
      <title>Local Kubernetes setup with minikube on Mac OS X</title>
      <dc:creator>Rastko Vukašinović</dc:creator>
      <pubDate>Fri, 09 Aug 2019 23:00:23 +0000</pubDate>
      <link>https://dev.to/metaphorical/local-kubernetes-setup-with-minikube-on-mac-os-x-331a</link>
      <guid>https://dev.to/metaphorical/local-kubernetes-setup-with-minikube-on-mac-os-x-331a</guid>
      <description>&lt;h1&gt;
  
  
  Kubernetes, container registry, Helm…
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/setup/minikube/"&gt;Minikube&lt;/a&gt; is ideal tool to setup &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; (k8s from now on) locally to test and experiment with your deployments.&lt;/p&gt;

&lt;p&gt;In this guide I will try to help you get it up and running on your local machine, drop some tips on where and how particular things should be done, and also make it Helm-capable (I assume that when you use k8s, at some point you will want to learn about and use Helm, etcd, Istio, etc.).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;This is your local k8s environment scaffolding guide.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Minikube installation
&lt;/h2&gt;

&lt;p&gt;Minikube runs Kubernetes inside a virtual machine, and for this it can use various hypervisors depending on your preference and operating system. My preference in this case is Oracle’s VirtualBox.&lt;/p&gt;

&lt;p&gt;You can use brew to install everything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ brew cask install virtualbox minikube
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In this case you could get some kind of inconclusive installation error related to the VirtualBox installation, especially on Mojave and probably everything after it.&lt;/p&gt;

&lt;p&gt;Whatever it says, it is most probably a macOS security feature that is in your way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Go to System Preferences &amp;gt; Security &amp;amp; Privacy&lt;/strong&gt; and on the &lt;strong&gt;General&lt;/strong&gt; screen you will see one (or a few) messages about software needing approval to install. You should carefully review the list if there is more than one entry and allow installation of the software you need, in this case the software by Oracle.&lt;/p&gt;

&lt;p&gt;After that is done, you can re-run the command above, and once it finishes you should be ready for the next steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running and accessing the cluster
&lt;/h2&gt;

&lt;p&gt;Starting it would be as easy as&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ minikube start
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In order to optimally utilize your local machine’s resources, I would suggest stopping it when you do not need it any more… With VirtualBox at the center of it, it will go through your laptop’s battery pretty quickly. &lt;/p&gt;

&lt;p&gt;To stop it (starting it again later will get you back where you left off):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ minikube stop
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Kubernetes dashboard is also available to you (while minikube is running):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ minikube dashboard
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;I will assume you have kubectl installed locally and that you are already using it for some remote clusters, so you have multiple contexts. In this case, you need to list the contexts and switch to the minikube one (the following commands assume the default name, which is, of course, “minikube”):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl config get-contexts
$ kubectl config use-context minikube
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now you are in the context of your local k8s cluster that runs on minikube and you can do all the k8s things in it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ingress controller
&lt;/h2&gt;

&lt;p&gt;To run your deployments that have ingress (and I assume most of them will), you will need the ingress add-on:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube addons enable ingress
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Make sure that you set up ingress based on your local hosts. It basically means that &lt;strong&gt;whatever you set as host in your ingress rules needs to be set up in your /etc/hosts file:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[minikube ip] your.host
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Where “[minikube ip]” should be replaced with the actual minikube IP. It also works with multiple, space-separated local hosts after the minikube IP.&lt;br&gt;
Here is a shortcut to do it in bash:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ echo "$(minikube ip) local.host" | sudo tee -a /etc/hosts
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Container registry — i.e. Docker registry
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;The reality of real container registry usage in a local environment is a rough one, so I will provide an easy, quick-and-dirty option that makes it simple to deploy your local work to your local k8s, but deprives you of the important experience of using a proper container registry.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Local container registry
&lt;/h3&gt;

&lt;p&gt;Point your local docker client at the minikube docker daemon:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ eval $(minikube docker-env)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;To revert: &lt;strong&gt;$ eval $(docker-machine env -u)&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  When in minikube context, to start local docker registry:
&lt;/h4&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So, now you have a local registry to push images to (as long as your docker is in the minikube context).&lt;/p&gt;

&lt;h4&gt;
  
  
  You can now do:
&lt;/h4&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker build . -t &amp;lt;your_tag&amp;gt;
$ docker tag &amp;lt;your_tag&amp;gt; localhost:5000/&amp;lt;your_tag&amp;gt;:&amp;lt;version&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;At this point you can use &lt;strong&gt;localhost:5000/&amp;lt;your_tag&amp;gt;:&amp;lt;version&amp;gt;&lt;/strong&gt; as the image in your deployment and that is it.&lt;/p&gt;
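&lt;p&gt;A minimal sketch of how that image reference looks in a deployment’s pod spec (the container name, tag and version below are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;spec:
  containers:
  # image pulled from the local registry started above
  - name: my-app
    image: localhost:5000/my-app:0.1.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;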

&lt;h2&gt;
  
  
  Using remote container repo
&lt;/h2&gt;

&lt;p&gt;To use a remote container repo locally, you need to provide a way to authenticate, which is done through k8s secrets.&lt;/p&gt;

&lt;p&gt;For local secrets management for ECR, GCR and Docker registry, I recommend using the minikube add-on called registry-creds. I do not consider it safe enough to be used anywhere but in a local environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ minikube addons configure registry-creds
$ minikube addons enable registry-creds
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note on ECR setup&lt;/strong&gt;: If you are setting it up for AWS ECR and you do not have a role ARN you want to use (you usually won’t have one, and it is optional), make sure you set it to something random like “changeme”… It requires a value; if you just press enter (since it is optional), deployment of the creds pod will fail and make your life miserable.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In the case of AWS ECR, that will let you pull from your repo directly, setting the repo URL as the container image and adding a pull secret named awsecr-cred:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;imagePullSecrets:
      - name: awsecr-cred
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
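&lt;p&gt;For context, here is a sketch of where that fragment sits in a pod spec; the container name and the ECR placeholders are hypothetical:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;spec:
  containers:
  - name: my-app
    # repo URL set directly as container image
    image: &amp;lt;aws_account_id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com/my-app:latest
  # pull secret created by the registry-creds add-on
  imagePullSecrets:
  - name: awsecr-cred
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;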



&lt;p&gt;I have to note here that running this locally was quite chaotic for me; every session was a new experience and a new hack to make it work… Not a happy path.&lt;/p&gt;

&lt;h2&gt;
  
  
  Helm
&lt;/h2&gt;

&lt;p&gt;Helm is a package manager for k8s, and it is often used for configuration management across deployments. Given the tool’s high popularity and rising adoption, I want to end this guide with a note about adding Helm to your local k8s environment.&lt;/p&gt;

&lt;p&gt;It is quite easy at this point; just have minikube up and:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ brew install kubernetes-helm
$ helm init
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;This information should be deprecated pretty soon, but currently Helm uses a backend called &lt;strong&gt;Tiller&lt;/strong&gt;, and that is what gets installed/deployed during &lt;strong&gt;helm init&lt;/strong&gt; execution.&lt;/p&gt;

&lt;p&gt;You can check the Tiller deployment with: &lt;em&gt;$ kubectl describe deploy tiller-deploy --namespace=kube-system&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Valuable read:&lt;/strong&gt; &lt;a href="https://docs.helm.sh/using_helm/"&gt;https://docs.helm.sh/using_helm/&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Now you have a full local k8s environment able to accept all of your test deployments before you decide to put them in the cloud (or on a “raw iron” server somewhere).
&lt;/h4&gt;

&lt;h1&gt;
  
  
  HAPPY SCALING
&lt;/h1&gt;

</description>
      <category>kubernetes</category>
      <category>macos</category>
      <category>minikube</category>
    </item>
  </channel>
</rss>
