<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kamesh Pemmaraju</title>
    <description>The latest articles on DEV Community by Kamesh Pemmaraju (@kpemmaraju).</description>
    <link>https://dev.to/kpemmaraju</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F284242%2Faeabf626-b23f-47cf-acf6-8881269b4e8e.jpg</url>
      <title>DEV Community: Kamesh Pemmaraju</title>
      <link>https://dev.to/kpemmaraju</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kpemmaraju"/>
    <language>en</language>
    <item>
      <title>How to Set Up and Run Kafka on K8s</title>
      <dc:creator>Kamesh Pemmaraju</dc:creator>
      <pubDate>Thu, 14 May 2020 16:29:23 +0000</pubDate>
      <link>https://dev.to/kpemmaraju/how-to-set-up-and-run-kafka-on-k8s-4dm9</link>
      <guid>https://dev.to/kpemmaraju/how-to-set-up-and-run-kafka-on-k8s-4dm9</guid>
      <description>&lt;p&gt;Apache Kafka is a leading open-source distributed streaming platform first developed at LinkedIn. It consists of several APIs such as the Producer, the Consumer, the Connector and the Streams. Together, those systems act as high-throughput, low-latency platforms for handling real-time data. This is why Kafka is preferred among several of the top-tier tech companies such as Uber, Zalando and AirBnB.&lt;/p&gt;

&lt;p&gt;Quite often, we want to deploy a fully-fledged Kafka cluster in Kubernetes because we have a collection of microservices that need a resilient message broker at their center. We also want to spread the Kafka instances across nodes to minimize the impact of a failure.&lt;/p&gt;

&lt;p&gt;In the following tutorial, the Platform9 technical team presents an example Kafka deployment within the Platform9 Free Tier Kubernetes platform, backed up by some DigitalOcean droplets.&lt;/p&gt;

&lt;p&gt;Let’s get started.&lt;/p&gt;

&lt;h2&gt;Setting Up the Platform9 Free Tier Cluster&lt;/h2&gt;

&lt;p&gt;Below are the brief instructions to get you up and running with a working Kubernetes Cluster from Platform9:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign up with &lt;a href="https://platform9.com/signup/"&gt;Platform9&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Click the Create Cluster button and inspect the instructions. We need a server to host the cluster.&lt;/li&gt;
&lt;li&gt;Create a few Droplets with at least 3GB RAM and 2 vCPUs. Follow the instructions to install the &lt;strong&gt;pf9ctl tool&lt;/strong&gt; and prep the nodes:&lt;/li&gt;
&lt;/ol&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;$ bash &amp;lt;(curl -sL http://pf9.io/get_cli)
$ pf9ctl cluster prep-node -i
&lt;/code&gt;&lt;/pre&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Switch to the Platform9 UI and click the refresh button. You should see the new nodes in the list. Assign the first node as a master and the other ones as workers.&lt;/li&gt;
&lt;li&gt;Leave the default values in the next steps. Then create the cluster.&lt;/li&gt;
&lt;li&gt;Wait until the cluster becomes healthy. It will take at least 20 minutes to finish.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on the &lt;strong&gt;API Access&lt;/strong&gt; tab and click the download kubeconfig button:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iSoTAly8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/i7pcubs4qse5o7xfipak.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iSoTAly8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/i7pcubs4qse5o7xfipak.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once downloaded, export the config and test the cluster health:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;br&gt;
$ export KUBECONFIG=/Users/itspare/Theo/Projects/platform9/example.yaml&lt;br&gt;
$ kubectl cluster-info


&lt;p&gt;Kubernetes master is running at &lt;a href="https://134.122.106.235"&gt;https://134.122.106.235&lt;/a&gt;&lt;br&gt;
CoreDNS is running at &lt;a href="https://134.122.106.235/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy"&gt;https://134.122.106.235/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy&lt;/a&gt;&lt;br&gt;
Metrics-server is running at &lt;a href="https://134.122.106.235/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy"&gt;https://134.122.106.235/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy&lt;/a&gt;&lt;/p&gt;

&lt;/code&gt;&lt;/pre&gt;    

&lt;h2&gt;Creating Persistent Volumes&lt;/h2&gt;

&lt;p&gt;Before we install Helm and the Kafka chart, we need to create some persistent volumes for storing Kafka replication message files.&lt;/p&gt;

&lt;p&gt;This step is crucial for enabling persistence in our cluster, because without it the topics and messages, which live in memory, would disappear after we shut down any of the servers.&lt;/p&gt;

&lt;p&gt;In our example, we are going to use local-filesystem Persistent Volumes (PVs), and we need one persistent volume for each Kafka instance; so if we plan to deploy three instances, we need three PVs.&lt;/p&gt;

&lt;p&gt;First, create and apply the Kafka namespace and the PV specs:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;
$ cat namespace.yml
---
apiVersion: v1
kind: Namespace
metadata:
  name: kafka

$ kubectl apply -f namespace.yml
namespace/kafka created


$ cat pv.yml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv-volume-2
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv-volume-3
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

$ kubectl apply -f pv.yml

&lt;/code&gt;&lt;/pre&gt;    

&lt;p&gt;If you are using the Kubernetes UI, you should be able to see the PV volumes on standby:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XynIUtGS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8hag1ucjnsicvnyycb6d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XynIUtGS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8hag1ucjnsicvnyycb6d.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Installing Helm&lt;/h2&gt;

&lt;p&gt;We begin by installing the Helm client on our computer and then installing Helm's server-side component, Tiller, in the Kubernetes cluster, as it's not bundled by default.&lt;/p&gt;

&lt;p&gt;First, we download the install script:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get &amp;gt; install-helm.sh
&lt;/code&gt;&lt;/pre&gt;    

&lt;p&gt;Make the script executable with chmod:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;
$ chmod u+x install-helm.sh
&lt;/code&gt;&lt;/pre&gt;    

&lt;p&gt;Create the tiller service account:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;
$ kubectl -n kube-system create serviceaccount tiller
&lt;/code&gt;&lt;/pre&gt;    

&lt;p&gt;Next, bind the tiller serviceaccount to the cluster-admin role:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
&lt;/code&gt;&lt;/pre&gt;    

&lt;p&gt;Now we can run helm init:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;
$ helm init --service-account tiller
&lt;/code&gt;&lt;/pre&gt;    

&lt;p&gt;Now we are ready to install the Kafka chart.&lt;/p&gt;

&lt;h2&gt;Deploying the Helm Chart&lt;/h2&gt;

&lt;p&gt;In the past, deploying Kafka on Kubernetes was quite an exercise. You had to deploy a working Zookeeper cluster, role bindings and persistent volume claims, and apply the correct configuration.&lt;/p&gt;

&lt;p&gt;Fortunately for us, with the use of the &lt;a href="https://github.com/helm/charts/tree/0ca37cc106467190bd705aff647d2c7361e1d6f1/incubator/kafka"&gt;Kafka Incubator Chart&lt;/a&gt;, the whole process is mostly automated (with a few quirks here and there).&lt;/p&gt;

&lt;p&gt;We add the incubator chart repository:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
&lt;/code&gt;&lt;/pre&gt;    

&lt;p&gt;Export the chart values in a file:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;
$ curl https://raw.githubusercontent.com/helm/charts/master/incubator/kafka/values.yaml &amp;gt; config.yml
&lt;/code&gt;&lt;/pre&gt;    

&lt;p&gt;Carefully inspect the configuration values, particularly the persistence settings and the number of Kafka stateful set replicas to deploy.&lt;/p&gt;
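&lt;p&gt;As a rough guide (key names may differ between chart versions, so verify against the file you exported), the settings to look for are along these lines:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;## Number of Kafka brokers (stateful set replicas)
replicas: 3

persistence:
  enabled: true
  size: "10Gi"
  ## Match the storage class of the PVs created earlier
  storageClass: "manual"
&lt;/code&gt;&lt;/pre&gt;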

&lt;p&gt;Then install the chart:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;
$ helm install --name kafka-demo --namespace kafka incubator/kafka -f config.yml --debug
&lt;/code&gt;&lt;/pre&gt;    

&lt;p&gt;Check the status of the deployment:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;
$ helm status kafka-demo
LAST DEPLOYED: Sun Apr 19 14:05:15 2020
NAMESPACE: kafka
STATUS: DEPLOYED

RESOURCES:
==&amp;gt; v1/ConfigMap
NAME                  DATA  AGE
kafka-demo-zookeeper  3     5m29s

==&amp;gt; v1/Pod(related)
NAME                    READY  STATUS   RESTARTS  AGE
kafka-demo-0            1/1    Running  0         5m28s
kafka-demo-1            1/1    Running  0         4m50s
kafka-demo-2            1/1    Running  0         4m12s
kafka-demo-zookeeper-0  1/1    Running  0         5m28s
kafka-demo-zookeeper-1  1/1    Running  0         4m50s
kafka-demo-zookeeper-2  1/1    Running  0         4m12s

==&amp;gt; v1/Service
NAME                           TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)                     AGE
kafka-demo                     ClusterIP  10.21.255.214  &amp;lt;none&amp;gt;       9092/TCP                    5m29s
kafka-demo-headless            ClusterIP  None           &amp;lt;none&amp;gt;       9092/TCP                    5m29s
kafka-demo-zookeeper           ClusterIP  10.21.13.232   &amp;lt;none&amp;gt;       2181/TCP                    5m29s
kafka-demo-zookeeper-headless  ClusterIP  None           &amp;lt;none&amp;gt;       2181/TCP,3888/TCP,2888/TCP  5m29s

==&amp;gt; v1/StatefulSet
NAME                  READY  AGE
kafka-demo            3/3    5m28s
kafka-demo-zookeeper  3/3    5m28s

==&amp;gt; v1beta1/PodDisruptionBudget
NAME                  MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
kafka-demo-zookeeper  N/A            1                1                    5m29s
&lt;/code&gt;&lt;/pre&gt;    

&lt;p&gt;During this phase, you may want to navigate to the Kubernetes UI and inspect the dashboard for any issues. Once everything is complete, the pods and Persistent Volume Claims should be bound and green.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5lReTs9T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/11opfsstpiohr2hea5lh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5lReTs9T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/11opfsstpiohr2hea5lh.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can test the Kafka cluster.&lt;/p&gt;

&lt;h2&gt;Testing the Kafka Cluster&lt;/h2&gt;

&lt;p&gt;We are going to deploy a test client that will execute scripts against the Kafka cluster.&lt;/p&gt;

&lt;p&gt;Create and apply the following deployment:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;
$ cat testclient.yml

apiVersion: v1
kind: Pod
metadata:
  name: testclient
  namespace: kafka
spec:
  containers:
  - name: kafka
    image: solsson/kafka:0.11.0.0
    command:
      - sh
      - -c
      - "exec tail -f /dev/null"

$ kubectl apply -f testclient.yml
&lt;/code&gt;&lt;/pre&gt;    

&lt;p&gt;Then, using the testclient, we create the first topic, which we are going to use to post messages:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;
$ kubectl -n kafka exec -ti testclient -- ./bin/kafka-topics.sh --zookeeper kafka-demo-zookeeper:2181 --topic messages --create --partitions 1 --replication-factor 1
Created topic "messages".
&lt;/code&gt;&lt;/pre&gt;    

&lt;p&gt;Note that we need to use the correct hostname for the Zookeeper cluster here, along with our desired topic configuration.&lt;/p&gt;

&lt;p&gt;Next, verify that the topic exists:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;
$ kubectl -n kafka exec -ti testclient -- ./bin/kafka-topics.sh --zookeeper kafka-demo-zookeeper:2181 --list
messages
&lt;/code&gt;&lt;/pre&gt;    

&lt;p&gt;Now we can create one consumer and one producer instance so that we can send and consume messages.&lt;/p&gt;

&lt;p&gt;First, create one or two consumers, each in its own shell:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;
$ kubectl -n kafka exec -ti testclient -- ./bin/kafka-console-consumer.sh --bootstrap-server kafka-demo:9092 --topic messages --from-beginning
&lt;/code&gt;&lt;/pre&gt;    

&lt;p&gt;Then create the producer session and type some messages. You will be able to see them propagate to the consumer sessions:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;
$ kubectl -n kafka exec -ti testclient -- ./bin/kafka-console-producer.sh --broker-list kafka-demo:9092 --topic messages
&amp;gt;Hi
&amp;gt;How are you?
&amp;gt;Hope you're well
&amp;gt;



Hi
How are you?
Hope you're well
&lt;/code&gt;&lt;/pre&gt;    

&lt;h2&gt;Destroying the Helm Chart&lt;/h2&gt;

&lt;p&gt;To clean up our resources, we just destroy the Helm Chart and delete the PVs we created earlier:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;
$ helm delete kafka-demo --purge
$ kubectl delete -f pv.yml -n kafka
&lt;/code&gt;&lt;/pre&gt;    

&lt;h2&gt;Next Steps&lt;/h2&gt;

&lt;p&gt;Stay tuned for more tutorials showcasing common deployment scenarios within Platform9’s fully-managed Kubernetes platform.&lt;/p&gt;


&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>kubernetes</category>
      <category>kafka</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to set-up an NGINX Ingress Controller on PMKFT</title>
      <dc:creator>Kamesh Pemmaraju</dc:creator>
      <pubDate>Thu, 30 Apr 2020 20:56:58 +0000</pubDate>
      <link>https://dev.to/kpemmaraju/setting-up-an-nginx-ingress-controller-on-pmkft-1mlk</link>
      <guid>https://dev.to/kpemmaraju/setting-up-an-nginx-ingress-controller-on-pmkft-1mlk</guid>
      <description>&lt;p&gt;The vast majority of Kubernetes clusters are used to host containers that process incoming requests from microservices to full web applications. Having these incoming requests come into a central location, then get handed out via services in Kubernetes, is the most secure way to configure a cluster. That central incoming point is an ingress controller.&lt;/p&gt;

&lt;p&gt;The most common product used as an ingress controller for privately-hosted Kubernetes clusters is NGINX. NGINX has most of the features enterprises are looking for, and will work as an ingress controller for Kubernetes regardless of which cloud, virtualization platform, or Linux operating system Kubernetes is running on.&lt;/p&gt;

&lt;p&gt;In the following tutorial, the Platform9 technical team presents a how-to, step-by-step guide for setting up an NGINX Ingress Controller on the &lt;a href="https://platform9.com/signup/"&gt;free version of Platform9 Managed Kubernetes&lt;/a&gt; - a SaaS-managed solution that allows anyone to instantly deploy open-source Kubernetes on-premises, AWS, or Azure.&lt;/p&gt;

&lt;h2&gt;First Steps&lt;/h2&gt;

&lt;p&gt;The first step required to use NGINX as an Ingress controller on a Platform9 managed Kubernetes cluster is to have a running Kubernetes cluster.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PIMklijC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nzmofibzm6dg7eij2zp4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PIMklijC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nzmofibzm6dg7eij2zp4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this case, the cluster we will be using is called “ingress-test” and it is listed as healthy. It is a single node cluster running on an Ubuntu 16.04 server.&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;% ssh root@64.227.56.189
Welcome to Ubuntu 16.04.6 LTS &lt;span class="o"&gt;(&lt;/span&gt;GNU/Linux 4.4.0-173-generic x86_64&lt;span class="o"&gt;)&lt;/span&gt;
root@pmkft:~# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
64.227.56.189   Ready    master   10h   v1.14.8
root@pmkft:~# kubectl get namespaces
NAME              STATUS   AGE
default           Active   11h
kube-node-lease   Active   11h
kube-public       Active   11h
kube-system       Active   11h
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Running “kubectl get nodes” and “kubectl get namespaces” confirm that authentication is working, the cluster nodes are ready, and there are no NGINX Ingress controllers configured.&lt;/p&gt;

&lt;h2&gt;Mandatory Components for an NGINX Ingress Controller&lt;/h2&gt;

&lt;p&gt;An ingress controller, because it is a core component of Kubernetes, requires configuring more moving parts of the cluster than just deploying a pod and a route.&lt;/p&gt;

&lt;p&gt;In the case of NGINX, its recommended configuration has three ConfigMaps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Base Deployment&lt;/li&gt;
&lt;li&gt;TCP configuration&lt;/li&gt;
&lt;li&gt;UDP configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A service account runs the service within the cluster, and that service account will be assigned a couple of roles.&lt;/p&gt;

&lt;p&gt;A cluster role is assigned to the service account, which allows it to get, list, and read the configuration of all services and events. This could be limited if you were to have multiple ingress controllers. But in most cases, that is overkill.&lt;/p&gt;
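&lt;p&gt;As an illustration, abbreviated from the mandatory manifest (consult the file itself for the authoritative, complete rule set), the cluster role grants read access along these lines:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  # Watch cluster-wide configuration and endpoints
  - apiGroups: [""]
    resources: ["configmaps", "endpoints", "nodes", "pods", "secrets"]
    verbs: ["list", "watch"]
  # Read the configuration of all services
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "watch"]
  # Record events for observability
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
&lt;/code&gt;&lt;/pre&gt;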

&lt;p&gt;A namespace-specific role is assigned to the service account to read and update all the ConfigMaps and other items that are specific to the NGINX Ingress controller’s own configuration.&lt;/p&gt;

&lt;p&gt;The last piece is the actual pod deployment into its own namespace to make it easy to draw boundaries around it for security and resource quotas.&lt;/p&gt;

&lt;p&gt;The deployment specifies which ConfigMaps will be referenced, the container image and command line that will be used, and any other specific information around how to run the actual NGINX Ingress controller.&lt;/p&gt;

&lt;p&gt;NGINX maintains a single file in GitHub, linked to from the &lt;a href="https://kubernetes.github.io/ingress-nginx/"&gt;Kubernetes documentation&lt;/a&gt;, that has all of this configuration spelled out in YAML and ready to deploy.&lt;/p&gt;

&lt;p&gt;To apply this configuration, the command to run is:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.28.0/deploy/static/mandatory.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Which will generate the following output:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Exposing the NGINX Ingress Controller&lt;/h2&gt;

&lt;p&gt;Once the base configuration is in place, the next step is to expose the NGINX Ingress Controller to the outside world to allow it to start receiving connections. This could be through a load-balancer like on AWS, GCP, or Azure. The other option when deploying on your own infrastructure, or a cloud provider with less capabilities, is to create a service with a NodePort to allow access to the Ingress Controller.&lt;/p&gt;

&lt;p&gt;The NGINX-provided service-nodeport.yaml file, which is located in GitHub, defines a service that runs on ports 80 and 443. It can be applied using a single command line, as before:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;root@pmkft:~# kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.28.0/deploy/static/provider/baremetal/service-nodeport.yaml
service/ingress-nginx created
&lt;/code&gt;&lt;/pre&gt;
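&lt;p&gt;The applied manifest boils down to a plain NodePort service; abbreviated, it looks roughly like the following (check the file in GitHub for the exact labels and annotations in your version):&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort          # exposes the controller on a high port of every node
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
&lt;/code&gt;&lt;/pre&gt;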

&lt;h2&gt;Validating the NGINX Ingress Controller&lt;/h2&gt;

&lt;p&gt;The final step is to make sure the Ingress controller is running.&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;root@pmkft:~# kubectl get pods &lt;span class="nt"&gt;--all-namespaces&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; app.kubernetes.io/name&lt;span class="o"&gt;=&lt;/span&gt;ingress-nginx
NAMESPACE       NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-6c7686c6b4-stnq7   1/1     Running   0          6m36s
root@pmkft:~# kubectl get services ingress-nginx &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ingress-nginx
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;                      AGE
ingress-nginx   NodePort   10.21.83.193   &amp;lt;none&amp;gt;        80:30757/TCP,443:31353/TCP   34m
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Install via Helm&lt;/strong&gt;&lt;br&gt;
Platform9 supports Helm 3, which is available to anyone who wants to deploy using that method and is often much easier to manage.&lt;/p&gt;

&lt;p&gt;To install an &lt;strong&gt;NGINX Ingress controller&lt;/strong&gt; using Helm, use the chart &lt;code class="language-plaintext highlighter-rouge"&gt;stable/nginx-ingress&lt;/code&gt;, which is available in the official repository. To install the chart with the release name ingress-nginx (Helm 3 takes the release name as the first argument):&lt;br&gt;
&lt;code class="language-plaintext highlighter-rouge"&gt;helm install ingress-nginx stable/nginx-ingress&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If the Kubernetes cluster has RBAC enabled, then run:&lt;br&gt;
&lt;code class="language-plaintext highlighter-rouge"&gt;helm install ingress-nginx stable/nginx-ingress --set rbac.create=true&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;Exposing Services using NGINX Ingress Controller&lt;/h2&gt;

&lt;p&gt;Now that an ingress controller is running in the cluster, you will need to create services that leverage it, using host mapping, URI mapping, or both.&lt;/p&gt;

&lt;p&gt;Sample of a host-based service mapping through an ingress controller using the type “Ingress”:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;extensions/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-world&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;kubernetes.io/ingress.class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;host1.domain.ext&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-world&lt;/span&gt;
          &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Using a URI involves the same basic layout, but specifying more details in the “paths” section of the yaml file. When TLS encryption is required, then you will need to have certificates stored as secrets inside Kubernetes. This can be done manually or with an open source tool like cert-manager. The yaml file needs a little extra information to enable TLS (mapping from port 443 to port 80 is done in the ingress controller):&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;extensions/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-world&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;kubernetes.io/ingress.class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-nginx&lt;/span&gt;
    &lt;span class="s"&gt;cert-manager.io/cluster-issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;letsencrypt-prod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;host1.domain.ext&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;host2.domain.ext&lt;/span&gt;
    &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-kubernetes-tls&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;host1.domain.ext&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-world&lt;/span&gt;
          &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
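&lt;p&gt;For the URI mapping mentioned earlier, each entry under the “paths” section simply gains a path field. A hypothetical sketch routing two URIs of the same host to different backends (the hello-api service name is an assumption for illustration):&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world
  annotations:
    kubernetes.io/ingress.class: ingress-nginx
spec:
  rules:
  - host: host1.domain.ext
    http:
      paths:
      - path: /api
        backend:
          serviceName: hello-api    # hypothetical backend service
          servicePort: 80
      - path: /
        backend:
          serviceName: hello-world
          servicePort: 80
&lt;/code&gt;&lt;/pre&gt;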

&lt;h2&gt;Next Steps&lt;/h2&gt;

&lt;p&gt;With a fully-functioning cluster and ingress controller, even a single-node one, you are ready to start building and testing applications just like you would in your production environment, with the same ability to test your configuration files and application traffic routing. You just have some capacity limitations that you won’t encounter on true multi-node clusters.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to get your first container up on K8s using the Free Version of Platform9 Managed Kubernetes</title>
      <dc:creator>Kamesh Pemmaraju</dc:creator>
      <pubDate>Wed, 08 Apr 2020 19:29:31 +0000</pubDate>
      <link>https://dev.to/kpemmaraju/get-your-first-container-up-on-k8s-using-the-free-version-of-platform9-managed-kubernetes-3d5m</link>
      <guid>https://dev.to/kpemmaraju/get-your-first-container-up-on-k8s-using-the-free-version-of-platform9-managed-kubernetes-3d5m</guid>
      <description>&lt;p&gt;Kubernetes in the leading Container Orchestration platform that allows you to apply fast and streamlined infrastructure workloads using a declarative API.&lt;/p&gt;

&lt;p&gt;In the following tutorial, the Platform9 technical team presents a step-by-step guide for signing up with the Platform9 Managed Kubernetes platform, creating a new cluster and deploying an example application. Then we will see how to scale our application instances up and down, and how to roll out a new, updated instance of our app.&lt;/p&gt;

&lt;p&gt;Let’s get started.&lt;/p&gt;

&lt;h2&gt;Sign-up&lt;/h2&gt;

&lt;p&gt;To gain the benefits of Platform9, we need to register a new account.&lt;/p&gt;

&lt;p&gt;Head over to the sign-up page located here: &lt;a href="https://bit.ly/3a3YMwu"&gt;SignUp Page&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fill in your details in the Form:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aOlv9nx1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/uj5wgfh0q0isa3no0lox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aOlv9nx1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/uj5wgfh0q0isa3no0lox.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will be asked to provide a verification code sent to your email, and a secure password:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2H0VBESN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/82igaceubidr5s8re056.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2H0VBESN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/82igaceubidr5s8re056.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Platform9 Managed Kubernetes (PMK) provides us with a pure-play open source Kubernetes delivered as a SaaS managed service.&lt;/p&gt;

&lt;p&gt;Once there, you will be presented with the Dashboard Screen. Initially, we have no cluster configured, so we need to create one.&lt;/p&gt;

&lt;h2&gt;Setting Up a Cluster&lt;/h2&gt;

&lt;p&gt;First, let's get familiar with the main pages shown in the sidebar:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Infrastructure: This section lists an overview of all the clusters, nodes and cloud providers.&lt;/li&gt;
&lt;li&gt;Pods/Deployments/Services: This section lists all pods, deployments and services.&lt;/li&gt;
&lt;li&gt;Storage Classes: This section lists all the storage classes.&lt;/li&gt;
&lt;li&gt;Namespaces: This section lists all cluster namespaces.&lt;/li&gt;
&lt;li&gt;RBAC: This section shows information about configured RBAC roles, and role bindings.&lt;/li&gt;
&lt;li&gt;API Access: This section indicates information about API access tokens and kubeconfig settings.&lt;/li&gt;
&lt;li&gt;Users/Roles: This section displays information about users and defined roles.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that you are familiar with the UI Dashboard, let’s create a new K8s cluster where we are going to deploy an example application container.&lt;/p&gt;

&lt;p&gt;Click on the Dashboard Link and then click on Create Your Cluster:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9NQz7pTz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/93p2rwr213czihuuw1xa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9NQz7pTz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/93p2rwr213czihuuw1xa.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the next screen, choose the Cloud Provider. Here we can select BareOS, which can be a local or remote VM:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DhBA2kSu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/e0fjvjnhb9l86zfovc1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DhBA2kSu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/e0fjvjnhb9l86zfovc1d.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Follow the instructions to prepare some Kubernetes nodes. Pay attention to the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For this tutorial, I used two Scaleway Instances and bootstrapped a cluster node on each VM.

&lt;ul&gt;
&lt;li&gt; We can follow the same process with VirtualBox VMs running Ubuntu Xenial images; just make sure they meet the hardware requirements.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;We used the following command to prep each node:&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;pf9ctl cluster prep-node &lt;span class="nt"&gt;-u&lt;/span&gt; root &lt;span class="nt"&gt;-i&lt;/span&gt; &amp;lt;NodeIP&amp;gt; &lt;span class="nt"&gt;-s&lt;/span&gt; &amp;lt;sshKey&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If the node preparation progress gets stuck at 95%, press CTRL+C to unblock it. It should then show the success message in the console.&lt;/p&gt;

&lt;p&gt;Once you have prepared the instances, go into the UI again and refresh the screen. After a while you will be able to proceed with the master and the worker selection screens, as depicted in the images below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k7e1AIl1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ihe6xpsu7t9gefq26c06.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k7e1AIl1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ihe6xpsu7t9gefq26c06.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YMdKkwnL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5tg1foegzxwx53whx4wb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YMdKkwnL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5tg1foegzxwx53whx4wb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the end you should have a basic cluster configured and prepped, and you will be able to see the node stats and system pods as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z80pzN2x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pq5zsjz2q75hoae86ym7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z80pzN2x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pq5zsjz2q75hoae86ym7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, before we proceed, make sure the cluster is healthy by inspecting the Master Pods in the Pods/Deployments page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XCRKHLcF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jzpvf5n684q9gcgzgwi3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XCRKHLcF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jzpvf5n684q9gcgzgwi3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that we are ready to issue our first deployment.&lt;/p&gt;

&lt;h2&gt;Issuing a Deployment&lt;/h2&gt;

&lt;p&gt;Once we have our cluster ready, we can download the &lt;code class="language-plaintext highlighter-rouge"&gt;kubeconfig&lt;/code&gt; file and use it as our context. This way, we can also use the &lt;code class="language-plaintext highlighter-rouge"&gt;kubectl&lt;/code&gt; CLI tool for the remainder of the tutorial.&lt;/p&gt;

&lt;p&gt;Navigate to the API Access section and click on the cluster name under the Download kubeconfig section. This will download a YAML file. We need to use this as the cluster config:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--25JOmNf0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d9err92hfmw9eyn7qrgh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--25JOmNf0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d9err92hfmw9eyn7qrgh.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In a new shell, export the KUBECONFIG environment variable pointing to the downloaded file location:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;KUBECONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/test.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then, issuing the get-clusters command will show us the demo cluster:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl config get-clusters
NAME
&lt;span class="nb"&gt;test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
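&lt;p&gt;With the kubeconfig in place, we can also verify from the CLI that both nodes have joined the cluster and are Ready (the node names and versions below are illustrative):&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;$ kubectl get nodes -o wide
NAME      STATUS   ROLES    AGE   VERSION
master    Ready    master   10m   v1.17.0
worker    Ready    worker   9m    v1.17.0
&lt;/code&gt;&lt;/pre&gt;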

&lt;p&gt;Now let's see how we can issue our first deployment. Click on &lt;strong&gt;Pods/Deployments&lt;/strong&gt; and select &lt;strong&gt;Deployment&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Fill in the &lt;strong&gt;New Resource&lt;/strong&gt; form by selecting the cluster and the default namespace.&lt;/p&gt;

&lt;p&gt;For the &lt;strong&gt;Deployment Description&lt;/strong&gt; we can use a simple Node.js Application as an example. Here is the spec:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;extensions/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nodejs-deployment&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nodejs&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nodejs&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nodejs&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;joerx/express-hello-src:latest&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
        &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/headers&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
          &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30&lt;/span&gt;
          &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30&lt;/span&gt;
        &lt;span class="na"&gt;readinessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/headers&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
          &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;15&lt;/span&gt;
          &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mT3BFPg7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/s72labxtinxobnv5hg6w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mT3BFPg7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/s72labxtinxobnv5hg6w.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is a small breakdown of the above specification:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;apiVersion: The API version of the Deployment spec; we used &lt;a href="https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-apiversion-definition-guide.html"&gt;this reference site&lt;/a&gt; as a guide.&lt;/li&gt;
&lt;li&gt;kind: The resource type, a Deployment.&lt;/li&gt;
&lt;li&gt;metadata: A name for our deployment, nodejs-deployment.&lt;/li&gt;
&lt;li&gt;spec: The deployment specification. We assign a label, a label selector, one replica instance and the container image information. We also added liveness and readiness probes to report the application’s health.&lt;/li&gt;
&lt;/ul&gt;
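&lt;p&gt;If you prefer the CLI to the UI form, the same spec can be applied with kubectl. Assuming you saved it locally as nodejs-deployment.yml (a hypothetical filename), a minimal sketch:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;# Create (or update) the deployment from the spec file
$ kubectl apply -f nodejs-deployment.yml
# Wait until the new pods pass their readiness probes
$ kubectl rollout status deployment nodejs-deployment
# List the pods belonging to this deployment, by label
$ kubectl get pods -l app=nodejs
&lt;/code&gt;&lt;/pre&gt;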

&lt;p&gt;When saved, we can see the phases of the deployment on the dashboard. If you click on the name, you will be redirected to the Kubernetes Dashboard. Once it’s ready, though, we cannot access it directly because the server is exposed only inside the cluster. We need to add a service specification to expose it publicly. We can use the &lt;strong&gt;Services-&amp;gt;Add Service&lt;/strong&gt; button to do that.&lt;/p&gt;

&lt;p&gt;Here is the spec:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nodejs-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nodejs&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Here we expose the Node.js application using a service named &lt;strong&gt;nodejs-service&lt;/strong&gt;, which maps a node port in the range 30000-32767 to container port 3000, on which our application listens. The key thing to note is the &lt;strong&gt;type: NodePort&lt;/strong&gt;, which exposes the port on the node’s public IP address.&lt;/p&gt;
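&lt;p&gt;Since the spec does not set an explicit nodePort, Kubernetes assigns one from the 30000-32767 range for us. One way to discover the assigned port is a jsonpath query against the service:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;# Print the node port Kubernetes assigned to our service
$ kubectl get service nodejs-service -o jsonpath='{.spec.ports[0].nodePort}'
&lt;/code&gt;&lt;/pre&gt;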

&lt;p&gt;Now, in order to check our application, we need to find the actual IP of the VM node. In this example mine is 51.158.108.219, so we need to open the following path in a browser:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://51.158.108.219:30180/"&gt;http://51.158.108.219:30180/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Indeed, as you can see in the following picture, the server has responded with our IP address:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4vV1mTwk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ttni09yk4m49nlh34g7j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4vV1mTwk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ttni09yk4m49nlh34g7j.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Scaling a Deployment&lt;/h2&gt;

&lt;p&gt;The current deployment spins up one instance of the application. If we want to scale it up, we can use the &lt;code class="language-plaintext highlighter-rouge"&gt;kubectl&lt;/code&gt; CLI tool, or we can use the Kubernetes UI instead. If we click on the deployment link, it will redirect us to the Kubernetes UI. The first time, it will ask for a token, which is present inside the kubeconfig YAML file we downloaded earlier.&lt;/p&gt;

&lt;p&gt;We only need to click the three dots next to the deployment in the Deployments list on the right-hand side, select Scale and then, in the modal, change the number from 1 to 3:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bk_v57RD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xa7ryi32tgn4iytbpx1g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bk_v57RD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xa7ryi32tgn4iytbpx1g.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bCFpcii1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/abpes53bpe7fplzsbad1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bCFpcii1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/abpes53bpe7fplzsbad1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, we can see the pods increasing from one to three in the dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Oiz3Lkdi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/t4au4zod7a9a2jopvbrf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Oiz3Lkdi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/t4au4zod7a9a2jopvbrf.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that there are some limitations with this approach: we currently have only two nodes and three instances, so we can reach only two out of the three pods directly. In subsequent tutorials, we will see how different types of services can overcome this limitation.&lt;/p&gt;
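&lt;p&gt;For reference, the same scale-up can be done from the shell instead of the Kubernetes UI:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;# Scale the deployment from one to three replicas
$ kubectl scale deployment nodejs-deployment --replicas=3
# Confirm the pod count by label
$ kubectl get pods -l app=nodejs
&lt;/code&gt;&lt;/pre&gt;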

&lt;h2&gt;Deploying a new version of the App&lt;/h2&gt;

&lt;p&gt;Similarly, if we want to deploy a newer version of the application, we just need to change the deployed image reference to point to a new image. Using kubectl, this is as easy as issuing the following command:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;set &lt;/span&gt;image deployment &amp;lt;deployment&amp;gt; &amp;lt;container&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;image&amp;gt; &lt;span class="nt"&gt;--record&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;set &lt;/span&gt;image nodejs-deployment  &lt;span class="nv"&gt;nodejs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;joerx/express-hello-src:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Before we do that though, it’s important to define a &lt;strong&gt;Rolling update strategy&lt;/strong&gt; so that we can limit the number of pods being replaced at the same time. We need to add the following section to the deployment spec:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RollingUpdate&lt;/span&gt;
    &lt;span class="na"&gt;rollingUpdate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;maxSurge&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;maxUnavailable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Here we allow a maximum of one pod to be unavailable and a maximum of one extra pod to be created at the same time. This effectively means that there will be, at most, four pods during the update process, as we currently have three pods running (3 + 1 with the new version).&lt;/p&gt;
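&lt;p&gt;Once an update is triggered, the standard kubectl rollout subcommands let us watch its progress and revert it if the new version misbehaves:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;# Watch the rolling update until it completes
$ kubectl rollout status deployment nodejs-deployment
# Inspect previous revisions (recorded via --record)
$ kubectl rollout history deployment nodejs-deployment
# Roll back to the previous revision if something goes wrong
$ kubectl rollout undo deployment nodejs-deployment
&lt;/code&gt;&lt;/pre&gt;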

&lt;p&gt;Also, note one more thing – we started with the following image:&lt;/p&gt;

&lt;p&gt;&lt;code class="language-plaintext highlighter-rouge"&gt;joerx/express-hello-src:latest&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Ideally, we should always be using a version tag. For example:&lt;/p&gt;

&lt;p&gt;&lt;code class="language-plaintext highlighter-rouge"&gt;joerx/express-hello-src:1.0.0&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then, when we want to deploy the next version – such as 1.1.0 – we can set the following docker image:&lt;/p&gt;

&lt;p&gt;&lt;code class="language-plaintext highlighter-rouge"&gt;joerx/express-hello-src:1.1.0&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Any subsequent updates should follow this convention. I leave it as an exercise to the reader to perform a rolling update of an image from an older version to a newer one.&lt;/p&gt;

&lt;h2&gt;Overview and next steps&lt;/h2&gt;

&lt;p&gt;In this tutorial, we walked through the process of signing up with Platform9, setting up our own Kubernetes cluster and issuing an example deployment. We also learned how to navigate to the Kubernetes dashboard, scale up the deployment and expose our app via a service specification.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to Deploy More Complex Microservice Apps with Managed Kubernetes</title>
      <dc:creator>Kamesh Pemmaraju</dc:creator>
      <pubDate>Thu, 02 Apr 2020 15:13:15 +0000</pubDate>
      <link>https://dev.to/kpemmaraju/how-to-deploy-more-complex-microservice-apps-with-managed-kubernetes-4i72</link>
      <guid>https://dev.to/kpemmaraju/how-to-deploy-more-complex-microservice-apps-with-managed-kubernetes-4i72</guid>
      <description>&lt;p&gt;In the following tutorial, the Platform9 technical team shows you how to deploy a more complex microservice. The idea is to help you gain familiarity with a managed Kubernetes service and to show you how you can leverage Platform9 Managed Kubernetes for more advanced scenarios.&lt;/p&gt;

&lt;p&gt;In this example, we are going to see the deployment of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Redis master&lt;/li&gt;
&lt;li&gt;Multiple Redis slaves&lt;/li&gt;
&lt;li&gt;A sample guestbook application that uses Redis as a store&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We assume that you have already set up a Platform9 cluster with at least one node, and the cluster is ready.&lt;/p&gt;

&lt;p&gt;Let’s start with the Redis parts.&lt;/p&gt;

&lt;h2&gt;Deploying and exposing a Redis Cluster&lt;/h2&gt;

&lt;p&gt;In this tutorial, we are going to expand our examples by deploying a more complex microservice. The idea is to make you more comfortable with the platform and to show you how you can leverage it for more advanced scenarios.&lt;/p&gt;

&lt;p&gt;Redis is an in-memory key-value store that is used mainly as a cache service. In order to set up clustering for data replication, we need a Redis instance that acts as the master, together with additional instances acting as slaves. The guestbook application can then use this instance to store data, and the Redis master will propagate writes to the slave nodes.&lt;/p&gt;

&lt;p&gt;We can initiate a Redis master deployment in a few different ways: using the kubectl tool, the Platform9 UI or the Kubernetes UI. For convenience, we use kubectl, as it’s the most commonly used in tutorials.&lt;/p&gt;

&lt;p&gt;First, we need to create a Redis cluster deployment. Looking at the &lt;a href="https://redis.io/topics/cluster-tutorial"&gt;documentation here&lt;/a&gt;, setting up a cluster requires some configuration properties. We can leverage Kubernetes ConfigMaps to store them and reference them in the deployment spec.&lt;/p&gt;

&lt;p&gt;We need to save a script and a redis.conf file that will be used to configure the master and slave nodes.&lt;/p&gt;

&lt;p&gt;Create the following config file, redis-cluster.config.yml, with these values:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ cat redis-cluster.config.yml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis-cluster-config&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="s"&gt;update-ip.sh&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;#!/bin/sh&lt;/span&gt;
    &lt;span class="s"&gt;sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${IP}/" /data/nodes.conf&lt;/span&gt;
    &lt;span class="s"&gt;exec "$@"&lt;/span&gt;
  &lt;span class="s"&gt;redis.conf&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|+&lt;/span&gt;
    &lt;span class="s"&gt;cluster-enabled yes&lt;/span&gt;
    &lt;span class="s"&gt;cluster-config-file /data/nodes.conf&lt;/span&gt;
    &lt;span class="s"&gt;appendonly yes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;We define a script that inserts the pod’s IP into the nodes.conf file. This works around a Redis issue &lt;a href="https://github.com/antirez/redis/issues/4645"&gt;referenced here&lt;/a&gt;. We run this script every time we deploy a new Redis image.&lt;/p&gt;

&lt;p&gt;Then we have the redis.conf, which applies the minimal cluster configuration.&lt;/p&gt;

&lt;p&gt;Apply this spec to the cluster:&lt;br&gt;
&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; redis-cluster.config.yml&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then verify that it exists in the list of configmaps:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get configmaps&lt;/code&gt;&lt;/p&gt;



&lt;p&gt;Next, we need to define a spec for the Redis cluster instances. We can use a Deployment or a StatefulSet; here we use a StatefulSet to define six instances:&lt;/p&gt;

&lt;p&gt;Here is the spec: redis-cluster.statefulset.yml&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ cat redis-cluster.statefulset.yml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;StatefulSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis-cluster&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis-cluster&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis-cluster&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis-cluster&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis:5.0.7-alpine&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6379&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;client&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;16379&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gossip&lt;/span&gt;
        &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/conf/update-ip.sh"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;redis-server"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/conf/redis.conf"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IP&lt;/span&gt;
          &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;fieldRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;fieldPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;status.podIP&lt;/span&gt;
        &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;conf&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/conf&lt;/span&gt;
          &lt;span class="na"&gt;readOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;data&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/data&lt;/span&gt;
          &lt;span class="na"&gt;readOnly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;conf&lt;/span&gt;
        &lt;span class="na"&gt;configMap&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis-cluster-config&lt;/span&gt;
          &lt;span class="na"&gt;defaultMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0755&lt;/span&gt;
  &lt;span class="na"&gt;volumeClaimTemplates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;data&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ReadWriteOnce"&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1Gi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In the above step we defined a few things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An IP environment variable, needed by the update-ip.sh script we defined in the configmap earlier. It is populated with the pod-specific IP address via the Downward API.&lt;/li&gt;
&lt;li&gt;Some shared volumes, including the configmap that we defined earlier.&lt;/li&gt;
&lt;li&gt;Two container ports – 6379 for client connections and 16379 for the gossip protocol.&lt;/li&gt;
&lt;/ul&gt;
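&lt;p&gt;The Downward API simply makes the pod’s IP available as an ordinary environment variable inside the container. As a minimal sketch of the idea – assuming, hypothetically, that update-ip.sh rewrites a placeholder in the Redis config with that value – the substitution could look like this:&lt;/p&gt;

```shell
# Hypothetical illustration of what a script like update-ip.sh could do:
# substitute the Downward-API-provided IP into a config template.
IP=10.244.0.5   # stand-in value; inside the pod this comes from status.podIP
CONF=$(sed "s/POD_IP/${IP}/" <<'EOF'
cluster-announce-ip POD_IP
EOF
)
echo "$CONF"
```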

&lt;p&gt;With this spec we can deploy the Redis cluster instances:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; redis-cluster.statefulset.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once we verify that the deployment is ready, we need to perform the last step: bootstrapping the cluster. Consulting the documentation for &lt;a href="https://redis.io/topics/cluster-tutorial#creating-the-cluster"&gt;creating the cluster&lt;/a&gt;, we need to exec into one of the instances and run the redis-cli cluster create command. Here is the example from the docs:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;redis-cli &lt;span class="nt"&gt;--cluster&lt;/span&gt; create 127.0.0.1:7000 127.0.0.1:7001 &lt;span class="se"&gt;\&lt;/span&gt;
127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--cluster-replicas&lt;/span&gt; 1
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To do that in our case, we need to get the local pod IPs of the instances and feed them to that command.&lt;/p&gt;

&lt;p&gt;We can query the IP using this command:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;redis-cluster &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{range.items[*]}{.status.podIP}:6379 '&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;So if we save them in a variable, we can pass them as arguments at the end of the redis-cli command:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;POD_IPS &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;redis-cluster &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{range.items[*]}{.status.podIP}:6379 '&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
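&lt;p&gt;Note that there must be no spaces around the &lt;code&gt;=&lt;/code&gt; in a shell assignment, and that redis-cli expects each host:port pair as a separate argument, which is why the variable is expanded unquoted in the next command. A quick local sketch of that word splitting, using made-up pod IPs:&lt;/p&gt;

```shell
# Stand-in pod IPs, for illustration only; in the cluster these come
# from the kubectl jsonpath query shown above.
POD_IPS="10.244.0.5:6379 10.244.0.6:6379 10.244.0.7:6379"
# Unquoted expansion splits on whitespace into separate arguments,
# exactly what redis-cli --cluster create expects.
set -- $POD_IPS
COUNT=$#
echo "$COUNT arguments, first is $1"
```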

&lt;p&gt;Then we can run the following command:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; redis-cluster-0 &lt;span class="nt"&gt;--&lt;/span&gt; redis-cli &lt;span class="nt"&gt;--cluster&lt;/span&gt; create &lt;span class="nt"&gt;--cluster-replicas&lt;/span&gt; 1 &lt;span class="nv"&gt;$POD_IPS&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If everything is OK, you will see the following prompt. Enter ‘yes’ to accept and continue:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;Can I &lt;span class="nb"&gt;set &lt;/span&gt;the above configuration? &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="s1"&gt;'yes'&lt;/span&gt; to accept&lt;span class="o"&gt;)&lt;/span&gt;: &lt;span class="nb"&gt;yes&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; Nodes configuration updated
&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; Assign a different config epoch to each node
&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; Sending CLUSTER MEET messages to &lt;span class="nb"&gt;join &lt;/span&gt;the cluster
Waiting &lt;span class="k"&gt;for &lt;/span&gt;the cluster to &lt;span class="nb"&gt;join&lt;/span&gt;
........

&lt;span class="o"&gt;[&lt;/span&gt;OK] All nodes agree about slots configuration.
&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; Check &lt;span class="k"&gt;for &lt;/span&gt;open slots...
&lt;span class="o"&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/span&gt; Check slots coverage...
&lt;span class="o"&gt;[&lt;/span&gt;OK] All 16384 slots covered.
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then we can verify the cluster state by running the cluster info command:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; redis-cluster-0 &lt;span class="nt"&gt;--&lt;/span&gt; redis-cli cluster info

cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:28
cluster_stats_messages_pong_sent:34
cluster_stats_messages_sent:62
cluster_stats_messages_ping_received:29
cluster_stats_messages_pong_received:28
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:62
&lt;/code&gt;&lt;/pre&gt;
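&lt;p&gt;The same health check can be done from a script by parsing the cluster info output. Here is a sketch that extracts &lt;code&gt;cluster_state&lt;/code&gt;, run against a canned sample rather than a live cluster:&lt;/p&gt;

```shell
# Sample output, standing in for:
#   kubectl exec redis-cluster-0 -- redis-cli cluster info
INFO="cluster_state:ok
cluster_slots_assigned:16384
cluster_known_nodes:6"
# Pull out the value of cluster_state; "ok" means the cluster is healthy.
# (Real redis-cli output is CRLF-terminated, hence the tr -d '\r'.)
STATE=$(printf '%s\n' "$INFO" | grep '^cluster_state:' | cut -d: -f2 | tr -d '\r')
echo "$STATE"
```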

&lt;p&gt;Before we continue deploying the guestbook app, we need to offer a unified service frontend for the Redis Cluster so that it’s easily discoverable in the cluster.&lt;/p&gt;

&lt;p&gt;Here is the service spec: redis-cluster.service.yml&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ cat redis-cluster.service.yml&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis-master&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6379&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6379&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;client&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;16379&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;16379&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gossip&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis-cluster&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;We expose the cluster as redis-master here, as the guestbook app will be looking for a host service to connect to with that name.&lt;/p&gt;
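&lt;p&gt;This works because Kubernetes gives every Service a cluster-internal DNS name; since the guestbook runs in the same namespace as the Service, the short name redis-master resolves. As a sketch of how the fully qualified name is composed, assuming everything was deployed into the default namespace:&lt;/p&gt;

```shell
# How the cluster DNS name for a Service is formed:
# <service>.<namespace>.svc.cluster.local
SERVICE=redis-master
NAMESPACE=default   # assumption: the tutorial deploys into "default"
FQDN="${SERVICE}.${NAMESPACE}.svc.cluster.local"
echo "$FQDN"
```

Pods in the same namespace can use just `redis-master`; pods elsewhere would need `redis-master.default` or the full name above.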

&lt;p&gt;Once we apply this service spec, we can move on to deploying and exposing the Guestbook Application:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; redis-cluster.service.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  Deploying and exposing a GuestBook Application
&lt;/h2&gt;

&lt;p&gt;The guestbook application is a simple PHP script that shows a form to submit a message. Initially, it will attempt to connect to either the redis-master host or the redis-slave hosts. It needs the &lt;strong&gt;GET_HOSTS_FROM&lt;/strong&gt; environment variable to indicate where the host names come from, together with two variables: &lt;strong&gt;REDIS_MASTER_SERVICE_HOST&lt;/strong&gt; for the master host and &lt;strong&gt;REDIS_SLAVE_SERVICE_HOST&lt;/strong&gt; for the slave host.&lt;/p&gt;

&lt;p&gt;First, let’s define the deployment spec below:&lt;/p&gt;

&lt;p&gt;php-guestbook.deployment.yml&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="s"&gt;$ cat php-guestbook.deployment.yml&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;guestbook&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;guestbook&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;guestbook&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;php-redis&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gcr.io/google-samples/gb-frontend:v6&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;150m&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;150Mi&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GET_HOSTS_FROM&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;env&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;REDIS_MASTER_SERVICE_HOST&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;redis-master"&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;REDIS_SLAVE_SERVICE_HOST&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;redis-master"&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The code of the gb-frontend image is &lt;a href="https://github.com/kubernetes/examples/blob/master/guestbook/php-redis/guestbook.php"&gt;located here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Next is the associated service spec:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;guestbook-lb&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;guestbook&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Note: &lt;strong&gt;NodePort&lt;/strong&gt; will expose the service on a randomly assigned port (by default in the 30000–32767 range) on each Node’s public IP. In either case, we get a public host:port pair where we can inspect the application. Here is a screenshot of the app after we deployed it:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KS-U5Cgn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/sdhdxvj6n1liv6611olw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KS-U5Cgn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/sdhdxvj6n1liv6611olw.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Cleaning up
&lt;/h2&gt;

&lt;p&gt;Once we have finished experimenting with the application, we can clean up the resources and services by issuing kubectl delete statements, either by name or, conveniently, by label. For example:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete statefulset redis-cluster&lt;br&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete service redis-master&lt;br&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete deployment guestbook&lt;br&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete service guestbook-lb&lt;br&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete configmap redis-cluster-config&lt;br&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete pvc &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;redis-cluster&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;Deploying multi-container systems in Kubernetes with Platform9 is no different than usual. The big difference is that you gain quality of service and a maintenance-free Kubernetes experience with first-class support for troubleshooting issues. Not to mention that you can host the cluster on your own bare-metal servers or on AWS, eliminating vendor lock-in. For more information, visit the &lt;a href="https://bit.ly/2V7NnFV"&gt;Platform9 Managed Kubernetes page&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>tutorial</category>
      <category>microservices</category>
    </item>
    <item>
      <title>How to Get Started with Platform9 Managed Kubernetes on a Windows Machine</title>
      <dc:creator>Kamesh Pemmaraju</dc:creator>
      <pubDate>Wed, 25 Mar 2020 15:06:17 +0000</pubDate>
      <link>https://dev.to/kpemmaraju/getting-started-with-platform9-managed-kubernetes-on-a-windows-machine-9p3</link>
      <guid>https://dev.to/kpemmaraju/getting-started-with-platform9-managed-kubernetes-on-a-windows-machine-9p3</guid>
      <description>&lt;p&gt;The following is a tutorial from the Platform9 technical team.&lt;/p&gt;

&lt;p&gt;Platform9 Managed Kubernetes (PMK) is a Kubernetes offering with several distinct advantages, including its ability to spin up a Kubernetes cluster with physical or virtual machines (VM) in a short period of time. This is done with relative ease using what Platform9 calls &lt;a href="https://bit.ly/2Ue9tHg"&gt;BareOS clusters&lt;/a&gt;: on-premises or public cloud VMs are prepared as nodes, which can then be used to create clusters via the PMK UI. &lt;/p&gt;

&lt;p&gt;This tutorial, created by the Platform9 technical team, demonstrates this process by creating a single-node Kubernetes cluster using a VM. But the real lessons will be those gleaned from the set-up of your Windows host machine to support this process. The goal of this tutorial is to show you how to create a VM that runs Ubuntu 16.04 (which is supported by PMK), and then show you how to use that VM to create a Kubernetes cluster via the PMK UI. &lt;/p&gt;

&lt;p&gt;To complete this tutorial, you will need the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A computer running a recent version of the Windows operating system (a machine running Windows 10 is used for this tutorial),&lt;/li&gt;
&lt;li&gt;Oracle VM VirtualBox for spinning up your virtual machine,&lt;/li&gt;
&lt;li&gt;A download of the Linux ISO for Ubuntu 16.04, and&lt;/li&gt;
&lt;li&gt;A free-forever account with &lt;a href="https://bit.ly/3a3YMwu"&gt;Platform9 Managed Kubernetes&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, now that you know what you’re dealing with, let’s get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Oracle VirtualBox on Windows
&lt;/h2&gt;

&lt;p&gt;The first step in creating your virtual machine is to set up the software that will create and manage it. We’re using Oracle VM VirtualBox, which is very intuitive and, best of all, available as open source. To download the most recent version (6.1.2) for Windows, simply visit the &lt;a href="https://www.virtualbox.org/wiki/Downloads"&gt;downloads page&lt;/a&gt; and click on “Windows hosts” below “VirtualBox 6.1.2 Platform Packages.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ck7IBCT5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8xgsdx1m5ap5qyp7dd6n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ck7IBCT5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8xgsdx1m5ap5qyp7dd6n.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the download has finished, you can begin installing and configuring the software. When you open the executable to begin installing Oracle VM VirtualBox, you will be met with a Setup Wizard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Nke9z11W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1023m1nejtruee8uiowf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Nke9z11W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1023m1nejtruee8uiowf.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;code class="language-plaintext highlighter-rouge"&gt;Next&lt;/code&gt; on the initial page to progress to the setup portion of the installation. This step will allow you to change the way features are installed and choose the location. For our purposes, the default options will suffice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rnX5knBX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1ft3ka3ogynukjtk71xt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rnX5knBX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1ft3ka3ogynukjtk71xt.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;code class="language-plaintext highlighter-rouge"&gt;Next &amp;gt;&lt;/code&gt; to continue. For this tutorial, we will allow the features to be installed in the default manner and at the default location (C:\Program Files\Oracle\VirtualBox).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--joXNGaHe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pif98alvah1a56zitov3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--joXNGaHe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pif98alvah1a56zitov3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Setup Wizard will give you the option to create shortcuts, start menu entries, etc. Again, we will leave the configuration as it is and click &lt;code class="language-plaintext highlighter-rouge"&gt;Next&lt;/code&gt; to progress.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bbdGeOKn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kgplceatztuojyv78qit.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bbdGeOKn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kgplceatztuojyv78qit.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the next screen, click &lt;code class="language-plaintext highlighter-rouge"&gt;Yes &amp;gt;&lt;/code&gt; to proceed with the installation. After this, you will be met with one more screen, and you simply click &lt;code class="language-plaintext highlighter-rouge"&gt;Install&lt;/code&gt;. Within minutes, Oracle VM VirtualBox will be completely installed on your machine. Then, you can proceed to the next step - setting up your virtual machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up Your Virtual Machine
&lt;/h2&gt;

&lt;p&gt;To begin setting up your Ubuntu instance, you first need to open VirtualBox. When you do, you will be met with the following window:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w1fR7od5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6wtxwr14gq7jo4dejy20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w1fR7od5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6wtxwr14gq7jo4dejy20.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the main toolbar running across the top of the Oracle VM VirtualBox Manager window, select &lt;code class="language-plaintext highlighter-rouge"&gt;Machine &amp;gt;&lt;/code&gt; and &lt;code class="language-plaintext highlighter-rouge"&gt;New&lt;/code&gt;. This will open a new window where you can begin defining the specifications for your VM.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S8lk9ryB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/a5clvgdi5rucnmqawm9h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S8lk9ryB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/a5clvgdi5rucnmqawm9h.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can simply choose the name &lt;strong&gt;TestMachine&lt;/strong&gt; and follow along with the same setup as dictated by the image above. I allocated 2 GB of RAM for the VM and chose the option to &lt;code class="language-plaintext highlighter-rouge"&gt;Create a virtual hard disk now&lt;/code&gt;. After clicking &lt;code class="language-plaintext highlighter-rouge"&gt;Create&lt;/code&gt;, be sure to select the hard disk file type, VDI (VirtualBox Disk Image):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m6H4edS8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/uyfe2tfyscplp06ayqzv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m6H4edS8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/uyfe2tfyscplp06ayqzv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clicking &lt;code class="language-plaintext highlighter-rouge"&gt;Create&lt;/code&gt; will complete the initial setup of your virtual machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZyQsAfw0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d8s4rkbwzldoyora69rv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZyQsAfw0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d8s4rkbwzldoyora69rv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Downloading the Ubuntu 16.04 ISO
&lt;/h2&gt;

&lt;p&gt;You won’t get very far without downloading the &lt;a href="http://releases.ubuntu.com/16.04/"&gt;Ubuntu 16.04 ISO&lt;/a&gt;. To do so, visit the Ubuntu 16.04 releases page and scroll down to find the correct image. For our purposes, let’s go with “ubuntu-16.04.6-desktop-amd64.iso.” Click on the image to begin downloading.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Ubuntu 16.04 on Your VM
&lt;/h2&gt;

&lt;p&gt;Now that you have your Ubuntu 16.04 image, let’s install the OS on your virtual machine. Go back to the Oracle VM VirtualBox Manager and select &lt;code class="language-plaintext highlighter-rouge"&gt;TestMachine&lt;/code&gt; in the left pane. Then click the button to “Start” your VM:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8wZG5zrS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/opfspsz5d995ohtdrgrk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8wZG5zrS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/opfspsz5d995ohtdrgrk.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, the VM will start and you will be met with a prompt to request the start-up disk. Browse to the Ubuntu 16.04 ISO that you downloaded in the previous step and click &lt;code class="language-plaintext highlighter-rouge"&gt;Start&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4VITYmjC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/x5vcaenppcuiycw4zhx8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4VITYmjC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/x5vcaenppcuiycw4zhx8.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let’s install Ubuntu 16.04 on your VM.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y5yERtoq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/a86u0ycdu5fgvwk2nq2n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y5yERtoq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/a86u0ycdu5fgvwk2nq2n.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select your preferred language and click &lt;code class="language-plaintext highlighter-rouge"&gt;Install Ubuntu&lt;/code&gt;. On the next screen, you will be asked if you would like to download updates or install third-party software during the installation. For the purposes of this tutorial, we will leave the boxes unchecked and click &lt;code class="language-plaintext highlighter-rouge"&gt;Continue&lt;/code&gt;. On the next screen, select the option to “Erase disk and install Ubuntu,” and then click &lt;code class="language-plaintext highlighter-rouge"&gt;Install&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Aj6cbZDb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zxt8joc63jo3hit0aw5z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Aj6cbZDb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zxt8joc63jo3hit0aw5z.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next two steps simply verify your location and keyboard type. Complete these as appropriate and progress to the final step, where you will name the machine and set up your account credentials:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TNmxmAWG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7pqi3v50dd5mgzbfn6ki.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TNmxmAWG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7pqi3v50dd5mgzbfn6ki.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After this step, Ubuntu will complete the installation process, and you will have officially set up a virtual machine running Ubuntu 16.04 with VirtualBox.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Single-Node Kubernetes Cluster with PMK
&lt;/h2&gt;

&lt;p&gt;The process for setting up a single-node Kubernetes cluster with your VM via PMK is fairly straightforward. But first, you will need to sign up for the &lt;a href="https://bit.ly/3a3YMwu"&gt;PMK Free Tier&lt;/a&gt;. Next, you must configure your VM so that you can utilize it as a node within your BareOS cluster. You can do this by logging into your Ubuntu VM, opening a terminal, and executing the following commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Command to download the CLI installer: &lt;code class="language-plaintext highlighter-rouge"&gt;curl -O https://raw.githubusercontent.com/platform9/express-cli/master/cli-setup.sh&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Command to run the CLI installer: &lt;code class="language-plaintext highlighter-rouge"&gt;bash ./cli-setup.sh&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;During the installation process, your Platform9 management URL and username will be requested. You must provide these to move forward.&lt;/li&gt;
&lt;li&gt;Finally, run the following command to prepare your node for use within your cluster: &lt;code class="language-plaintext highlighter-rouge"&gt;pf9ctl cluster prep node&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
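
&lt;p&gt;Taken together, the node-preparation steps above look like this in a terminal (the installer will prompt for your Platform9 management URL and username interactively):&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;# Download and run the Platform9 CLI installer
curl -O https://raw.githubusercontent.com/platform9/express-cli/master/cli-setup.sh
bash ./cli-setup.sh

# Prepare this machine as a cluster node
pf9ctl cluster prep node
&lt;/code&gt;&lt;/pre&gt;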

&lt;p&gt;When this process completes successfully, you will receive output similar to the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x7i7OQT7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5csoplqeax676jz0sand.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x7i7OQT7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5csoplqeax676jz0sand.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now the node will be visible in your Platform9 management console, where it can be viewed by navigating to Infrastructure -&amp;gt; Nodes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5lOeMHRm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gmppa4sm6tcr7dl6aioo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5lOeMHRm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gmppa4sm6tcr7dl6aioo.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, by navigating to Infrastructure -&amp;gt; Clusters, you can click Add Cluster and follow the directions there, as well as the &lt;a href="https://bit.ly/2UheOO8"&gt;Platform9 documentation to add a BareOS cluster&lt;/a&gt;, which will add your VM as a node to complete the deployment of your Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---I-utAjb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6r4d3hyau0711suc633p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---I-utAjb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6r4d3hyau0711suc633p.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>kubernetes</category>
      <category>ubuntu</category>
    </item>
    <item>
      <title>How to Create a Single Node Cluster on VirtualBox VM on Mac OS with Platform9 Managed Kubernetes</title>
      <dc:creator>Kamesh Pemmaraju</dc:creator>
      <pubDate>Fri, 20 Mar 2020 20:33:30 +0000</pubDate>
      <link>https://dev.to/kpemmaraju/how-to-create-a-single-node-cluster-on-virtualbox-vm-on-mac-os-with-platform9-managed-kubernetes-47p2</link>
      <guid>https://dev.to/kpemmaraju/how-to-create-a-single-node-cluster-on-virtualbox-vm-on-mac-os-with-platform9-managed-kubernetes-47p2</guid>
      <description>&lt;p&gt;The following tutorial from the Platform9 technical team enables you to run a simple single node Kubernetes cluster on your Mac laptop or desktop using &lt;a href="https://bit.ly/3a3YMwu"&gt;Platform9 Managed Kubernetes (PMK)&lt;/a&gt; - now free to anyone who wants to instantly deploy open-source Kubernetes on-premises, AWS, or Azure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This type of setup is useful for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developers wanting to test and validate changes locally,&lt;/li&gt;
&lt;li&gt;DevOps engineers who want a test PMK cluster to learn and experiment with Kubernetes and PMK.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Here is a summary of the steps involved:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install VirtualBox on your Mac OS machine (laptop or desktop)&lt;/li&gt;
&lt;li&gt;Create an Ubuntu 16.04 VM in VirtualBox&lt;/li&gt;
&lt;li&gt;Create a single node PMK cluster using this VM&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Install VirtualBox on Mac OS
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Install VirtualBox on Mac OS (via Download)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On Mac OS there are two popular ways to install VirtualBox.&lt;/p&gt;

&lt;p&gt;The first is to &lt;a href="https://www.virtualbox.org/wiki/Downloads"&gt;download the latest edition of VirtualBox&lt;/a&gt; for your platform. At the time of this writing it is &lt;a href="https://download.virtualbox.org/virtualbox/6.1.2/VirtualBox-6.1.2-135662-OSX.dmg"&gt;version 6.1.2 for Mac OS&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FlLmgzGf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/t8wyeznsz15sq5b0ivjz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FlLmgzGf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/t8wyeznsz15sq5b0ivjz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the file is downloaded, it will be in the ‘Downloads’ folder. Mount the dmg file by double-clicking on it, which will launch another window that has the actual installer. That installer is called VirtualBox.pkg.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---bMemvqT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1varudch7eogryvmpkqc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---bMemvqT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1varudch7eogryvmpkqc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Launch VirtualBox.pkg and follow through the four steps in the installer.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1 - Click “continue” to let it check for prerequisites.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7HBAz53H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mcrbqzxrwbijedad7lgk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7HBAz53H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mcrbqzxrwbijedad7lgk.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2 - Continue the install&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XhsQt4Ip--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ue6ztayxz4e1haerkf7s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XhsQt4Ip--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ue6ztayxz4e1haerkf7s.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3 - Click “Install” on the bottom left; then, it will prompt for a password for admin access to update the network configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_9CgqvJM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6px6jyozqjfxogiop099.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_9CgqvJM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6px6jyozqjfxogiop099.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4 - Click “Close” and it is installed and ready to run.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xZjVWMQF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3en6cylzzyzqmf918vne.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xZjVWMQF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3en6cylzzyzqmf918vne.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install VirtualBox on Mac OS (via Homebrew)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On Mac OS there is a developer focused community project called Homebrew (&lt;a href="https://brew.sh/"&gt;https://brew.sh/&lt;/a&gt;) that provides a locally-installed utility that can be used to install and update projects provided by the community.&lt;/p&gt;

&lt;p&gt;VirtualBox is available via this method and can be installed with a single command line.&lt;/p&gt;


&lt;pre class="highlight"&gt;&lt;code&gt;~ % brew cask &lt;span class="nb"&gt;install &lt;/span&gt;virtualbox
Updating Homebrew...
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Auto-updated Homebrew!
Updated 1 tap &lt;span class="o"&gt;(&lt;/span&gt;homebrew/cask&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
No changes to formulae.

&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Caveats
To &lt;span class="nb"&gt;install &lt;/span&gt;and/or use virtualbox you may need to &lt;span class="nb"&gt;enable &lt;/span&gt;its kernel extension &lt;span class="k"&gt;in&lt;/span&gt;:
  System Preferences → Security &amp;amp; Privacy → General
For more information refer to vendor documentation or this Apple Technical Note:
  https://developer.apple.com/library/content/technotes/tn2459/_index.html

&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Downloading https://download.virtualbox.org/virtualbox/6.1.2/VirtualBox-6.1.2-135662-OSX.dmg
&lt;span class="c"&gt;######################################################################## 100.0%&lt;/span&gt;
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Verifying SHA-256 checksum &lt;span class="k"&gt;for &lt;/span&gt;Cask &lt;span class="s1"&gt;'virtualbox'&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Installing Cask virtualbox
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Running installer &lt;span class="k"&gt;for &lt;/span&gt;virtualbox&lt;span class="p"&gt;;&lt;/span&gt; your password may be necessary.
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Package installers may write to any location&lt;span class="p"&gt;;&lt;/span&gt; options such as &lt;span class="nt"&gt;--appdir&lt;/span&gt; are ignored.
Password:
installer: Package name is Oracle VM VirtualBox
installer: choices changes file &lt;span class="s1"&gt;'/var/folders/s2/yg_q89zd1xx14mq8fv1bshpc0000gn/T/choices20200123-15967-1nzasnl.xml'&lt;/span&gt; applied
installer: Upgrading at base path /
installer: The upgrade was successful.
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Changing ownership of paths required by virtualbox&lt;span class="p"&gt;;&lt;/span&gt; your password may be necessary
🍺  virtualbox was successfully installed!
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  Step 2: Retrieve Ubuntu LTS
&lt;/h2&gt;

&lt;p&gt;In a web browser, go to &lt;a href="http://releases.ubuntu.com/16.04/"&gt;http://releases.ubuntu.com/16.04/&lt;/a&gt; and scroll down to the “Server” download section to get the “64-bit PC (AMD64) server install image.” This will take a few minutes, as it is 873MB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2KZ8dhX1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/sxbvbfagmuvjo7p2d7i6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2KZ8dhX1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/sxbvbfagmuvjo7p2d7i6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Create VirtualBox Virtual Machine
&lt;/h2&gt;

&lt;p&gt;In the Applications folder you will find VirtualBox.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XbpntQ2M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ojrsabgobjy8blq483l4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XbpntQ2M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ojrsabgobjy8blq483l4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Launch VirtualBox. It will be a clean slate to start building your new virtual machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x8FJX_Hd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tr4epyndyebkwzesxp26.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x8FJX_Hd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tr4epyndyebkwzesxp26.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next click the blue “New” button to create a new virtual machine. Select Linux and Ubuntu for the options, and choose a name that makes sense for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oy3TAKyv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8mxf4xnxk04a60mftkvb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oy3TAKyv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8mxf4xnxk04a60mftkvb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;PMKFT recommends 16GB of RAM and 4 cores per server node. Since this is a local test environment, not a full server, those are not strictly needed; but the more memory you can allocate, the better the system will perform. MacBook Pros and iMacs routinely have 16GB of RAM, so allocating 8GB would be ideal for almost all test scenarios.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---QXgu2f_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3arxyhll16qe2y564tnm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---QXgu2f_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3arxyhll16qe2y564tnm.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, let it create a virtual disk (it doesn’t matter which format) and dynamically allocate the space. The more disk space you can allocate the better – 20+GB fills up fast when you start to play with container images in Kubernetes. Click “Create” after going through these screens.&lt;/p&gt;
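
&lt;p&gt;If you prefer the command line over the VirtualBox UI, the same VM can be sketched with VBoxManage; the VM name &lt;code class="language-plaintext highlighter-rouge"&gt;ubuntu-test&lt;/code&gt;, the 8GB of RAM, and the 40GB disk below are example values, not requirements:&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;# Create and register an Ubuntu 64-bit VM
VBoxManage createvm --name "ubuntu-test" --ostype Ubuntu_64 --register
VBoxManage modifyvm "ubuntu-test" --memory 8192 --cpus 4

# Create a dynamically-allocated 40GB disk and attach it to a SATA controller
VBoxManage createmedium disk --filename ubuntu-test.vdi --size 40960
VBoxManage storagectl "ubuntu-test" --name "SATA" --add sata
VBoxManage storageattach "ubuntu-test" --storagectl "SATA" --port 0 --device 0 --type hdd --medium ubuntu-test.vdi
&lt;/code&gt;&lt;/pre&gt;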

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hQ56chA6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/izp6lbvzuwe95t70ebpn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hQ56chA6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/izp6lbvzuwe95t70ebpn.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3nADgsZO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6ump6a2oudhaawa0swg0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3nADgsZO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6ump6a2oudhaawa0swg0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MCoMzB6j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xzhgy80xl90ek5wsq66z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MCoMzB6j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xzhgy80xl90ek5wsq66z.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you have a completed image, ready to start.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XWGRHj5N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/petdp6ybrosnqizk2nug.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XWGRHj5N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/petdp6ybrosnqizk2nug.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Install Ubuntu
&lt;/h2&gt;

&lt;p&gt;The first time the new virtual machine starts up, it will want the location of the ISO file that it will install from.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G1vOem0D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/anvb9h1avltvp2o3ae7q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G1vOem0D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/anvb9h1avltvp2o3ae7q.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select your language and then choose “Install Ubuntu Server.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6OuESvNk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/r4sj1l5lskpd4naiukri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6OuESvNk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/r4sj1l5lskpd4naiukri.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select your language again, followed by country; tell it to not detect the keyboard layout, and select the appropriate keyboard. If you don’t really care and have a standard US English setup, then press enter four times and it will start the install.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NsRo7SVE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pmbtv3iighsk4zrwfxrf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NsRo7SVE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pmbtv3iighsk4zrwfxrf.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the installation completes, it will want a hostname.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HefffkhM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fcs6gf3hwobpyoo101ha.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HefffkhM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fcs6gf3hwobpyoo101ha.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, it will want a username and password.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bElBJ6Tp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zh2drqu0pig257d39ddv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bElBJ6Tp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zh2drqu0pig257d39ddv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can accept the defaults for the next few screens, including ‘not to encrypt the home directory,’ since it is a test box, and ‘to use LVM for disks.’&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JiZrZ68i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7kmvbo1eiivuk6d1p6j8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JiZrZ68i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7kmvbo1eiivuk6d1p6j8.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next screen will require you to select “Yes” to write the changes to the disk. The default is “No,” so you can’t just blindly continue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZmDkkqAH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/br8xjum6oquhgzjemebb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZmDkkqAH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/br8xjum6oquhgzjemebb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The same goes for the next sequence: you can accept the defaults until the screen that asks you to confirm “Yes” to continue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oys4Rnmt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ta6sbl8a64f2pp87ei3c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oys4Rnmt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ta6sbl8a64f2pp87ei3c.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next screen that prompts for information asks for a proxy, which you can just leave empty to continue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kMHm9iYd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ht38549d6e7ti8s3k4j3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kMHm9iYd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ht38549d6e7ti8s3k4j3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select whether you want the instance to patch itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZMSRIeUt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bvsytpaenzy8fvj0b6tu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZMSRIeUt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bvsytpaenzy8fvj0b6tu.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select whether you want any packages. The default selection is an acceptable answer; no additional packages are required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SWEoMt-N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/twekvi86tzfsd16497s0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SWEoMt-N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/twekvi86tzfsd16497s0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Allow it to install the GRUB bootloader.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sVGqF3YC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/h5qdafv85lkn7pqosdye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sVGqF3YC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/h5qdafv85lkn7pqosdye.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And we have a fully-functional Ubuntu 16.04 LTS server that just needs to restart.&lt;/p&gt;

&lt;p&gt;Now you can use this server to create a single node PMK cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Setup SSH access for your VM
&lt;/h2&gt;

&lt;p&gt;By default, you will not be able to ssh into this VM even from the host. In order to enable ssh access, you need to install sshd in the VM and make changes to the VM’s networking configuration to allow ssh access from outside.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log into the Ubuntu server you just created in VirtualBox. Note: on first login you will be prompted to create a password.&lt;/li&gt;
&lt;li&gt;Install openssh-server to setup SSH access for your VM.&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;sudo apt-get update
sudo apt-get install openssh-server
&lt;/code&gt;&lt;/pre&gt;    

&lt;p&gt;SSH is now configured on the VM, but the VM is still not accessible from outside. This is because the VM we just created uses NAT as the default networking option. In this configuration, we need to enable port forwarding for the VM and forward port 22 (the SSH port) to the host, so that any SSH requests for the VM are received at the host level and routed to the VM.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stop your VM&lt;/li&gt;
&lt;li&gt;In the VirtualBox UI, select the VM on the left hand panel, then click on ‘Settings’ -&amp;gt; ‘Network’&lt;/li&gt;
&lt;li&gt;You will see ‘Adapter 1’ configured with ‘NAT’. Expand the ‘Advanced’ menu, then click ‘Port Forwarding’. Click the + button on the right to create a new port forwarding rule for this VM.&lt;/li&gt;
&lt;li&gt;Specify the following parameters in the rule: Name: SSH; Protocol: TCP; Host IP: 127.0.0.1; Host Port: 2222; Guest IP: (leave empty); Guest Port: 22&lt;/li&gt;
&lt;li&gt;Start your VM&lt;/li&gt;
&lt;li&gt;Now open a terminal window on your host machine and run the following command to ssh into the VM&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;ssh yourusername@127.0.0.1 -p 2222
&lt;/code&gt;&lt;/pre&gt;    
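
&lt;p&gt;The same port-forwarding rule can also be created from a host terminal with VBoxManage while the VM is stopped (substitute your own VM name for &lt;code class="language-plaintext highlighter-rouge"&gt;ubuntu-test&lt;/code&gt;):&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;# Rule format: name,protocol,host-ip,host-port,guest-ip,guest-port
VBoxManage modifyvm "ubuntu-test" --natpf1 "SSH,tcp,127.0.0.1,2222,,22"
&lt;/code&gt;&lt;/pre&gt;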

&lt;p&gt;Your ssh access is now configured!&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Create a Single Node Kubernetes Cluster using PMK CLI
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Log into the Ubuntu server you just created in VirtualBox.&lt;/li&gt;
&lt;li&gt;Download and install the PMK CLI by running the following command on your Ubuntu terminal.&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;bash &amp;lt;(curl -sL http://pf9.io/get_cli)
&lt;/code&gt;&lt;/pre&gt;    

&lt;ul&gt;
&lt;li&gt;The CLI installer will ask for your PMK credentials. Specify your PMK account URL, the email address you use to sign into PMK, and your password. The account URL will be in the &lt;code class="language-plaintext highlighter-rouge"&gt;https://pmkft-&amp;lt;numeric value&amp;gt;.platform9.io&lt;/code&gt; format.&lt;/li&gt;
&lt;li&gt;Once the CLI install finishes, you can run the &lt;code class="language-plaintext highlighter-rouge"&gt;pf9ctl&lt;/code&gt; CLI; for example: &lt;code class="language-plaintext highlighter-rouge"&gt;pf9ctl cluster --help&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The &lt;code class="language-plaintext highlighter-rouge"&gt;cluster bootstrap&lt;/code&gt; command lets you easily create a single node cluster. Specify the name for your cluster, and the CLI will use reasonable defaults for all the other parameters to create the cluster for you.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code class="language-plaintext highlighter-rouge"&gt;pf9ctl cluster bootstrap MyTestCluster&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.platform9.com/kubernetes/PMK-CLI/cluster/bootstrap/"&gt;&lt;strong&gt;Read more information about the Bootstrap command here&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will take ~5-10 minutes. Behind the scenes, the CLI will create a Kubernetes cluster by making this node both the master and the worker node for the cluster. It will install the required Kubernetes packages and configure the cluster.&lt;/p&gt;

&lt;p&gt;That’s it! Your single node PMK cluster is now ready. You can access the cluster via &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/"&gt;&lt;strong&gt;kubectl&lt;/strong&gt;&lt;/a&gt;, or use the PMK UI to deploy workloads on it.&lt;/p&gt;
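
&lt;p&gt;As a quick check from a machine with kubectl and the cluster’s kubeconfig (downloadable from the PMK UI; the filename below is just an example):&lt;/p&gt;

&lt;pre class="highlight"&gt;&lt;code&gt;export KUBECONFIG=~/Downloads/MyTestCluster.yaml

# The single node acts as both master and worker, and should be Ready
kubectl get nodes -o wide
kubectl get pods -n kube-system
&lt;/code&gt;&lt;/pre&gt;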

</description>
      <category>tutorial</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>A Free Way to Learn the Ropes with Kubernetes</title>
      <dc:creator>Kamesh Pemmaraju</dc:creator>
      <pubDate>Tue, 17 Mar 2020 16:33:57 +0000</pubDate>
      <link>https://dev.to/kpemmaraju/a-free-way-to-learn-the-ropes-with-kubernetes-3o3g</link>
      <guid>https://dev.to/kpemmaraju/a-free-way-to-learn-the-ropes-with-kubernetes-3o3g</guid>
      <description>&lt;p&gt;Since we launched Platform9 Managed Kubernetes (PMK) three years ago, we learned a lot about real-world Kubernetes deployments. Large enterprise customers like Juniper have battle-tested PMK at scale running on hundreds of bare metal nodes across data centers. Many users have benefited from our SaaS management capabilities, including automated deployments, upgrades, security patching, and SLA management.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3eiVQdsa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jiiyk99tz4dftw2fk8i9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3eiVQdsa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jiiyk99tz4dftw2fk8i9.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the last 12 months, we had several successful Kubernetes deployments around the world, highlighting momentum in Kubernetes and validating our industry-leading SaaS Management model.&lt;/p&gt;

&lt;h2&gt;
  
  
  Operating Kubernetes at scale is extremely challenging
&lt;/h2&gt;

&lt;p&gt;However, we also found that a vast majority of people and companies are struggling with the complexity of operating Kubernetes in production. Kubernetes is complex and notoriously difficult to manage, particularly in on-premises or multi-cloud environments. Day 2 Operations are incredibly challenging: how do you handle upgrades to your clusters when there’s a new version or a security patch? How do you do the monitoring? HA? Scaling? Compliance? And more.&lt;/p&gt;

&lt;p&gt;The operational pain is compounded by the industry-wide talent scarcity and skills gap. Most companies are struggling to hire the much sought-after Kubernetes experts, and they lack advanced Kubernetes experience to ensure smooth operations at scale.&lt;/p&gt;

&lt;p&gt;Delivering production-grade Kubernetes in a way that doesn’t make your existing staff run for the hills (or get left holding the bag...) is tough.&lt;/p&gt;

&lt;h2&gt;
  
  
  Many companies are still learning the ropes with Kubernetes
&lt;/h2&gt;

&lt;p&gt;Furthermore, not everybody is ready to go into production right away. For many companies, Kubernetes is still new, and they are kicking the tires to figure out if, when, and why they want to use it. Companies want the room to start small, learn, test, and then scale to production on their terms.&lt;/p&gt;

&lt;p&gt;Therefore, we decided to make our enterprise Kubernetes product more accessible across the board no matter where the customer is on their Kubernetes journey. We wanted DevOps teams and developers everywhere to enjoy the freedom of using Kubernetes at their own pace and in any environment of their choice so they can innovate for the business without having to deal with the day-to-day complexities of running Kubernetes in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  New plans for PMK
&lt;/h2&gt;

&lt;p&gt;We are excited to announce the launch of two new PMK plans (‘Freedom’ and ‘Growth’) that allow DevOps, ITOps, Platform Engineering, and cloud architects to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Sign up online and instantly create upstream open-source &lt;strong&gt;Kubernetes clusters in under 5 minutes&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy clusters in any environment, ranging from developer laptops, on-premises VMs, or bare metal servers to edge infrastructure or public clouds&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Eliminate the constraints of Kubernetes skills, long implementation times, or management of day-2 operational activities such as upgrades, security patching, and monitoring&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Gain the flexibility to start small, learn, test, and scale to production on their terms and pace.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="http://bit.ly/39TwULv"&gt;Sign-up now to deploy your free cluster&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Freedom plan&lt;/strong&gt; is great for anyone getting started with Kubernetes and allows users to instantly install Kubernetes clusters of up to &lt;strong&gt;20 nodes (800 vCPUs).&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Growth plan&lt;/strong&gt; starts under $500/month, including an option for month-to-month payments, and provides a &lt;strong&gt;99.9% SLA and 24×7 support for up to 50 nodes (2,000 vCPUs).&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1pYv87-x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/62jm5aetx70o1v3xvplk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1pYv87-x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/62jm5aetx70o1v3xvplk.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  An extensive set of core features and support options unmatched anywhere
&lt;/h2&gt;

&lt;p&gt;Before we dive into the details, first an important distinction:&lt;/p&gt;

&lt;p&gt;Those of us who want to leverage Kubernetes in the enterprise know that words like “managed” and “service” (or “as-a-service”) are often thrown around with enterprise Kubernetes solutions. But they describe VERY different levels – and philosophies – of “management,” and of “service.”&lt;/p&gt;

&lt;p&gt;What we mean is a fully-managed Kubernetes service, where Platform9 does all of the heavy lifting and ongoing operations. So &lt;strong&gt;you don’t have to deal with any of the operational complexity.&lt;/strong&gt; Don’t mistake ‘managed service’ to mean a lot of people on keyboards manually managing your environment. Platform9 delivers a public-cloud like service in on-premises, edge, and multi-cloud environments. This service is provided using a SaaS delivery model, developed with thousands of person-years of software automation engineering work. Moreover, the service is backed by our additional layer of Kubernetes certified experts and customer success teams who also monitor and remediate the environment.&lt;/p&gt;

&lt;p&gt;Both the Freedom and Growth plans use the same battle-tested and proven enterprise edition of Platform9 Managed Kubernetes (PMK) and provide the following set of core capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A SaaS Management Plane that remotely monitors, optimizes and heals your clusters and underlying infrastructure, across all of your environments&lt;/li&gt;
&lt;li&gt;Self-service, instant cluster creation (under 5 minutes) with native integrations across private and public clouds&lt;/li&gt;
&lt;li&gt;1-click, in-place cluster upgrades to the latest version of Kubernetes&lt;/li&gt;
&lt;li&gt;Automatic security patches – when a new CVE is discovered and fixed, a patch is automatically applied to all clusters&lt;/li&gt;
&lt;li&gt;Built-in monitoring and alerts to ensure cluster health, including etcd cluster quorum lost, etcd node down, etcd repair failure, infrastructure resource utilization, node storage issues, network connectivity between nodes, docker daemon down, and more&lt;/li&gt;
&lt;li&gt;Managed Observability (Prometheus and more) included by default. Users can configure these tools for each cluster’s specific needs (e.g., connect to different persistent storage or a data visualization tool); a Grafana dashboard is integrated by default&lt;/li&gt;
&lt;li&gt;Centrally manage all clusters from a single pane of glass&lt;/li&gt;
&lt;li&gt;Control access to resources with fine-grained Kubernetes RBAC management&lt;/li&gt;
&lt;/ul&gt;
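&lt;p&gt;To give a concrete flavor of what health alerts like the ones above can look like, here is a minimal sketch of Prometheus alerting rules. The rule names, expressions, and thresholds are illustrative assumptions, not Platform9’s actual configuration.&lt;/p&gt;

```yaml
# Hypothetical Prometheus alerting rules approximating two of the built-in
# checks listed above. Names, thresholds, and labels are illustrative only.
groups:
  - name: cluster-health
    rules:
      # Fire when an etcd member has been unreachable for 5 minutes,
      # which can threaten etcd cluster quorum.
      - alert: EtcdMemberDown
        expr: up{job="etcd"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "etcd member {{ $labels.instance }} is down"
      # Fire when a node reports disk pressure (node storage issues).
      # Uses the kube-state-metrics node condition metric.
      - alert: NodeDiskPressure
        expr: kube_node_status_condition{condition="DiskPressure",status="true"} == 1
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Node {{ $labels.node }} reports disk pressure"
```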

&lt;p&gt;And much, much more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For a more detailed list of capabilities and comparison of these plans, go &lt;a href="http://bit.ly/2Ucv2qt"&gt;here&amp;gt;&amp;gt;&amp;gt;&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We believe these plans will make Kubernetes a no-brainer for DevOps, ITOps, and cloud platform teams in any company, large or small, no matter where they are in their Kubernetes journey, providing everyone with a superior Kubernetes experience on their own infrastructure (on-premises, in the cloud, or at the edge).&lt;/p&gt;

&lt;h2&gt;
  
  
  Making it easier to migrate your apps into Kubernetes: Partnership with HyScale
&lt;/h2&gt;

&lt;p&gt;Getting stable Kubernetes clusters deployed and operational is something that most DevOps and ITOps teams struggle with, but what about containerizing your existing complex apps? This migration can be a long and complicated endeavor in and of itself, further hampered by all the new Kubernetes concepts that developers need to learn. How can we simplify this process?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.hyscale.io/"&gt;Enter HyScale&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;HyScale is an application delivery platform that abstracts the complexities of containers and Kubernetes so that your application teams can quickly deliver containers and your IT teams can drive up Kubernetes adoption.&lt;/p&gt;

&lt;p&gt;We have partnered with HyScale to help our customers accelerate Kubernetes adoption and get developers excited about containerization and moving their apps to Kubernetes. &lt;a href="https://www.hyscale.io/blog/platform9-kubernetes-solution-with-hyscale/"&gt;Read this blog&lt;/a&gt; for more details on this partnership, including step-by-step instructions on how you can get your apps migrated over to PMK.&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing your container images in a private registry, for FREE: Partnership with JFrog
&lt;/h2&gt;

&lt;p&gt;If you have used or heard of Artifactory, then you know &lt;a href="https://jfrog.com/"&gt;JFrog&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;JFrog has recently introduced the &lt;a href="https://jfrog.com/container-registry/"&gt;JFrog Container Registry&lt;/a&gt;, which is the most comprehensive and advanced container registry in the market today, and it is available for free.&lt;/p&gt;

&lt;p&gt;Whether you are producing containerized software or merely running it, a private registry to store and manage your images is vital. A private registry can protect you from upstream changes, network failures, and third-party sources you have no control over. If you are producing images, you need a private registry to version your software, track its dependencies, and allow for reproducible builds. This is where JFrog’s Container Registry (JCR) comes in. It’s easy to deploy JCR on top of PMK and get a free registry running on a free PMK cluster anywhere.&lt;/p&gt;
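&lt;p&gt;Once a private registry like JCR is running, pointing a Kubernetes cluster at it is mostly a matter of credentials. The sketch below is illustrative: the registry host, image name, and secret name are placeholders, not real endpoints.&lt;/p&gt;

```yaml
# Hypothetical pod spec pulling from a private registry.
# The registry host, image name, and secret name are placeholders.
# The pull secret itself can be created with:
#   kubectl create secret docker-registry regcred \
#     --docker-server=registry.example.com \
#     --docker-username=<user> --docker-password=<token>
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  imagePullSecrets:
    - name: regcred        # tells the kubelet which credentials to use
  containers:
    - name: app
      image: registry.example.com/team/app:1.0
```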

&lt;h2&gt;
  
  
  So what are you waiting for? Give our “Freedom” plan a spin. It’s free forever, no credit card required. Really!
&lt;/h2&gt;

&lt;p&gt;You can get going with a single-node Kubernetes cluster on your laptop. Here are step-by-step tutorials for deploying it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="http://bit.ly/2ISj5B5"&gt;Deploy on Apple macOS with VirtualBox&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="http://bit.ly/2TVqyFV"&gt;Deploy on Windows OS with Virtual Box&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once you have deployed your cluster, here are more tutorials to help you get started with container applications:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="http://bit.ly/2WjO54P"&gt;Setup your NGINX Ingress Controller&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://bit.ly/38ZwlhM"&gt;Get your first container up and running&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://bit.ly/38ZwlhM"&gt;Deploy a complex microservices app&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
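&lt;p&gt;To give a flavor of the “first container” step above, a minimal Deployment and Service manifest looks something like the following. The names, image, and replica count are illustrative; the linked tutorial may differ.&lt;/p&gt;

```yaml
# Minimal illustrative example: run NGINX and expose it inside the cluster.
# Apply with: kubectl apply -f hello.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-nginx
spec:
  replicas: 2                    # two pods for basic availability
  selector:
    matchLabels:
      app: hello-nginx
  template:
    metadata:
      labels:
        app: hello-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-nginx
spec:
  selector:
    app: hello-nginx             # routes traffic to the pods above
  ports:
    - port: 80
      targetPort: 80
```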

&lt;p&gt;Not ready to sign up yet? Want to learn more about what you get by signing up? No worries. Check out this demo video from our co-founder, VP of product, and driving force behind these new PMK plans, Madhura Maskasky:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://platform9.com/blog/announcing-new-platform9-managed-kubernetes-pmk-plans-that-start-at-zero-cost-and-scale-as-customers-grow/?wvideo=xc8iil087p"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4G6Y4Yjf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://embedwistia-a.akamaihd.net/deliveries/29d8cea46d8259b2210e91c4665942df.jpg%3Fimage_play_button_size%3D2x%26image_crop_resized%3D960x538%26image_play_button%3D1%26image_play_button_color%3D54bbffe0" width="400" height="225"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://platform9.com/blog/announcing-new-platform9-managed-kubernetes-pmk-plans-that-start-at-zero-cost-and-scale-as-customers-grow/?wvideo=xc8iil087p"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>news</category>
    </item>
    <item>
      <title>Living on the Edge: Operating Kubernetes at Telco Companies </title>
      <dc:creator>Kamesh Pemmaraju</dc:creator>
      <pubDate>Tue, 28 Jan 2020 18:43:08 +0000</pubDate>
      <link>https://dev.to/kpemmaraju/living-on-the-edge-operating-kubernetes-at-telco-companies-2p4d</link>
      <guid>https://dev.to/kpemmaraju/living-on-the-edge-operating-kubernetes-at-telco-companies-2p4d</guid>
      <description>&lt;p&gt;As the number of connected devices increases, we have seen IT processes and functions increasingly shifting to what is called the “edge.” The edge is essentially the opposite of a centralized system. Edges are remote systems that operate closest to where the users or services that consume them are. &lt;/p&gt;

&lt;p&gt;One of the most well-defined examples of operating “at the edge” is in telecommunications companies. Today, these organizations have to massively scale and adjust the way they meet consumer needs by expanding their edge locations and the way they operate at the edge. In this post, I define what operating at the edge means for telco companies and present some of the biggest challenges that come along with it -- and how they can be addressed.&lt;/p&gt;

&lt;p&gt;First off, let’s look at this from the perspective of a real-world example and what edge deployments look like for telco companies. When a massive telecommunications organization rolls out its mobile services, it does so by deploying what are called “virtual network functions,” which have to run at the edge. The edge, in this case, is basically a point of presence -- a location that serves a number of cell phone towers. For years, this has been a large task but one that was considered manageable. However, the advent of &lt;a href="https://en.wikipedia.org/wiki/5G"&gt;5G technology&lt;/a&gt; is changing that.&lt;/p&gt;

&lt;p&gt;What happens with 5G is that the frequency of the signal is so high (in comparison with 3G or even 4G) that it cannot travel very far before it gets attenuated, or becomes weak. So, in order to deliver the performance that comes with 5G, carriers have to install more towers -- which means more points of presence and correspondingly more edge locations. Each of these points of presence has a certain amount of compute or processing that needs to be done -- this is its virtual network function. When you make a mobile call through the cell tower, the signal is converted into a digital signal and then passes through a piece of software. This is all happening at the edge. So there has to be software actually deployed at the edge, and the underlying hardware -- servers, networking, storage, and everything required to run that function or service -- needs to be deployed out there as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rTgqT3pL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lbpnbdd8z4b5n7w90wah.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rTgqT3pL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lbpnbdd8z4b5n7w90wah.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If this sounds complicated -- just wait. Imagine having hundreds or thousands of these locations spread out not only around the country but across the globe. These organizations organize them into regions, regional &lt;a href="https://en.wikipedia.org/wiki/Point_of_presence"&gt;PoPs&lt;/a&gt;, and then city PoPs. For example, the Bay Area might have 200 of these serving different areas: a few dozen for San Jose, a few others for San Francisco, and so on. Each PoP is an edge location where a micro data center has to be deployed, and each needs a rack of servers, compute capability, storage, and networking.&lt;/p&gt;

&lt;p&gt;When you have this scenario -- hundreds of micro data centers serving millions of customers across the globe with a product that is mission-critical for many of the people using it -- you can imagine the pressure to deliver very high availability. The challenge then becomes: how can they provision something new and ensure uptime? Even more so, how can they keep things operating smoothly on a day-in, day-out basis? They also have to have an effective way of getting new micro data centers up and running very quickly as they grow their networks to remain competitive.&lt;/p&gt;

&lt;p&gt;This is where tools like SaaS-managed Kubernetes platforms come in -- that is, solutions designed from the ground up to manage -- remotely and centrally -- any number of locations out of the box, with built-in remote monitoring, central management, and zero-touch operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  SaaS-managed Kubernetes Explained
&lt;/h2&gt;

&lt;p&gt;Think about the alternative for a company if they didn’t have the ability for remote management of their edge infrastructure and applications across thousands of locations. If something goes wrong at one of those locations -- say, the point of presence location fails and there's a network connectivity issue, or one of the servers goes down -- what do you do? Without remote managed access, you have to send a person down there. The technician goes into the PoP location, opens up the server, figures out what's going wrong, troubleshoots it, and then maybe a few hours later it's up and running. So that's a manual, operationally intensive and costly process.&lt;/p&gt;

&lt;p&gt;With SaaS-managed Kubernetes, all that is built-in. In the above scenario, you just log in, bring up a PoP location, the servers automatically get discovered and they get registered with the central SaaS platform. From there, it is a zero-touch operation and they can diagnose and solve the problem much more quickly and with significant cost savings.&lt;/p&gt;

&lt;p&gt;Edge deployments are increasingly being delivered using containers and Kubernetes. Software developers, product engineering, and DevOps teams have been driving the adoption of Kubernetes for edge use cases. &lt;a href="https://platform9.com/managed-kubernetes/"&gt;Platform9 Managed Kubernetes&lt;/a&gt; (PMK) is an example of a solution that enables enterprises to easily run Kubernetes-as-a-Service at scale in their edge environments with no operational burden. The solution ensures fully automated Day-2 edge operations with a 99.9% SLA using a unique SaaS Management Plane that remotely monitors, optimizes, and heals your Kubernetes clusters and underlying infrastructure. With automatic security patches, upgrades, proactive monitoring, troubleshooting, auto-healing, and more — users can run innovative new applications in their edge environments. &lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next for the Edge?
&lt;/h2&gt;

&lt;p&gt;In general, Kubernetes edge deployments are in the early phase of the hype cycle and are expected to grow rapidly in 2020. In fact, edge computing is expected to account for a major share of enterprise computing. According to leading analyst firms, there could be more than 20 times as many smart devices at the edge of the network as in conventional IT roles. Furthermore, the amount of enterprise-generated data created and processed outside of a traditional centralized data center could reach 75 percent by 2025.&lt;/p&gt;

&lt;p&gt;The variety of edge applications and the scale at which they are being deployed is mind-boggling. A recent survey &lt;a href="https://platform9.com/blog/six-kubernetes-takeaways-for-it-ops-teams-from-the-2019-gartner-infrastructure-operations-cloud-strategies-conference/"&gt;highlighted the diversity of use cases being deployed&lt;/a&gt;. The use cases highlighted here include edge locations owned by the company (e.g. retail stores, cruise liners, oil and gas rigs, manufacturing facilities), and in the case of on-premises software companies, their end customers’ data centers. Edge deployments typically need to support heterogeneity of location, remote management and autonomy at scale; enable developers; and integrate well with public cloud and/or core data centers.&lt;/p&gt;

&lt;p&gt;For many companies -- not just in telecommunications but across verticals of all kinds -- living on the edge can be just that. Many challenges need to be addressed, including the need to figure out consistent and scalable edge operations that can manage dozens or hundreds of pseudo-data centers with low or no touch, usually with no staff and little access. But with the right partners and tools for success, moving your technology to the edge can mean driving increased value, speeding up the velocity of innovation, and delivering the kinds of customer experiences that set your organization apart. &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>beginners</category>
      <category>saas</category>
    </item>
    <item>
      <title>6 Kubernetes Takeaways from KubeCon 2019, San Diego</title>
      <dc:creator>Kamesh Pemmaraju</dc:creator>
      <pubDate>Mon, 02 Dec 2019 23:41:40 +0000</pubDate>
      <link>https://dev.to/kpemmaraju/6-kubernetes-takeaways-from-kubecon-2019-san-diego-232h</link>
      <guid>https://dev.to/kpemmaraju/6-kubernetes-takeaways-from-kubecon-2019-san-diego-232h</guid>
      <description>&lt;p&gt;We recently returned from KubeCon + CloudNativeCon 2019 held in sunny San Diego Nov 18-21. KubeCon San Diego 2019 drew more than 12,000 attendees, a 50% increase since the last event in Barcelona, just 6 months ago.&lt;/p&gt;

&lt;p&gt;While at the event, we interacted with more than 1,800 attendees and had more than 1,300 attendees complete our survey at the booth to give us an insight into their use of Kubernetes.&lt;/p&gt;

&lt;p&gt;Here are the six most important takeaways from the survey results:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1) Massive Increase in Planned Scale of Kubernetes Deployments&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tp2HEKbK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/cie904exe0imczbwa7hz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tp2HEKbK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/cie904exe0imczbwa7hz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Fig 1: How many Kubernetes clusters will your organization run in six months?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes is moving to the mainstream, given the scale of Kubernetes clusters that companies plan to run. &lt;strong&gt;406 of the survey respondents said that they will be running 50 or more clusters in production within the next 6 months!&lt;/strong&gt; That’s an astonishing number of companies planning to run Kubernetes at such a massive scale.&lt;/p&gt;

&lt;p&gt;What’s more, in some of our in-person conversations, we discovered that companies are running hundreds of nodes in just one or two clusters. The scale is both in terms of the number of nodes and the number of clusters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2) Monitoring, Upgrades, and Security Patching are the Biggest Challenges&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--C6vHK5nS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/bhaybunyisol9hvem6j4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C6vHK5nS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/bhaybunyisol9hvem6j4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Fig 2: What are your current challenges running k8s? (pick all that apply)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Running at a massive scale presents unique challenges. It’s pretty easy to deploy one or two Kubernetes clusters for proof of concepts or development/testing. Managing it reliably at scale in production is quite another matter, especially when you have dozens of clusters and hundreds of nodes.&lt;/p&gt;

&lt;p&gt;Kubernetes’ complexity and abstraction is necessary and it is there for good reasons, but it also makes it hard to troubleshoot when things don’t work as expected. &lt;strong&gt;No wonder then that 48.86% of the survey respondents indicated that monitoring at scale is their biggest challenge&lt;/strong&gt;, followed by upgrades (44.3%) and security patching (34.09%).&lt;/p&gt;

&lt;p&gt;DevOps, PlatformOps, and ITOps teams need observability, metrics, logging, and service mesh, and they need to get to the bottom of failures quickly to ensure uptime and SLAs. The rise of Prometheus, fluentd, and Istio is a direct response to these needs. However, they now need to keep up to date with these and other constantly changing CNCF projects that continue to evolve at a rapid pace.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3) 24×7 Support and 99.9% Uptime SLA is Critical for Large Scale Deployments&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bLqndCaL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/d5nna6twe51yzvvw463e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bLqndCaL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/d5nna6twe51yzvvw463e.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Fig 3: What best describes the Kubernetes uptime SLA your end-users require?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The scale at which companies are running Kubernetes is a good proxy for a significant number of production applications that are serving end-users. &lt;strong&gt;65% of the respondents indicated their end-users require 99.9% or higher uptime.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;DevOps, Platform Ops, and IT Ops teams need to architect the entire solution from the bare metal up (especially in on-premises data centers or edge environments) so they can confidently ensure workload availability while performing complex day-2 operations such as upgrades, security patching, and troubleshooting infrastructure failures at the network, storage, compute, operating system, and Kubernetes layers. All of this requires proactive support that ensures quick responses to production incidents. Not surprisingly then, &lt;strong&gt;56.16% of the respondents mentioned that their organization needs to provide a 24x7x365 follow-the-sun support model.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0PUcIKGg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/d144hjgywa3e6z1b2gsh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0PUcIKGg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/d144hjgywa3e6z1b2gsh.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Fig 4: What best describes the Kubernetes support model your organization needs?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4) Kubernetes Edge Deployments Are Growing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0WdCLjRr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/bakv9mqi4p9bkcfqu91h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0WdCLjRr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/bakv9mqi4p9bkcfqu91h.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Fig 5: How many edge locations will run Kubernetes 6 months from now?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We wanted to find out if Kubernetes is being considered for edge computing use cases. 145 survey respondents indicated they have an edge deployment using Kubernetes. The surprising thing was the geographical scale of these deployments. &lt;strong&gt;38.5% responded that they are running Kubernetes in 100 or more locations!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The survey data also indicated that these edge locations could be characterized as “thick” edges since they are running a significant number of servers. Almost 47% of the respondents said that these locations are running 11 or more servers in each location.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kZTjHTdj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/vmrijx3263jdaoyz4nxo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kZTjHTdj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/vmrijx3263jdaoyz4nxo.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Fig 6: What is the average number of nodes that will run at each edge location?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It’s quite a challenge to scale dozens or hundreds of pseudo-data centers that need to be managed with low or no touch, usually with no staff and little access. These scenarios included edge locations owned by the company (e.g. retail stores), and in the case of on-premises software companies, their end customers’ data centers. Given the large scale, traditional data center management processes won’t apply. Edge deployments should support heterogeneity of location, remote management, and autonomy at scale; enable developers; and integrate well with public cloud and/or core data centers.&lt;/p&gt;

&lt;p&gt;The survey also highlighted the diversity of use cases being deployed. &lt;strong&gt;The top two applications being deployed at the edge are Edge gateways/access control (43.3%) and Surveillance and video analytics (32.28%).&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qhOZ0Pf---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/5gglbyev19jkhujbwkch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qhOZ0Pf---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/5gglbyev19jkhujbwkch.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Fig 7: What applications are running at the Edge?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5) Multi-Cloud Deployments Are The New Normal&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mnOV9fiI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/lje4c1od700hngvsyxjw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mnOV9fiI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/lje4c1od700hngvsyxjw.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Kubernetes Multi-Cloud Deployments&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The majority of respondents indicated that they run Kubernetes on BOTH on-premises and public cloud infrastructure. This is further evidence that multi-cloud and hybrid cloud deployments are becoming the new normal. With the number of different, mixed environments (private/public cloud as well as the edge) projected to continue to grow, avoiding lock-in and ensuring portability, interoperability, and consistent management of K8s across all types of infrastructure will become even more critical for enterprises in the years ahead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6) Access to Kubernetes Talent for a DIY Approach Remains Challenging&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An unusual aspect of KubeCon is the large booths operated by major enterprises (e.g., Capital One) just to recruit Kubernetes talent. Walmart made a recruitment pitch in their keynote session. This year there were 15 major enterprises in CNCF’s “end user sponsor” category looking for talent.&lt;/p&gt;

&lt;p&gt;A quick search on Indeed.com shows 12,503 Kubernetes jobs (up from 9,934 just 6 months ago!).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---GDH4gQy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/holnpdc91p4ksowmap40.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---GDH4gQy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/holnpdc91p4ksowmap40.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Kubernetes Talent in High Demand&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Many companies are looking to hire Kubernetes talent, but it continues to be a challenge to find the requisite skills.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It’s astonishing to see the massive scale of Kubernetes deployments. The companies at KubeCon that are spearheading these large-scale Kubernetes deployments are positioning themselves to rapidly deliver superior customer experiences, giving them a distinct competitive edge in a fast-moving digital economy. Not only are they using Kubernetes in their core data centers and the public cloud, but they are also deploying it for a staggering number of edge computing use cases across vertical industries ranging from retail and manufacturing to automotive and 5G rollouts.&lt;/p&gt;

&lt;p&gt;Many mainstream enterprises will follow in their footsteps and see success with Kubernetes in production in the very near future. However, new challenges arise at scale that DevOps, Platform Ops, and IT Ops teams at major enterprises need to tackle going forward, especially when their end-users demand 99.9%+ uptime and round-the-clock support in mission-critical production environments.&lt;/p&gt;

&lt;p&gt;The complexity of Kubernetes makes it difficult to run and operate at scale, particularly in multi-cloud and hybrid-cloud environments spanning on-premises data centers, edge locations, and public cloud infrastructure. Hiring and retaining the talent to run Kubernetes operations is getting increasingly difficult.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://platform9.com/managed-kubernetes/"&gt;Platform9 Managed Kubernetes (PMK)&lt;/a&gt; enables enterprises to easily run Kubernetes-as-a-Service at scale, leveraging their existing environments, with no operational burden. The solution ensures fully automated Day-2 operations with 99.9% SLA on any environment: in data-centers, public clouds, or at the edge. This is delivered using a unique SaaS Management Plane that remotely monitors, optimizes and heals your Kubernetes clusters and underlying infrastructure. With automatic security patches, upgrades, proactive monitoring, troubleshooting, auto-healing, and more — you can confidently run production-grade Kubernetes, anywhere.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>devops</category>
      <category>kubecon</category>
    </item>
  </channel>
</rss>
