<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Carlos Camacho</title>
    <description>The latest articles on DEV Community by Carlos Camacho (@ccamacho).</description>
    <link>https://dev.to/ccamacho</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F294780%2Ffa2514bb-2a32-4549-9d44-114a1aa53543.png</url>
      <title>DEV Community: Carlos Camacho</title>
      <link>https://dev.to/ccamacho</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ccamacho"/>
    <language>en</language>
    <item>
      <title>KubeInit External access for OpenShift/OKD deployments with Libvirt</title>
      <dc:creator>Carlos Camacho</dc:creator>
      <pubDate>Tue, 25 Aug 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/ccamacho/kubeinit-external-access-for-openshift-okd-deployments-with-libvirt-2hd9</link>
      <guid>https://dev.to/ccamacho/kubeinit-external-access-for-openshift-okd-deployments-with-libvirt-2hd9</guid>
      <description>&lt;p&gt;In this post, it will be described the basic network architecture when OKD is deployed using &lt;a href="https://github.com/kubeinit/kubeinit"&gt;KubeInit&lt;/a&gt; in a KVM host.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;We will describe how to extend the basic network configuration to provide external access to the cluster services by adding an external IP to the service machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OWlC5P_d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/net/thumb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OWlC5P_d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/net/thumb.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Initial hypervisor status
&lt;/h3&gt;

&lt;p&gt;We check both the routing table and the network connections in the hypervisor host.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@nyctea ~]# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default _gateway 0.0.0.0 UG 100 0 0 eno1
10.19.41.0 0.0.0.0 255.255.255.0 U 100 0 0 eno1
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0


[root@nyctea ~]# nmcli con show
NAME UUID TYPE DEVICE
System eno1 162499bc-a6fa-45db-ba76-1b45f0be46e8 ethernet eno1   
virbr0 4ba12c69-3a8b-42e8-a9dd-bc020fdc1a90 bridge virbr0
eno2 e19725f2-84f5-4f71-b300-469ffc99fd99 ethernet --     
enp6s0f0 7348301f-8cae-4ab1-9061-97d7a344699c ethernet --     
enp6s0f1 8a96c226-959a-4218-b9f7-c3ab6ee3d02b ethernet --    

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As we can see, there are two physical network interfaces (eno1 and eno2), of which only one is actually connected.&lt;/p&gt;

&lt;h4&gt;
  
  
  Initial network architecture
&lt;/h4&gt;

&lt;p&gt;The following picture represents the default network layout for a usual deployment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6uFa7Yyd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/net/arch01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6uFa7Yyd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/net/arch01.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The default deployment installs a multi-master cluster with one worker node (up to 10 are supported). From the above figure it is possible to see that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All cluster nodes are connected to the 10.0.0.0/24 network. This is the cluster management network, and the one we will use to access the nodes within the hypervisor.&lt;/li&gt;
&lt;li&gt;The 10.0.0.0/24 network is defined as a Virtual Network Switch implementing both NAT and DHCP for any interface connected to the &lt;code&gt;kimgtnet0&lt;/code&gt; network.&lt;/li&gt;
&lt;li&gt;All bootstrap, master, and worker nodes are installed with Fedora CoreOS, as it is the required OS for OKD &amp;gt; 4.&lt;/li&gt;
&lt;li&gt;The services machine runs CentOS 8 with BIND, HAProxy, and NFS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using DHCP, we assign the following IP mapping based on the MAC address of each node (defined in the Ansible inventory).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt; # Master
 okd-master-01 ansible_host=10.0.0.1 mac=52:54:00:aa:6c:b1
 okd-master-02 ansible_host=10.0.0.2 mac=52:54:00:59:0e:e4
 okd-master-03 ansible_host=10.0.0.3 mac=52:54:00:b4:39:45

 # Worker
 okd-worker-01 ansible_host=10.0.0.4 mac=52:54:00:61:22:5a
 okd-worker-02 ansible_host=10.0.0.5 mac=52:54:00:21:fd:fd
 okd-worker-03 ansible_host=10.0.0.6 mac=52:54:00:4c:0a:81
 okd-worker-04 ansible_host=10.0.0.7 mac=52:54:00:54:ff:ac
 okd-worker-05 ansible_host=10.0.0.8 mac=52:54:00:4a:6b:f6
 okd-worker-06 ansible_host=10.0.0.9 mac=52:54:00:40:22:52
 okd-worker-07 ansible_host=10.0.0.10 mac=52:54:00:6c:0a:03
 okd-worker-08 ansible_host=10.0.0.11 mac=52:54:00:0b:14:f8
 okd-worker-09 ansible_host=10.0.0.12 mac=52:54:00:f5:6e:e5
 okd-worker-10 ansible_host=10.0.0.13 mac=52:54:00:5c:26:4f

 # Service
 okd-service-01 ansible_host=10.0.0.100 mac=52:54:00:f2:46:a7

 # Bootstrap
 okd-bootstrap-01 ansible_host=10.0.0.200 mac=52:54:00:6e:4d:a3

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
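&lt;p&gt;Since each node's IP is assigned by DHCP based on its MAC address, a duplicated MAC or IP in the inventory silently breaks the mapping. A quick sanity check (a sketch; the inventory path is the one used later by the deployment command) can catch this before deploying:&lt;/p&gt;

```shell
# Any MAC address that appears more than once in the inventory is an error
grep -o 'mac=[0-9a-f:]*' ./hosts/okd/inventory | sort | uniq -d

# Same check for the statically assigned IP addresses
grep -o 'ansible_host=[0-9.]*' ./hosts/okd/inventory | sort | uniq -d
```

&lt;p&gt;No output means every MAC and IP in the inventory is unique.&lt;/p&gt;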



&lt;blockquote&gt;
&lt;p&gt;The previous deployment can be used for any purpose, but it has one limitation: the endpoints do not have external access. This means that, for example, &lt;a href="https://console-openshift-console.apps.watata.kubeinit.local"&gt;https://console-openshift-console.apps.watata.kubeinit.local&lt;/a&gt; cannot be accessed from anywhere other than the hypervisor itself.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Extending the basic network layout
&lt;/h3&gt;

&lt;p&gt;We will now describe a simple way to provide external access to the cluster's public endpoints published in the service machine.&lt;/p&gt;

&lt;h4&gt;
  
  
  Requirements
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;An additional IP address to be mapped to the services machine from an external location.&lt;/li&gt;
&lt;li&gt;A network bridge slaving the interface used for external access.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A single extra IP (public or private) is enough to configure remote access to the cluster endpoints.&lt;/p&gt;

&lt;p&gt;As long as we have an extra IP it does not matter how many physical interfaces we have, as we can have multiple IP addresses configured using a single physical NIC.&lt;/p&gt;
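&lt;p&gt;As an illustration of holding several addresses on one NIC (a sketch, not part of the KubeInit flow; the connection name comes from the earlier &lt;code&gt;nmcli con show&lt;/code&gt; output and the extra address is a placeholder):&lt;/p&gt;

```shell
# Append a second IPv4 address to an existing NetworkManager connection profile
nmcli connection modify "System eno1" +ipv4.addresses 192.0.2.10/24
nmcli connection up "System eno1"
```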

&lt;h4&gt;
  
  
  New network layout
&lt;/h4&gt;

&lt;p&gt;This is the resulting network architecture used to remotely access our freshly installed OKD cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tBMeOGlt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/net/arch02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tBMeOGlt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/net/arch02.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As shown in the above figure, there is an extra connection to the service machine, attached directly to the virtual bridge slaving a physical interface.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Our development environment has only one network card connected. In this case, after we create the main switch and slave the network device, the device will automatically lose its assigned IP. Do not try this from a remote shell, as you will be disconnected.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  How to enable the external interface
&lt;/h4&gt;

&lt;p&gt;To deploy this architecture, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a virtual bridge slaving the selected physical interface.&lt;/li&gt;
&lt;li&gt;Adjust the deployment command.&lt;/li&gt;
&lt;li&gt;Run &lt;a href="https://github.com/kubeinit/kubeinit"&gt;KubeInit&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Adjust your local Domain Name System (DNS) resolver.&lt;/li&gt;
&lt;/ol&gt;

&lt;h5&gt;
  
  
  Step 1 (creating the virtual bridge)
&lt;/h5&gt;

&lt;h6&gt;
  
  
  Using the CentOS 8 cockpit
&lt;/h6&gt;

&lt;p&gt;We create the bridge using the CentOS cockpit; after the slaved interface loses its IP, the address is recovered/reconfigured on the bridge automatically (don't try this from the CLI, as you will lose access).&lt;/p&gt;

&lt;p&gt;In this case, we create a bridge called &lt;code&gt;kiextbr0&lt;/code&gt; connected to &lt;code&gt;eno1&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2fwfeHsb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/net/cockpit_00.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2fwfeHsb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/net/cockpit_00.PNG" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on: Networking -&amp;gt; Add Bridge&lt;/p&gt;

&lt;p&gt;Then adjust the bridge configuration options (bridge name and the interface to slave).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yAPz_HR0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/net/cockpit_01.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yAPz_HR0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/net/cockpit_01.PNG" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Write: &lt;code&gt;kiextbr0&lt;/code&gt; as the bridge name, and select your network interface &lt;code&gt;eno1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Go to the dashboard and verify that everything is OK.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mPL9I4o0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/net/cockpit_02.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mPL9I4o0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/net/cockpit_02.PNG" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check that the bridge was created and that its IP is configured correctly.&lt;/p&gt;

&lt;h6&gt;
  
  
  Manual bridge creation
&lt;/h6&gt;

&lt;p&gt;As an example, you can run these steps from the CLI, adjusting your interface and bridge names accordingly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nmcli connection add ifname br0 type bridge con-name br0
nmcli connection add type bridge-slave ifname enp0s25 master br0
nmcli connection modify br0 bridge.stp no
nmcli connection delete enp0s25
nmcli connection up br0

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
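&lt;p&gt;If the bridge does not get its address back via DHCP, the host's original IP configuration can be re-applied to it manually. This is a sketch; the address, gateway, and DNS values below are placeholders for your own values:&lt;/p&gt;

```shell
# Move the host's static IP configuration onto the new bridge
nmcli connection modify br0 ipv4.method manual \
    ipv4.addresses 10.19.41.100/24 \
    ipv4.gateway 10.19.41.254 \
    ipv4.dns 10.19.41.254
nmcli connection up br0
```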



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt; If you have only one interface the connection will be dropped and you will lose connectivity.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h6&gt;
  
  
  Checking the system status
&lt;/h6&gt;

&lt;p&gt;We check the system status again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@nyctea ~]# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default _gateway 0.0.0.0 UG 425 0 0 kilocbr0
10.19.41.0 0.0.0.0 255.255.255.0 U 425 0 0 kilocbr0
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0

[root@nyctea ~]# nmcli con show
NAME UUID TYPE DEVICE  
kilocbr0 1c4d60a3-06a7-429f-a689-ffba5a49efbb bridge kilocbr0
System eno1 162499bc-a6fa-45db-ba76-1b45f0be46e8 ethernet eno1    
virbr0 4ba12c69-3a8b-42e8-a9dd-bc020fdc1a90 bridge virbr0  
eno2 e19725f2-84f5-4f71-b300-469ffc99fd99 ethernet --      
eno3 65be9380-980b-4237-b27c-2479e8f8535d ethernet --      
eno4 9f5afe2d-6166-4197-a23f-e64c3b1b5ab2 ethernet --      
enp6s0f0 7348301f-8cae-4ab1-9061-97d7a344699c ethernet --      
enp6s0f1 8a96c226-959a-4218-b9f7-c3ab6ee3d02b ethernet --    

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We can see that the new bridge was created successfully and that its IP address is configured correctly.&lt;/p&gt;

&lt;h5&gt;
  
  
  Step 2 (adjusting the deployment command)
&lt;/h5&gt;

&lt;p&gt;There are a few variables that need to be adjusted in order to successfully configure the external interface.&lt;/p&gt;

&lt;p&gt;These variables are defined in the okd playbook (the location of these variables may change, but not their names).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2sWCcGSG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/net/config_vars.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2sWCcGSG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/net/config_vars.PNG" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The variables have the following meanings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;kubeinit_bind_external_service_interface_enabled: true&lt;/code&gt; - This enables the Ansible configuration of the external interface, the BIND update, and the additional interface in the service node.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubeinit_bind_external_service_interface.attached: kiextbr0&lt;/code&gt; - This is the virtual bridge where we will plug the &lt;code&gt;eth1&lt;/code&gt; interface of the services machine. The bridge &lt;code&gt;MUST&lt;/code&gt; be created first, slaving the physical interface we will use.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubeinit_bind_external_service_interface.dev: eth1&lt;/code&gt; - This is the name of the external interface we will add to the services machine.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubeinit_bind_external_service_interface.ip: 10.19.41.157&lt;/code&gt; - The external IP address of the services machine.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubeinit_bind_external_service_interface.gateway: 10.19.41.254&lt;/code&gt; - The gateway IP address of the services machine.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubeinit_bind_external_service_interface.netmask: 255.255.255.0&lt;/code&gt; - The network mask of the external interface of the services machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After configuring the previous variables correctly, we can proceed to run the deployment command.&lt;/p&gt;

&lt;h5&gt;
  
  
  Step 3 (run the deployment command)
&lt;/h5&gt;

&lt;p&gt;Now we deploy &lt;a href="https://github.com/kubeinit/kubeinit"&gt;KubeInit&lt;/a&gt; as usual:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Remember that you can execute this deployment command before creating the bridge with the CentOS cockpit; the bridge creation has no impact on how we deploy KubeInit.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook \
    -v \
    --user root \
    -i ./hosts/okd/inventory \
    --become \
    --become-user root \
    -e "{ \
      'kubeinit_bind_external_service_interface_enabled': 'true', \
      'kubeinit_bind_external_service_interface': { \
        'attached': 'kiextbr0', \
        'dev': 'eth1', \
        'ip': '10.19.41.157', \
        'gateway': '10.19.41.254', \
        'netmask': '255.255.255.0' \
      } \
    }" \
    ./playbooks/okd.yml

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
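&lt;p&gt;Escaping the inline &lt;code&gt;-e&lt;/code&gt; JSON by hand is error-prone. As a sketch (the shell variable name below is arbitrary), the same extra vars can be kept in a variable and validated before the playbook runs:&lt;/p&gt;

```shell
extra_vars='{
  "kubeinit_bind_external_service_interface_enabled": "true",
  "kubeinit_bind_external_service_interface": {
    "attached": "kiextbr0",
    "dev": "eth1",
    "ip": "10.19.41.157",
    "gateway": "10.19.41.254",
    "netmask": "255.255.255.0"
  }
}'

# Fail early if the JSON is malformed
if printf '%s' "$extra_vars" | python3 -m json.tool > /dev/null; then
  echo "extra vars are valid JSON"
fi

# Then: ansible-playbook ... -e "$extra_vars" ./playbooks/okd.yml
```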



&lt;h5&gt;
  
  
  Step 4 (adjust your resolv.conf)
&lt;/h5&gt;

&lt;p&gt;You must reach the cluster's external endpoints by DNS, that is, the dashboard and any other deployed application (you can add host entries for each name pointing to the service machine instead, but this can be cumbersome).&lt;/p&gt;

&lt;p&gt;For example, configure your local DNS resolver to point to &lt;code&gt;10.19.41.157&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt; [ccamacho@localhost]$ cat /etc/resolv.conf
 nameserver 10.19.41.157
 nameserver 8.8.8.8

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After that, you should be able to access the cluster without any issue and use it for any purpose you have.&lt;/p&gt;

&lt;p&gt;Voilà!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KpSU_3co--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/net/dashboard.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KpSU_3co--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/net/dashboard.PNG" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Final considerations
&lt;/h4&gt;

&lt;p&gt;One of the most interesting changes in BIND is how we manage both external and internal views.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_cOy3mnQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/net/bind_views.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_cOy3mnQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/net/bind_views.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this case, we have an &lt;code&gt;internal&lt;/code&gt; and an &lt;code&gt;external&lt;/code&gt; view that behave differently depending on where the requests originate.&lt;/p&gt;

&lt;p&gt;If a DNS request arrives through the cluster's external interface, the reply is built from the external view. In this case we only answer with the external HAProxy endpoints related to the services node; that is, we reply only with &lt;code&gt;10.19.41.157&lt;/code&gt;, as it is the only address that needs to be presented externally.&lt;/p&gt;

&lt;h4&gt;
  
  
  The end
&lt;/h4&gt;

&lt;p&gt;If you like this post, please try the code, raise issues, and ask for more details, features, or anything you are interested in. Also, it would be awesome if you became a stargazer to catch updates and new features.&lt;/p&gt;

&lt;p&gt;This is the main project &lt;a href="https://github.com/ccamacho/kubeinit"&gt;repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Happy KubeIniting!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Updated 2020/08/25:&lt;/em&gt;&lt;/strong&gt; First version (draft).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Updated 2020/08/26:&lt;/em&gt;&lt;/strong&gt; Published.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
    </item>
    <item>
      <title>A review of the MachineConfig operator</title>
      <dc:creator>Carlos Camacho</dc:creator>
      <pubDate>Sun, 16 Aug 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/ccamacho/a-review-of-the-machineconfig-operator-30eh</link>
      <guid>https://dev.to/ccamacho/a-review-of-the-machineconfig-operator-30eh</guid>
      <description>&lt;p&gt;The latest versions of OpenShift rely on operators to completely manage the cluster and OS state,this &lt;strong&gt;state&lt;/strong&gt; includes for instance, configuration changes and OS upgrades.For example, to install additional packages or changing any configuration file to execute whatever task isrequired, the MachineConfig operator should be the one in charge of applying these changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yNHS36US--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/machineconfig/machineconfig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yNHS36US--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/machineconfig/machineconfig.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These configuration changes are executed by an instance of the 'openshift-machine-config-operator' pod; after the new state is reached, the updated nodes are automatically restarted.&lt;/p&gt;

&lt;p&gt;There are several mature and production-ready technologies for automating and applying configuration changes to the underlying infrastructure nodes, such as Ansible, Helm, Puppet, Chef, and many others. Yet the MachineConfig operator forces users to adopt this new method and pretty much discard any previously developed automation infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  The MachineConfig operator is more than installing packages and updating configuration files
&lt;/h3&gt;

&lt;p&gt;I see the MachineConfig operator as a finite state machine representing cluster-wide sequential logic that ensures the cluster's state is preserved and consistent. This notion has several very powerful benefits, like making the cluster resilient to failures due to unfulfilled conditions in each of the sub-stages of this finite state machine workflow.&lt;/p&gt;

&lt;p&gt;For instance, the following example shows a practical application of the benefits of this approach.&lt;/p&gt;

&lt;p&gt;We assume the following architecture reference:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3 master nodes.&lt;/li&gt;
&lt;li&gt;1 worker node.&lt;/li&gt;
&lt;li&gt;The master nodes are not schedulable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is a quite simple multi-master deployment with a single worker node for development purposes. Before the cluster was deployed, the master nodes were set as "mastersSchedulable: False" by running &lt;code&gt;sed -i 's/mastersSchedulable: true/mastersSchedulable: False/' install_dir/manifests/cluster-scheduler-02-config.yml&lt;/code&gt;. Now, applying a configuration change to the worker node after deploying the cluster will fail; let's investigate why.&lt;/p&gt;

&lt;p&gt;The following YAML file will be applied; its content is correct and it should work out of the box:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; ~/99_kubeinit_extra_config_worker.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  creationTimestamp: null
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-kubeinit-extra-config-worker
spec:
  osImageURL: ''
  config:
    ignition:
      config:
        replace:
          verification: {}
      security:
        tls: {}
      timeouts: {}
      version: 2.2.0
    networkd: {}
    passwd: {}
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9iYXNoCnNldCAteAptYWluKCkgewpzdWRvIHJwbS1vc3RyZWUgaW5zdGFsbCBwb2xpY3ljb3JldXRpbHMtcHl0aG9uLXV0aWxzCnN1ZG8gc2VkIC1pICdzL2VuZm9yY2luZy9kaXNhYmxlZC9nJyAvZXRjL3NlbGludXgvY29uZmlnIC9ldGMvc2VsaW51eC9jb25maWcKfQptYWluCg==
          verification: {}
        filesystem: root
        mode: 0755
        path: /usr/local/bin/kubeinit_kubevirt_extra_config_script
EOF
oc apply -f ~/99_kubeinit_extra_config_worker.yaml

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The defined MachineConfig object will create a file at &lt;code&gt;/usr/local/bin/kubeinit_kubevirt_extra_config_script&lt;/code&gt; that, once executed, will install a package and disable SELinux in the worker nodes.&lt;/p&gt;
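&lt;p&gt;The script is embedded in the MachineConfig as a base64 &lt;code&gt;data:&lt;/code&gt; URL; you can inspect what it does by decoding the payload locally:&lt;/p&gt;

```shell
# Decode the base64 payload embedded in the MachineConfig above
echo 'IyEvdXNyL2Jpbi9iYXNoCnNldCAteAptYWluKCkgewpzdWRvIHJwbS1vc3RyZWUgaW5zdGFsbCBwb2xpY3ljb3JldXRpbHMtcHl0aG9uLXV0aWxzCnN1ZG8gc2VkIC1pICdzL2VuZm9yY2luZy9kaXNhYmxlZC9nJyAvZXRjL3NlbGludXgvY29uZmlnIC9ldGMvc2VsaW51eC9jb25maWcKfQptYWluCg==' | base64 -d
```

&lt;p&gt;The decoded script runs &lt;code&gt;rpm-ostree install policycoreutils-python-utils&lt;/code&gt; and switches SELinux to disabled in &lt;code&gt;/etc/selinux/config&lt;/code&gt;.&lt;/p&gt;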

&lt;p&gt;Now, let’s check the state of the worker machine config pool.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc get machineconfigpool/worker

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This is the result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
worker rendered-worker-a9.. False True True 1 0 0 1 12h

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We can see that the operator state is degraded, and there is not much more information about it. Let’s get the status of the machine-config pods.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pod -o wide --all-namespaces | grep machine-config

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It is possible to see that all pods are running without issues.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openshift-machine-config-operator etcd-quorum-guard-7bb76959df-5bj7g 1/1 Running 0 11h 10.0.0.2 okd-master-02 &amp;lt;none&amp;gt; &amp;lt;none&amp;gt;
openshift-machine-config-operator etcd-quorum-guard-7bb76959df-jdtbv 1/1 Running 0 11h 10.0.0.3 okd-master-03 &amp;lt;none&amp;gt; &amp;lt;none&amp;gt;
openshift-machine-config-operator etcd-quorum-guard-7bb76959df-sndb2 1/1 Running 0 11h 10.0.0.1 okd-master-01 &amp;lt;none&amp;gt; &amp;lt;none&amp;gt;
openshift-machine-config-operator machine-config-controller-7cbb584655-bfjmh 1/1 Running 0 11h 10.100.0.20 okd-master-01 &amp;lt;none&amp;gt; &amp;lt;none&amp;gt;
openshift-machine-config-operator machine-config-daemon-ctczg 2/2 Running 0 12h 10.0.0.3 okd-master-03 &amp;lt;none&amp;gt; &amp;lt;none&amp;gt;
openshift-machine-config-operator machine-config-daemon-m82gz 2/2 Running 0 12h 10.0.0.2 okd-master-02 &amp;lt;none&amp;gt; &amp;lt;none&amp;gt;
openshift-machine-config-operator machine-config-daemon-qfc82 2/2 Running 0 12h 10.0.0.1 okd-master-01 &amp;lt;none&amp;gt; &amp;lt;none&amp;gt;
openshift-machine-config-operator machine-config-daemon-vwh4d 2/2 Running 0 11h 10.0.0.4 okd-worker-01 &amp;lt;none&amp;gt; &amp;lt;none&amp;gt;
openshift-machine-config-operator machine-config-operator-c98bb964d-5vnww 1/1 Running 0 11h 10.100.0.21 okd-master-01 &amp;lt;none&amp;gt; &amp;lt;none&amp;gt;
openshift-machine-config-operator machine-config-server-g75x5 1/1 Running 0 12h 10.0.0.2 okd-master-02 &amp;lt;none&amp;gt; &amp;lt;none&amp;gt;
openshift-machine-config-operator machine-config-server-kpwqb 1/1 Running 0 12h 10.0.0.3 okd-master-03 &amp;lt;none&amp;gt; &amp;lt;none&amp;gt;
openshift-machine-config-operator machine-config-server-n9q2r 1/1 Running 0 12h 10.0.0.1 okd-master-01 &amp;lt;none&amp;gt; &amp;lt;none&amp;gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Let’s check the logs of the machine-config-daemon pod in the worker node. This pod has two containers: machine-config-daemon and oauth-proxy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl logs -f machine-config-daemon-vwh4d -n machine-config-operator

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now, it is possible to see the actual error in the container execution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I0816 06:58:42.985762 3240 update.go:283] Checking Reconcilable for config rendered-worker-a9681850fe39078ea0f42bd017922eb7 to rendered-worker-7131e04f110c489a0ad171e719cedc24
I0816 06:58:43.849830 3240 update.go:1403] Starting update from rendered-worker-a9681850fe39078ea0f42bd017922eb7 to rendered-worker-7131e04f110c489a0ad171e719cedc24: &amp;amp;{osUpdate:false kargs:false fips:false passwd:false files:true units:false kernelType:false}
I0816 06:58:43.852961 3240 update.go:1403] Update prepared; beginning drain
E0816 06:58:43.911711 3240 daemon.go:336] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-node-tuning-operator/tuned-48g5s, openshift-dns/dns-default-h2lt5, openshift-image-registry/node-ca-9z9zt, openshift-machine-config-operator/machine-config-daemon-vwh4d, openshift-monitoring/node-exporter-m5p2n, openshift-multus/multus-lnsng, openshift-sdn/ovs-5xzqs, openshift-sdn/sdn-vplps
.
.
.
I0816 06:58:43.918261 3240 daemon.go:336] evicting pod openshift-ingress/router-default-796df5847b-9hxzx
E0816 06:58:43.928176 3240 daemon.go:336] error when evicting pod "router-default-796df5847b-9hxzx" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I0816 07:08:44.981198 3240 update.go:172] Draining failed with: error when evicting pod "router-default-796df5847b-9hxzx": global timeout reached: 1m30s, retrying
E0816 07:08:44.981273 3240 writer.go:135] Marking Degraded due to: failed to drain node (5 tries): timed out waiting for the condition: error when evicting pod "router-default-796df5847b-9hxzx": global timeout reached: 1m30s

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The log shows that the machine config operator failed to drain the worker node before applying the configuration and executing the restart, as the router-default pod could not be rescheduled on another node. Not being able to reschedule this pod &lt;code&gt;violates the pod's disruption budget&lt;/code&gt;, thus the operator is now degraded.&lt;/p&gt;

&lt;p&gt;Let’s check the router-default pod status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pod -o wide --all-namespaces | grep "router-default"

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It is possible to see that the replacement pod is pending scheduling.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openshift-ingress router-default-796df5847b-9hxzx 1/1 Running 0 12h 10.0.0.4 okd-worker-01 &amp;lt;none&amp;gt; &amp;lt;none&amp;gt;
openshift-ingress router-default-796df5847b-h8bm4 0/1 Pending 0 12h &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; &amp;lt;none&amp;gt; &amp;lt;none&amp;gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Let’s check its status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc describe pod router-default-796df5847b-h8bm4 -n openshift-ingress

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now, it is possible to confirm that the pod is &lt;code&gt;Pending&lt;/code&gt;, as there is no available node on which to schedule it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Name: router-default-796df5847b-h8bm4
Namespace: openshift-ingress
.
.
.
Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Warning FailedScheduling &amp;lt;unknown&amp;gt; default-scheduler 0/4 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match node selector.

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We check the nodes’ status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc get nodes

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Again, it is possible to see that the MachineConfig operator tried to drain the node but failed when rescheduling its pods.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME STATUS ROLES AGE VERSION
okd-master-01 Ready master 12h v1.18.3
okd-master-02 Ready master 12h v1.18.3
okd-master-03 Ready master 12h v1.18.3
okd-worker-01 Ready,SchedulingDisabled worker 12h v1.18.3

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Why did this happen?
&lt;/h3&gt;

&lt;p&gt;Master nodes were configured as non-schedulable for regular workloads when the cluster was deployed, so the operator simply did not have enough room to reschedule the pods on other nodes.&lt;/p&gt;
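&lt;p&gt;For reference, this is a sketch of the manual recovery step, to be run against the live cluster once the offending change is reverted or extra capacity is added (the node name comes from the output above):&lt;/p&gt;

```shell
# The worker was cordoned by the drain (Ready,SchedulingDisabled).
# Lifting the cordon lets the scheduler place the Pending router pod again.
if command -v oc > /dev/null; then
  oc adm uncordon okd-worker-01
  # Confirm the pod leaves the Pending state:
  oc get pods -n openshift-ingress -o wide
else
  echo "run this against the live cluster (oc client required)"
fi
```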

&lt;p&gt;The benefits of this approach (using the MachineConfig operator) are considerable, as the operator is smart enough to avoid breaking services when it finds that a configuration change cannot bring the system back to a consistent state.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zXf7RBtA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/machineconfig/rabbithole.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zXf7RBtA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/machineconfig/rabbithole.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  But, not everything is as perfect as it sounds…
&lt;/h3&gt;

&lt;p&gt;Ignition files are used to apply these configuration changes, but their JSON representation is not human-readable at all. For this reason we write Fedora CoreOS Configuration (FCC) files in YAML; the Fedora CoreOS Config Transpiler then converts each YAML FCC file into the JSON Ignition file that is actually applied.&lt;/p&gt;

&lt;p&gt;There is a big limitation in the resources that can be defined: only storage, systemd units, and users are supported. So, to execute anything else, the user has to render a script that is called once by a systemd service after the node restarts. After using technologies like Ansible, Puppet, or Chef for many years, this looks like a hacky and dirty way for users to apply custom configurations.&lt;/p&gt;
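&lt;p&gt;As a rough sketch of that run-once pattern (all names and paths below are made up for illustration), an FCC file ships the script through the storage section and a oneshot systemd unit that guards itself with a marker file:&lt;/p&gt;

```shell
# Write a minimal FCC file; "custom-setup" is a hypothetical name.
cat > custom-setup.fcc <<'EOF'
variant: fcos
version: 1.0.0
storage:
  files:
    - path: /usr/local/bin/custom-setup.sh
      mode: 0755
      contents:
        inline: |
          #!/bin/bash
          echo "apply the custom configuration here"
systemd:
  units:
    - name: custom-setup.service
      enabled: true
      contents: |
        [Unit]
        Description=Run the custom setup script once after boot
        ConditionPathExists=!/var/lib/custom-setup.done

        [Service]
        Type=oneshot
        ExecStart=/usr/local/bin/custom-setup.sh
        ExecStartPost=/usr/bin/touch /var/lib/custom-setup.done

        [Install]
        WantedBy=multi-user.target
EOF

# Transpile the YAML into the Ignition JSON that the MachineConfig consumes
# (assuming the FCC transpiler binary is available as "fcct"):
#   fcct --pretty --strict custom-setup.fcc > custom-setup.ign
```

Embedding the resulting Ignition output in a MachineConfig object is what triggers the rolling drain/reboot described earlier.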

&lt;p&gt;Debugging is another weak point: if there is a problem with your MachineConfig object you might see only a &lt;strong&gt;degraded&lt;/strong&gt; state, forcing you to dig into the container logs and hopefully find the source of whatever issue you hit.&lt;/p&gt;

&lt;p&gt;I believe there is a lot of room for improvement in the MachineConfig operator; I would love to see an Ansible interface to plug my configuration changes into the openshift-machine-config-operator pod. Still, as shown here, the operator improves the system’s resiliency by preventing a configuration change from breaking what we defined as the &lt;strong&gt;cluster’s consistent state&lt;/strong&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The easiest and fastest way to deploy an OKD 4.5 cluster in a Libvirt/KVM host</title>
      <dc:creator>Carlos Camacho</dc:creator>
      <pubDate>Fri, 31 Jul 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/ccamacho/the-easiest-and-fastest-way-to-deploy-an-okd-4-5-cluster-in-a-libvirt-kvm-host-1h44</link>
      <guid>https://dev.to/ccamacho/the-easiest-and-fastest-way-to-deploy-an-okd-4-5-cluster-in-a-libvirt-kvm-host-1h44</guid>
      <description>&lt;p&gt;Long story short… &lt;strong&gt;A single command to deploy an OKD 4.5 cluster in ~30 minutes (3 controllers, 1 to 10 workers, 1 service, and 1 bootstrap node), forget about following endless and outdated documentation.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qaDM_SJh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/okd-libvirt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qaDM_SJh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/okd-libvirt.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I wrote a lot of automation while working, learning, and practicing with OpenStack/RHOSP/Kubernetes/OpenShift/OKD over the last two years, but suddenly I “lost” the machine where I hosted all these valuable code snippets.&lt;/p&gt;

&lt;p&gt;With all this in mind, I had to quickly invest some time to put that code back together. The first part is related to K8s/OKD, and I created a small project called KubeInit, “The KUBErnetes INITiator”, to share it with the world.&lt;/p&gt;

&lt;p&gt;The first (and, for now, only) playbook deploys, in a single command, a fully operational OKD 4.5 cluster with 3 master nodes, 1 compute node (configurable from 1 to 10 nodes), 1 services node, and 1 dummy bootstrap node. The services node runs HAProxy, Bind, Apache HTTPd, and NFS to host some of the required external cluster services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ahfRAkS---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/fast.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ahfRAkS---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/fast.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A large amount of RAM (64GB is the smallest amount I was able to deploy with, using a reduced configuration); configure this in the &lt;a href="https://github.com/kubeinit/kubeinit/blob/master/hosts/okd/inventory#L8"&gt;inventory file&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Be able to log in as &lt;code&gt;root&lt;/code&gt; in the hypervisor node.&lt;/li&gt;
&lt;li&gt;Reach the hypervisor node using the hostname &lt;code&gt;nyctea&lt;/code&gt; (&lt;a href="https://github.com/kubeinit/kubeinit/blob/master/hosts/okd/inventory#L56"&gt;you can change this in the inventory&lt;/a&gt;) or add an entry in your &lt;code&gt;/etc/hosts&lt;/code&gt; file.&lt;/li&gt;
&lt;/ul&gt;
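&lt;p&gt;A quick pre-flight check for that last requirement (the IP below is just an example; use your hypervisor’s real address):&lt;/p&gt;

```shell
# Confirm the hypervisor is reachable by name before launching the playbook.
# If it is not, add a hosts entry first, e.g.:
#   echo "192.168.1.50 nyctea" | sudo tee -a /etc/hosts
if getent hosts nyctea > /dev/null; then
  echo "nyctea resolves"
else
  echo "nyctea not resolvable yet"
fi
```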

&lt;p&gt;That’s it, super simple…&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/kubeinit/kubeinit.git
cd kubeinit
ansible-playbook \
    --user root \
    -v -i ./hosts/okd/inventory \
    --become \
    --become-user root \
    ./playbooks/okd.yml

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You should get something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ccamacho@wakawaka kubeinit]$ time ansible-playbook \
                                  --user root \
                                  -i ./hosts/okd/inventory \
                                  --become \
                                  --become-user root \
                                  ./playbooks/okd.yml

Using /etc/ansible/ansible.cfg as config file
PLAY [Main deployment playbook for OKD] ********************************************
TASK [Gathering Facts] *************************************************************
ok: [hypervisor-01]
.
.
.
"NAME STATUS ROLES AGE VERSION",
"okd-master-01 Ready master 16m v1.18.3",
"okd-master-02 Ready master 15m v1.18.3",
"okd-master-03 Ready master 12m v1.18.3",
"okd-worker-01 Ready worker 6m12s v1.18.3"
]}]}}

PLAY RECAP *************************************************************************
hypervisor-01: ok=83 changed=39 unreachable=0 failed=0 skipped=6 rescued=0 ignored=3   

real 33m49.483s
user 2m30.920s
sys 0m19.678s

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;A ready to use OKD 4.5 cluster in ~30 minutes, yeah!&lt;/p&gt;

&lt;p&gt;What you can do now is log in to your hypervisor and check the cluster status from the service machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh root@nyctea
ssh root@10.0.0.100
# This is now the service node (check the Ansible inventory for IPs and other details)
export KUBECONFIG=~/install_dir/auth/kubeconfig
oc get pvc -n openshift-image-registry
oc get pv
oc get clusteroperator image-registry
oc get nodes

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The root password of the services machine is &lt;a href="https://github.com/kubeinit/kubeinit/blob/master/playbooks/okd.yml#L54"&gt;defined as a variable in the playbook&lt;/a&gt;, but the public key of the hypervisor’s root user is deployed across all the cluster nodes, so you should be able to connect to any node from the hypervisor machine using key-based authentication. Connect as the &lt;code&gt;root&lt;/code&gt; user to the services machine (because it is CentOS-based) or as the &lt;code&gt;core&lt;/code&gt; user to any other node (CoreOS-based), using the IP addresses defined in the inventory file.&lt;/p&gt;

&lt;p&gt;There is a reason for this password-based access to the services node: sometimes we need to connect to it for debugging purposes during a deployment, and if the user has no password we cannot log in through the console. The CoreOS nodes, on the other hand, never need console logins once they are bootstrapped correctly; just wait until they are deployed and connect to them over SSH.&lt;/p&gt;
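&lt;p&gt;To make those two SSH hops less verbose, you can add something like this to your SSH client configuration (the &lt;code&gt;okd-service&lt;/code&gt; alias is an arbitrary name I made up; the IP comes from the inventory defaults):&lt;/p&gt;

```shell
# Append a host alias that jumps through the hypervisor (nyctea) straight
# to the services node. ProxyJump requires OpenSSH 7.3 or newer.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host okd-service
    HostName 10.0.0.100
    User root
    ProxyJump root@nyctea
EOF
```

After this, a single `ssh okd-service` lands on the services node.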

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ygkru3KM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/happy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ygkru3KM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/kubeinit/happy.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;The code is not perfect by any means, but it is a good example of how to use a libvirt host to run your OKD cluster, and it is incredibly easy to improve and to add other roles and scenarios.&lt;/p&gt;

&lt;p&gt;As a next step, I’ll clean up all the lint nits.&lt;/p&gt;

&lt;p&gt;This is the GitHub repository &lt;a href="https://github.com/kubeinit/kubeinit/"&gt;https://github.com/kubeinit/kubeinit/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Please if you like it, add some comments, test it, use it, hack it, break it, or become a stargazer ;)&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Badgeboard - GitHub actions, where is my CI dashboard!</title>
      <dc:creator>Carlos Camacho</dc:creator>
      <pubDate>Wed, 04 Dec 2019 00:00:00 +0000</pubDate>
      <link>https://dev.to/ccamacho/badgeboard-github-actions-where-is-my-ci-dashboard-1c8e</link>
      <guid>https://dev.to/ccamacho/badgeboard-github-actions-where-is-my-ci-dashboard-1c8e</guid>
      <description>&lt;p&gt;A widely used term in the agile world is the information radiator, which refers to display the project’s critical information as simple as possible. These information radiators improve the team’s communication by amplifying pieces of data to get a better notion of self-awareness.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR;
&lt;/h2&gt;

&lt;p&gt;If you just want to go straight to the solution of how to convert SVG badges to a widget-based CI dashboard, just go to the &lt;a href="https://github.com/pystol/badgeboard"&gt;Badgeboard&lt;/a&gt; repository or open the &lt;a href="https://badgeboard.pystol.org"&gt;demo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Otherwise, continue reading.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v0-fuzXp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/badgeboard/01_build_monitor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v0-fuzXp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/badgeboard/01_build_monitor.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are beginning to apply agile methodologies in your team, a good information radiator can be for example a CI status dashboard.&lt;/p&gt;

&lt;p&gt;The purpose of these information radiators, as the name implies, is to radiate information: something people know about and can see easily. Keep in mind that a good information radiator will adapt to the needs of the project throughout its life, so try not to invest too much time in its initial design, and make sure that it can be easily changed/fixed/used/improved.&lt;/p&gt;

&lt;p&gt;Some features of these information radiators:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;It reflects the now&lt;/strong&gt;: information radiators always show what is going on (whether things are going north or south). They help us see what matters to the team right now and what to focus on, e.g. when we hit regressions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Minimum information, maximum value&lt;/strong&gt;: simple and highly valuable. The more information it shows, the less focus there is on what is important, and the more effort it takes to maintain the panel.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Must be alive&lt;/strong&gt;: this information artifact should be kept up to date. As soon as reality changes, the artifact’s status should change too.&lt;/li&gt;
&lt;/ul&gt;




&lt;h1&gt;
  
  
  CI Dashboards
&lt;/h1&gt;

&lt;p&gt;CI dashboards are a graphical representation of the continuous integration test results, usually HTML-based, displaying the current state of the test runs in colors (red, yellow, and green).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_wqJ2n9B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/badgeboard/02_intro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_wqJ2n9B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/badgeboard/02_intro.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  GitHub badges
&lt;/h1&gt;

&lt;p&gt;We can see status badges as a brief summary of the CI pipeline status. Badges &lt;a href="https://docs.gitlab.com/ee/user/project/badges.html"&gt;1&lt;/a&gt; are a unified way to present condensed pieces of information about your projects. They can also be any visual token of achievement, affiliation, authorization, or another trust relationship.&lt;/p&gt;

&lt;p&gt;They consist of a small image and, optionally, a URL that the image points to. Examples of badges are the pipeline status, test coverage, or ways to contact the project maintainers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yL3Xz1fD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/badgeboard/03_badges.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yL3Xz1fD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/badgeboard/03_badges.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  What now?
&lt;/h1&gt;

&lt;p&gt;We introduce a tool to convert SVG badges into CI dashboards (&lt;a href="https://github.com/pystol/badgeboard"&gt;Badgeboard&lt;/a&gt;).&lt;/p&gt;

&lt;h2&gt;
  
  
  GitHub actions -&amp;gt; No CI dashboard by default :(
&lt;/h2&gt;

&lt;p&gt;I really liked having the big dashboard view on a big screen, where everyone can check it quickly and easily. If we move to, e.g., GitHub Actions, we lose that graphical representation and are left with only a badge-based view.&lt;/p&gt;

&lt;h2&gt;
  
  
  Yehi!!! Here we have badgeboard!!!
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/pystol/badgeboard"&gt;Badgeboard&lt;/a&gt; is an awesome information radiator that shows the status of your project’s badges as a widget-based dashboard; in particular, it is the main CI dashboard of &lt;a href="https://github.com/pystol/pystol"&gt;Pystol&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It is a very simple tool that converts the information inside any SVG badge you define, from any source, into a widget-based dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--U2YUAdlQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/badgeboard/04_badgeboard.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--U2YUAdlQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/badgeboard/04_badgeboard.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Just &lt;a href="https://badgeboard.pystol.org/"&gt;open the index.html&lt;/a&gt; file and see how the dashboard is rendered.&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;p&gt;None! Just clone the repo and open the index.html file in your favorite browser.&lt;/p&gt;

&lt;p&gt;Once you have a copy, make the adjustments to the configuration file located in &lt;strong&gt;assets/data_source/badges_list.js&lt;/strong&gt; to use your own badges.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Due to CORS restrictions, badgeboard uses a &lt;a href="https://cors-anywhere.herokuapp.com/"&gt;proxy&lt;/a&gt; to add cross-origin headers when building the widgets panel. Check additional information about the CORS proxy on &lt;a href="https://www.npmjs.com/package/cors-anywhere"&gt;NPM&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;p&gt;We capture the list of badges (SVG files) and read the color information from a single pixel; depending on that pixel’s color, the widget is painted with its corresponding color.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ENl01Hvg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/badgeboard/05_measure.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ENl01Hvg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/badgeboard/05_measure.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This would be the usual view of the project badges.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P8VuHH6t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/badgeboard/06_badges.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P8VuHH6t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/badgeboard/06_badges.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding your badges and colors
&lt;/h2&gt;

&lt;p&gt;Use the &lt;strong&gt;coordinates_testing.html&lt;/strong&gt; file to determine, based on the SVG coordinates, the RGB color to be used in the JS configuration file.&lt;/p&gt;

&lt;p&gt;To do so, copy the link to your badge, find the badge example in the file and replace it with yours, open the file in a browser, watch the console logs, and move the mouse over the badge to see the coordinates and the RGB color that matches them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding custom color badges
&lt;/h2&gt;

&lt;p&gt;To add new colors, edit the &lt;strong&gt;assets/css/custom.css&lt;/strong&gt; file and add new color definitions for the widgets. Once you define the new color, in the configuration file called &lt;strong&gt;assets/data_source/badges_list.js&lt;/strong&gt; use the new color like in the following example.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;colors:[['&amp;lt;new_color_definition&amp;gt;','&amp;lt;matching_rgb_from_the_badge&amp;gt;'],['status-good','48,196,82']],
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;p&gt;If the board does not render correctly (no widgets at all), most likely you refreshed the page too many times. We use a &lt;strong&gt;CORS&lt;/strong&gt; proxy to add cross-origin headers when building the widgets panel.&lt;/p&gt;

&lt;p&gt;The number of requests it can handle is limited in order to avoid crashing the container, so we can all use it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Please read the requirements&lt;/strong&gt; and use your own &lt;a href="https://www.npmjs.com/package/cors-anywhere"&gt;NPM proxy&lt;/a&gt; so these restrictions go away.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;We use both &lt;a href="https://github.com/smashing/smashing"&gt;smashing&lt;/a&gt; and &lt;a href="https://github.com/ducksboard/gridster.js"&gt;gridster&lt;/a&gt; to create the dashboard and its widgets.&lt;/p&gt;

&lt;h2&gt;
  
  
  License
&lt;/h2&gt;

&lt;p&gt;Badgeboard is part of &lt;a href="https://github.com/pystol/pystol"&gt;Pystol&lt;/a&gt;, and &lt;a href="https://github.com/pystol/pystol"&gt;Pystol&lt;/a&gt; is open source software licensed under the &lt;a href="https://dev.toLICENSE"&gt;Apache license&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Next steps
&lt;/h1&gt;

&lt;p&gt;It would be awesome to get some feedback about the tool, so please feel free to file issues, open pull requests, or leave comments on this post or in the &lt;a href="https://github.com/pystol/badgeboard"&gt;Badgeboard&lt;/a&gt; repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  List of TO-DOs
&lt;/h2&gt;

&lt;p&gt;There are still some bits to fix in &lt;a href="https://github.com/pystol/badgeboard"&gt;Badgeboard&lt;/a&gt;, for example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;del&gt;Make the links from the widgets work.&lt;/del&gt;&lt;/li&gt;
&lt;li&gt;Move common hardcoded bits into variables for an easier update.&lt;/li&gt;
&lt;li&gt;Improve the documentation.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Updated 2019/12/04:&lt;/strong&gt; Initial version.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
    </item>
    <item>
      <title>Oil painting and Minikube - Installing Minikube in Centos 7</title>
      <dc:creator>Carlos Camacho</dc:creator>
      <pubDate>Sun, 13 Oct 2019 00:00:00 +0000</pubDate>
      <link>https://dev.to/ccamacho/oil-painting-and-minikube-installing-minikube-in-centos-7-gj9</link>
      <guid>https://dev.to/ccamacho/oil-painting-and-minikube-installing-minikube-in-centos-7-gj9</guid>
      <description>&lt;p&gt;Today I got some time to do some oil painting and reading about techy stuff :)&lt;/p&gt;

&lt;p&gt;This post is a brief summary of the deployment steps for installing Minikube on a CentOS 7 bare-metal machine, and also a chance to show you my painting (check the fedora!).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4AwwwQws--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/Terraza-En-Grecia-by-Carlos-Camacho.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4AwwwQws--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.anstack.com/static/Terraza-En-Grecia-by-Carlos-Camacho.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following steps need to run on the hypervisor machine on which you would like to have your Minikube deployment.&lt;/p&gt;

&lt;p&gt;Execute them one after the other; the idea of this recipe is to have something ready for copying/pasting.&lt;/p&gt;

&lt;p&gt;The usual steps are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;01 - Prepare the hypervisor node.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, let’s install some dependencies. Same Hypervisor node, same &lt;code&gt;root&lt;/code&gt; user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# In this dev. env. /var is only 50GB, so I will create
# a sym link to another location with more capacity.
sudo mkdir -p /home/libvirt/
sudo ln -sf /home/libvirt/ /var/lib/libvirt

sudo mkdir -p /home/docker/
sudo ln -sf /home/docker/ /var/lib/docker

# Install some packages
sudo yum install dnf -y
sudo dnf update -y
sudo dnf groupinstall "Virtualization Host" -y
sudo dnf install libvirt qemu-kvm virt-install virt-top libguestfs-tools bridge-utils -y
sudo dnf install git lvm2 lvm2-devel -y
sudo dnf install libvirt-python python-lxml libvirt curl -y
sudo dnf install binutils qt gcc make patch libgomp -y
sudo dnf install glibc-headers glibc-devel kernel-headers -y
sudo dnf install kernel-devel dkms bash-completion -y
sudo dnf install nano wget -y
sudo dnf install python3-pip -y
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;02 - Check that the kernel modules are OK.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Check the kernel modules are OK
sudo lsmod | grep kvm
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;03 - Enable libvirtd, disable SElinux xD and firewalld.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Enable libvirtd
sudo systemctl start libvirtd
sudo systemctl enable libvirtd

# Disable selinux &amp;amp; stop firewall as needed.
setenforce 0
perl -pi -e 's/SELINUX\=enforcing/SELINUX\=disabled/g' /etc/selinux/config
systemctl stop firewalld
systemctl disable firewalld
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;04 - Install Minikube.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Install minikube
/usr/bin/curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 &amp;amp;&amp;amp; chmod +x minikube
cp -p minikube /usr/local/bin &amp;amp;&amp;amp; rm -f minikube

# Create the repo for kubernetes
cat &amp;lt;&amp;lt; EOF &amp;gt; /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Install kubectl
sudo dnf install kubectl -y
source &amp;lt;(kubectl completion bash)
echo "source &amp;lt;(kubectl completion bash)" &amp;gt;&amp;gt; ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;05 - Create the toor user (from the Hypervisor node, as root).&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo useradd toor
echo "toor:toor" | sudo chpasswd
echo "toor ALL=(root) NOPASSWD:ALL" \
  | sudo tee /etc/sudoers.d/toor
sudo chmod 0440 /etc/sudoers.d/toor
sudo su - toor

cd
mkdir .ssh
ssh-keygen -t rsa -N "" -f .ssh/id_rsa
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now, continue as the &lt;code&gt;toor&lt;/code&gt; user and prepare the hypervisor node for Minikube.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;06 - Install Docker.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We would also like to have Docker on the hypervisor node for building images and debugging purposes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install docker
sudo dnf install docker -y
sudo usermod --append --groups dockerroot toor
sudo tee /etc/docker/daemon.json &amp;gt;/dev/null &amp;lt;&amp;lt;-EOF
{
    "live-restore": true,
    "group": "dockerroot"
}
EOF
sudo systemctl start docker
sudo systemctl enable docker
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;07 - Finish the Minikube configuration.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Add to bashrc in toor user
source &amp;lt;(kubectl completion bash)
echo "source &amp;lt;(kubectl completion bash)" &amp;gt;&amp;gt; ~/.bashrc

# We add toor to the libvirtd group
sudo usermod --append --groups libvirt toor
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;08 - Start Minikube.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube start --memory=65536 --cpus=4 --vm-driver kvm2
export no_proxy=$no_proxy,$(minikube ip)
nohup kubectl proxy --address='0.0.0.0' --port=8001 --disable-filter=true &amp;amp;
sleep 30
minikube addons enable dashboard
nohup minikube dashboard &amp;amp;
minikube addons open dashboard
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The Minikube instance should be reachable from the following URL:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://machine%5C_ip:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"&gt;http://machine\_ip:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# To stop/delete
kubectl delete deploy,svc --all
minikube stop
minikube delete
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;09 - Minikube cheat sheet.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# set &amp;amp; get current context of cluster
 kubectl config use-context minikube
 kubectl config current-context

# fetch all the kubernetes objects for a namespace
 kubectl get all -n kube-system

# display cluster details
 kubectl cluster-info

# set custom memory and cpu 
 minikube config set memory 4096
 minikube config set cpus 2

# fetch cluster ip
 minikube ip

# ssh to the minikube vm
 minikube ssh

# display addons list and status
 minikube addons list

# exposes service to vm &amp;amp; retrieves url 
 minikube service elasticsearch
 minikube service elasticsearch --url
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Updated 2019/10/13:&lt;/strong&gt; Initial version.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Updated 2019/10/15:&lt;/strong&gt; Install also docker in the hypervisor.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
    </item>
  </channel>
</rss>
