<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: gianarb</title>
    <description>The latest articles on DEV Community by gianarb (@gianarb).</description>
    <link>https://dev.to/gianarb</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F372275%2F1053ac6d-bb30-4017-a960-272b1fbced24.jpg</url>
      <title>DEV Community: gianarb</title>
      <link>https://dev.to/gianarb</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gianarb"/>
    <language>en</language>
    <item>
      <title>How bare metal provisioning works in theory</title>
      <dc:creator>gianarb</dc:creator>
      <pubDate>Thu, 08 Oct 2020 14:09:26 +0000</pubDate>
      <link>https://dev.to/gianarb/how-bare-metal-provisioning-works-in-theory-1e4e</link>
      <guid>https://dev.to/gianarb/how-bare-metal-provisioning-works-in-theory-1e4e</guid>
<description>&lt;p&gt;I am sure you have heard about bare metal. Clouds are made of bare metal, for example.&lt;/p&gt;

&lt;p&gt;The art of bringing an inanimate piece of metal like a server to life as something useful is something I have been learning since I joined &lt;a href="https://metal.equinix.com"&gt;Equinix Metal&lt;/a&gt; in May.&lt;/p&gt;

&lt;p&gt;Let me make a comparison with something you are probably familiar with. Do you know why Kubernetes is hard? Because there is not one Kubernetes. It is glue binding an unknown number of pieces that work together to help you deploy your application.&lt;/p&gt;

&lt;p&gt;Bare metal is almost the same: hundreds of different providers, server sizes, architectures, and chips that you somehow have to bring to life.&lt;/p&gt;

&lt;p&gt;Luckily, there are some common concepts to work with. When a server boots, it runs a BIOS that looks in different places for something to run:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It looks for a hard drive&lt;/li&gt;
&lt;li&gt;It looks for external storage like a USB stick or a CD-Rom&lt;/li&gt;
&lt;li&gt;It looks for help from your network (netbooting)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Options one and two are not realistic if the end goal is a hands-free, reliable solution. I am sure cloud providers do not have people running around with USB sticks containing operating systems and firmware.&lt;/p&gt;

&lt;h2&gt;Netbooting&lt;/h2&gt;

&lt;p&gt;I spoke about &lt;a href="https://gianarb.it/blog/first-journeys-with-netboot-ipxe"&gt;my first experience netbooting Ubuntu&lt;/a&gt; on my blog. That article is practical, with reproducible code. Here is the theory.&lt;/p&gt;

&lt;p&gt;When it comes to netbooting, you have to know what PXE means. The Preboot Execution Environment is a standardized client/server environment that kicks in when no operating system is found, and it helps an administrator boot an operating system remotely. Don't think of this OS as the one you have on your laptop. Technically it is, but the one you run on your laptop or on a server is persisted to disk; that's why you have files that survive a reboot.&lt;/p&gt;

&lt;p&gt;The one you start with PXE runs in memory, and from there you have to figure out how to get the persisted OS your machine will ultimately run.&lt;/p&gt;

&lt;p&gt;When the in-memory operating system is up and running, you can do everything you are capable of with Ubuntu, Alpine, CentOS, or Debian. In practice, what people tend to do is run applications and scripts that format a disk with the right partitions and install the final operating system.&lt;/p&gt;

&lt;p&gt;Pretty cool. PXE is kind of old, and for that reason it is burned into a lot of different NICs. You will hear a lot more about iPXE, a "new" PXE implementation. What is cool about both is the &lt;code&gt;chain&lt;/code&gt; command: from one PXE/iPXE environment, you can chain-load another PXE/iPXE environment. That's how you get from PXE, which runs by default on a lot of hardware (if you have a NUC, you run it), to iPXE.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chain --autofree https://boot.netboot.xyz/ipxe/netboot.xyz.lkrn
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;iPXE supports many more protocols you can use to download an OS, such as TFTP, FTP, HTTP/S, NFS, and more.&lt;/p&gt;

&lt;p&gt;This is an example of an iPXE script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!ipxe
dhcp net0

set base-url http://archive.ubuntu.com/ubuntu/dists/focal/main/installer-amd64/current/legacy-images/netboot/ubuntu-installer/amd64/
kernel ${base-url}/linux console=ttyS1,115200n8
initrd ${base-url}/initrd.gz
boot
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The first command, &lt;code&gt;dhcp net0&lt;/code&gt;, gets an IP for your hardware from the DHCP server. &lt;code&gt;kernel&lt;/code&gt; and &lt;code&gt;initrd&lt;/code&gt; set the kernel and the initial ramdisk to run in memory.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;boot&lt;/code&gt; starts the &lt;code&gt;kernel&lt;/code&gt; and the &lt;code&gt;initrd&lt;/code&gt; you just set.&lt;/p&gt;

&lt;p&gt;There is more, but this is what I find myself using most often.&lt;/p&gt;

&lt;h3&gt;Infrastructure&lt;/h3&gt;

&lt;p&gt;To netboot successfully, you need to distribute a couple of things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An iPXE script&lt;/li&gt;
&lt;li&gt;The operating system you want to run (kernel and initrd)&lt;/li&gt;
&lt;/ol&gt;
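
&lt;p&gt;As a sketch of point two: assuming you already have a kernel and an initrd in a local directory, any static file server can distribute them together with the iPXE script for a quick test. The directory layout and port here are made up for illustration:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# hypothetical layout: ./netboot contains boot.ipxe, linux, initrd.gz
cd netboot
# any static file server works; Python's built-in one is handy for testing
python3 -m http.server 8080
# iPXE clients can now fetch http://YOUR_IP:8080/boot.ipxe
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;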

&lt;h3&gt;Workflow&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;The server starts&lt;/li&gt;
&lt;li&gt;There is nothing to boot on the hard drive&lt;/li&gt;
&lt;li&gt;It starts netbooting&lt;/li&gt;
&lt;li&gt;It makes a DHCP request to get its network configuration, and the DHCP server returns the TFTP server address with the location of the iPXE binary&lt;/li&gt;
&lt;li&gt;iPXE starts and makes another DHCP request; the response contains the URL of the iPXE script with the commands you saw above&lt;/li&gt;
&lt;li&gt;At this point, iPXE runs the script, downloads the kernel and the initrd with the protocol you specified, and runs the in-memory operating system.&lt;/li&gt;
&lt;/ol&gt;
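
&lt;p&gt;The workflow above can be sketched with dnsmasq, which speaks both DHCP and TFTP. This is a hypothetical configuration, not taken from a real deployment; the addresses, file names, and URL are assumptions:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /etc/dnsmasq.conf (sketch)
dhcp-range=192.168.1.100,192.168.1.200,12h

# serve the iPXE binary over TFTP (step 4)
enable-tftp
tftp-root=/var/lib/tftpboot

# iPXE requests carry DHCP option 175; use it to tell PXE and iPXE apart
dhcp-match=set:ipxe,175

# plain PXE gets the iPXE binary; iPXE gets the script URL (step 5)
dhcp-boot=tag:!ipxe,undionly.kpxe
dhcp-boot=tag:ipxe,http://192.168.1.1:8080/boot.ipxe
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Without the option 175 match, the chain would loop forever: iPXE would keep being handed the iPXE binary instead of the script.&lt;/p&gt;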

&lt;p&gt;Pretty cool!&lt;/p&gt;

&lt;h2&gt;The in-memory operating system&lt;/h2&gt;

&lt;p&gt;The in-memory operating system can be as smart as you like; you can build your own, for example starting from Ubuntu or Alpine. Size matters here because it has to fit in memory.&lt;/p&gt;

&lt;p&gt;When the operating system starts, it runs a PID 1 process, what is called &lt;code&gt;init&lt;/code&gt;. It is an executable located in the ramdisk at &lt;code&gt;/init&lt;/code&gt;. That program can be as complicated as you like. It can be a binary that downloads commands to execute from a centralized location, or it can be a bash script that formats the local disk and installs the final operating system.&lt;/p&gt;

&lt;p&gt;What I am trying to say is that you have to make the in-memory operating system useful for your purpose. If you use stock Alpine or Ubuntu, the init script will start a bash shell, which is not that useful.&lt;/p&gt;
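
&lt;p&gt;To make the idea concrete, here is a minimal &lt;code&gt;/init&lt;/code&gt; sketch in shell. Everything specific, the device name, the URL, the partition layout, is an assumption for illustration; a real one needs error handling and a bootloader step:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/sh
# minimal /init sketch: mount the pseudo-filesystems first
mount -t proc none /proc
mount -t sysfs none /sys
mount -t devtmpfs none /dev

# partition and format the first disk (destructive!)
echo 'type=83' | sfdisk /dev/sda
mkfs.ext4 /dev/sda1

# unpack the final operating system onto the disk
mount /dev/sda1 /mnt
wget -O - http://192.168.1.1:8080/rootfs.tar.gz | tar -xz -C /mnt

# a real script would install a bootloader here (grub-install, chroot...)
reboot -f
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;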

&lt;h2&gt;DHCP&lt;/h2&gt;

&lt;p&gt;As you saw, DHCP plays an important role. It is the first point of contact between inanimate hardware and the world. If you control what the DHCP server does, you can, for example, register servers and monitor their health.&lt;/p&gt;

&lt;p&gt;Imagine you are at your laptop, expecting a hundred new servers in one of your datacenters, and you are monitoring the DHCP requests: you will know the moment they are plugged into the network.&lt;/p&gt;
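
&lt;p&gt;As a quick illustration, watching for those first signs of life does not require anything fancy; tcpdump on the provisioning network is enough (the interface name is an assumption):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# DHCP runs over UDP ports 67 (server) and 68 (client)
tcpdump -i eth0 -n port 67 or port 68
# every DHCPDISCOVER you see is a machine waking up on your network
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;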

&lt;h2&gt;Containers what?&lt;/h2&gt;

&lt;p&gt;Containers are a convenient way to distribute and run applications without having to know how to run them. Think about this scenario: your in-memory operating system runs Docker at boot. The &lt;code&gt;init&lt;/code&gt; script at this point can pull and run a Docker container with your logic for partitioning the disk and installing an operating system, or it can run some workload and exit, leaving space for the next boot (a bit like serverless, but with servers, which is way cooler).&lt;/p&gt;

&lt;p&gt;Or the Docker container can run a more complex application that reaches out to a centralized server that dispatches a list of actions to execute via a REST or gRPC API. Those actions can be declared and stored by you.&lt;/p&gt;
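
&lt;p&gt;A sketch of that &lt;code&gt;init&lt;/code&gt;, assuming the Docker daemon was started earlier in the boot sequence; the image name is made up for illustration:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/sh
# the container needs privileged access to partition the real disks
docker run --privileged -v /dev:/dev \
  example.com/provisioning/installer:latest

# when the container exits, reboot into the freshly installed OS
reboot -f
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;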

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;The chain of tools and interactions to get from a piece of metal to something that runs a workload is not that long. Controlling all the steps and tools along the way gives you the ability to provision cold servers from zero to something that developers know how to use.&lt;/p&gt;

&lt;p&gt;Ok, I lied to you. This is not just theory. This is how &lt;a href="https://tinkerbell.org"&gt;Tinkerbell&lt;/a&gt; works.&lt;/p&gt;

</description>
      <category>baremetal</category>
      <category>devops</category>
      <category>opensource</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Do I need Cluster API?</title>
      <dc:creator>gianarb</dc:creator>
      <pubDate>Fri, 02 Oct 2020 10:15:02 +0000</pubDate>
      <link>https://dev.to/gianarb/do-i-need-cluster-api-5g70</link>
      <guid>https://dev.to/gianarb/do-i-need-cluster-api-5g70</guid>
<description>&lt;p&gt;Let's try to answer a question that I get a lot these days: "Do I need Cluster API?"&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Cluster API is a Kubernetes sub-project focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters. &lt;a href="https://cluster-api.sigs.k8s.io/"&gt;from the Cluster API website&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If I have to think about what Kubernetes is good at, or what I like about it, these are the first things that come to mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Kubernetes API is very flexible; you can create Custom Resource Definitions, the metadata support is great, and versioning is built in.&lt;/li&gt;
&lt;li&gt;The ability to outsource my dreams to an entity that not only realizes them (or does its best to get there) but also keeps them up is superb.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The last point in practice is this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Please, please please, I want a namespace&lt;/span&gt;
&lt;span class="c"&gt;# for all my cool resources&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create namespace cool-project 
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Do not worry, I will take care of it
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Cluster API does the same but for Kubernetes clusters. So the first consideration in answering this question effectively is: "How dynamic is my system?" There are two possible answers: &lt;strong&gt;low&lt;/strong&gt; or &lt;strong&gt;high&lt;/strong&gt;.&lt;/p&gt;
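
&lt;p&gt;To make "the same but for clusters" concrete, this is roughly what declaring a cluster looks like; the names are made up, and the fields follow the v1alpha3 API that is current as I write:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: cool-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  # who manages the control plane nodes
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: KubeadmControlPlane
    name: cool-cluster-control-plane
  # which infrastructure provider brings up the machines
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: DockerCluster
    name: cool-cluster
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You &lt;code&gt;kubectl apply&lt;/code&gt; that, and a management cluster does its best to realize it and keep it up, just like the namespace above.&lt;/p&gt;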

&lt;p&gt;There are no hard numbers to decide whether you have a dynamic or a static environment. It is a combination of how frequently you release code to production and how many times you re-build your environment. The more it changes, the more dynamic it is.&lt;/p&gt;

&lt;p&gt;Like everything in the DevOps land, Cluster API works for both use cases, but I think there is also a wrong answer. A low-dynamism environment is risky because you can't freeze time, and if you don't move, or you move slowly, you won't be able to solve the unknown issues that WILL show up; it is just a matter of time.&lt;/p&gt;

&lt;p&gt;So if your answer is &lt;strong&gt;low&lt;/strong&gt;, ask yourself: "Do I feel safe with this way of working?" If the answer is &lt;strong&gt;yes&lt;/strong&gt;, I guess you don't need Cluster API; at the very least, there are better solutions for your problem.&lt;/p&gt;

&lt;p&gt;What's the problem Cluster API solves? It makes Kubernetes cluster provisioning and management more reliable, in the same way Kubernetes improves our ability to recover from failures when running applications in Pods.&lt;/p&gt;

&lt;p&gt;To be fair, your applications do not get smarter because they run in a Pod; rather, Kubernetes puts you in a position to exercise your applications for failure. This is the best way we have right now to improve reliability.&lt;/p&gt;

&lt;p&gt;Cluster API does the same with Kubernetes clusters themselves. We treat clusters as pets because we are scared to recover them, but this is not a great position to be in; it is like not deploying on Friday to avoid a bad weekend. If you are afraid to deploy on Friday, it will be painful to handle unforeseen issues when they arrive while you are on holiday or otherwise not ready to address them.&lt;br&gt;
So, do you run a not-that-dynamic environment, but want a framework that helps you more reliably swap, update, and replace Kubernetes clusters? &lt;strong&gt;You should have a look at Cluster API.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you run a dynamic environment because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;you are at scale, and you have multiple Kubernetes clusters spread across datacenters&lt;/li&gt;
&lt;li&gt;you have to spin up Kubernetes clusters on demand for developers or customers&lt;/li&gt;
&lt;li&gt;who knows, because you like to be cool and speak at KubeCon&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;You should have a look at Cluster API&lt;/strong&gt; because it is designed to bring the experience your cloud provider has with Kubernetes provisioning to you in a reliable, repeatable, and programmatic fashion.&lt;/p&gt;
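
&lt;p&gt;For a taste of that experience, the &lt;code&gt;clusterctl&lt;/code&gt; workflow looks roughly like this; the flags and versions are illustrative, so check the quick start linked below for the real thing:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# install the Cluster API components into a management cluster;
# the Docker provider is handy for local experiments
clusterctl init --infrastructure docker

# render a cluster manifest and hand it to Kubernetes
clusterctl config cluster cool-cluster --kubernetes-version v1.19.1 | kubectl apply -f -

# watch Cluster API realize it
kubectl get clusters
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;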

&lt;p&gt;I often say, and it applies here as well: &lt;strong&gt;you don't need to change your old car (for Cluster API)&lt;/strong&gt; if it works well for you and you feel good driving it. Just be sure to pay the insurance, clean it from time to time, and keep up with regular vehicle inspections.&lt;/p&gt;

&lt;p&gt;Now that you have a better clue about the why, here is a good amount of documentation and articles about the how:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://cluster-api.sigs.k8s.io/user/quick-start.html"&gt;Cluster API official Quick Start&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes-sigs/cluster-api-provider-packet#initialize-the-cluster"&gt;Packet Cluster API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.kubernauts.io/deploy-k8s-using-k8s-with-cluster-api-and-capa-on-aws-107669808367"&gt;Deploy K8s using K8s with Cluster API and CAPA on AWS&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you have other articles about how to use Cluster API that you would like to see listed here, or if you want to chat about Cluster API and other whys, I am &lt;a href="https://twitter.com/gianarb"&gt;@gianarb&lt;/a&gt; on Twitter.&lt;/p&gt;

&lt;p&gt;Write sustainable code. Enjoy&lt;br&gt;
Gianluca&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
