<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Seandon Mooy</title>
    <description>The latest articles on DEV Community by Seandon Mooy (@erulabs).</description>
    <link>https://dev.to/erulabs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F137879%2F76dec545-da94-43c9-b2a5-d9f7cc1ce2a0.png</url>
      <title>DEV Community: Seandon Mooy</title>
      <link>https://dev.to/erulabs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/erulabs"/>
    <language>en</language>
    <item>
      <title>Storage on Kubernetes</title>
      <dc:creator>Seandon Mooy</dc:creator>
      <pubDate>Sun, 28 Jul 2019 10:50:38 +0000</pubDate>
      <link>https://dev.to/erulabs/storage-on-kubernetes-22ik</link>
      <guid>https://dev.to/erulabs/storage-on-kubernetes-22ik</guid>
      <description>&lt;p&gt;Kubernetes is fantastic for stateless apps. Deploy your app, scale it up to hundreds of instances, be happy. But how do you manage &lt;strong&gt;storage&lt;/strong&gt;? How do we ensure every one of our hundreds of apps gets a piece of reliable, fast, cheap storage?&lt;/p&gt;

&lt;p&gt;Let's take a typical Kubernetes cluster, with a handful of nodes (Linux servers) powering a few replicas of our app:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkubesail.com%2Fblog-images%2Fblog-storage-rook-2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkubesail.com%2Fblog-images%2Fblog-storage-rook-2.jpg" alt="storage"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice our sad, unused disk drives! Kubernetes sure brings a lot of wins, but are we even sysadmins anymore if we don't manage enormous RAID arrays?&lt;/p&gt;

&lt;h3&gt;Persistent Volume Claims&lt;/h3&gt;

&lt;p&gt;In Kubernetes, we define &lt;strong&gt;PersistentVolumeClaims&lt;/strong&gt; to ask our system for storage. To put it simply, an App "claims" a bit of storage, and the system responds in a configurable way:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkubesail.com%2Fblog-images%2Fblog-storage-0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkubesail.com%2Fblog-images%2Fblog-storage-0.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Unfortunately, most Cloud providers are eager to harness the simplicity of Kubernetes and "reply" to your storage request by attaching their own Cloud storage (eg: Amazon's EBS).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkubesail.com%2Fblog-images%2Fblog-storage-1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkubesail.com%2Fblog-images%2Fblog-storage-1.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are a number of downsides to this strategy for consumers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Performance &amp;amp; Cost&lt;/strong&gt;: The performance of an EBS Volume (and other cloud providers' equivalents) depends on its size - meaning a &lt;em&gt;smaller disk&lt;/em&gt; is also a &lt;em&gt;slower disk&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Characteristics&lt;/strong&gt;: Cloud providers typically offer HDD, SSD, and a "provisioned IOPS" option. This limits sysadmins in their storage system designs - Where is the tape backup? What about NVMe? How is the disk attached to the server my code is running on?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Portability / Lock-in&lt;/strong&gt;: EBS is EBS and Google Persistent Disks are Google Persistent Disks. Cloud vendors are aggressively trying to lock you into their platforms - and typically hide the filesystem tools we know and love behind a Cloud-specific snapshotting system.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So... what we need is an &lt;strong&gt;Open Source&lt;/strong&gt; storage orchestration system that runs on any Kubernetes cluster and can transform piles of drives into pools of storage, available to our Pods:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkubesail.com%2Fblog-images%2Fblog-storage-2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkubesail.com%2Fblog-images%2Fblog-storage-2.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Enter Rook.io&lt;/h3&gt;

&lt;p&gt;What is &lt;strong&gt;Rook&lt;/strong&gt;? From the website, Rook is "an Open-Source, Cloud-Native Storage for Kubernetes" with "Production ready File, Block and Object Storage". Marketing speak aside, Rook is an open-source version of AWS EBS and S3 which you can install on your own clusters. It's also the backend for &lt;a href="https://kubesail.com" rel="noopener noreferrer"&gt;KubeSail&lt;/a&gt;'s storage system, and it's how we carve up massive RAID arrays to power our users' apps!&lt;/p&gt;

&lt;h3&gt;Rook and Ceph&lt;/h3&gt;

&lt;p&gt;Rook is a system which sits in your cluster and responds to requests for storage, but it is not itself a storage system. Although this makes things a bit more complex, it should make you feel &lt;strong&gt;good&lt;/strong&gt;: while Rook is quite new, the storage system it uses under the hood is battle-hardened, and &lt;em&gt;far from beta&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;We'll be using &lt;a href="https://ceph.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;Ceph&lt;/strong&gt;&lt;/a&gt;, which is &lt;a href="https://en.wikipedia.org/wiki/Ceph_(software)#History" rel="noopener noreferrer"&gt;about 15 years old&lt;/a&gt; and is in use and developed by companies like Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, and SanDisk. It's very far from Kubernetes-hipster, despite how cool the Rook project looks!&lt;/p&gt;

&lt;p&gt;With &lt;strong&gt;Rook&lt;/strong&gt; at version 1.0, and &lt;strong&gt;Ceph&lt;/strong&gt; powering some of the world's most important datasets, I'd say it's about time we got confident and took back control of our data storage! I'll be building RAID arrays in the 2020s and no one can stop me! &lt;em&gt;Muahaha&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;Install&lt;/h3&gt;

&lt;p&gt;I won't focus on initial install here, since &lt;a href="https://rook.io/docs/rook/v1.0/ceph-quickstart.html" rel="noopener noreferrer"&gt;the Rook guide is quite nice&lt;/a&gt;. If you want to learn more about the Rook project, I recommend &lt;a href="https://www.youtube.com/watch?v=pwVsFHy2EdE" rel="noopener noreferrer"&gt;this KubeCon video&lt;/a&gt; as well. Once you've got Rook installed, we'll create a few components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;CephCluster&lt;/strong&gt;, which maps nodes and their disks to our new storage system.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;CephBlockPool&lt;/strong&gt;, which defines how to store data, including how many replicas we want.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;StorageClass&lt;/strong&gt;, which defines a way of using storage from the CephBlockPool.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's start with the &lt;strong&gt;CephCluster&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#?filename=ceph-cluster.yaml&amp;amp;mini=true&amp;amp;noApply=true&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ceph.rook.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CephCluster&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rook-ceph&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rook-ceph&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;mon&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;count&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
    &lt;span class="na"&gt;allowMultiplePerNode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;useAllNodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;useAllDevices&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;location&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;osdsPerDevice&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
    &lt;span class="na"&gt;directories&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/opt/rook&lt;/span&gt;
    &lt;span class="na"&gt;nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-storage-node-1"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-storage-node-2"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-storage-node-3"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The most important thing to note here is that a &lt;strong&gt;CephCluster&lt;/strong&gt; is fundamentally a map of which &lt;strong&gt;Nodes&lt;/strong&gt; - and which drives or directories on those Nodes - will be used to store data. Many tutorials suggest &lt;code&gt;useAllNodes: true&lt;/code&gt;, which we strongly recommend against. Instead, manage a smaller pool of dedicated "storage workers" - this lets you add different machine types (with, say, very slow drives) later without accidentally or unknowingly adding them to the storage pool. We'll be assuming &lt;code&gt;/opt/rook&lt;/code&gt; is a mount-point, but Rook is capable of using unformatted disks as well as directories.&lt;/p&gt;
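&lt;p&gt;For example, if your storage nodes have dedicated unformatted disks, you can point Rook at them per-node instead of (or alongside) a directory. A minimal sketch of the &lt;code&gt;spec.storage&lt;/code&gt; section - the node and device names here are placeholders for your own hardware:&lt;/p&gt;

```yaml
# Hypothetical per-node layout for a CephCluster's spec.storage section.
# Node names and device names (e.g. "sdb") must match your actual hosts
# and their raw, unformatted disks.
storage:
  useAllNodes: false
  useAllDevices: false
  nodes:
  - name: "my-storage-node-1"
    devices:
    - name: "sdb"          # a raw, unformatted disk on this node
  - name: "my-storage-node-2"
    directories:
    - path: "/opt/rook"    # a plain directory (mount-point) works too
```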

&lt;p&gt;One other note: a &lt;code&gt;mon&lt;/code&gt; is a Ceph monitor daemon, which maintains the cluster map and coordinates quorum. We strongly suggest running at least three and ensuring &lt;code&gt;allowMultiplePerNode&lt;/code&gt; is false, so that a single node failure can't break quorum.&lt;/p&gt;

&lt;p&gt;Now our cluster looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkubesail.com%2Fblog-images%2Fblog-storage-rook-3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkubesail.com%2Fblog-images%2Fblog-storage-rook-3.jpg" alt="storage"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By the way, you'll want to take a look at the pods running in the &lt;code&gt;rook-ceph&lt;/code&gt; namespace! You'll find your OSD (Object Storage Daemon) pods, as well as monitoring and agent pods, living in that namespace. Let's create our &lt;strong&gt;CephBlockPool&lt;/strong&gt; and a &lt;strong&gt;StorageClass&lt;/strong&gt; that uses it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#?filename=ceph-blockpool.yaml&amp;amp;mini=true&amp;amp;noApply=true&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ceph.rook.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CephBlockPool&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-storage-pool&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rook-ceph&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;failureDomain&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;osd&lt;/span&gt;
  &lt;span class="na"&gt;replicated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;storage.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;StorageClass&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-storage&lt;/span&gt;
&lt;span class="na"&gt;provisioner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rook.io/block&lt;/span&gt;
&lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-storage-pool&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the replicated size is set to 2, meaning Ceph will keep two copies of all our data, so losing a single OSD won't cause data loss.&lt;/p&gt;

&lt;p&gt;We'll wait for our CephCluster to settle - keep an eye on the &lt;code&gt;CephCluster&lt;/code&gt; object you created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜ kubectl -n rook-ceph get cephcluster rook-ceph -o json | jq .status.ceph.health
"HEALTH_OK"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we're ready to make a request for storage!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkubesail.com%2Fblog-images%2Fblog-storage-rook-4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkubesail.com%2Fblog-images%2Fblog-storage-rook-4.jpg" alt="storage"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can now request storage in a number of standard ways - from this point on there is zero "Rook-specific" code or assumptions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#?filename=ceph-pvc.yaml&amp;amp;mini=true&amp;amp;noApply=true&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-data&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class="c1"&gt;# Use our new ceph storage:&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-storage&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1000Mi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now Rook will see our &lt;strong&gt;PersistentVolumeClaim&lt;/strong&gt; and create a &lt;strong&gt;PersistentVolume&lt;/strong&gt; for us! Let's take a look:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜ kubectl get pv --watch
NAME     CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS  CLAIM              STORAGECLASS    AGE
pvc-...  1000Mi    RWO           Delete          Bound   default/test-data  my-storage      13m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
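&lt;p&gt;To actually consume the claim, mount it in a Pod like any other volume - from the app's point of view, nothing here is Rook-specific either. A sketch, assuming the &lt;code&gt;test-data&lt;/code&gt; claim from above (the image and mount path are arbitrary placeholders):&lt;/p&gt;

```yaml
# Hypothetical Pod mounting the PVC created above - the image and
# mountPath are placeholders; only claimName must match the PVC.
---
apiVersion: v1
kind: Pod
metadata:
  name: test-app
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data   # the Ceph-backed volume appears here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-data
```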



&lt;p&gt;So there we go! A successful, fairly easy-to-use, Kubernetes-native storage system. We can bring our own disks, and we can use any cloud provider... Freedom!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkubesail.com%2Fblog-images%2Fblog-storage-rook-6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkubesail.com%2Fblog-images%2Fblog-storage-rook-6.jpg" alt="storage"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Conclusion: Should you use it?&lt;/h3&gt;

&lt;p&gt;Well, TLDR: Yes.&lt;/p&gt;

&lt;p&gt;But the real questions you should ask are: am I trying to build the most awesome data-center-in-my-closet the world has seen since 2005? Am I trying to learn enterprise-ready DevOps skills? If the answer is yes, I strongly recommend playing with Rook.&lt;/p&gt;

&lt;p&gt;At KubeSail we use Rook internally, and offer a &lt;code&gt;standard&lt;/code&gt; storage class which provides extremely fast NVMe storage - even on our &lt;em&gt;free&lt;/em&gt; tier! We manage several very large and busy Ceph clusters, and so far we've been extremely happy with their stability and performance. While playing with piles of hard drives does make me nostalgic for the good ol' days, our platform gives your apps a rock-solid setup out of the box.&lt;/p&gt;

&lt;p&gt;We strongly prefer Open Source tools that avoid vendor lock-in. You should be able to install and configure all of KubeSail's tools on your own clusters, and we'll continue to write about our stack in a series of blog posts. However, if all you want is super fast, super cheap storage attached to your Docker containers on multiple servers, check out our Hosted Platform at &lt;a href="https://kubesail.com" rel="noopener noreferrer"&gt;KubeSail.com&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;If you have any questions about the article, Rook, Kubernetes, or really just about anything else, send us a message &lt;a href="https://twitter.com/kubesail" rel="noopener noreferrer"&gt;on Twitter&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>storage</category>
      <category>linux</category>
    </item>
    <item>
      <title>Free Kubernetes namespace for learning and hobbies!</title>
      <dc:creator>Seandon Mooy</dc:creator>
      <pubDate>Tue, 19 Feb 2019 07:20:51 +0000</pubDate>
      <link>https://dev.to/erulabs/free-kubernetes-namespace-for-learning-and-hobbies-4cfl</link>
      <guid>https://dev.to/erulabs/free-kubernetes-namespace-for-learning-and-hobbies-4cfl</guid>
      <description>&lt;p&gt;A friend and I just launched &lt;a href="https://kubesail.com"&gt;KubeSail.com&lt;/a&gt; to make it easy to learn and use Kubernetes right away - still a lot to do but the free tier should be helpful for launching small projects or learning Kubernetes!&lt;/p&gt;

&lt;p&gt;We've included a big enough free tier to host a small app or gameserver (factorio runs like a charm!), and a few examples that hopefully help people get excited! There is a ton to do and we have big plans, but we'd love any feedback!&lt;/p&gt;

&lt;p&gt;Thanks everyone!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>showdev</category>
      <category>programming</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
