<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Abhinav Maharjan</title>
    <description>The latest articles on DEV Community by Abhinav Maharjan (@abhinav_maharjan_c2417dc1).</description>
    <link>https://dev.to/abhinav_maharjan_c2417dc1</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3098119%2Fc127dd3f-268f-448c-ad80-ab0124b231e4.png</url>
      <title>DEV Community: Abhinav Maharjan</title>
      <link>https://dev.to/abhinav_maharjan_c2417dc1</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/abhinav_maharjan_c2417dc1"/>
    <language>en</language>
    <item>
      <title>Kubernetes Home lab: Setting up Storage (Rook and Ceph)</title>
      <dc:creator>Abhinav Maharjan</dc:creator>
      <pubDate>Tue, 06 May 2025 01:01:41 +0000</pubDate>
      <link>https://dev.to/abhinav_maharjan_c2417dc1/kubernetes-home-lab-setting-up-storage-rook-and-ceph-b8h</link>
      <guid>https://dev.to/abhinav_maharjan_c2417dc1/kubernetes-home-lab-setting-up-storage-rook-and-ceph-b8h</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this post, I’ll walk you through setting up storage for a Kubernetes homelab using Rook and Ceph. This setup provides scalable, resilient storage that can simulate production-grade environments for learning and testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hardware Setup
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes Engine: Rancher Kubernetes Engine 2 (RKE2)&lt;/li&gt;
&lt;li&gt;Master Node: 1 node (control plane)&lt;/li&gt;
&lt;li&gt;Worker Nodes: 3 nodes&lt;/li&gt;
&lt;li&gt;Storage: Each worker node has a 100 GB unpartitioned disk (used by Ceph for storage)&lt;/li&gt;
&lt;li&gt;Environment: Kubernetes homelab running on virtual machines&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why Rook and Ceph?
&lt;/h2&gt;

&lt;p&gt;When deploying and managing Kubernetes clusters, one of the most common challenges is persistent storage. Containers are ephemeral by design, so applications that need storage to survive restarts and rescheduling require an external storage layer. This is where storage solutions like Rook and Ceph come into play.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rook
&lt;/h3&gt;

&lt;p&gt;Rook is an open-source storage orchestrator that automates the deployment and management of Ceph to provide self-managing, self-scaling, and self-healing storage services.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ceph
&lt;/h3&gt;

&lt;p&gt;Ceph is a distributed storage system that provides file, block and object storage. In a typical Ceph cluster, data is distributed across multiple nodes in a way that ensures redundancy and high availability. Ceph automatically replicates data and manages failures, making it ideal for mission-critical applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnho611hy3kjp8y8g5qi4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnho611hy3kjp8y8g5qi4.png" alt="Rook Ceph Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pod&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Rook Operator&lt;/td&gt;
&lt;td&gt;Manages the lifecycle of the Ceph cluster inside Kubernetes.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CSI Plugins&lt;/td&gt;
&lt;td&gt;Handle the mounting and unmounting of volumes to pods.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Provisioner&lt;/td&gt;
&lt;td&gt;Automatically creates new Ceph volumes when PVCs are made.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OSD (Object Storage Daemon)&lt;/td&gt;
&lt;td&gt;Stores the data physically and handles replication.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mgr&lt;/td&gt;
&lt;td&gt;Manages the overall Ceph cluster, handles monitoring, and exposes the Ceph Dashboard.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mon&lt;/td&gt;
&lt;td&gt;Keeps track of the cluster's health and makes sure all parts of the storage system agree on the current state.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;rook-ceph-exporter&lt;/td&gt;
&lt;td&gt;Exports Ceph metrics for Prometheus and monitoring.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;rbdplugin&lt;/td&gt;
&lt;td&gt;Manages block storage (RBD) and allows Kubernetes to mount block devices as persistent volumes.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes Version: v1.28 to v1.32 are supported.&lt;/li&gt;
&lt;li&gt;When installing a Ceph cluster, allocate raw storage (HDD or SSD) without creating any partitions. Ceph manages and combines the storage across the drives into a single pool. I have allocated 100 GB of HDD storage on each of the three worker nodes.&lt;/li&gt;
&lt;li&gt;Before installation, install lvm2 on every Kubernetes node.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install -y lvm2 -- CentOS or RHEL based system
sudo apt-get install -y lvm2 -- for Ubuntu or Debian based system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Before installing, clone the Rook Git repository, which contains all the example configurations.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone --single-branch --branch master https://github.com/rook/rook.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;Navigate to the example directory.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd rook/deploy/examples
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Step 1: Install CRDs and the Rook Operator
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f crds.yaml -f common.yaml -f operator.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;crds.yaml&lt;/strong&gt;: contains all the Custom Resource Definitions (CRDs) required by Rook.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;common.yaml&lt;/strong&gt;: contains RBAC resources, namespace definitions, service accounts, and other shared resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;operator.yaml&lt;/strong&gt;: deploys the Rook operator, the core of the Rook-Ceph cluster; it manages the lifecycle of the Ceph components.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Ceph deploys 3 mons by default. If you have fewer than 3 worker nodes, edit your cluster.yaml file and change the mon count (.spec.mon.count) to 2 or 1 depending on your node count. Three is recommended for a production cluster.&lt;/p&gt;
&lt;/blockquote&gt;
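&lt;p&gt;As a sketch, the relevant snippet of cluster.yaml looks like this (field names follow the Rook CephCluster CRD; adjust the count to your node count):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  mon:
    # Number of mons; use 1 or 2 only if you have fewer than 3 worker nodes
    count: 3
    # Keep mons on separate nodes for real fault tolerance
    allowMultiplePerNode: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;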

&lt;p&gt;Once the operator is up and running, create the Ceph cluster.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 2: Install Ceph cluster
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f cluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;cluster.yaml&lt;/strong&gt; targets bare-metal Kubernetes. If you are running a test environment (such as minikube) or a cloud environment, apply one of the other manifests instead, such as cluster-test.yaml or cluster-on-pvc.yaml.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Optional but important:&lt;/strong&gt; also deploy toolbox.yaml so you can inspect the status of the Ceph cluster.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 3: Install toolbox
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f toolbox.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;To check the health of the Ceph cluster, open a shell inside the tools pod (your pod name will differ).&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;k exec -it rook-ceph-tools-b48d79d8b-89m8v -- bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Run &lt;code&gt;ceph status&lt;/code&gt; to check the overall state of the cluster.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bash-4.4$ ceph status
  cluster:
    id:     ff587402-5139-40a2-ac2a-f6a0de371dc6
    health: HEALTH_WARN
            mons a,b,c are low on available space

  services:
    mon: 3 daemons, quorum a,b,c (age 40m)
    mgr: a(active, since 36m), standbys: b
    osd: 3 osds: 3 up (since 40m), 3 in (since 2d)

  data:
    pools:   2 pools, 33 pgs
    objects: 621 objects, 2.1 GiB
    usage:   4.2 GiB used, 296 GiB / 300 GiB avail
    pgs:     33 active+clean

  io:
    client:   14 KiB/s wr, 0 op/s rd, 0 op/s wr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You can also list the OSDs with &lt;code&gt;ceph osd status&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bash-4.4$ ceph osd status
ID  HOST          USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  workernode1  1456M  98.5G      0        0       0        0   exists,up
 1  workernode2  1147M  98.8G      0     6553       0        0   exists,up
 2  workernode3  1787M  98.2G      0        0       0        0   exists,up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;To check raw storage and pool usage, run &lt;code&gt;ceph df&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bash-4.4$ ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    300 GiB  296 GiB  4.3 GiB   4.3 GiB       1.43
TOTAL  300 GiB  296 GiB  4.3 GiB   4.3 GiB       1.43

--- POOLS ---
POOL         ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr          1    1  577 KiB        2  1.7 MiB      0     93 GiB
homelabpool   2   32  2.1 GiB      626  4.1 GiB   1.45    140 GiB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Creating Storage Classes
&lt;/h2&gt;

&lt;p&gt;Rook-Ceph supports three types of storage: object, block, and filesystem. For each type we first need to create a Ceph pool and a StorageClass.&lt;/p&gt;

&lt;p&gt;A StorageClass tells Kubernetes what type of Ceph storage to provision and how to interact with Rook so that PVs are provisioned automatically when a PVC is created.&lt;/p&gt;

&lt;p&gt;A Ceph Pool is a logical grouping of storage resources in a Ceph cluster.&lt;/p&gt;
&lt;h3&gt;
  
  
  Block Storage
&lt;/h3&gt;

&lt;p&gt;Navigate to rook/deploy/examples/csi/rbd.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd rook/deploy/examples/csi/rbd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Apply the storageclass.yaml&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f storageclass.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This file contains two sections:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a CephBlockPool named replicapool&lt;/li&gt;
&lt;li&gt;a StorageClass named rook-ceph-block&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The replicated size of the CephBlockPool is 3, so the data behind any PVC you create in the cluster is stored in three locations, one copy per host.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  failureDomain: host
  replicated:
    size: 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
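&lt;p&gt;To verify the block StorageClass works, you can create a test PVC against it (the claim name here is arbitrary):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-block-pvc
spec:
  accessModes:
    - ReadWriteOnce   # RBD volumes are typically mounted by a single node
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After applying it, &lt;code&gt;kubectl get pvc&lt;/code&gt; should show the claim as Bound once the provisioner has created the backing RBD image.&lt;/p&gt;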

&lt;h3&gt;
  
  
  Filesystem Storage
&lt;/h3&gt;

&lt;p&gt;To create filesystem storage, navigate to rook/deploy/examples/csi/cephfs.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd rook/deploy/examples/csi/cephfs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Apply storageclass.yaml&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f storageclass.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
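&lt;p&gt;CephFS volumes support shared access, so a PVC against this class can use ReadWriteMany. A minimal sketch, assuming the example manifest names the class rook-cephfs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany   # multiple pods on different nodes can mount it
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;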

&lt;h3&gt;
  
  
  Object Storage
&lt;/h3&gt;

&lt;p&gt;Finally, to create object storage, navigate to rook/deploy/examples.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd rook/deploy/examples
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;and apply the object.yaml file.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f object.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
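&lt;p&gt;To consume the object store, Rook's examples use an ObjectBucketClaim together with a bucket StorageClass (also found under deploy/examples). A sketch, assuming the bucket class is named rook-ceph-bucket:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-bucket
spec:
  # Rook generates a unique bucket name with this prefix
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-bucket
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Rook then creates the bucket and writes the S3 endpoint and credentials into a ConfigMap and Secret named after the claim.&lt;/p&gt;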

&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;If you don't install the &lt;strong&gt;lvm2&lt;/strong&gt; package, the &lt;strong&gt;osd&lt;/strong&gt; pods will fail and the &lt;strong&gt;mon&lt;/strong&gt; pods won't be created, so make sure to install it on every node.&lt;/li&gt;
&lt;li&gt;If the &lt;code&gt;csi-rbdplugin&lt;/code&gt; pod is stuck on &lt;code&gt;Still connecting to unix:///csi/csi.sock&lt;/code&gt;, run &lt;code&gt;modprobe rbd&lt;/code&gt; on all nodes to load the kernel module.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Setting up Rook and Ceph in a Kubernetes homelab provides a powerful way to simulate production-grade storage environments. With proper hardware and configuration, you can create scalable, self-healing, and persistent storage for your workloads. By following this guide, you deployed Rook, configured a Ceph cluster, and enabled block, filesystem, and object storage—all with built-in redundancy and monitoring support. This setup not only enhances your Kubernetes lab experience but also helps you build hands-on skills for real-world scenarios.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/abhinav1015" rel="noopener noreferrer"&gt;
        abhinav1015
      &lt;/a&gt; / &lt;a href="https://github.com/abhinav1015/Devops-Homelab" rel="noopener noreferrer"&gt;
        Devops-Homelab
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      This is a devops homelab.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;DevOps Homelab&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;This repository contains the configuration files and deployment manifests for various DevOps tools and infrastructure setup used in my personal Kubernetes Homelab. It leverages GitOps with ArgoCD for continuous delivery and management of Kubernetes resources.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Overview&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;The following tools and services have been deployed and managed in this Kubernetes homelab
ArgoCD: GitOps continuous delivery tool for managing Kubernetes applications.
GitLab: GitLab instance for CI/CD pipelines and repository management.
Cert-Manager: A tool for managing SSL/TLS certificates in Kubernetes.
Rook &amp;amp; Ceph: A cloud-native storage solution for Kubernetes using Ceph.
MetalLB: Load Balancer for exposing services to external traffic.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Infrastructure Setup&lt;/h1&gt;

&lt;/div&gt;
&lt;p&gt;This project follows a modular approach to organize configurations into different sections:&lt;/p&gt;
&lt;p&gt;Infrastructure (infra/):
Contains all the configuration files required to set up infrastructure components like GitLab, Cert-Manager, Ceph, etc.&lt;/p&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Image/Logo&lt;/th&gt;
&lt;th&gt;Infrastructure Name&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a rel="noopener noreferrer nofollow" href="https://camo.githubusercontent.com/f851837d7a0298bf0c62820028221ba52dbe3fb00d97e37ebee465b31ed6edd9/68747470733a2f2f646f63732e726b65322e696f2f696d672f6c6f676f2d686f72697a6f6e74616c2d726b65322e737667"&gt;&lt;img src="https://camo.githubusercontent.com/f851837d7a0298bf0c62820028221ba52dbe3fb00d97e37ebee465b31ed6edd9/68747470733a2f2f646f63732e726b65322e696f2f696d672f6c6f676f2d686f72697a6f6e74616c2d726b65322e737667" width="100"&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;RKE2&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Rancher's next-generation Kubernetes distribution, fully compliant and secured out-of-the-box.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a rel="noopener noreferrer nofollow" href="https://camo.githubusercontent.com/9ef1573e6433d16c14e9606ae329f0ddaccda934e5edb976aafd10d6b3cf9826/687474703a2f2f6172676f2d63642e72656164746865646f63732e696f2f656e2f737461626c652f6173736574732f6c6f676f2e706e67"&gt;&lt;img src="https://camo.githubusercontent.com/9ef1573e6433d16c14e9606ae329f0ddaccda934e5edb976aafd10d6b3cf9826/687474703a2f2f6172676f2d63642e72656164746865646f63732e696f2f656e2f737461626c652f6173736574732f6c6f676f2e706e67" width="100"&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;ArgoCD&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A declarative GitOps continuous&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;…&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/abhinav1015/Devops-Homelab" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


</description>
    </item>
  </channel>
</rss>
