DEV Community

Ezequiel
Ceph on Ubuntu (Single Node): Setup Guide (OSD, iSCSI, Bonding)

I recently put together a small Ceph lab on Ubuntu using cephadm, mainly to understand how everything fits together in a real environment.

Most guides out there are either too abstract or assume multi-node production setups, so I decided to document a single-node deployment end to end, focusing on what actually matters when you're getting started.


What this includes

  • Ceph installation (cephadm)
  • OSD provisioning from previously used disks
  • Network bonding (802.3ad / LACP)
  • iSCSI gateway setup
  • LVM snapshots for rollback scenarios

Everything is based on Ubuntu Server 22.04 and Ceph Squid.
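For orientation, the install side boils down to a handful of commands. This is a sketch, not the repo's exact procedure: the monitor IP is a placeholder for your host's address, and the cephadm version shipped in the Ubuntu archive may lag behind the Squid release documented in the repo.

```shell
# Minimal single-node bootstrap sketch (run as root on Ubuntu Server 22.04).
# 192.168.1.10 is a placeholder for this host's IP address.
apt-get update
apt-get install -y cephadm

# --single-host-defaults relaxes replication settings for one-node labs.
cephadm bootstrap --mon-ip 192.168.1.10 --single-host-defaults

# Tell the orchestrator to turn every clean, unused disk into an OSD.
cephadm shell -- ceph orch apply osd --all-available-devices

# Check cluster health.
cephadm shell -- ceph status
```

The `--all-available-devices` step is exactly where reused disks bite you: anything with leftover signatures simply never shows up as available.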


Repo

Full step-by-step docs here:

👉 https://github.com/EzequielPA4/ceph-infrastructure-docs


Why this might help

If you're:

  • testing Ceph in a lab
  • learning how OSDs really work
  • trying to avoid common cephadm issues
  • dealing with reused disks (LVM leftovers, etc.)

this will probably save you time.


Notes from the lab

A few things that are worth calling out:

  • Ceph is strict with disks → any device with an existing LVM signature or filesystem will be rejected as an OSD candidate
  • Cleaning disks properly is key (wipefs + sgdisk, plus dd in stubborn cases)
  • Bonding (LACP) needs matching switch-side configuration (an LACP port channel), or the bond won't aggregate
  • Snapshots with LVM are useful, but you need to monitor usage (lvs -o +data_percent) — a classic snapshot that fills its CoW area is invalidated
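The disk-cleaning point can be wrapped in a small helper. This is a hypothetical sketch, not the repo's script; the device path is an argument you supply, and the `dd` pass is only needed when signatures survive the first two commands:

```shell
#!/usr/bin/env bash
# Hypothetical helper to fully clean a previously used disk before
# handing it to Ceph. DESTROYS ALL DATA on the given device.
wipe_disk() {
    local dev="$1"
    # Refuse to run without an existing block device argument.
    if [ -z "$dev" ] || [ ! -b "$dev" ]; then
        echo "usage: wipe_disk /dev/sdX" >&2
        return 1
    fi
    wipefs --all "$dev"       # remove filesystem and LVM signatures
    sgdisk --zap-all "$dev"   # destroy GPT and protective MBR structures
    # Zero the first 100 MiB in case metadata survives the above.
    dd if=/dev/zero of="$dev" bs=1M count=100 conv=fsync
}
```

After wiping, `ceph orch device ls` should list the disk as available again.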
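For the bonding point, a minimal 802.3ad netplan sketch looks like this. Interface names (eno1/eno2) and the address are illustrative, and the switch ports must be grouped into an LACP port channel or the bond will never negotiate:

```shell
# Illustrative /etc/netplan/01-bond.yaml for an 802.3ad (LACP) bond.
cat <<'EOF' | sudo tee /etc/netplan/01-bond.yaml
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      addresses: [192.168.1.10/24]
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
EOF
sudo netplan apply

# Verify negotiation: look for "802.3ad" and both slaves reported up.
cat /proc/net/bonding/bond0
```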
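And for snapshot monitoring, the `lvs -o +data_percent` check can be turned into a warning script. The 80% threshold and the function names are arbitrary examples; the comparison is split into its own function so the logic is testable without LVM present:

```shell
# Exit 0 if a percentage is at or above a threshold (default 80).
snapshot_over_threshold() {
    local percent="$1" threshold="${2:-80}"
    awk -v p="$percent" -v t="$threshold" 'BEGIN { exit !(p >= t) }'
}

# Warn about any LV whose data_percent (the fill level of a snapshot's
# CoW area) crosses the threshold.
check_snapshots() {
    lvs --noheadings -o lv_name,data_percent |
    while read -r name pct; do
        [ -n "$pct" ] || continue   # skip LVs that report no data_percent
        if snapshot_over_threshold "${pct%.*}" 80; then
            echo "WARNING: $name at ${pct}% usage"
        fi
    done
}
```

Dropping `check_snapshots` into a cron job is a cheap way to avoid silently invalidated snapshots.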

Limitations

This is a single-node setup, so:

  • no HA
  • no quorum discussion
  • focused on learning / testing

If you're running something similar

I'm curious how others are setting this up in lab environments.

  • Are you using cephadm or something else?
  • Any gotchas with iSCSI gateways?

Feel free to share your setup or improvements.
