as part of my talk "From Servers to Serverless in Ten Minutes" (slides), presented during the OCP Virtual Summit on 12 May 2020, I promised to describe our storage setup.
we had two system setups, as discussed in the talk:
deskside - Sesame Discovery Fast-Start
our deskside cluster using the Sesame Discovery Fast-Start unit consists of four nodes:
four - Leopard ORv2-DDR4, dual E5-2678 v3 @ 2.50GHz, 256GB memory, 1TB NVMe
three - Leopard ORv2-DDR4, dual E5-2678 v3 @ 2.50GHz, 256GB memory, 1TB NVMe
two - Leopard ORv2-DDR4, dual E5-2678 v3 @ 2.50GHz, 256GB memory, 1TB NVMe
one - Leopard ORv2-DDR4, dual E5-2678 v3 @ 2.50GHz, 256GB memory, 1TB NVMe
where each node is configured with four individual NVMe drives:
disk WDS250G3X0C /dev/nvme0n1 (WD Black SN750 250GB NVMe flash)
disk WDS250G3X0C /dev/nvme1n1 (WD Black SN750 250GB NVMe flash)
disk WDS250G3X0C /dev/nvme2n1 (WD Black SN750 250GB NVMe flash)
disk WDS250G3X0C /dev/nvme3n1 (WD Black SN750 250GB NVMe flash)
for a total of 4TB of NVMe flash capacity across the four-node cluster
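as an aside, if you want to confirm a drive layout like this on your own nodes, standard Linux tools are enough; here's a minimal sketch (generic commands, not our exact tooling):

```bash
# list NVMe block devices with their model and size
lsblk -d -o NAME,MODEL,SIZE | grep nvme

# or, with the nvme-cli package installed, get the full controller view
sudo nvme list
```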
a detailed view of the Discovery hardware is shown in this short (2 min) video:
OpenEBS for Kubernetes
we deployed a Rancher RKE environment with the bootstrap methods outlined in the talk slides, then did a simple Helm install of OpenEBS, following the instructions at:
https://docs.openebs.io/docs/next/installation.html
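for reference, the Helm portion reduced to just a few commands; this is a rough sketch using the chart repo from the OpenEBS docs (Helm 3 syntax shown; adjust the release name and namespace to taste):

```bash
# add the OpenEBS chart repository and refresh the local index
helm repo add openebs https://openebs.github.io/charts
helm repo update

# install the chart into a dedicated namespace
kubectl create namespace openebs
helm install openebs openebs/openebs --namespace openebs
```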
we used command-line kubectl as well as the graphical interface, and the setup experience was straightforward with either method: less than 20 minutes from click to ready.
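to sanity-check the result, a couple of standard kubectl queries go a long way; the storage class names below are the OpenEBS defaults at the time and may differ in your release:

```bash
# confirm the OpenEBS control-plane pods are up
kubectl get pods -n openebs

# confirm the default storage classes (openebs-hostpath, openebs-device, ...)
kubectl get storageclass

# create a small test PVC against the default hostpath storage class
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
EOF
```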
rack-level - Sesame for Open Systems
for our rack-level deployment, we have four nodes using JBOD HDD storage:
nlou14 - Leopard ORv2-DDR4, dual E5-2678 v3 @ 2.50GHz, 128GB memory, 432TB HDD
36 Vendor: ATA Model: HGST HUH721212AL Rev: W3D0 12TB
nlou12 - Leopard ORv2-DDR4, dual E5-2678 v3 @ 2.50GHz, 128GB memory, 432TB HDD
36 Vendor: ATA Model: HGST HUH721212AL Rev: W3D0 12TB
nrou14 - Leopard ORv2-DDR4, dual E5-2678 v3 @ 2.50GHz, 128GB memory, 432TB HDD
36 Vendor: ATA Model: HGST HUH721212AL Rev: W3D0 12TB
nrou12 - Leopard ORv2-DDR4, dual E5-2678 v3 @ 2.50GHz, 128GB memory, 432TB HDD
36 Vendor: ATA Model: HGST HUH721212AL Rev: W3D0 12TB
and two nodes using NVMe flash storage:
nlou17 - Leopard ORv2-DDR4, dual E5-2680 v4 @ 2.40GHz, 256GB memory, 15TB NVMe
4 nvme WUS4BB038D4M9E4 3.84 TB (WD SN640 3.84TB)
nlou15 - Leopard ORv2-DDR4, dual E5-2678 v3 @ 2.50GHz, 128GB memory, 15TB NVMe
4 nvme WUS4BB038D4M9E4 3.84 TB (WD SN640 3.84TB)
this means that our cluster was able to expose a total of 1.7 PB of HDD capacity (4 nodes × 36 × 12TB) and 30TB of NVMe flash (2 nodes × 4 × 3.84TB) to the Kubernetes workloads
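on a cluster like this, the OpenEBS node-disk-manager discovers each physical drive as a BlockDevice resource, which is how that raw capacity becomes visible to Kubernetes; a quick way to inspect it (resource and namespace names per the OpenEBS 1.x defaults):

```bash
# list every drive that node-disk-manager has discovered, cluster-wide
kubectl get blockdevices -n openebs -o wide
```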
a typical cluster setup from our customers might consist of four or five JBOD nodes (up to 4.3 PB of total HDD storage) and a half-dozen flash nodes (90TB of NVMe flash) supporting up to 18 compute nodes (432 cores, 9.2 TB of memory), all connected with dual 25G ethernet via our top-of-rack 32-port 100G switch
this configuration in a Sesame rack delivers balanced storage, compute, and networking in a cost-effective solution
a detailed view of our rack-scale solutions can be seen in this short (4 min) video:
full details of these offerings, as well as contact info, can be found at our website, sesame.com
comments and questions welcome - thanks for reading!
in addition to OpenEBS, we have also tested the Ceph and OpenIO software-defined storage solutions on the same hardware nodes - more on those experiences in our next post!