To summarize, the step-by-step procedure I used to get the pods running is:
Step 1: create a node pool in AKS, tainted so that only storage workloads are scheduled on it:
az aks nodepool add --cluster-name aks3del --name npstorage --node-count 2 --resource-group aks3del --node-taints storage-node=true:NoSchedule
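Because the npstorage pool is created with the storage-node=true:NoSchedule taint, no pod will land on those nodes unless it tolerates that taint. The Rook cluster manifest therefore needs a matching toleration; a minimal sketch of the relevant fragment (the key, value, and effect must match the taint used above):

```yaml
# Fragment for the CephCluster placement section (sketch; adapt to your manifest).
# Key/value/effect must match the taint applied to the npstorage pool above.
placement:
  all:
    tolerations:
    - key: storage-node
      operator: Equal
      value: "true"
      effect: NoSchedule
```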
Step 2: fetch the cluster credentials so kubectl can talk to the cluster:
az aks get-credentials --resource-group aks3del --name aks3del
Step 4: verify that the nodes, including the new npstorage pool, are Ready:
kubectl get nodes
Step 5: clone the Rook repository:
git clone https://github.com/rook/rook.git
Step 6: the clone creates a rook folder in your current directory (for example /home/user/rook). Change into it:
cd rook
Step 7: switch to the cluster/examples/kubernetes/ceph directory inside the repository and follow the steps below:
cd cluster/examples/kubernetes/ceph
Step 8: from the cluster/examples/kubernetes/ceph directory, apply the common resources (namespace, CRDs, RBAC):
kubectl apply -f common.yaml
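Before moving on, it is worth a quick sanity check that common.yaml applied cleanly; assuming the default names from the Rook manifests, the namespace and the CephCluster CRD should now exist:

```shell
# Sanity check: resources created by common.yaml (default Rook names assumed)
kubectl get namespace rook-ceph
kubectl get crd cephclusters.ceph.rook.io
```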
Step 9: create a new operator2.yaml file (do not use the operator.yaml shipped in the directory; create your own) and apply it:
vi operator2.yaml
kubectl apply -f operator2.yaml
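You can watch the operator come up before creating the cluster (app=rook-ceph-operator is the label the default Rook manifests use; adjust if your operator2.yaml changes it):

```shell
# Wait for the operator pod to reach Running before creating the cluster
kubectl -n rook-ceph get pods -l app=rook-ceph-operator
```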
Validate that you have a storage class for Premium SSD disks:
kubectl get storageclass
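AKS normally ships a managed-premium storage class by default, but if the listing above does not show one backed by Premium SSDs, you can create it yourself. A minimal sketch using the in-tree azure-disk provisioner (the class name is my choice; newer CSI-based clusters use the disk.csi.azure.com provisioner with a skuName parameter instead):

```yaml
# StorageClass backed by Azure Premium SSD managed disks (sketch; name is arbitrary)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
```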
Step 10: create a new cluster2.yaml file (copy and paste the attached one) and apply it:
kubectl apply -f cluster2.yaml
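I cannot reproduce the attached file here, but to illustrate the pieces that matter: a cluster2.yaml for PVC-backed OSDs combines the mon settings, a storageClassDeviceSet pointing at the Premium SSD storage class, and the toleration for the tainted npstorage nodes. A trimmed sketch (all values are illustrative, not the attached file):

```yaml
# Illustrative CephCluster sketch, NOT the attached cluster2.yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
  storage:
    storageClassDeviceSets:
    - name: set1
      count: 2                # one OSD per storage node
      portable: false
      placement:
        tolerations:
        - key: storage-node   # must match the npstorage taint
          operator: Equal
          value: "true"
          effect: NoSchedule
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          storageClassName: managed-premium
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 100Gi
          volumeMode: Block
```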
Step 11: validate that the OSD pods were created:
kubectl get pods -n rook-ceph
You will see some pods stuck in Init for a while; they will eventually start.
Each rook-ceph-osd pod mounts a disk through its own PVC, and one OSD runs per storage node. If a node goes down and its disk is lost with it, the remaining nodes hold enough replicated data to rebuild the lost OSD on a new node. The rook-ceph-mon-X pods control the placement logic, deciding where data has to be replicated to keep it redundant, and when data is accessed the mons are the ones that tell the client where to retrieve it from.
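Once the OSDs are up, you can check the cluster health and replication state that the mons report by using the Rook toolbox (this assumes you have applied toolbox.yaml from the same examples directory):

```shell
# Requires the toolbox deployment from toolbox.yaml in cluster/examples/kubernetes/ceph
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
```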