There are moments when simple tasks slow down our progress, often because they're not part of our regular routine. This post captures the set of steps to enable a shared Block Volume for multiple OCI Compute Instances. Such a setup also needs the Oracle Cluster File System (OCFS2) to be enabled.
I so wish this enablement were possible with a click of a button or a single API call ...
Let's get our hands dirty.
Starting point:
- Two OCI instances accessible over SSH with key-based authentication
- A Block Volume in the same Availability Domain
Attach the Block Volume to the Instances
- Navigate to the unattached Block Volume, then Attached Instances >> Attach Instance
Attach the first instance, choosing the Read/Write - Shareable access type so the volume can be attached to more than one instance.
Repeat the same for the second instance.
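If you prefer automation over the console, the attachment can also be scripted with the OCI CLI. A minimal sketch, with placeholder OCIDs:
# Attach the volume to instance-1 as a shareable, read/write paravirtualized attachment
oci compute volume-attachment attach \
  --type paravirtualized \
  --instance-id <INSTANCE 1 OCID> \
  --volume-id <VOLUME OCID> \
  --is-shareable true
# Repeat with <INSTANCE 2 OCID> for the second instance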
Set up the Oracle Cluster File System (OCFS2)
1) Add a stateful ingress rule in the Security List of the instances' VCN
Source: Subnet/VCN CIDR
IP Protocol: TCP and UDP
Source Port Range: All
Destination Port Range: 3260 and 7777
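These rules can also be applied with the OCI CLI. A hedged sketch (the OCID and CIDR are placeholders, and note that security-list update replaces the entire ingress rule set, so merge these entries with your existing rules first):
# Four stateful ingress rules: TCP/UDP x ports 3260 and 7777 (protocol 6 = TCP, 17 = UDP)
cat > ingress-rules.json <<'EOF'
[
  {"protocol": "6",  "source": "<SUBNET/VCN CIDR>", "tcpOptions": {"destinationPortRange": {"min": 3260, "max": 3260}}},
  {"protocol": "6",  "source": "<SUBNET/VCN CIDR>", "tcpOptions": {"destinationPortRange": {"min": 7777, "max": 7777}}},
  {"protocol": "17", "source": "<SUBNET/VCN CIDR>", "udpOptions": {"destinationPortRange": {"min": 3260, "max": 3260}}},
  {"protocol": "17", "source": "<SUBNET/VCN CIDR>", "udpOptions": {"destinationPortRange": {"min": 7777, "max": 7777}}}
]
EOF
oci network security-list update --security-list-id <SECURITY LIST OCID> --ingress-security-rules file://ingress-rules.json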
2) Open the target ports in the local OS firewall on instance-1 and instance-2
sudo firewall-cmd --zone=public --permanent --add-port=7777/tcp
sudo firewall-cmd --zone=public --permanent --add-port=7777/udp
sudo firewall-cmd --zone=public --permanent --add-port=3260/tcp
sudo firewall-cmd --zone=public --permanent --add-port=3260/udp
sudo firewall-cmd --complete-reload
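To confirm the firewall change, list the open ports; once the cluster stack is online (step 9), the cluster port can also be probed from the peer node:
# List the ports now open in the public zone
sudo firewall-cmd --zone=public --list-ports
# From instance-1, after step 9: verify instance-2 answers on the OCFS2 port
nc -zv <PRIVATE IP OF instance 2> 7777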
3) Install OCFS2 Packages on instance-1 and instance-2
sudo dnf install -y ocfs2-tools
4) Create the cluster definition on instance-1
sudo o2cb add-cluster ocifs2
5) Add the nodes to the cluster on instance-1
sudo o2cb add-node ocifs2 instance-1 --ip <PRIVATE IP OF instance 1>
sudo o2cb add-node ocifs2 instance-2 --ip <PRIVATE IP OF instance 2>
6) Check the Cluster Config file on instance-1
[opc@instance-1 ~]$ sudo cat /etc/ocfs2/cluster.conf
cluster:
        heartbeat_mode = local
        node_count = 2
        name = ocifs2

node:
        number = 0
        cluster = ocifs2
        ip_port = 7777
        ip_address = 172.17.0.173
        name = instance-1

node:
        number = 1
        cluster = ocifs2
        ip_port = 7777
        ip_address = 172.17.0.154
        name = instance-2
7) Copy the content of the cluster config file to instance-2 at /etc/ocfs2/cluster.conf
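One way to copy it from instance-1, assuming the opc user on instance-2 is reachable over SSH (adjust user and keys to your setup):
# Run on instance-1: create the target directory on instance-2, then stream the config across
ssh opc@<PRIVATE IP OF instance 2> "sudo mkdir -p /etc/ocfs2"
sudo cat /etc/ocfs2/cluster.conf | ssh opc@<PRIVATE IP OF instance 2> "sudo tee /etc/ocfs2/cluster.conf > /dev/null"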
8) Configure the Cluster Stack on instance-1 and instance-2 sequentially
[opc@instance-1 ~]$ sudo /sbin/o2cb.init configure
Configuring the O2CB driver.
...
...
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocifs2
...
9) Verify the cluster stack settings; note that the drivers are loaded, the filesystems are mounted, and the cluster is online, but the heartbeat is not yet active (it becomes active once an OCFS2 volume is mounted)
[opc@instance-1 ~]$ sudo /sbin/o2cb.init status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Stack glue driver: Loaded
Stack plugin "o2cb": Loaded
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster "ocifs2": Online
Heartbeat dead threshold: 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Heartbeat mode: Local
Checking O2CB heartbeat: Not active
Debug file system at /sys/kernel/debug: mounted
Repeat the above steps on instance-2
10) Set up boot-time startup of the o2cb and ocfs2 services on instance-1 and instance-2
sudo systemctl enable o2cb
sudo systemctl enable ocfs2
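A quick sanity check that both services are set to start at boot:
# Should print "enabled" for each service
systemctl is-enabled o2cb ocfs2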
11) Configure the kernel for cluster operation on instance-1 and instance-2
sudo sysctl kernel.panic=30
sudo sysctl kernel.panic_on_oops=1
Add the entries to the sysctl configuration file for persistence
sudo vi /etc/sysctl.conf
kernel.panic=30
kernel.panic_on_oops=1
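The values in /etc/sysctl.conf can be applied and verified without a reboot:
# Load settings from /etc/sysctl.conf, then confirm both values
sudo sysctl -p
sysctl kernel.panic kernel.panic_on_oops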
12) Create the OCFS2 volume on instance-1 only
[opc@instance-1 ~]$ sudo mkfs.ocfs2 -L "ocfs2" /dev/sdb
mkfs.ocfs2 1.8.6
Cluster stack: classic o2cb
Label: ocfs2
...
Formatting Journals:
...
Writing lost+found: done
mkfs.ocfs2 successful
Create a mount directory
[opc@instance-1 ~]$ sudo mkdir /ocfs2
Specify the _netdev option in /etc/fstab so the system mounts the OCFS2 volume at boot, after the network is up
sudo vi /etc/fstab
/dev/sdb /ocfs2 ocfs2 _netdev,defaults 0 0
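A note of caution: /dev/sdb names are not guaranteed to be stable across reboots. Referencing the volume by filesystem UUID (or by the consistent /dev/oracleoci/oraclevd* path that OCI provides for paravirtualized attachments) is safer; a sketch:
# Find the UUID of the freshly formatted OCFS2 volume
sudo blkid /dev/sdb
# Then reference it in /etc/fstab instead of the raw device name
UUID=<UUID FROM blkid OUTPUT> /ocfs2 ocfs2 _netdev,defaults 0 0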
Reload systemd so it picks up the fstab change, then mount
sudo systemctl daemon-reload
sudo mount -a
Check the mounted block volume
[opc@instance-1 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
sdb 8:16 0 1T 0 disk /ocfs2
13) Set up and mount the OCFS2 volume on instance-2 (add the same /etc/fstab entry as on instance-1)
[opc@instance-2 ~]$ sudo mkdir /ocfs2
[opc@instance-2 ~]$ sudo vi /etc/fstab
[opc@instance-2 ~]$ sudo systemctl daemon-reload
[opc@instance-2 ~]$ sudo mount -a
[opc@instance-2 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
sdb 8:16 0 1T 0 disk /ocfs2
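With both nodes mounted, the mounted.ocfs2 utility from ocfs2-tools can confirm which cluster nodes are using the volume:
# Full detect mode: lists OCFS2 volumes and the nodes that have them mounted
sudo mounted.ocfs2 -f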
14) Test: create a file on the mounted volume, e.g. on instance-2
[opc@instance-2 ~]$ cd /ocfs2/
[opc@instance-2 ocfs2]$ sudo touch shared-cluster-file.txt
[opc@instance-2 ocfs2]$ ls -lart
total 8
drwxr-xr-x. 2 root root 3896 Apr 29 12:03 lost+found
dr-xr-xr-x. 19 root root 265 Apr 29 12:15 ..
-rw-r--r--. 1 root root 0 Apr 29 12:19 shared-cluster-file.txt
drwxr-xr-x. 3 root root 3896 Apr 29 12:19 .
15) Test: access the file on the mounted volume from the other instance, e.g. on instance-1
[opc@instance-1 ~]$ cd /ocfs2/
[opc@instance-1 ocfs2]$ ls -lart
total 8
drwxr-xr-x. 2 root root 3896 Apr 29 12:03 lost+found
dr-xr-xr-x. 19 root root 265 Apr 29 12:04 ..
-rw-r--r--. 1 root root 0 Apr 29 12:19 shared-cluster-file.txt
drwxr-xr-x. 3 root root 3896 Apr 29 12:19 .
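For a stronger test than a directory listing, a content round-trip shows both nodes see the same data:
# On instance-2: write some content to the shared file
echo "hello from instance-2" | sudo tee /ocfs2/shared-cluster-file.txt
# On instance-1: read it back
cat /ocfs2/shared-cluster-file.txt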