DEV Community

Nirjas Jakilim

Configuring a Cluster File System on OCI using OCFS2

Setting up a shared file system across multiple virtual machines in the cloud can be tricky, but Oracle Cluster File System Version 2 (OCFS2) makes it straightforward. If you are running instances on Oracle Cloud Infrastructure (OCI) and need multiple VMs to read and write to the same block volume simultaneously, this guide will walk you through the process on Ubuntu.

Prerequisites

Before we begin the configuration, ensure you have the following infrastructure in place:

At least two Ubuntu VMs provisioned: For this guide, we will use server1 (IP: 10.0.1.85) and server2 (IP: 10.0.1.235).
A Block Volume created and attached to both VMs.
The access type for the block volume must be set to Read/write - shareable.

Once these resources are ready, you can proceed to configure the cluster file system.

Step 1: Configure Security Rules

OCFS2 nodes need to communicate with each other over specific ports. You must update your OCI Virtual Cloud Network (VCN) security lists to allow internal traffic.

Allow TCP communication on ports 7777 (used by the OCFS2/o2cb cluster heartbeat) and 3260 (the iSCSI target port). Add the following ingress rules for your private subnet (e.g., 10.0.1.0/24):

Source: 10.0.1.0/24 | IP Protocol: TCP | Destination Port: 7777
Source: 10.0.1.0/24 | IP Protocol: TCP | Destination Port: 3260
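If you prefer the OCI CLI over the console, the same ingress rules can be added with `oci network security-list update`. This is only a sketch: the security-list OCID is a placeholder, the CLI is assumed to be configured already, and note that this command replaces the list's entire set of ingress rules, so merge these entries with your existing rules first.

```shell
# ingress.json: allow the OCFS2 heartbeat (7777) and iSCSI (3260) from the subnet.
# Caution: "update --ingress-security-rules" replaces ALL existing ingress rules,
# so include your current rules in this file as well before applying.
cat > ingress.json <<'EOF'
[
  {"protocol": "6", "source": "10.0.1.0/24",
   "tcpOptions": {"destinationPortRange": {"min": 7777, "max": 7777}}},
  {"protocol": "6", "source": "10.0.1.0/24",
   "tcpOptions": {"destinationPortRange": {"min": 3260, "max": 3260}}}
]
EOF

# <security-list-ocid> is a placeholder for your subnet's security list OCID.
oci network security-list update \
  --security-list-id <security-list-ocid> \
  --ingress-security-rules file://ingress.json
```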

To verify the connectivity between the VMs, run the following netcat commands from one VM to the other:

nc -vz 10.0.1.85 3260;
nc -vz 10.0.1.85 7777

Note: If your security rules are correct, the connection attempt will be reported as "refused" rather than timing out, because the cluster services are not listening yet. This is expected. Repeat the same check from the other VM using server2's IP (10.0.1.235).

Step 2: Install OCFS2 Packages

Log into both VMs and install the necessary OCFS2 tools. Run the following command on both machines:

sudo apt update && sudo apt upgrade -y;
sudo apt install ocfs2-tools-dev ocfs2-tools -y

Step 3: Define the Cluster

Next, we need to create the cluster definition using the o2cb utility. This will generate the /etc/ocfs2/cluster.conf file if it doesn't already exist.

Run the following command on both VMs to create a cluster named ociocfs2:

sudo o2cb add-cluster ociocfs2

Now, add your nodes to the cluster. Run these commands on both VMs:

sudo o2cb add-node ociocfs2 server1 --ip 10.0.1.85;
sudo o2cb add-node ociocfs2 server2 --ip 10.0.1.235

You can verify the configuration by checking the .conf file:

sudo cat /etc/ocfs2/cluster.conf

The output should list your cluster name, the heartbeat mode, and the details for both nodes.

cluster:
        name = ociocfs2
        heartbeat_mode = local
        node_count = 2

node:
        cluster = ociocfs2
        number = 0
        ip_port = 7777
        ip_address = 10.0.1.85
        name = server1

node:
        cluster = ociocfs2
        number = 1
        ip_port = 7777
        ip_address = 10.0.1.235
        name = server2

Step 4: Attach the Block Volume via iSCSI

You need to run the iSCSI commands provided in the OCI console to attach the block volume at the OS level. To find them, open your cloud console and go to Storage > Block Volumes, then click your block volume's name. Under the Attached instances section, open the three-dot menu next to each instance to view the iSCSI attach and detach commands.

(Screenshot: iSCSI command option page in the OCI console)

On server1:

sudo iscsiadm -m node -o new -T iqn.2015-12.com.oracleiaas:5f2a1b82-9c40-4e1a-8b3d-72f6a5b9c1e4 -p 169.254.2.2:3260;
sudo iscsiadm -m node -o update -T iqn.2015-12.com.oracleiaas:5f2a1b82-9c40-4e1a-8b3d-72f6a5b9c1e4 -n node.startup -v automatic;
sudo iscsiadm -m node -T iqn.2015-12.com.oracleiaas:5f2a1b82-9c40-4e1a-8b3d-72f6a5b9c1e4 -p 169.254.2.2:3260 -l

On server2:

sudo iscsiadm -m node -o new -T iqn.2015-12.com.oracleiaas:5f2a1b82-9c40-4e1a-8b3d-72f6a5b9c1e4 -p 169.254.2.3:3260;
sudo iscsiadm -m node -o update -T iqn.2015-12.com.oracleiaas:5f2a1b82-9c40-4e1a-8b3d-72f6a5b9c1e4 -n node.startup -v automatic;
sudo iscsiadm -m node -T iqn.2015-12.com.oracleiaas:5f2a1b82-9c40-4e1a-8b3d-72f6a5b9c1e4 -p 169.254.2.3:3260 -l

Once attached, identify the device path of your new volume:

sudo fdisk -l

Look for a device such as /dev/sdb whose size exactly matches the block volume you created. That should be your target device. For the remainder of this guide, we will assume the device is /dev/sdb.
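If the `fdisk -l` output is noisy, `lsblk` gives a more compact view. Here we simply assume the new volume appears with your block volume's size and no mountpoint yet:

```shell
# List block devices with their size, type, and mountpoint; the freshly
# attached iSCSI volume should show up with no mountpoint assigned.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```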

Step 5: Format and Sync the Partition

Start the cluster services before formatting the volume:

sudo systemctl start ocfs2;
sudo systemctl start o2cb
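To have the cluster stack come back automatically after a reboot, it is also worth enabling both services on both VMs (the unit names are the same ones used with `systemctl start` above):

```shell
# Enable the cluster services so they start automatically at boot.
sudo systemctl enable o2cb ocfs2
```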

Now, format the volume on only one machine:

sudo mkfs.ocfs2 -L "ocfs2" /dev/sdb

This will initialize the superblock, format the journals, and establish the cluster stack. Once it says mkfs.ocfs2 successful, the formatting is complete.
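To double-check that the volume was formatted as OCFS2, you can run `mounted.ocfs2` (shipped with ocfs2-tools) in detect mode on either node:

```shell
# Detect mode (-d) scans block devices and prints the UUID and label of
# any OCFS2 volumes it finds; /dev/sdb should appear with the label "ocfs2".
sudo mounted.ocfs2 -d
```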

To ensure the partition table is updated across your environment, run the following sync command on both VMs:

sudo blockdev --rereadpt /dev/sdb

Step 6: Final Configuration and Mounting

Reconfigure the cluster tools on both VMs by running:

sudo dpkg-reconfigure ocfs2-tools

When prompted, provide the exact cluster name you defined earlier (ociocfs2).
To ensure the drive mounts automatically on boot, add the following line to your /etc/fstab file:

/dev/sdb /data ocfs2 _netdev,defaults 0 0
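Because iSCSI device names like /dev/sdb are not guaranteed to stay stable across reboots, a safer variant of the same fstab entry is to mount by filesystem UUID. This is a sketch; substitute the UUID that `blkid` reports for your volume:

```shell
# Find the filesystem UUID of the formatted OCFS2 volume...
sudo blkid /dev/sdb

# ...then reference it in /etc/fstab instead of the device path, e.g.:
# UUID=<uuid-from-blkid> /data ocfs2 _netdev,defaults 0 0
```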

You also need to configure the kernel for cluster operation so the changes persist across reboots. Add these entries to your /etc/sysctl.conf file:

# Define panic and panic_on_oops for cluster operation 
kernel.panic=30
kernel.panic_on_oops=1
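To apply these kernel settings immediately without waiting for a reboot, reload sysctl on both VMs and confirm the values:

```shell
# Re-read /etc/sysctl.conf and apply the panic settings now.
sudo sysctl -p

# Verify the running values.
sysctl kernel.panic kernel.panic_on_oops
```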

Ensure the /data target directory exists on both VMs:

sudo mkdir -p /data

Then, run the mount command on each node:

sudo mount -a

If everything is configured correctly, the volume should now be mounted on both nodes.
You can verify the mount point on each instance using df -hT:

nirzak@server1:~$ df -hT | grep data
/dev/sdb      ocfs2  106G  2.2G  104G   3% /data

nirzak@server2:~$ df -hT | grep /data
/dev/sdb      ocfs2  106G  2.2G  104G   3% /data

Troubleshooting & Expanding the Cluster

Module Errors: If you encounter a module error during setup, you may need to install the Oracle-specific Linux modules. Run:

sudo apt install linux-modules-extra-$(uname -r)
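After installing the extra modules package, you can confirm that the ocfs2 kernel module actually loads:

```shell
# Load the ocfs2 module and confirm it is present in the running kernel.
sudo modprobe ocfs2
lsmod | grep ocfs2
```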

Adding New Nodes: If you need to make changes to your nodes or add new ones to the cluster configuration later, you must unregister and re-register the cluster using the following commands:

sudo o2cb unregister-cluster <cluster_name>;
sudo o2cb register-cluster <cluster_name>
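For example, to bring a hypothetical third node into the cluster (the name server3 and IP 10.0.1.100 are placeholders), you would add it on every node and then re-register:

```shell
# server3 / 10.0.1.100 are hypothetical values; run on all cluster nodes.
sudo o2cb add-node ociocfs2 server3 --ip 10.0.1.100

# Re-register the cluster so the new definition takes effect.
sudo o2cb unregister-cluster ociocfs2
sudo o2cb register-cluster ociocfs2
```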

Then restart both services:

sudo systemctl restart ocfs2 && sudo systemctl restart o2cb

After restarting the services, you can mount the file system on the new node with:

sudo mount.ocfs2 /dev/sdb /data

Don't forget to replace /dev/sdb with your device's path and /data with your mount point's path. Your cluster file system should now be online. To confirm it is working, copy a file into the shared directory on one server and check that it appears on the second server.
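The copy test above can be scripted as a quick smoke test (the hostnames and mount path are the ones used throughout this guide):

```shell
# On server1: write a test file into the shared OCFS2 mount.
echo "hello from server1" | sudo tee /data/cluster-test.txt

# On server2: the same file should be immediately visible.
cat /data/cluster-test.txt
```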
