Setting up a shared file system across multiple virtual machines in the cloud can be tricky, but Oracle Cluster File System Version 2 (OCFS2) makes it straightforward. If you are running instances on Oracle Cloud Infrastructure (OCI) and need multiple VMs to read and write to the same block volume simultaneously, this guide will walk you through the process on Ubuntu.
Prerequisites
Before we begin the configuration, ensure you have the following infrastructure in place:
At least two Ubuntu VMs provisioned: For this guide, we will use server1 (IP: 10.0.1.85) and server2 (IP: 10.0.1.235).
A Block Volume created and attached to both VMs.
The access type for the block volume must be set to Read/Write - Shareable.
Once these resources are ready, you can proceed to configure the cluster file system.
Step 1: Configure Security Rules
OCFS2 nodes need to communicate with each other over specific ports. You must update your OCI Virtual Cloud Network (VCN) security lists to allow internal traffic.
Allow TCP communication on ports 7777 and 3260. Add the following ingress rules for your private subnet (e.g., 10.0.1.0/24):
Source: 10.0.1.0/24 | IP Protocol: TCP | Destination Port: 7777
Source: 10.0.1.0/24 | IP Protocol: TCP | Destination Port: 3260
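If you manage your VCN with the OCI CLI rather than the console, the two rules can be added with a sketch like the one below. The security-list OCID is a placeholder, and note that this command replaces the list's existing ingress rules, so merge these entries with your current rules before applying:

```shell
# Replace the OCID with your subnet's security list OCID.
# WARNING: --ingress-security-rules overwrites the existing ingress
# rules; include your current rules in the JSON array as well.
oci network security-list update \
  --security-list-id ocid1.securitylist.oc1..example \
  --ingress-security-rules '[
    {"protocol": "6", "source": "10.0.1.0/24",
     "tcpOptions": {"destinationPortRange": {"min": 7777, "max": 7777}}},
    {"protocol": "6", "source": "10.0.1.0/24",
     "tcpOptions": {"destinationPortRange": {"min": 3260, "max": 3260}}}
  ]'
```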
To verify the connectivity between the VMs, run the following netcat commands from one VM to the other:
nc -vz 10.0.1.85 3260;
nc -vz 10.0.1.85 7777
Note: If your security rules are correct, netcat will report the connection as "refused" because the cluster services are not running yet. This is expected. Repeat the same checks from the other VM, targeting server2's IP (10.0.1.235).
Step 2: Install OCFS2 Packages
Log into both VMs and install the necessary OCFS2 tools. Run the following command on both machines:
sudo apt update && sudo apt upgrade -y;
sudo apt install ocfs2-tools-dev ocfs2-tools -y
Step 3: Define the Cluster
Next, we need to create the cluster definition using the o2cb utility. This will generate the /etc/ocfs2/cluster.conf file if it doesn't already exist.
Run the following command on both VMs to create a cluster named ociocfs2:
sudo o2cb add-cluster ociocfs2
Now, add your nodes to the cluster. Run these commands on both VMs:
sudo o2cb add-node ociocfs2 server1 --ip 10.0.1.85;
sudo o2cb add-node ociocfs2 server2 --ip 10.0.1.235
You can verify the configuration by checking the .conf file:
sudo cat /etc/ocfs2/cluster.conf
The output should list your cluster name, the heartbeat mode, and the details for both nodes.
cluster:
    name = ociocfs2
    heartbeat_mode = local
    node_count = 2

node:
    cluster = ociocfs2
    number = 0
    ip_port = 7777
    ip_address = 10.0.1.85
    name = server1

node:
    cluster = ociocfs2
    number = 1
    ip_port = 7777
    ip_address = 10.0.1.235
    name = server2
Step 4: Attach the Block Volume via iSCSI
You need to run the iSCSI commands provided in your OCI console to attach the block volume at the OS level. To find them, open your cloud console and navigate to Storage > Block Volumes, then click your block volume's name. Under Attached Instances, click the three-dot menu next to each instance to view that instance's attach and detach commands.
On server1:
sudo iscsiadm -m node -o new -T iqn.2015-12.com.oracleiaas:5f2a1b82-9c40-4e1a-8b3d-72f6a5b9c1e4 -p 169.254.2.2:3260;
sudo iscsiadm -m node -o update -T iqn.2015-12.com.oracleiaas:5f2a1b82-9c40-4e1a-8b3d-72f6a5b9c1e4 -n node.startup -v automatic;
sudo iscsiadm -m node -T iqn.2015-12.com.oracleiaas:5f2a1b82-9c40-4e1a-8b3d-72f6a5b9c1e4 -p 169.254.2.2:3260 -l
On server2:
sudo iscsiadm -m node -o new -T iqn.2015-12.com.oracleiaas:5f2a1b82-9c40-4e1a-8b3d-72f6a5b9c1e4 -p 169.254.2.3:3260;
sudo iscsiadm -m node -o update -T iqn.2015-12.com.oracleiaas:5f2a1b82-9c40-4e1a-8b3d-72f6a5b9c1e4 -n node.startup -v automatic;
sudo iscsiadm -m node -T iqn.2015-12.com.oracleiaas:5f2a1b82-9c40-4e1a-8b3d-72f6a5b9c1e4 -p 169.254.2.3:3260 -l
Once attached, identify the device path of your new volume:
sudo fdisk -l
Look for a device such as /dev/sdb whose size exactly matches the block volume you created; that is your target device. For the remainder of this guide, we will assume the device is /dev/sdb.
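If the fdisk output is noisy, lsblk gives a more compact view. The new shareable volume shows up as a disk with no partitions whose size matches what you provisioned:

```shell
# Compact view of block devices: name, size, and type
lsblk -o NAME,SIZE,TYPE
```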
Step 5: Format and Sync the Partition
Start the cluster services before creating the partition:
sudo systemctl start ocfs2;
sudo systemctl start o2cb
Now, format the partition on only one machine:
sudo mkfs.ocfs2 -L "ocfs2" /dev/sdb
This will initialize the superblock, format the journals, and establish the cluster stack. Once it says mkfs.ocfs2 successful, the formatting is complete.
To ensure the partition table is updated across your environment, run the following sync command on both VMs:
sudo blockdev --rereadpt /dev/sdb
Step 6: Final Configuration and Mounting
Reconfigure the cluster tools on both VMs by running:
sudo dpkg-reconfigure ocfs2-tools
When prompted, provide the exact cluster name you defined earlier (ociocfs2).
To ensure the drive mounts automatically on boot, add the following line to your /etc/fstab file:
/dev/sdb /data ocfs2 _netdev,defaults 0 0
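Device names like /dev/sdb are not guaranteed to be stable across reboots. Since the volume was formatted with the label ocfs2 in the previous step, a more robust fstab entry mounts by label instead:

```
LABEL=ocfs2 /data ocfs2 _netdev,defaults 0 0
```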
You also need to configure the kernel for cluster operation so the changes persist across reboots. Add these entries to your /etc/sysctl.conf file:
# Define panic and panic_on_oops for cluster operation
kernel.panic=30
kernel.panic_on_oops=1
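To apply these settings immediately without rebooting, reload sysctl on both VMs:

```shell
# Load the values from /etc/sysctl.conf into the running kernel
sudo sysctl -p

# Confirm the new values (should print 30 and 1 respectively)
sysctl -n kernel.panic
sysctl -n kernel.panic_on_oops
```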
Ensure the /data target directory exists on both VMs. Then, run the mount command on each node:
sudo mount -a
If everything is configured correctly, the volume should now be mounted on both nodes, and it will be remounted automatically on boot.
You can verify the mount point on your instances using df -hT:
nirzak@server1:~$ df -hT | grep /data
/dev/sdb ocfs2 106G 2.2G 104G 3% /data
nirzak@server2:~$ df -hT | grep /data
/dev/sdb ocfs2 106G 2.2G 104G 3% /data
Troubleshooting & Expanding the Cluster
Module Errors: If you encounter a module error during setup, you may need to install the Oracle-specific Linux modules. Run:
sudo apt install linux-modules-extra-$(uname -r)
Adding New Nodes: If you need to make changes to your nodes or add new ones to the cluster configuration later, you must unregister and re-register the cluster using the following commands:
sudo o2cb unregister-cluster <cluster_name>;
sudo o2cb register-cluster <cluster_name>
Then restart both services:
sudo systemctl restart ocfs2 && sudo systemctl restart o2cb
After restarting the services, you can mount the file system to the new node with:
sudo mount.ocfs2 /dev/sdb /data
Don't forget to replace /dev/sdb with your device's path and /data with your mount point. Your cluster file system should now be online. Try copying a file into the shared directory on one server and check whether it appears on the other; if it does, your cluster is working correctly.
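For example, assuming the /data mount point used in this guide:

```shell
# On server1: create a test file on the shared volume
echo "hello from server1" | sudo tee /data/shared-test.txt

# On server2: the same content should be visible immediately
cat /data/shared-test.txt
```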
