Introduction
According to the Gluster documentation, Gluster is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace.
💡 More documentation on Gluster can be found on the official Gluster documentation site.
We need at least 3 nodes for this configuration. Two of them will work as the gluster servers and one as the client.
📝 I have used 3 nodes myself for the glusterfs configuration. All of these nodes are virtual machines running CentOS 7 inside VMware Workstation.
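If you prefer working with hostnames later on, one option is to map the nodes in /etc/hosts on all three machines. The hostnames and IP addresses below are purely hypothetical placeholders; substitute the addresses of your own VMs:

```
# /etc/hosts (same entries on all three nodes) -- example addresses only
192.168.56.101  server1
192.168.56.102  server2
192.168.56.103  client1
```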
Creating and formatting partitions
📝 I have added an extra 1 GB HDD to both server-side Linux systems so that I can create partitions for testing Gluster.
Using fdisk, I created 4 partitions on each server:
fdisk -l /dev/sdb
Device Boot Start End Blocks Id System
/dev/sdb1 2048 514047 256000 83 Linux
/dev/sdb2 514048 1026047 256000 83 Linux
/dev/sdb3 1026048 1538047 256000 83 Linux
/dev/sdb4 1538048 2097151 279552 83 Linux
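fdisk is interactive; if you prefer a scripted alternative, parted can produce roughly the same layout non-interactively. This is only a sketch, assuming /dev/sdb is the same fresh, empty 1 GB disk as in the session above:

```shell
# Sketch: recreate roughly the same four-partition layout non-interactively
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary xfs 1MiB 251MiB
parted -s /dev/sdb mkpart primary xfs 251MiB 501MiB
parted -s /dev/sdb mkpart primary xfs 501MiB 751MiB
parted -s /dev/sdb mkpart primary xfs 751MiB 100%
```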
💡 It is up to you how many partitions you want for this hands-on. I needed a few more as I had some other testing to do.
Now, we will format the partitions with XFS on both servers:
mkfs.xfs /dev/sdb1
After a successful format, we need a directory where we will mount the partition (I used /gluster1 on the first server and /gluster2 on the second):
mkdir /gluster1
# Mounting our partition
mount /dev/sdb1 /gluster1
# Setting up auto-mount
echo '/dev/sdb1 /gluster1 xfs defaults 0 0' >> /etc/fstab
Installing the glusterfs package
We can directly use
yum -y install glusterfs-server
📝 If yum isn't able to find this package, run yum -y install centos-release-gluster first to enable the Gluster repository, then retry the command above.
Now we can just start the service
systemctl start glusterd
systemctl enable glusterd
Confirming the status
systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
Active: active (running) since Wed 2023-02-15 10:09:19 KST; 12h ago
Firewall settings
We can either turn off the firewall service
systemctl stop firewalld
systemctl disable firewalld
Or we can just add the glusterfs service:
firewall-cmd --permanent --add-service=glusterfs
success
# Or add the port (24007 is glusterd's management port; 24008 and further brick ports may be needed as well)
firewall-cmd --permanent --add-port=24007/tcp
success
firewall-cmd --reload
success
📝 I strongly recommend adding the service rather than disabling the firewall, especially if you aren't in a testing environment.
Configuring the peer pool
We can use hostnames to probe the other server, but I will be using the IP address to probe the second server.
📝 When probing with hostnames, it is advised to probe server1 back from server2 (and from server3, up to the nth server, depending on the cluster size) so that every peer knows the others by hostname.
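If you do go the hostname route, the probe and probe-back would look something like this (server1 and server2 are hypothetical names, not from the setup above):

```shell
# On server1
gluster peer probe server2

# On server2 -- probe back so that server1 is known by hostname rather than IP
gluster peer probe server1
```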
Probing from the first server
gluster peer probe <Your-Second-Server-IP>
Checking the peer status
gluster peer status
Number of Peers: 1
.
.
State: Peer in Cluster (Connected)
If we check the status from the second server, we should get the same status
Setting up a glusterfs volume
We need a volume that acts as the holder of 'bricks'. Basically, a brick is simply a directory on a server's filesystem that you export as part of a GlusterFS volume.
Before creating this volume, we need a new test directory inside /gluster1 (on the first server) and /gluster2 (on the second server) that will serve as the bricks for the Gluster volume.
On both of the servers,
# In Linux 1
mkdir /gluster1/gv0
# In Linux 2
mkdir /gluster2/gv0
Now from any server,
gluster volume create gv0 <Your-First-Server-IP>:/gluster1/gv0 <Your-Second-Server-IP>:/gluster2/gv0
Upon success, we should be able to see
volume create: gv0: success: please start the volume to access data
Starting the volume
gluster volume start gv0
volume start: gv0: success
We can check the information of our gluster volume
gluster volume info gv0
Volume Name: gv0
Type: Distribute
Volume ID: 42307952-b960-4f9d-85b5-00d8bbed7acf
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: <Your-First-Server-IP>:/gluster1/gv0
Brick2: <Your-Second-Server-IP>:/gluster2/gv0
📝 We can check the Gluster log files, e.g. /var/log/glusterfs/glusterd.log, to troubleshoot any issues.
Testing the volume
From the client machine, we need to install the glusterfs client package
yum -y install glusterfs-fuse
After installation, we need to create a mount point on the client
mkdir /gluster
Now, we just need to mount the gluster volume
mount -t glusterfs <First/Second-server-IP>:/gv0 /gluster
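To make the client mount survive reboots, an fstab entry can be added on the client as well. This is only a sketch; backup-volfile-servers is a mount.glusterfs option that lets the client fall back to another server for fetching the volume file if the first one is down at mount time:

```
# /etc/fstab on the client
<Your-First-Server-IP>:/gv0  /gluster  glusterfs  defaults,_netdev,backup-volfile-servers=<Your-Second-Server-IP>  0 0
```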
You can check the mount status using df -h, and that's it!
Conclusion
✨ There are different types of volumes in GlusterFS. By default, when we create a gluster volume, a distributed volume is created. We can also choose between replicated, distributed replicated, dispersed, and distributed dispersed volumes.
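As a sketch of a non-default type: a two-way replicated volume could be created much like gv0, just with the replica keyword. The volume name gv1 and the brick paths here are made up for illustration, and note that Gluster will warn that replica 2 volumes are prone to split-brain:

```shell
# Create and start a replicated (mirrored) volume across the two servers
gluster volume create gv1 replica 2 <Your-First-Server-IP>:/gluster1/gv1 <Your-Second-Server-IP>:/gluster2/gv1
gluster volume start gv1
```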
For more details on the different types of gluster volumes, do check out the architecture section of the Gluster documentation.