
The Punishment of NFS on Hardened EL7 Systems

Thomas H Jones II · Originally published at thjones2.blogspot.com

All of the customers I currently serve operate under two main requirements:

  • If using Linux, it has to be Enterprise Linux (RHEL or CentOS)
  • All such systems must be hardened to meet organizational specifications

The latter means that IPv6 support has to be disabled on all ELx deployments.

On the plus side of current customer-trends, most are (finally) making the effort to migrate from EL6 to EL7. Unfortunately, recent releases of EL7 included an update to the RPC subsystem. Worse, this update can break the RPC subsystem on a system that's been hardened to disable IPv6.

With later updates, the RPC subsystem will attempt to perform an IPv6 network-bind. It decides whether to attempt this based on whether the IPv6 components are available/enabled in the initramfs boot-kernel.

With typical hardening-routines, IPv6 disablement happens after the initramfs boot-kernel has loaded. This is done when the boot processes read the /etc/sysctl.conf file and the files within /etc/sysctl.d. Unfortunately, if the root filesystem's /etc/sysctl.conf disables IPv6 but the copy of /etc/sysctl.conf packaged within the initramfs doesn't match it, bad times ensue. The RPC subsystem assumes that IPv6 is available. Then, when systemd attempts to start the rpcbind.socket unit, it fails. All the other systemd units that depend on the rpcbind.socket unit then also fail. This means no RPC service and no NFS server or client services.
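To illustrate the mismatch, the IPv6 disablement usually lives in a sysctl drop-in like the one sketched below (the filename and settings are representative of common hardening baselines, not mandated by them), and `lsinitrd` can show whether the copy baked into the initramfs agrees:

```shell
# Representative hardening drop-in (filename is illustrative):
# /etc/sysctl.d/40-ipv6.conf
#   net.ipv6.conf.all.disable_ipv6 = 1
#   net.ipv6.conf.default.disable_ipv6 = 1

# Dump the sysctl.conf that was packaged into the running kernel's
# initramfs; if IPv6 isn't disabled *here*, RPC assumes it's available:
lsinitrd -f etc/sysctl.conf "/boot/initramfs-$(uname -r).img"
```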

In this scenario, the general fix-process is:

  1. Uninstall the dracut-config-generic RPM (yum erase -y dracut-config-generic)
  2. Rebuild the initramfs (dracut -v -f)
  3. Reboot the system

Once the system comes back from the reboot, all of the RPC components – and services that rely on them – should function as expected.
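As a shell sketch, the recovery procedure above looks like this (run as root; commands are as the steps describe):

```shell
# Remove the config RPM that forces generic (host-agnostic) initramfs builds:
yum erase -y dracut-config-generic

# Rebuild the initramfs for the running kernel, verbosely, overwriting it:
dracut -v -f

# Reboot so the rebuilt initramfs takes effect:
systemctl reboot
```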

...but that's only the first hurdle. When using default NFS mount options, NFS clients will attempt to perform an NFS v4.1 mount of the NFS server's shares. If NFS hasn't been explicitly configured for GSS-protected mounts, the mount of the filesystem typically takes around two minutes to occur, while the GSS-related subsystems try to negotiate the session before ultimately timing-out and reverting to the sys security-mode. To speed things up a skosh, one can:

  • Explicitly request the sys security-mode (using the sec=sys mount-option) – halves the amount of time needed to negotiate the initial mount-request
  • Force the use of NFSv3 (using the vers=3 mount-option) – avoids the security-related negotiations altogether, making the mount-action almost instantaneous
  • Wholly disable the rpc-gssd service-components (using systemctl's mask command for the rpc-gssd and/or nfs-secure services) – similarly avoids the GSS-related negotiation-components, making the mount-action almost instantaneous
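For instance, the mount-option approaches might look like the following /etc/fstab fragment (the server name and paths are made up for illustration):

```shell
# /etc/fstab entries ("nfs-server" and the paths are illustrative):
# Force NFSv3 -- skips the GSS negotiation entirely:
#   nfs-server:/export/share  /mnt/share  nfs  vers=3   0 0
# Or stay on NFSv4 but explicitly request AUTH_SYS:
#   nfs-server:/export/share  /mnt/share  nfs  sec=sys  0 0

# Alternately, mask the GSS service-components system-wide so that no
# client mount ever waits on them:
systemctl mask rpc-gssd nfs-secure
```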

Once those bits are out of the way, then it's usually just a matter of configuring appropriate SELinux elements to allow the sharing-out of the desired filesystems and setting up the export-definitions.
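A minimal sketch of those final steps, assuming a hypothetical share at /srv/share exported to a hypothetical 10.0.0.0/24 client network (adjust the path, network, and options to your environment):

```shell
# Allow NFS to share filesystems read/write via the stock SELinux boolean:
setsebool -P nfs_export_all_rw on

# Illustrative export-definition -- append to /etc/exports:
#   /srv/share  10.0.0.0/24(rw,sync)

# Re-export everything defined in /etc/exports, verbosely:
exportfs -rav
```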
