Troubleshooting EFS Mount Failures in EKS: The IAM Mount Option Mystery

Michael Uanikehi

TL;DR

If you're getting mount.nfs4: access denied by server while mounting 127.0.0.1:/ when mounting EFS volumes in EKS, and your security groups are correct, you're probably missing the iam mount option in your PersistentVolume definition when using an EFS file system policy.
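The fix is a two-line addition to the PersistentVolume spec; the full manifest is shown in the Solution section below:

spec:
  mountOptions:
    - tls   # encryption in transit
    - iam   # authenticate the NFS mount with IAM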

The Problem

I was integrating a new reporting service into our EKS cluster that needed to write reports to a shared EFS filesystem. The pod kept failing to mount the volume with this cryptic error:

MountVolume.SetUp failed for volume "efs-pv": rpc error: code = Internal desc = Could not mount "{efs_id}:/"
Output: mount.nfs4: access denied by server while mounting 127.0.0.1:/
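The error surfaces as a pod event, so kubectl describe is usually the quickest way to see it (the pod name and namespace below are placeholders):

# The pod sits in ContainerCreating; the mount failure shows up in its events
kubectl describe pod <reporting-pod> -n <namespace>

# Or watch recent events across the namespace
kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp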

The Investigation Journey

Initial Suspicions (All Wrong)

Theory 1: Security Group Issues

  • Verified NFS traffic (TCP 2049) was allowed between worker nodes and EFS mount targets (CLI checks below)
  • Mount targets existed in all Availability Zones
  • Result: Security groups were perfect. Not the issue.
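For reference, the same checks can be run from the AWS CLI; the file system, mount target, and security group IDs below are placeholders:

# List mount targets for the file system (expect one per Availability Zone)
aws efs describe-mount-targets --file-system-id {efs_id}

# Show which security groups are attached to a mount target
aws efs describe-mount-target-security-groups --mount-target-id fsmt-0123456789abcdef0

# Confirm the security group allows inbound NFS (TCP 2049) from the worker nodes
aws ec2 describe-security-group-rules --filters Name=group-id,Values=sg-0123456789abcdef0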

Theory 2: EFS File System Policy

  • We had recently added an IAM-based file system policy to restrict access
  • Policy included conditions like aws:PrincipalArn to whitelist specific IAM roles
  • The breakthrough: Removing the policy made it work! (commands shown below)
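One way to inspect and temporarily remove the policy is via the CLI (re-apply it once you've confirmed the behavior; the file system ID is a placeholder):

# Show the current file system policy
aws efs describe-file-system-policy --file-system-id {efs_id}

# Temporarily remove it to test whether it's the trigger
aws efs delete-file-system-policy --file-system-id {efs_id}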

The Eureka Moment

Reading the AWS EFS troubleshooting documentation, I found this gem:

If you don't add the iam mount option with a restrictive file system policy, then the pods fail with the following error message:

mount.nfs4: access denied by server while mounting 127.0.0.1:/

Root Cause Analysis

The issue had three interconnected parts:

1. EFS File System Policy Conditions

We used aws:PrincipalArn in our policy conditions:

{
  "Condition": {
    "ArnLike": {
      "aws:PrincipalArn": [
        "arn:aws:iam::123456789012:role/worker-node-role",
        "arn:aws:iam::123456789012:role/efs-csi-driver-role"
      ]
    }
  }
}

Problem: Per AWS docs, aws:PrincipalArn and most IAM condition keys are NOT enforced for NFS client mounts to EFS. Only these conditions work:

  • aws:SecureTransport (Boolean)
  • aws:SourceIp (String - public IPs only)
  • elasticfilesystem:AccessPointArn (String)
  • elasticfilesystem:AccessedViaMountTarget (Boolean)

2. Missing IAM Mount Option

Our PersistentVolume was missing the iam mount option:

# BEFORE - Missing iam mount option
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi              # required by the API; EFS ignores the value
  accessModes:
    - ReadWriteMany
  storageClassName: aws-efs-csi-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: "{efs_id}"

Without iam, the EFS CSI driver doesn't authenticate using IAM roles, so any file system policy with IAM restrictions fails.

3. The EFS Mount Flow

When using the EFS CSI driver with the tls mount option (see the log check after this list):

  1. Node-level mount happens first (via worker node IAM role)
  2. Without iam option → Anonymous NFS mount
  3. With iam option → Authenticated mount using IAM role credentials
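If you want to see this flow, the EFS CSI node driver logs each mount attempt, including which options were passed through. The label selector and container name below match a default aws-efs-csi-driver installation and may differ in your cluster:

# Tail the EFS CSI node driver logs (runs as a DaemonSet in kube-system)
kubectl logs -n kube-system -l app=efs-csi-node -c efs-plugin --tail=100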

The Solution

Fix 1: Added mountOptions: [tls, iam] to PersistentVolume

# AFTER - With iam mount option
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi              # required by the API; EFS ignores the value
  accessModes:
    - ReadWriteMany
  storageClassName: aws-efs-csi-sc
  mountOptions:
    - tls   # Encryption in transit
    - iam   # Enable IAM authentication
  csi:
    driver: efs.csi.aws.com
    volumeHandle: "{efs_id}"
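Mount options take effect at mount time, so pods using the volume need to be restarted after the PV is updated. A quick way to confirm the new options are in place:

# Confirm the PV now carries both mount options (should show tls and iam)
kubectl get pv efs-pv -o jsonpath='{.spec.mountOptions}'

# On the worker node, the volume should appear as an nfs4 mount via the local stunnel proxy (127.0.0.1)
mount | grep nfs4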

Fix 2: Use Only Supported EFS Condition Keys

If you need a file system policy, use only the supported conditions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": [
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite",
        "elasticfilesystem:ClientRootAccess"
      ],
      "Resource": "arn:aws:elasticfilesystem:us-east-1:123456789012:file-system/{efs_id}",
      "Condition": {
        "Bool": {
          "elasticfilesystem:AccessedViaMountTarget": "true",
          "aws:SecureTransport": "true"
        }
      }
    }
  ]
}
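The policy can then be attached (or updated) from the AWS CLI; the file name is just an example:

# Apply the file system policy saved locally as efs-policy.json
aws efs put-file-system-policy \
  --file-system-id {efs_id} \
  --policy file://efs-policy.json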

This policy:

  • Requires TLS encryption (aws:SecureTransport)
  • Requires access via mount targets (prevents direct IP access)
  • Uses only supported condition keys
  • Relies on security groups for network-level access control

Key Learnings

1. IAM Mount Option is Required for IAM Authorization

Without -o iam, EFS mounts are anonymous. Any IAM-based file system policy will deny access.

2. Not All IAM Conditions Work with EFS

Only 4 condition keys are enforced for NFS mounts. Using others creates a false sense of security.

3. Layer Your Security Properly

  • Network Layer: Security groups (who can reach mount targets)
  • IAM Layer: IAM policies on roles (what actions are allowed)
  • File System Layer: EFS policy (additional restrictions)

4. Read the Error Logs Carefully

The error message mentioned 127.0.0.1 because the EFS mount helper creates a local stunnel proxy for TLS. The actual connection fails at the IAM authorization layer, not the network layer.
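On the node itself, the EFS mount helper keeps its own logs, which can help distinguish network problems from authorization failures. The path below is the default for amazon-efs-utils:

# Inspect the EFS mount helper logs on the worker node
sudo tail -n 50 /var/log/amazon/efs/mount.log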

5. Test Mount Operations Manually

SSH to a worker node and test the mount with the EFS mount helper:

sudo mkdir -p /mnt/test
sudo mount -t efs -o tls,iam {efs_id}:/ /mnt/test
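If the mount succeeds, a quick write test (and cleanup) confirms end-to-end access; the paths are just examples:

# Write a test file through the mount, then unmount
echo "hello from $(hostname)" | sudo tee /mnt/test/probe.txt
sudo umount /mnt/test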

This validates the configuration outside of Kubernetes.

Conclusion

What seemed like a complex IAM policy issue turned out to be a missing mount option. The key insight was understanding that EFS file system policies require explicit IAM authentication via the iam mount option, and that most IAM condition keys don't apply to NFS mounts.
