Mounting Amazon EFS across multiple AWS regions is not something you do every day—but when you need to, the pain becomes real. In this article, I’ll walk through how I achieved cross-region EFS mounting from three AWS regions into a single Kubernetes (EKS) and EC2-based deployment. We’ll cover the architecture, common pitfalls, and practical work-arounds for both environments.
EFS DNS is regional. Each mount helper expects the region-specific hostname (e.g., fs-1234.efs.us-east-1.amazonaws.com). When you point that hostname at a mount target in a different region, the helper often fails—especially inside Kubernetes—because it does a DNS check and won’t trust /etc/hosts overrides.
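You can see this for yourself before ever touching the mount helper. From an instance in us-east-1 (`fs-1234` is the local file system from the example above; `fs-5678` is a hypothetical file system in eu-west-1):

```bash
# Resolves: the file system is in this region and this VPC has a mount target
getent hosts fs-1234.efs.us-east-1.amazonaws.com

# Fails: EFS DNS names only resolve inside the VPC that owns the mount target,
# so a remote region's hostname gets no answer here (fs-5678 is hypothetical)
getent hosts fs-5678.efs.eu-west-1.amazonaws.com || echo "no DNS answer"
```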
Why Cross-Region EFS Mounting?
We manage workloads that span multiple AWS regions to support high availability and global financial clients. These workloads rely on shared file systems, and Amazon EFS was our go-to choice. However, EFS is not designed for seamless cross-region mounting: we needed a way to mount file systems from three different regions into compute running in a single region.
This was technically possible—but full of edge cases.
The Architecture:
- EFS volumes in three regions (us-east-1, eu-west-1, and ap-southeast-1)
- EKS cluster and EC2 instances in us-east-1
- VPC peering between regions (NFS port 2049 open)
- Mount helper: `amazon-efs-utils`
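One prerequisite worth spelling out: the security group on each region's mount targets has to admit NFS traffic from the peered VPCs. A minimal sketch with the AWS CLI (the security group ID and CIDR below are placeholders):

```bash
# Allow NFS (TCP 2049) into the EFS mount target's security group
# from the peered VPC's CIDR; the sg- ID and 10.94.0.0/16 are placeholders
aws ec2 authorize-security-group-ingress \
  --region eu-west-1 \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 2049 \
  --cidr 10.94.0.0/16
```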
Common Problems
- `amazon-efs-utils` requires AWS-provided DNS names, not custom hostnames or CNAMEs
- When used inside Kubernetes, DNS resolution often fails or defaults to 127.0.0.1
- Even when using `hostAliases` in pods, the mount helper doesn't always respect it
- IAM role mismatch between pod and node leads to permission errors
Kubernetes (EKS) Approach
Key Steps
- Install `amazon-efs-utils` in an init container (or bake it into the image).
- Resolve the mount-target IPs for each region (see the sketch after this list).
- Add IP/hostname pairs to `/etc/hosts` via `hostAliases` or an init container.
- Mount with `tls,iam,region=<SOURCE_REGION>` options.
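Step 2 doesn't need hard-coded IPs; the AWS CLI can look them up. A sketch, assuming placeholder file system IDs (boto3's `describe_mount_targets` returns the same data):

```bash
# Print one mount-target IP per region; the fs- IDs are placeholders
for entry in "fs-1234:us-east-1" "fs-5678:eu-west-1" "fs-9abc:ap-southeast-1"; do
  fs_id="${entry%%:*}"
  region="${entry##*:}"
  aws efs describe-mount-targets \
    --file-system-id "$fs_id" --region "$region" \
    --query 'MountTargets[0].IpAddress' --output text
done
```

With the IPs in hand, the pod spec fragment below wires them into `hostAliases` and runs the mounts: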
```yaml
env:
  - name: AWS_REGION
    value: "us-east-1"
hostAliases:
  - ip: "10.94.117.128"
    hostnames:
      - "{efs id}.efs.us-east-1.amazonaws.com"
  - ip: "10.94.125.126"
    hostnames:
      - "{efs id}.efs.ap-southeast-1.amazonaws.com"
  - ip: "10.94.109.68"
    hostnames:
      - "{efs id}.efs.eu-west-1.amazonaws.com"
command: ["/bin/sh", "-c"]
args:
  - |
    yum install -y amazon-efs-utils
    # Each region has its own file system ID; EFS_ID_US etc. stand in for them
    mkdir -p /mnt/efs-east /mnt/efs-ap /mnt/efs-eu
    mount -t efs -o tls,iam,region=us-east-1 ${EFS_ID_US}:/ /mnt/efs-east
    mount -t efs -o tls,iam,region=ap-southeast-1 ${EFS_ID_AP}:/ /mnt/efs-ap
    mount -t efs -o tls,iam,region=eu-west-1 ${EFS_ID_EU}:/ /mnt/efs-eu
```
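Once the pod is up, it's worth confirming the mounts actually landed (the pod name here is a placeholder):

```bash
# Check the mounts from outside the pod; my-efs-pod is a placeholder name
kubectl exec my-efs-pod -- sh -c 'mount | grep /mnt/efs'
kubectl exec my-efs-pod -- df -h /mnt/efs-east /mnt/efs-ap /mnt/efs-eu
```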
Challenges:
- Mount helper may still ignore `/etc/hosts`
- Pod IAM must allow `elasticfilesystem:ClientMount` and `elasticfilesystem:ClientWrite` (see the policy sketch below)
- You can only mount if the IP is reachable from the current AZ/subnet
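For the IAM piece, a minimal inline policy with just those two actions looks like this; the role and policy names are placeholders, and you may want to scope `Resource` to your file system ARNs rather than `*`:

```bash
# Grant the pod/node role the two EFS client actions it needs.
# efs-cross-region-role and efs-client-access are placeholder names.
cat > /tmp/efs-client-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "elasticfilesystem:ClientMount",
      "elasticfilesystem:ClientWrite"
    ],
    "Resource": "*"
  }]
}
EOF
aws iam put-role-policy \
  --role-name efs-cross-region-role \
  --policy-name efs-client-access \
  --policy-document file:///tmp/efs-client-policy.json
```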
EC2 Approach
Much easier than EKS, thanks to direct control over `/etc/hosts`.
User Data Script:
```bash
#!/bin/bash
yum install -y amazon-efs-utils

# Resolve EFS hostnames manually; each region has its own file system ID
echo "10.0.111.100 ${EFS_ID_US}.efs.us-east-1.amazonaws.com" >> /etc/hosts
echo "10.0.111.101 ${EFS_ID_AP}.efs.ap-southeast-1.amazonaws.com" >> /etc/hosts
echo "10.0.111.102 ${EFS_ID_EU}.efs.eu-west-1.amazonaws.com" >> /etc/hosts

# Create mount points
mkdir -p /mnt/efs-east /mnt/efs-ap /mnt/efs-eu

# Mount each EFS
mount -t efs -o tls,iam,region=us-east-1 ${EFS_ID_US}:/ /mnt/efs-east
mount -t efs -o tls,iam,region=ap-southeast-1 ${EFS_ID_AP}:/ /mnt/efs-ap
mount -t efs -o tls,iam,region=eu-west-1 ${EFS_ID_EU}:/ /mnt/efs-eu
```
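Because an unreachable mount-target IP makes the mount fail outright (more on that below), a cheap pre- and post-flight check in the same user-data script saves a lot of head-scratching:

```bash
# Pre-flight: confirm each mount-target IP answers on TCP 2049
# (IPs match the /etc/hosts entries above)
for ip in 10.0.111.100 10.0.111.101 10.0.111.102; do
  timeout 5 bash -c "</dev/tcp/${ip}/2049" 2>/dev/null \
    || echo "WARNING: ${ip} unreachable on 2049"
done

# Post-flight: confirm each mount actually landed
for mp in /mnt/efs-east /mnt/efs-ap /mnt/efs-eu; do
  mountpoint -q "$mp" && echo "$mp OK" || echo "$mp NOT mounted"
done
```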
Lessons Learned
- Only one mount-target IP per hostname works
- If the IP is unreachable, the mount fails completely, even if DNS resolves
- IAM must be correct on every node and pod
- EKS needs extra care due to the interplay of kube-dns and IAM
Recommendations
- Use EC2 where possible if reliability matters
- In EKS, use `hostAliases`, but always test per AZ
- Consider building a helper script or sidecar to handle resolution dynamically
- Use region-specific mount options (e.g. `-o tls,iam,region=eu-west-1`)
- Stay close to official AWS guidance on mounting EFS
Bonus: Automate It
You can automate:
- IP resolution via boto3
- IAM role patching
- Host-file injection via a DaemonSet
- Mount validation in init containers (see the combined sketch below)
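Here's what that can look like end to end: a hedged sketch that resolves each region's mount-target IP with the AWS CLI (boto3's `efs` client works the same way), injects it into `/etc/hosts`, mounts, and validates. All file system IDs and paths are placeholders:

```bash
#!/bin/bash
# Sketch: resolve, inject, mount, validate - one pass per region.
# The fs- IDs and mount paths are placeholders; adapt them to your setup.
set -euo pipefail

declare -A EFS=(
  [us-east-1]="fs-1234:/mnt/efs-east"
  [ap-southeast-1]="fs-5678:/mnt/efs-ap"
  [eu-west-1]="fs-9abc:/mnt/efs-eu"
)

for region in "${!EFS[@]}"; do
  fs_id="${EFS[$region]%%:*}"
  mnt="${EFS[$region]##*:}"
  host="${fs_id}.efs.${region}.amazonaws.com"

  # 1. Resolve a mount-target IP (only one IP per hostname works anyway)
  ip=$(aws efs describe-mount-targets \
        --file-system-id "$fs_id" --region "$region" \
        --query 'MountTargets[0].IpAddress' --output text)

  # 2. Inject into /etc/hosts, dropping any stale entry first
  sed -i "/${host}/d" /etc/hosts
  echo "${ip} ${host}" >> /etc/hosts

  # 3. Mount, then 4. validate
  mkdir -p "$mnt"
  mount -t efs -o tls,iam,region="$region" "${fs_id}:/" "$mnt"
  mountpoint -q "$mnt" && echo "${mnt} mounted from ${region}"
done
```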
Final Thoughts
Cross-region EFS mounting does work, but it's fragile. Knowing how DNS, IPs, IAM, and Linux internals interact is key to making it reliable. If you've ever fought `127.0.0.1` DNS resolution in a pod, or had a mount fail mysteriously in one AZ but not another, this article is for you.