Two days ago, AWS launched S3 Files — a managed NFS layer that turns any S3 bucket into a mountable filesystem. Sub-millisecond latency within AWS. Full read/write. Bidirectional sync. The AWS community collectively lost its mind, and rightfully so.
There's just one problem: it only works on AWS compute. EC2, Lambda, EKS, ECS. Not your Mac. Not your laptop. Not the machine where you actually write code.
I spent the last 48 hours fixing that. Along the way, I kernel-panicked my MacBook five times, got "access denied" in three different ways, discovered a crash bug in efs-proxy, and eventually built a tool that mounts S3 Files on macOS with two commands. This is the story of everything that went wrong, and the one thing that finally worked.
## Why This Matters
As Corey Quinn put it, S3 has never been a filesystem — but now there's a real one sitting in front of it. Andy Warfield's team didn't just bolt a POSIX layer onto S3 and call it a day. They built a proper filesystem backed by EFS infrastructure, with S3 as the durable source of truth.
Think of S3 Files as another tier in the S3 hierarchy — a file system front end for hot, frequently accessed data that needs mutation, user interaction, or low-latency access. You create a file system on any bucket or prefix with no data migration. Your existing S3 data is immediately visible as files and folders.
The smart defaults are what make it feel magical:
- Metadata pre-warms instantly. When you create a file system, all S3 key prefixes are mapped to directories and files. `ls` works immediately — no waiting. This is a massive differentiator from FUSE-based tools like Mountpoint, where `ls` on a large dataset can take minutes because it does a HEAD or LIST call per object.
- Small files (under 128KB) auto-sync on directory access. When you `cd` into a directory, code files, configs, and small assets are pulled into the fast tier automatically. No explicit fetch needed.
- Large files stream directly from S3. Files over 128KB are lazy-loaded on first read, and very large files may be served directly from S3's throughput layer without ever being copied into the file system tier. This is the ReadBypass optimization in `efs-proxy` — designed for EC2, but as we'll see, it doesn't play well with our non-standard Docker + NLB setup.
- Changes sync back to S3 approximately every minute. Changes to S3 objects sync into the file system via EventBridge notifications. Data expires from the fast tier after 30 days by default (configurable) and rehydrates on next access.
AWS explicitly positions agentic AI as a first-class use case — multi-step, multi-process workloads where agents need to share state, read reference data, and produce outputs collaboratively. That's exactly the use case that got me excited enough to spend 48 hours making this work on a Mac.
And here's something that doesn't get enough credit: S3 Files shipped with Day 1 CloudFormation support (AWS::S3Files::FileSystem and AWS::S3Files::MountTarget). CDK works via L1 constructs — no native L2 constructs yet, but you can provision everything from IaC on Day 1. Our entire CDK stack — VPC, bucket, IAM role, S3 Files filesystem, mount target, NLB — deploys in one command. That's rare for a new AWS service these days.

But what about local development? What about editing S3-backed files in VS Code on your Mac? What about `ls`, `cat`, `echo "hello" > file.txt` from your terminal?
That's what I wanted. A native Mac folder backed by S3.
## The Problem: macOS Can't Speak S3 Files
S3 Files requires three things that macOS cannot provide:
- NFSv4.2 — macOS ships with NFSv4.0. The NFS client is baked into the kernel. You can't upgrade it.
- TLS encryption — S3 Files rejects every unencrypted NFS connection. No exceptions.
- IAM authentication — Every mount requires an EFS RPC Bind handshake with AWS credentials, handled by a binary called `efs-proxy` (part of `amazon-efs-utils`). This only runs on Linux.
Three hard requirements. Zero macOS support. Let's see how many ways this can fail.
## Attempt 1: Native macOS NFS Mount → 💀 Kernel Panic (x5)
My first instinct was the obvious one. S3 Files exposes a mount target with a private IP in your VPC. I put an internet-facing Network Load Balancer in front of it (TCP 2049), pointed my Mac at the NLB, and ran:
```bash
sudo mount -t nfs -o vers=4 nlb-dns.amazonaws.com:/ /mnt/s3files
```
The screen went black. Hard reboot. I tried again with different NFS options. Black screen. Reboot. I tried vers=4.0 explicitly. Black screen. Reboot.
Five kernel panics in total. macOS NFSv4 bugs are well-documented — the client chokes on protocol features it doesn't understand. When S3 Files responds with NFSv4.2 capabilities, the macOS NFS client doesn't gracefully degrade. It crashes the kernel.
Lesson: macOS NFSv4 is not just old — it's actively dangerous when pointed at a v4.2 server.
## Attempt 2: Raw mount -t nfs4 via NLB → ❌ "access denied"
OK, so macOS is out. I spun up a Docker container running Amazon Linux (which has a proper NFSv4.2 client) and tried a raw NFS mount from inside the container:
```bash
mount -t nfs4 -o nfsvers=4.2 nlb-dns.amazonaws.com:/ /mnt/s3files
```
"Access denied."
This is where I started reading the efs-utils source code. S3 Files isn't a standard NFS server you can just connect to. Before any NFS traffic flows, the client must authenticate via a custom protocol called EFS RPC Bind — essentially proving "I have valid AWS credentials and I'm allowed to mount this filesystem." The efs-proxy binary handles this. A raw mount -t nfs4 skips the entire auth layer.
Lesson: You can't just NFS-mount S3 Files. The auth isn't optional — it's the only way in.
## Attempt 3: efs-proxy Without TLS → ❌ "access denied"
I installed amazon-efs-utils in the container and tried mount -t s3files. The efs-proxy binary started up, but I hadn't configured TLS properly (Docker isn't EC2 — there's no instance metadata service, no AZ info, no automatic certificate provisioning).
"Access denied." Again.
Digging into the efs-utils config, I found that efs-proxy wraps the TCP connection to port 2049 in TLS 1.2, then performs an RPC Bind — a custom handshake where the client proves it has valid AWS credentials. Think of it as mTLS with IAM instead of certificates. Without TLS, the mount target drops the connection before auth even begins.
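To make the transport requirement concrete, here is a minimal Python sketch of the client side of that connection. It covers only the TLS layer; the RPC Bind handshake and the credential signing that efs-proxy performs on top of it are omitted, and the hostname is a placeholder:

```python
import socket
import ssl

def make_tls_context() -> ssl.SSLContext:
    # S3 Files requires TLS 1.2 or newer; refuse anything older.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

def open_mount_channel(host: str, port: int = 2049) -> ssl.SSLSocket:
    """Open the encrypted TCP channel to a mount target.

    This is only the transport layer: efs-proxy would next perform the
    EFS RPC Bind handshake over this socket to prove it holds valid AWS
    credentials. Without the TLS wrapper, the mount target drops the
    connection before that handshake can even start.
    """
    raw = socket.create_connection((host, port), timeout=10)
    return make_tls_context().wrap_socket(raw, server_hostname=host)
```

Anything that skips the `wrap_socket` step, like my earlier raw mount attempts, is dropped before authentication begins.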
I patched the config file (/etc/amazon/efs/s3files-utils.conf) to remove the {az_id} placeholder from the DNS format (no AZ metadata in Docker) and set the region via environment variable.
Lesson: S3 Files enforces TLS on every single connection. No TLS, no mount. Period.
## Attempt 4: The IPv6 Detour → ✅ First Success (But Wrong Conclusion)
At this point I was convinced the NLB was the problem. Something about how it proxied TCP was breaking S3 Files at the NFS protocol level. So I built a workaround: bypass the NLB entirely.
The mount target ENI had an IPv6 address (assigned by the subnet's IPv6 CIDR). My Mac has IPv6 connectivity. Docker Desktop doesn't — but I could bridge the gap with a Python TCP proxy on my Mac that accepts IPv4 from Docker and forwards to the mount target over IPv6.
This required opening the mount target's security group directly to my public IPv6 address on port 2049. Not great — exposing a mount target to the internet is exactly the kind of thing Security Hub flags. But for debugging, I went with it.
```
Docker → Mac TCP bridge (IPv4:2049) → IPv6 → Mount Target (SG opened for my IPv6)
```
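The bridge itself is tiny. Here is a sketch of the kind of Python forwarder I ran on the Mac: accept connections on one side (IPv4 from Docker), dial the other side (the mount target's IPv6 address), and splice bytes both ways. Addresses and ports are placeholders:

```python
import asyncio

async def _pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    # Copy bytes in one direction until EOF, then close our side.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_r, client_w, target_host, target_port):
    # For each inbound connection, dial the target and splice bytes
    # both ways. In the original setup the inbound side was IPv4 from
    # Docker and the target was the mount target's IPv6 address.
    remote_r, remote_w = await asyncio.open_connection(target_host, target_port)
    await asyncio.gather(
        _pipe(client_r, remote_w),
        _pipe(remote_r, client_w),
        return_exceptions=True,
    )

async def run_bridge(listen_port: int, target_host: str, target_port: int = 2049) -> None:
    # Listen for IPv4 clients and forward every connection to the target.
    server = await asyncio.start_server(
        lambda r, w: handle(r, w, target_host, target_port),
        "0.0.0.0", listen_port,
    )
    async with server:
        await server.serve_forever()
```

Something like `asyncio.run(run_bridge(2049, "<mount-target-ipv6>"))` was the whole deployment. Because the forwarder is protocol-agnostic, the TLS session between efs-proxy and the mount target passes through untouched.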
Inside the container, I used mount -t s3files with mounttargetip pointing at the Mac's Docker gateway. And it worked. Files appeared. Read/write confirmed. S3 sync verified. First success after hours of debugging.
But why did it work when the NLB path didn't? I assumed it was because I'd eliminated the NLB. Wrong.
The real reason: mount -t s3files automatically enables TLS. My earlier attempts used manual efs-proxy commands without TLS. The official mount helper adds it by default — S3 Files won't work without it.
Retried the NLB with mount -t s3files instead of manual efs-proxy. Worked perfectly. TLS was the missing piece all along. The NLB was fine — it's Layer 4, it just passes TCP bytes through, TLS and all.
I deleted the TCP bridge, removed the IPv6 SG rule, and moved on.
Lesson: when something works through path A but not path B, the difference might not be the path — it might be what path A does automatically that you forgot to do on path B.
## Attempt 5: efs-proxy ReadBypass → ❌ Proxy Crash Loop
With the NLB working via mount -t s3files, I had one more problem. During my earlier manual efs-proxy debugging (before discovering the TLS fix), I'd hit a persistent crash:
```
ERROR efs_proxy::nfs::nfs_reader Error handling parsing error SendError { .. }
```
The proxy would connect, authenticate (BindResponse::READY), then crash the moment NFS traffic flowed. Restart. Crash. Restart. Hundreds of incarnations per second.
After reading the efs-proxy source and the mount.s3files Python wrapper, I found the culprit: the ReadBypass module. Remember how S3 Files serves large files directly from S3's throughput layer? ReadBypass is the efs-proxy implementation of that — it intercepts NFS read requests and serves them directly from S3, bypassing the NFS data path. This is designed for EC2 instances with direct VPC access to S3. In our setup — Docker container, patched efs-utils config, traffic routed through an NLB — the parser chokes on certain response formats and panics. It's not necessarily a bug in ReadBypass itself; it's that we're running efs-proxy far outside its intended environment.
The efs-proxy binary accepts a --no-direct-s3-read flag (I found this by running efs-proxy --help after the --no-read-bypass flag I guessed didn't exist). The mount -t s3files equivalent is the nodirects3read mount option.
With ReadBypass disabled, the proxy forwarded NFS traffic cleanly. No crashes.
Lesson: efs-proxy ReadBypass doesn't work in our non-standard Docker + NLB setup. Use nodirects3read to disable it. On a normal EC2 instance, it likely works fine.
## Attempt 6: The Full Stack → ✅ It Works
The winning combination:
- Docker (Amazon Linux) — provides NFSv4.2 kernel support
- efs-proxy — handles TLS + IAM authentication
- NLB — bridges Docker Desktop to the VPC mount target
- `nodirects3read` — avoids the ReadBypass crash
- WebDAV — re-exports the NFS mount to macOS as a native folder
Wait — WebDAV? Why not just use the Docker mount directly?
Because Docker Desktop runs in a Linux VM. The NFS mount lives inside that VM. To access it from macOS, you need to re-export it over a protocol that macOS can mount natively. The two candidates: SMB (Samba) and WebDAV.
I benchmarked both. The results were... not close.
## The Benchmark: WebDAV Destroys SMB on macOS
| Operation | Docker (NFS direct) | Mac (WebDAV) | Mac (SMB) |
|---|---|---|---|
| List directory | 0.09s | 0.08s | 4.3s |
| Read small file | 0.13s | 0.05s | 0.49s |
| Write + read back | 0.27s | 0.53s | 1.7s |
| Throughput | Docker (NFS) | WebDAV | SMB |
|---|---|---|---|
| 10 MB write | 1.2s | 1.4s | 11.0s |
| 10 MB read | 0.10s | 0.03s | 0.42s |
| 100 MB write | 6.8s | 9.3s | 87.0s |
| Write throughput | ~15 MB/s | ~11 MB/s | ~1.1 MB/s |
| Read throughput | ~830 MB/s | ~400 MB/s | ~24 MB/s |
WebDAV is 10–54x faster than SMB on macOS. Apple's SMB client is notoriously slow — it adds packet signing, metadata prefetching, and delayed TCP acknowledgments to every operation. A simple ls triggers dozens of round-trips. WebDAV is just HTTP requests — one request, one response, done.
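The numbers above are simple wall-clock timings. Here is a sketch of the micro-benchmark style I used, with an fsync so writes actually traverse the mount instead of stopping in the page cache; the path you pass is whichever mount you want to compare:

```python
import os
import time

def bench_write(path: str, size_mb: int = 10) -> float:
    """Time a sequential write of size_mb megabytes, fsync included,
    so the bytes actually hit the mount rather than the page cache."""
    chunk = os.urandom(1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    return time.perf_counter() - start

def bench_read(path: str) -> float:
    """Time a full sequential read of the same file in 1MB chunks."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    return time.perf_counter() - start
```

Calling `bench_write("/tmp/s3files/bench.bin")` against the WebDAV mount, and the same call against an SMB mount point, should reproduce the gap in the table.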
I used WsgiDAV as the WebDAV server inside the container. It re-exports the NFS mount at /mnt/s3files over HTTP on port 8080. macOS mounts it natively via mount_webdav.
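For reference, a minimal WsgiDAV configuration for this re-export might look like the following. This is a sketch assuming anonymous access, which is reasonable only because the port stays inside the container network; add authentication before exposing it any wider:

```yaml
# wsgidav.yaml -- serve the NFS mount over HTTP inside the container
host: 0.0.0.0
port: 8080
provider_mapping:
  "/": "/mnt/s3files"     # the S3 Files NFS mount
http_authenticator:
  accept_basic: false
  accept_digest: false
simple_dc:
  user_mapping:
    "*": true             # anonymous access (container-internal only)
```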
## Region Matters: ca-central-1 vs us-east-2
Since the latency floor is internet RTT, I deployed the same CDK stack to two regions and benchmarked from my Mac in Canada:
| Operation (Docker NFS) | us-east-2 | ca-central-1 | Improvement |
|---|---|---|---|
| List directory | 0.09s | 0.08s | ~same |
| Read small file | 0.13s | 0.06s | 2x faster |
| Write + read back | 0.27s | 0.16s | 40% faster |
| 10MB write | 1.2s | 1.0s | 17% faster |
| 10MB read | 0.10s | 0.06s | 40% faster |
The CDK stack is region-agnostic — just change -c region=ca-central-1. Pick the region closest to you. For me in Canada, ca-central-1 shaves ~40% off interactive operations.
## The Architecture
Your Mac talks WebDAV to a Docker container. The container talks authenticated, encrypted NFSv4.2 to S3 Files through an NLB. The NLB is Layer 4 — it just forwards TCP bytes without inspecting or modifying the TLS payload. S3 Files syncs bidirectionally with your S3 bucket. From your Mac's perspective, it's just a folder.
## The Developer Experience: Two Commands
I wrapped everything in a CDK stack and a shell script. The entire setup:
```bash
# 1. Deploy infrastructure (VPC, bucket, IAM role, S3 Files, NLB)
cd infra && npm install && npx cdk deploy -c region=ca-central-1

# 2. Mount
./docker/docker-mount.sh up <NLB_DNS_from_CDK_output>

# 3. Use it
ls /tmp/s3files/
echo "hello world" > /tmp/s3files/test.txt
open /tmp/s3files   # opens in Finder
code /tmp/s3files   # opens in VS Code
```
That's it. docker-mount.sh up builds the container, starts efs-proxy, mounts S3 Files via NFS, starts the WebDAV server, and mounts WebDAV at /tmp/s3files. One command. To tear down: docker-mount.sh down.
The CDK stack provisions everything: VPC with public subnet, S3 bucket (versioning enabled — required by S3 Files), IAM role with the elasticfilesystem.amazonaws.com trust policy, the S3 Files filesystem and mount target, an NLB forwarding TCP 2049, and security groups locking it down.
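For anyone who prefers raw CloudFormation over CDK, the same two new resource types can be declared directly. The types below are the ones that shipped on Day 1, but the property names are hypothetical placeholders; check the published AWS::S3Files::* schema rather than copying this verbatim:

```yaml
# Sketch only -- property names are illustrative, not the actual schema.
Resources:
  FileSystem:
    Type: AWS::S3Files::FileSystem
    Properties:
      BucketName: !Ref DataBucket        # existing bucket, versioning enabled
  MountTarget:
    Type: AWS::S3Files::MountTarget
    Properties:
      FileSystemId: !Ref FileSystem
      SubnetId: !Ref PublicSubnet
      SecurityGroups:
        - !Ref MountTargetSecurityGroup
```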
## The Backstory: Mountpoint for S3 and the iPhone Backup That Almost Worked
This isn't my first attempt at mounting S3 locally. Last year, I experimented with Mountpoint for Amazon S3 on Windows via WSL2. Mountpoint is a FUSE-based client that presents S3 as a local filesystem — but it's optimized for read-heavy workloads. Writes are limited: you can create new files, but you can't modify existing ones in place.
I had a wild idea: back up my iPhone to S3 via iTunes. I mounted an S3 bucket using Mountpoint in WSL2, pointed iTunes at it, and kicked off a backup. The initial full backup actually worked — iTunes wrote all the files sequentially, which is exactly what Mountpoint handles well.
Then I tried an incremental backup. iTunes needs to read existing backup files, compare them, and overwrite changed ones. Mountpoint doesn't support overwrites. The backup failed.
S3 Files changes this equation entirely. Full read/write. In-place modifications. Bidirectional sync. The filesystem semantics that iTunes (and every other desktop app) expects. I haven't re-tested the iPhone backup scenario yet with S3 Files, but the technical blockers that stopped Mountpoint are gone. This could finally be the path to backing up an iPhone directly to S3 with full incremental support.
## What's Next: Use Cases I'm Excited About
Shared IDE workspace. Mount the same S3 bucket from multiple machines. Edit files in VS Code on your Mac, pick up where you left off on your Linux workstation. S3 is the source of truth. No git push/pull dance for work-in-progress files.
Agentic AI shared state. This is the one that keeps me up at night. AI agents — coding assistants like Kiro, autonomous agents like OpenClaw — increasingly work with files: markdown docs, config files, memory stores, tool outputs. Mount an S3-backed filesystem as the agent's workspace. Multiple agents can read and write to the same shared state. The data lives in S3, durable and accessible from anywhere. It's a shared brain for your agent fleet.
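As a sketch of what shared agent state can look like in practice: each agent appends JSON events to a common log on the mount, guarded by an advisory lock so concurrent writers don't interleave partial lines. The path and helper names here are mine, not from any agent framework, and I haven't verified lock semantics across the WebDAV re-export; inside the container, on the NFS path, locking is better defined:

```python
import fcntl
import json
import time

def append_event(log_path: str, agent: str, payload: dict) -> None:
    """Append one JSON line to a shared event log.

    An exclusive advisory lock (flock) keeps two agents from
    interleaving partial lines. NFSv4 supports advisory locking;
    behavior over the WebDAV re-export is untested.
    """
    record = {"ts": time.time(), "agent": agent, **payload}
    with open(log_path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        try:
            f.write(json.dumps(record) + "\n")
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

def read_events(log_path: str) -> list[dict]:
    """Read every event back, one JSON object per line."""
    with open(log_path) as f:
        return [json.loads(line) for line in f if line.strip()]
```

With the log at something like `/tmp/s3files/agents/events.jsonl`, every machine that mounts the same bucket sees the same stream.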
Cross-platform development. Same S3 bucket, three platforms: macOS (via Docker + WebDAV), Windows (via WSL2 — native NFSv4.2, no Docker needed), Linux (native mount -t s3files). One source of truth, zero file sync tools.
## A Note on WSL2
If you're on Windows, you might not need Docker at all. WSL2 runs a real Linux kernel (5.15+) with full NFSv4.2 support. You can install amazon-efs-utils directly in WSL2 and mount S3 Files natively — no WebDAV re-export, no container overhead. The mount appears as a Linux path accessible from Windows Explorer via \\wsl$\. You'd still need the NLB (or a VPN) for connectivity, but the protocol stack is native. I haven't tested this yet, but the kernel capabilities are all there.
## S3 Files vs. Mountpoint for Amazon S3
For anyone wondering how these two compare:
| | S3 Files | Mountpoint for S3 |
|---|---|---|
| Protocol | NFS (NFSv4.2) | FUSE |
| Read/Write | Full read/write | Read-heavy (limited writes) |
| Latency | Sub-millisecond | Milliseconds |
| Sync | Bidirectional (S3 ↔ filesystem) | One-way (S3 → filesystem) |
| Requires | Mount target in VPC | Just IAM credentials |
| Platform | Linux only (EC2, ECS, EKS, Lambda) | Linux, macOS |
S3 Files is a managed NFS filesystem with S3 as the durable backend. Mountpoint is a lightweight FUSE client for reading large datasets from S3. Different tools for different jobs. S3 Files gives you the full filesystem semantics that applications like databases, IDEs, and backup tools expect. Mountpoint gives you fast, cheap reads for data pipelines.
## Security: What's Safe and What's Not
The PoC uses an internet-facing NLB so Docker Desktop can reach the mount target. This sounds scary, but the actual risk is mitigated:
- S3 Files enforces TLS encryption and IAM authentication on every connection — you can't mount without valid AWS credentials
- The NLB security group only allows inbound TCP 2049
- The mount target security group only accepts traffic from the NLB security group
That said, for production use, replace the public NLB with AWS Client VPN. AWS documents this exact pattern for accessing EFS from on-premises networks, and it applies equally to S3 Files. VPN eliminates the internet-facing endpoint entirely. Also use private subnets with a Gateway endpoint for S3 — it's free and routes S3 traffic through the AWS network, bypassing NAT Gateway costs.
## The Failure Table
Because every good debugging story deserves a summary of the wreckage:
| Approach | Result | Root Cause |
|---|---|---|
| Native macOS NFS mount | 💀 Kernel panic (5x) | macOS NFSv4.0 can't handle v4.2 responses |
| Raw `mount -t nfs4` (no efs-proxy) | ❌ "access denied" | Missing EFS RPC Bind authentication |
| efs-proxy without TLS | ❌ "access denied" | S3 Files requires TLS on all connections |
| efs-proxy with ReadBypass | ❌ Proxy crash loop | ReadBypass incompatible with Docker + NLB setup |
| Docker + efs-proxy + TLS + NLB + `nodirects3read` + WebDAV | ✅ Works | All requirements met |
## Try It Yourself
The entire project is open source (MIT): github.com/awsdataarchitect/s3files-mount
Two commands to go from zero to a native Mac folder backed by S3:
```bash
cd infra && npx cdk deploy -c region=ca-central-1
./docker/docker-mount.sh up <NLB_DNS>
```
If you try it, break it, improve it, or find new use cases — I'd love to hear about it. Open an issue, submit a PR, or find me on LinkedIn.
S3 has never been a filesystem. But as of this week, your S3 data can live in one — even on your Mac.