<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Pendela BhargavaSai</title>
    <description>The latest articles on DEV Community by Pendela BhargavaSai (@pendelabhargavasai).</description>
    <link>https://dev.to/pendelabhargavasai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F862755%2Fdccb0f1a-a7eb-46c5-a5c7-c0d4514eaae6.png</url>
      <title>DEV Community: Pendela BhargavaSai</title>
      <link>https://dev.to/pendelabhargavasai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/pendelabhargavasai"/>
    <language>en</language>
    <item>
      <title>“Why can’t I just mount S3 like a drive?” AWS finally answering that question in 2026</title>
      <dc:creator>Pendela BhargavaSai</dc:creator>
      <pubDate>Sun, 12 Apr 2026 13:35:35 +0000</pubDate>
      <link>https://dev.to/pendelabhargavasai/why-cant-i-just-mount-s3-like-a-drive-aws-finally-answering-that-question-in-2026-4g00</link>
      <guid>https://dev.to/pendelabhargavasai/why-cant-i-just-mount-s3-like-a-drive-aws-finally-answering-that-question-in-2026-4g00</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;From "why can't I just mount S3 like a drive?" to AWS finally answering that question in 2026.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;I've had that conversation more times than I can count.&lt;/p&gt;

&lt;p&gt;A developer joins a new AWS project, looks at the architecture, and asks: &lt;em&gt;"We're already storing everything in S3 — why do we also need EFS? Can't we just mount S3 directly?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;And every time, the answer was the same patient explanation about object storage vs file systems, why they're fundamentally different, and why you need separate services for separate workloads. It was the right answer. It just wasn't a satisfying one.&lt;/p&gt;

&lt;p&gt;That changed in April 2026 when AWS launched &lt;strong&gt;S3 Files&lt;/strong&gt; — and suddenly that conversation got a lot shorter.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fne1ezqqr8ls1axsuyqwh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fne1ezqqr8ls1axsuyqwh.png" alt=" " width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But before we get there, let's start from the beginning. Because understanding &lt;em&gt;why&lt;/em&gt; S3 Files matters requires understanding the problem it's solving. And that means understanding the full AWS storage landscape.&lt;/p&gt;


&lt;h2&gt;
  
  
  The AWS Storage Trinity (Before S3 Files)
&lt;/h2&gt;

&lt;p&gt;AWS has three primary storage services, each built for a completely different purpose. Engineers often get confused because on the surface they all seem to do the same thing: store data. But the &lt;em&gt;way&lt;/em&gt; they store it — and who can access it and how — is completely different.&lt;/p&gt;

&lt;p&gt;Here's the simplest way I know to think about it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;S3&lt;/strong&gt; is like a giant library. You can store billions of books (objects), and anyone with the right access can retrieve any book. But to fix a typo on page 47, you have to reprint the entire book.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EBS&lt;/strong&gt; is like a hard drive physically attached to your computer. Super fast, but only your computer can use it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EFS&lt;/strong&gt; is like a shared office filing cabinet on a network. Anyone in the office can open a drawer, pull out a folder, and edit a document — at the same time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's go deeper on each one.&lt;/p&gt;


&lt;h2&gt;
  
  
  Amazon S3 — Object Storage Built for Scale
&lt;/h2&gt;

&lt;p&gt;S3 (Simple Storage Service) launched in 2006 and fundamentally changed how the world thinks about storing data. The core idea is simple: you have &lt;strong&gt;buckets&lt;/strong&gt;, and inside buckets you store &lt;strong&gt;objects&lt;/strong&gt;. Each object is just a file plus its metadata, stored at a unique key (think of it like a URL).&lt;/p&gt;
&lt;h3&gt;
  
  
  What makes S3 special
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Virtually unlimited scale.&lt;/strong&gt; S3 stores more than 500 trillion objects across hundreds of exabytes today.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;11 nines of durability (99.999999999%).&lt;/strong&gt; AWS automatically replicates your data across at least three Availability Zones.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pay only for what you use.&lt;/strong&gt; No minimum capacity, no infrastructure to manage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiple storage classes.&lt;/strong&gt; From S3 Standard (~$0.023/GB-month) down to Glacier Deep Archive (~$0.00099/GB-month) for data you almost never touch.&lt;/li&gt;
&lt;/ul&gt;
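
&lt;p&gt;The spread between those classes is easier to feel with numbers. A quick back-of-envelope sketch using the approximate rates above (storage charges only; request and retrieval fees, which matter a lot for Glacier, are ignored):&lt;/p&gt;

```python
# Rough monthly storage cost for 1 TB (1,024 GB) at the per-GB-month
# rates quoted above. Storage only: request and retrieval fees are
# deliberately left out of this sketch.
RATES_PER_GB_MONTH = {
    "S3 Standard": 0.023,
    "Glacier Deep Archive": 0.00099,
}

def monthly_cost(gb, rate):
    return round(gb * rate, 2)

for storage_class, rate in RATES_PER_GB_MONTH.items():
    print(storage_class, monthly_cost(1024, rate))
```

&lt;p&gt;At these rates, a terabyte you genuinely never touch is roughly 23x cheaper in Deep Archive than in Standard.&lt;/p&gt;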
&lt;h3&gt;
  
  
  The one thing S3 cannot do
&lt;/h3&gt;

&lt;p&gt;Here's the catch that trips everyone up: &lt;strong&gt;S3 is not a file system.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you store something in S3, it becomes an immutable object. If you want to change even a single character in a file, you have to download the entire object, make your change, and re-upload the whole thing as a new object. There's no such thing as "open this file and edit line 47." That's just not how object storage works.&lt;/p&gt;

&lt;p&gt;This isn't a bug — it's by design. The immutability of objects is part of what makes S3 so durable and scalable. But it creates real friction for any workload that needs to &lt;em&gt;work with&lt;/em&gt; data the way normal applications do: open a file, read some bytes, write some bytes, save.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# What you can do with S3&lt;/span&gt;
aws s3 &lt;span class="nb"&gt;cp &lt;/span&gt;myfile.txt s3://my-bucket/myfile.txt    &lt;span class="c"&gt;# upload&lt;/span&gt;
aws s3 &lt;span class="nb"&gt;cp &lt;/span&gt;s3://my-bucket/myfile.txt ./myfile.txt  &lt;span class="c"&gt;# download&lt;/span&gt;
aws s3 &lt;span class="nb"&gt;rm &lt;/span&gt;s3://my-bucket/myfile.txt               &lt;span class="c"&gt;# delete&lt;/span&gt;

&lt;span class="c"&gt;# What you CANNOT do&lt;/span&gt;
&lt;span class="c"&gt;# Open myfile.txt and append a line — impossible without full re-upload&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
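
&lt;p&gt;You can feel this constraint without touching AWS at all. Below is a toy, dictionary-backed object store that, like S3, only supports whole-object GET and PUT. Appending a single byte still transfers the full object twice (a simplified model for illustration, not the S3 API):&lt;/p&gt;

```python
class ToyObjectStore:
    """Whole-object GET/PUT only, like an object store. Toy model."""

    def __init__(self):
        self._objects = {}
        self.bytes_transferred = 0

    def put(self, key, data):
        self.bytes_transferred += len(data)   # full upload, every time
        self._objects[key] = data

    def get(self, key):
        data = self._objects[key]
        self.bytes_transferred += len(data)   # full download, every time
        return data

    def append(self, key, extra):
        # No partial-write API: download, modify, re-upload everything.
        self.put(key, self.get(key) + extra)

store = ToyObjectStore()
store.put("big.bin", b"x" * 10_000)
store.append("big.bin", b"y")                 # append ONE byte...
print(store.bytes_transferred)                # ...moves 30,001 bytes total
```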



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihtdjmavdki0i9x7xit0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihtdjmavdki0i9x7xit0.jpg" alt=" " width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Amazon EBS — The Fast Attached Drive
&lt;/h2&gt;

&lt;p&gt;EBS (Elastic Block Store) is block storage — the AWS equivalent of an SSD attached directly to your server. When you launch an EC2 instance, the root volume (where the operating system lives) is an EBS volume.&lt;/p&gt;

&lt;h3&gt;
  
  
  What EBS is good at
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Speed.&lt;/strong&gt; EBS delivers single-digit millisecond latency because it behaves like a local disk.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;POSIX semantics.&lt;/strong&gt; You can open files, write individual bytes, seek to specific positions — everything a normal file system supports.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistency.&lt;/strong&gt; What you write is immediately readable. No eventual consistency concerns.&lt;/li&gt;
&lt;/ul&gt;
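
&lt;p&gt;To make "POSIX semantics" concrete: the snippet below overwrites five bytes in the middle of a file without rewriting the rest, which is exactly what block storage supports and object storage doesn't. It runs against any local path:&lt;/p&gt;

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"hello world")
with open(path, "r+b") as f:   # open for in-place update
    f.seek(6)                  # jump to the start of "world"
    f.write(b"WORLD")          # overwrite 5 bytes; the rest is untouched
with open(path, "rb") as f:
    result = f.read()
os.remove(path)
print(result)                  # b'hello WORLD'
```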

&lt;h3&gt;
  
  
  The hard limit of EBS
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;EBS volumes can only be attached to one EC2 instance at a time&lt;/strong&gt; (EBS Multi-Attach on io1/io2 volumes is a narrow exception, and it requires a cluster-aware file system to use safely).&lt;/p&gt;

&lt;p&gt;This means if you have a cluster of 10 EC2 instances all running your application, each one needs its own EBS volume. They can't share data through EBS. If instance A writes a file, instance B can't see it without some kind of sync mechanism.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EC2 Instance A  →  EBS Volume A  (can't share)
EC2 Instance B  →  EBS Volume B  (separate, isolated)
EC2 Instance C  →  EBS Volume C  (separate, isolated)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For single-instance workloads — databases, operating system volumes, single-server applications — EBS is excellent. The moment you need shared storage across multiple servers, you hit a wall.&lt;/p&gt;




&lt;h2&gt;
  
  
  Amazon EFS — The Shared Network Drive
&lt;/h2&gt;

&lt;p&gt;EFS (Elastic File System) is AWS's managed Network File System (NFS). Think of it as a shared drive that any number of EC2 instances, containers, or Lambda functions can mount simultaneously and use like a local file system.&lt;/p&gt;

&lt;h3&gt;
  
  
  What EFS solves
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Concurrent access.&lt;/strong&gt; Thousands of compute resources can mount and use the same EFS volume at the same time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full POSIX semantics.&lt;/strong&gt; Open files, edit bytes in-place, file locking, directory operations — everything works.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scales automatically.&lt;/strong&gt; The file system grows and shrinks as you add or remove files. No capacity planning required.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low latency.&lt;/strong&gt; Sub-millisecond read latency (and low single-digit-millisecond writes) on General Purpose file systems.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EC2 Instance A  ──┐
EC2 Instance B  ──┤──→  EFS Volume  (all share the same files)
EC2 Instance C  ──┘
Lambda Function ──┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F368aftu96o0epx2bty0g.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F368aftu96o0epx2bty0g.jpg" alt=" " width="800" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Where EFS falls short
&lt;/h3&gt;

&lt;p&gt;The pricing model. &lt;strong&gt;EFS charges you for every gigabyte stored, whether you touched it this month or not.&lt;/strong&gt; Standard tier is $0.30/GB-month — roughly 13x more expensive than S3 Standard per gigabyte.&lt;/p&gt;

&lt;p&gt;This is fine when your data is "hot" (actively accessed). It's painful when you have petabytes of data where only a fraction is actively used at any time. You end up paying full file system prices for data that's sitting idle.&lt;/p&gt;

&lt;p&gt;And the other problem: &lt;strong&gt;EFS has zero native integration with S3.&lt;/strong&gt; They're completely separate systems. Your data lake is in S3. Your compute needs EFS. So you write sync scripts to copy data back and forth — and now you have two copies of everything, two storage bills, and a manual process that breaks at the worst possible times.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Old Workflow Pain (The Problem All of This Creates)
&lt;/h2&gt;

&lt;p&gt;Before S3 Files, a typical ML or data engineering team's workflow looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;S3 Data Lake
    ↓  (manual copy — takes time, costs money)
EFS Volume
    ↓  (mount on EC2)
EC2 Training Job
    ↓  (output back to EFS)
    ↓  (another manual copy)
S3 Data Lake  ← results stored here for analytics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every arrow in that diagram is a point of failure. Every copy step is a delay, a cost, and a potential for the two copies to drift out of sync. Engineers were spending real engineering hours maintaining these sync pipelines — hours that weren't building anything valuable.&lt;/p&gt;
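
&lt;p&gt;The copy steps are not just failure points; they're slow. A rough transfer-time sketch (the 2.5 Gbit/s sustained throughput is a hypothetical figure for illustration, not a measurement):&lt;/p&gt;

```python
# How long the S3-to-EFS copy step alone takes before a job can start.
# Throughput is a hypothetical sustained rate, not a benchmark.
def copy_hours(dataset_gb, gbit_per_s=2.5):
    gigabits = dataset_gb * 8
    return gigabits / gbit_per_s / 3600

print(round(copy_hours(10 * 1024), 1))   # 10 TB dataset
```

&lt;p&gt;Call it nine hours for a 10 TB dataset, and the diagram above pays that cost twice per run: once in, once out.&lt;/p&gt;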

&lt;p&gt;This is the problem that s3fs tried to solve, years before AWS had an official answer.&lt;/p&gt;




&lt;h2&gt;
  
  
  s3fs-fuse — The Community's Workaround
&lt;/h2&gt;

&lt;p&gt;If you've been working with AWS for a few years, you've probably encountered &lt;code&gt;s3fs-fuse&lt;/code&gt;. It's an open-source FUSE (Filesystem in Userspace) tool that lets you mount an S3 bucket as a local directory on Linux, macOS, or FreeBSD.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;s3fs

&lt;span class="c"&gt;# Configure credentials&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"ACCESS_KEY_ID:SECRET_ACCESS_KEY"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; ~/.passwd-s3fs
&lt;span class="nb"&gt;chmod &lt;/span&gt;600 ~/.passwd-s3fs

&lt;span class="c"&gt;# Mount your bucket&lt;/span&gt;
s3fs my-bucket /mnt/s3-data &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;passwd_file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;~/.passwd-s3fs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, you can run &lt;code&gt;ls&lt;/code&gt;, &lt;code&gt;cp&lt;/code&gt;, &lt;code&gt;cat&lt;/code&gt; — your S3 bucket looks like a local folder. For a quick demo or a simple use case, it feels magical.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's actually happening under the hood
&lt;/h3&gt;

&lt;p&gt;Here's the thing nobody tells you upfront: s3fs isn't &lt;em&gt;really&lt;/em&gt; giving you file system access to S3. It's translating file commands into S3 API calls — and the translation has serious limitations.&lt;/p&gt;

&lt;p&gt;When you "edit" a file through s3fs, this is what actually happens:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You: nano myfile.txt  (make a small change, save)
     ↓
s3fs: GET entire object from S3 → download to local temp cache
s3fs: You edit the local temp copy
s3fs: On file close → PUT entire object back to S3 (full re-upload)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change one character in a 10GB file? s3fs downloads all 10GB, makes the change, and uploads all 10GB again. Every time.&lt;/p&gt;

&lt;h3&gt;
  
  
  The real limitations you need to know
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;No file locking.&lt;/strong&gt; If two processes try to write to the same file through s3fs at the same time, you get data corruption. Not an error message — silent data corruption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No atomic renames.&lt;/strong&gt; Renaming a file in s3fs copies it to a new key and deletes the old one. Any application that relies on atomic renames (which includes most databases and many log processors) will break.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Slow directory listings.&lt;/strong&gt; Every &lt;code&gt;ls&lt;/code&gt; is a &lt;code&gt;ListObjects&lt;/code&gt; API call to S3. On a bucket with millions of objects, this is painfully slow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No hard links or symbolic links.&lt;/strong&gt; S3 simply doesn't support them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Operation          | What s3fs does              | Problem
-------------------|-----------------------------|-----------------------
Read file          | GET entire object           | Slow for large files
Edit file          | Download → edit → full PUT  | Expensive re-upload
Append to file     | Rewrite entire object       | Very expensive
Rename file        | Copy + Delete               | Not atomic
File lock          | Not supported               | Data corruption risk
List directory     | ListObjects API call        | Slow on large buckets
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
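
&lt;p&gt;The rename row deserves a closer look, because it's the one that breaks applications silently. Here's a toy model of copy-then-delete, with a plain dictionary standing in for a bucket (this is not the actual s3fs code):&lt;/p&gt;

```python
# Copy-then-delete is observable mid-flight: between the two steps both
# keys exist, and a crash at that point leaves you in that state.
store = {"logs/today.tmp": b"...data..."}

def s3fs_style_rename(store, src, dst):
    store[dst] = store[src]        # step 1: copy to the new key
    mid_state = sorted(store)      # another reader could see this state
    del store[src]                 # step 2: delete the old key
    return mid_state

print(s3fs_style_rename(store, "logs/today.tmp", "logs/today.log"))
```

&lt;p&gt;A POSIX &lt;code&gt;rename()&lt;/code&gt; swaps the name atomically; there is no in-between state for another process to observe, and no way to crash halfway through.&lt;/p&gt;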



&lt;p&gt;s3fs works well for lightweight, read-heavy, single-process use cases. But the moment you need multi-process access, in-place edits, or production reliability — it starts breaking down. The community built it because AWS didn't have a better answer. Eventually, AWS tried building their own version.&lt;/p&gt;




&lt;h2&gt;
  
  
  Mountpoint for S3 — AWS's Open-Source Attempt (2023)
&lt;/h2&gt;

&lt;p&gt;In 2023, AWS released &lt;strong&gt;Mountpoint for S3&lt;/strong&gt;, their own open-source FUSE client. It was faster than s3fs-fuse and better optimised for cloud-native read-heavy workloads.&lt;/p&gt;

&lt;p&gt;But it still couldn't do in-place edits, directory renames, or file locking. It was better than s3fs-fuse, but it still hit the same fundamental ceiling: you can't make S3's API behave like a real file system by pretending.&lt;/p&gt;

&lt;p&gt;AWS knew this. Internally, they'd been trying to solve it properly for years.&lt;/p&gt;




&lt;h2&gt;
  
  
  Amazon S3 Files — The Real Solution (April 2026)
&lt;/h2&gt;

&lt;p&gt;On April 7, 2026, AWS launched &lt;strong&gt;S3 Files&lt;/strong&gt; — and it's the most significant S3 update since the service launched.&lt;/p&gt;

&lt;p&gt;The internal project was even called "EFS3" at one point. One engineer on the team described the design process as &lt;em&gt;"a battle of unpalatable compromises."&lt;/em&gt; Getting object storage and file system semantics to truly coexist is genuinely hard engineering. Every design decision forced a tradeoff where either the file presentation or the object presentation had to give something up.&lt;/p&gt;

&lt;p&gt;What they landed on is clever: instead of trying to make the S3 API &lt;em&gt;behave&lt;/em&gt; like a file system (which is what s3fs does), they did the opposite — they took a real, production-grade file system (EFS) and connected it directly to S3 storage.&lt;/p&gt;

&lt;h3&gt;
  
  
  How S3 Files actually works
&lt;/h3&gt;

&lt;p&gt;S3 Files uses a &lt;strong&gt;two-tier architecture&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tier 1 — EFS Cache Layer (hot data)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stores your active working set: recently written files, recently read files, metadata&lt;/li&gt;
&lt;li&gt;Delivers ~1ms latency&lt;/li&gt;
&lt;li&gt;Serves small files (under 128KB by default) entirely from cache&lt;/li&gt;
&lt;li&gt;Handles all NFS file operations — open, read, write, rename, lock&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tier 2 — S3 Bucket (your full dataset)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Holds your complete data at normal S3 prices (~$0.023/GB-month)&lt;/li&gt;
&lt;li&gt;Large reads (1MB+) bypass the cache entirely and stream directly from S3 for free&lt;/li&gt;
&lt;li&gt;Changes made through the file system sync back to S3 automatically within minutes
&lt;/li&gt;
&lt;/ul&gt;
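
&lt;p&gt;Those thresholds imply a simple routing rule. Here's an illustrative sketch of that decision using the defaults quoted above (under 128 KB served from cache, 1 MB and larger streamed from S3); the real router is AWS's internal logic and certainly more sophisticated:&lt;/p&gt;

```python
KB = 1024
MB = 1024 * KB

def route_read(size_bytes):
    # size // threshold == 0 is just "size is smaller than threshold",
    # written with integer division.
    if size_bytes // (128 * KB) == 0:
        return "efs-cache"          # small file: served from the hot tier
    if size_bytes // MB == 0:
        return "efs-cache-on-miss"  # mid-sized: cached after first access
    return "s3-direct"              # large read: streams straight from S3

print(route_read(4 * KB))     # efs-cache
print(route_read(512 * KB))   # efs-cache-on-miss
print(route_read(50 * MB))    # s3-direct
```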

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Your Application
      ↓  (NFS mount — standard Linux file operations)
EFS Cache Layer  ←→  Smart Router
      ↓                    ↓
   Hot data            Cold/large data
   (~1ms)              (streams from S3, free)
      ↓                    ↓
      └────────────────────┘
                  ↓
            S3 Bucket
       (your data, always here)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key insight: &lt;strong&gt;your data never leaves S3.&lt;/strong&gt; The EFS cache is just a smart caching layer on top. You're not maintaining two copies — you have one copy in S3, accessible via both the S3 API and the file system mount simultaneously.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgk0a728p2arun7vntopk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgk0a728p2arun7vntopk.png" alt=" " width="800" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Old way vs. new way
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F662i5jvty3f4qfmx1swi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F662i5jvty3f4qfmx1swi.png" alt=" " width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting started in 3 steps
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create an S3 file system&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the AWS Console → S3 → File Systems → Create file system. Enter your bucket name, done.&lt;/p&gt;

&lt;p&gt;Or via CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws s3api create-file-system &lt;span class="nt"&gt;--bucket&lt;/span&gt; my-bucket
aws s3api create-mount-target &lt;span class="nt"&gt;--file-system-id&lt;/span&gt; fs-xxxx &lt;span class="nt"&gt;--subnet-id&lt;/span&gt; subnet-xxxx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Mount it on your EC2 instance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Make sure the &lt;code&gt;amazon-efs-utils&lt;/code&gt; package is installed (preinstalled on AWS AMIs), then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; /mnt/s3files
&lt;span class="nb"&gt;sudo &lt;/span&gt;mount &lt;span class="nt"&gt;-t&lt;/span&gt; s3files fs-0aa860d05df9afdfe:/ /mnt/s3files
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Use it like any local directory&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a file&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Hello S3 Files"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /mnt/s3files/hello.txt

&lt;span class="c"&gt;# Edit it in place&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"New line added"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /mnt/s3files/hello.txt

&lt;span class="c"&gt;# List files&lt;/span&gt;
&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-la&lt;/span&gt; /mnt/s3files/

&lt;span class="c"&gt;# The same data is accessible via S3 API too&lt;/span&gt;
aws s3 &lt;span class="nb"&gt;ls &lt;/span&gt;s3://my-bucket/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Changes you make through the file system mount appear in S3 within minutes. Changes made directly to the S3 bucket appear in the file system within seconds.&lt;/p&gt;
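
&lt;p&gt;Because the mount is plain NFS, ordinary file APIs work against it too. Here's the same sequence in Python, with a temporary directory standing in for &lt;code&gt;/mnt/s3files&lt;/code&gt; so the snippet runs anywhere:&lt;/p&gt;

```python
import os
import tempfile

# Stand-in for the S3 Files mount point; swap in "/mnt/s3files" on a
# host that actually has the file system mounted.
with tempfile.TemporaryDirectory() as mnt:
    path = os.path.join(mnt, "hello.txt")
    with open(path, "w") as f:        # create
        f.write("Hello S3 Files\n")
    with open(path, "a") as f:        # append in place, no re-upload
        f.write("New line added\n")
    os.rename(path, os.path.join(mnt, "greeting.txt"))  # atomic rename
    with open(os.path.join(mnt, "greeting.txt")) as f:
        line_count = f.read().count("\n")
print(line_count)   # 2
```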

&lt;h3&gt;
  
  
  Security — what you need to know
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;IAM integration for access control at both file system and object level&lt;/li&gt;
&lt;li&gt;Data encrypted in transit using TLS 1.3&lt;/li&gt;
&lt;li&gt;Data encrypted at rest using SSE-S3 (or KMS if you prefer customer-managed keys)&lt;/li&gt;
&lt;li&gt;POSIX permissions (UID/GID) stored as S3 object metadata&lt;/li&gt;
&lt;li&gt;Monitor via CloudWatch metrics and CloudTrail logs&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Pricing — the part that actually makes sense
&lt;/h3&gt;

&lt;p&gt;S3 Files charges EFS-level rates, but &lt;strong&gt;only on the fraction of data you're actively working with&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What you pay for&lt;/th&gt;
&lt;th&gt;Rate&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;High-performance storage (hot data)&lt;/td&gt;
&lt;td&gt;$0.30/GB-month&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reads (small files served from cache)&lt;/td&gt;
&lt;td&gt;$0.03/GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Writes&lt;/td&gt;
&lt;td&gt;$0.06/GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Everything else in your S3 bucket&lt;/td&gt;
&lt;td&gt;Standard S3 rates (~$0.023/GB-month)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you have a 100TB dataset but only 1TB is actively used at any time — you pay EFS rates on 1TB and S3 rates on the other 99TB. AWS claims up to 90% cost savings compared to the old pattern of cycling data between S3 and a dedicated EFS volume.&lt;/p&gt;
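
&lt;p&gt;That claim is easy to sanity-check against the table (storage charges only; read, write, and request fees ignored):&lt;/p&gt;

```python
EFS_RATE = 0.30      # $/GB-month, hot tier (from the table above)
S3_RATE = 0.023      # $/GB-month, S3 Standard

total_gb = 100 * 1024   # 100 TB dataset
hot_gb = 1 * 1024       # 1 TB active working set

all_on_efs = total_gb * EFS_RATE
with_s3_files = hot_gb * EFS_RATE + (total_gb - hot_gb) * S3_RATE
savings = 1 - with_s3_files / all_on_efs

print(f"all on EFS:    ${all_on_efs:,.0f}/month")
print(f"with S3 Files: ${with_s3_files:,.0f}/month")
print(f"savings:       {savings:.0%}")
```

&lt;p&gt;About 91% on storage alone under these assumptions, consistent with the claimed savings.&lt;/p&gt;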




&lt;h2&gt;
  
  
  Putting It All Together — Which Service Should You Use?
&lt;/h2&gt;

&lt;p&gt;Here's the honest answer:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Use this&lt;/th&gt;
&lt;th&gt;When you need&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;S3&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Bulk storage, backups, data lakes, analytics, static assets, anything accessed via API&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;EBS&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;OS volumes, databases, single-instance high-performance storage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;EFS&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Shared file system for legacy NAS migration, on-premises workloads moving to cloud, apps that need pure NFS without S3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;S3 Files&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;ML pipelines, agentic AI workflows, data engineering, any workload where both S3 API and file system access are needed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;s3fs-fuse&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Quick prototypes, read-heavy single-process scripts, legacy apps where you can't change the architecture&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  The quick comparison
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy897f8eanwniy70iw975.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy897f8eanwniy70iw975.png" alt=" " width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Matters for ML and AI Workloads
&lt;/h2&gt;

&lt;p&gt;If you're building machine learning pipelines or agentic AI systems, S3 Files is worth paying close attention to.&lt;/p&gt;

&lt;p&gt;The old workflow was: data lives in S3 → copy to EFS before training → run training job → copy results back to S3. For large datasets, that copy step alone could take hours. You were also paying double storage costs during the transition.&lt;/p&gt;

&lt;p&gt;With S3 Files, your training job mounts the S3 bucket directly. The EFS cache warms up as your training reads data. No copy step. No sync script. No duplicate storage.&lt;/p&gt;

&lt;p&gt;For agentic AI systems specifically — where multiple agents need to coordinate through shared files, read from each other's outputs, maintain shared state — S3 Files provides exactly the concurrent NFS access with close-to-open consistency that these workloads need. Standard Python file operations, standard shell tools, all working against data that lives in S3.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Short Version
&lt;/h2&gt;

&lt;p&gt;For a decade, AWS storage was a choice: pay S3 prices and lose file system semantics, or pay EFS prices and lose S3 integration. Teams wrote sync scripts, maintained duplicate data, and spent engineering time on storage plumbing instead of actual product work.&lt;/p&gt;

&lt;p&gt;s3fs-fuse was the community's best attempt at a workaround — and it worked, up to a point. But it was always emulating file system behavior on top of an API that wasn't designed for it.&lt;/p&gt;

&lt;p&gt;S3 Files is the first time AWS has genuinely solved this at the right layer. Real NFS semantics, real S3 storage, real production reliability. One bucket, two protocols, no compromises.&lt;/p&gt;

&lt;p&gt;If you've ever maintained a sync script between your data lake and your compute layer — you know exactly what problem this solves. And you know exactly how good it feels to delete that script.&lt;/p&gt;




&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/s3/features/files/" rel="noopener noreferrer"&gt;Amazon S3 Files product page&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/aws/launching-s3-files-making-s3-buckets-accessible-as-file-systems/" rel="noopener noreferrer"&gt;AWS Blog: Launching S3 Files&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-files.html" rel="noopener noreferrer"&gt;S3 Files documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/s3fs-fuse/s3fs-fuse" rel="noopener noreferrer"&gt;s3fs-fuse on GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/s3/pricing/" rel="noopener noreferrer"&gt;Amazon S3 pricing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/efs/pricing/" rel="noopener noreferrer"&gt;Amazon EFS pricing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=zb8TdNJhZCk" rel="noopener noreferrer"&gt;Intro to S3 Files by Darko Mesaros&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Published April 2026. All pricing figures reflect us-east-1 as of the time of writing.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If this helped you, drop a reaction or leave a comment — curious what storage patterns others are running into in the wild.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>machinelearning</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
