DEV Community

AWS S3 Files just made Transfer Family SFTP obsolete for most use cases

AWS just launched one of the most impactful storage features in years: S3 Files. It puts an EFS-compatible file system interface directly in front of your S3 buckets. When I was introduced to S3 Files (as part of the AWS Community Builders program), I immediately thought of SFTP as the most obvious use case for my clients.

If you're currently paying for AWS Transfer Family to give your partners SFTP access to S3, you should read this carefully. There's now a dramatically cheaper and more powerful alternative.

What is S3 Files?

S3 Files creates a high-performance NFS file system backed by an S3 bucket. Think of it as an EFS-like layer that reads and writes directly to S3 objects, with automatic bidirectional synchronization. Any file written through the file system appears as an S3 object, and any object uploaded to S3 becomes visible through the file system.

The key properties:

  • Sub-millisecond latency for file operations
  • Automatic sync between file system and S3 bucket (powered by EventBridge under the hood)
  • Mountable on ECS Fargate, ECS Managed Instances, EKS, and EC2
  • Standard NFS protocol — no special client needed on the compute side (ECS/EKS handle it natively)
  • Access points with POSIX user/group enforcement
  • S3 Versioning required and leveraged for consistency
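Since S3 Files requires versioning on the backing bucket, it's worth verifying that before creating the file system. Here's a minimal preflight check sketch using boto3 (the function name and wiring are my own, not part of S3 Files):

```python
def bucket_versioning_enabled(s3_client, bucket: str) -> bool:
    """Return True if the bucket has versioning enabled.

    S3 Files requires versioning on the backing bucket, so this is a
    useful preflight check before running create-file-system.
    """
    resp = s3_client.get_bucket_versioning(Bucket=bucket)
    # get_bucket_versioning returns no "Status" key at all when
    # versioning has never been enabled on the bucket.
    return resp.get("Status") == "Enabled"


# Usage (requires AWS credentials):
#   import boto3
#   bucket_versioning_enabled(boto3.client("s3"), "my-sftp-bucket")
```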

The Problem with AWS Transfer Family

AWS Transfer Family has been the "official" way to expose SFTP endpoints backed by S3. It works, but it comes with serious pain points:

It's expensive

Transfer Family charges $0.30/hour just for the endpoint — that's ~$216/month before you transfer a single byte. Add data transfer costs on top. For a service that many teams use for a handful of daily file drops, this is hard to justify.

It's a black box

You get an SFTP endpoint, but you don't control the server. Custom authentication requires Lambda hooks. Logging is limited. You can't SSH in to debug. You can't customize the SFTP server behavior, add pre/post-processing scripts, or run anything alongside it.

The New Architecture: atmoz/sftp + S3 Files on ECS Fargate

Here's what you could run instead:

The components:

  1. S3 bucket with versioning enabled (required by S3 Files)
  2. S3 Files file system pointed at the bucket, with mount targets in your VPC
  3. EFS volume for persistent SSH host keys (stable fingerprint across restarts and scaling)
  4. ECS Fargate service running atmoz/sftp with the S3 Files volume mounted at /home
  5. Network Load Balancer exposing port 22
  6. DNS record for sftp.yourdomain.com (optional)

Files uploaded via SFTP land on the S3 Files mount → appear in S3 within seconds → trigger S3 event notifications for downstream processing.

Cost Comparison

| Component | Transfer Family | S3 Files + Fargate |
|---|---|---|
| Base cost | $0.30/hr (~$216/mo) | NLB: ~$16/mo |
| Compute | Included | Fargate 0.25 vCPU / 512 MB: ~$9/mo |
| Storage | S3 pricing | S3 pricing (same) |
| Data transfer | $0.04/GB over SFTP | Standard NLB pricing |
| Monthly minimum | ~$216 | ~$25 |

That's roughly 8x cheaper at the base level. For low-to-medium traffic SFTP use cases (which is most of them), the savings are significant.
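The base figures are easy to sanity-check. A quick sketch, assuming a 720-hour month and the list prices quoted above:

```python
HOURS_PER_MONTH = 720  # 30-day month, matching the ~$216 figure above

def transfer_family_base(hourly_rate: float = 0.30) -> float:
    """Endpoint cost before a single byte is transferred."""
    return hourly_rate * HOURS_PER_MONTH

def s3files_fargate_base(nlb: float = 16.0, fargate: float = 9.0) -> float:
    """NLB plus a 0.25 vCPU / 512 MB Fargate task, before data transfer."""
    return nlb + fargate

tf = transfer_family_base()
alt = s3files_fargate_base()
print(f"Transfer Family: ${tf:.0f}/mo, S3 Files + Fargate: ${alt:.0f}/mo "
      f"(~{tf / alt:.1f}x cheaper)")
# → Transfer Family: $216/mo, S3 Files + Fargate: $25/mo (~8.6x cheaper)
```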

Why This Is Better Than EFS-Backed SFTP

Before S3 Files, the common DIY approach was to mount EFS on Fargate and run atmoz/sftp. We did exactly this. It worked, but had a fundamental limitation: your files lived in EFS, not S3.

That meant:

  • No S3 event notifications when files arrived
  • No S3 lifecycle policies
  • No S3 cross-region replication
  • No direct S3 API access to the files
  • EFS pricing ($0.30/GB for Standard) vs S3 ($0.023/GB)
  • Separate backup strategy needed

With S3 Files, the data lives in S3. You get the full S3 feature set — notifications, lifecycle rules, replication, analytics, Glacier tiering — while still having a mountable file system for your SFTP server.
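As one concrete example of what you regain, here's a sketch of a lifecycle configuration for the backing bucket. The rule ID, prefix, and retention windows are all illustrative; you'd apply the dict with boto3's `put_bucket_lifecycle_configuration`:

```python
def sftp_lifecycle_rules(prefix: str = "") -> dict:
    """Hypothetical lifecycle rules for the SFTP backing bucket:
    tier uploads to Glacier after 90 days, and expire noncurrent
    versions (S3 Files requires versioning, so old versions pile up)
    after 30 days.
    """
    return {
        "Rules": [
            {
                "ID": "tier-sftp-uploads",
                "Status": "Enabled",
                "Filter": {"Prefix": prefix},
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
            }
        ]
    }


# Usage (requires AWS credentials):
#   import boto3
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="my-sftp-bucket",
#       LifecycleConfiguration=sftp_lifecycle_rules("demo/upload/"),
#   )
```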

Event-Driven File Processing

Both Transfer Family and our S3 Files approach write to S3, so you get the same event-driven capabilities either way:

  • S3 Event Notifications → SQS/SNS/Lambda for immediate processing when a file arrives
  • S3 Event Notifications → EventBridge for complex routing rules
  • S3 Inventory for auditing
  • S3 Object Lock for compliance
  • S3 Replication to replicate uploaded files to another region or account

The difference isn't in features — it's in cost. You get the exact same S3 event-driven pipeline for ~$25/mo instead of ~$216/mo.
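A minimal consumer of that pipeline might look like this — a sketch of a Lambda handler for `ObjectCreated` notifications, where the actual processing is a placeholder `print`:

```python
import urllib.parse

def handler(event, context=None):
    """Minimal Lambda handler for S3 ObjectCreated notifications.

    It behaves identically whether the object came from Transfer Family
    or from the SFTP container via the S3 Files mount -- either way the
    trigger is just an S3 PutObject event.
    """
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded (e.g. spaces become '+').
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        size = record["s3"]["object"].get("size", 0)
        print(f"New upload: s3://{bucket}/{key} ({size} bytes)")
        processed.append((bucket, key))
    return processed
```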

The Terraform Implementation

Since aws_s3files_file_system isn't in the Terraform AWS provider yet (PR #47325 is open and prioritized), we manage S3 Files resources through terraform_data with local-exec provisioners calling the AWS CLI.

The key resources:

# S3 Files file system — created via AWS CLI
resource "terraform_data" "s3files_file_system" {
  provisioner "local-exec" {
    command = <<-EOT
      aws s3files create-file-system \
        --bucket "$BUCKET_ARN" \
        --role-arn "$ROLE_ARN" \
        --accept-bucket-warning \
        --region "$REGION"
    EOT
  }
}

# Mount targets in each private subnet
resource "terraform_data" "s3files_mount_targets" {
  for_each = toset(var.private_subnet_ids)
  provisioner "local-exec" {
    command = <<-EOT
      aws s3files create-mount-target \
        --file-system-id "$FS_ID" \
        --subnet-id "${each.value}" \
        --security-groups "$SG_ID"
    EOT
  }
}

# ECS task definition uses s3filesVolumeConfiguration
volume = {
  sftp-home = {
    s3files_volume_configuration = {
      file_system_arn = local.s3files_fs_arn
      root_directory  = "/"
    }
  }
}

The full working Terraform code is available as a Terraform module. As noted above, S3 Files resources are currently managed via terraform_data + AWS CLI, and I'll update the module to native Terraform resources as soon as the provider ships S3 Files support.

IAM Setup

Two IAM roles are needed:

  1. S3 Files service role — assumed by elasticfilesystem.amazonaws.com to sync between the file system and S3 bucket. Needs S3 read/write on the bucket + EventBridge permissions for change detection.

  2. ECS task role — needs s3files:ClientMount, s3files:ClientWrite, and s3:GetObject/s3:ListBucket on the backing bucket for optimized reads.
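A sketch of the two policy documents, built as plain dicts for readability. The bucket ARN is a placeholder, and the `s3files:` action names follow the list above (verify them against the service docs before use):

```python
import json

BUCKET_ARN = "arn:aws:s3:::my-sftp-bucket"  # placeholder — use your bucket

# 1. Trust policy for the S3 Files service role, assumed by
#    elasticfilesystem.amazonaws.com per the setup above.
service_role_trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "elasticfilesystem.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# 2. ECS task role permissions: mount/write through S3 Files, plus
#    direct reads on the backing bucket for optimized reads.
task_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3files:ClientMount", "s3files:ClientWrite"],
            "Resource": "*",  # scope to the file system ARN in production
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [BUCKET_ARN, f"{BUCKET_ARN}/*"],
        },
    ],
}

print(json.dumps(task_role_policy, indent=2))
```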

When Transfer Family Still Makes Sense

To be fair, Transfer Family isn't obsolete for every use case:

  • Managed SFTP keys and user management — Transfer Family has built-in identity provider integration (AD, Lambda custom auth). With atmoz/sftp, you manage users via config.
  • AS2 protocol support — if you need AS2, Transfer Family is still the only managed option.
  • FTPS — Transfer Family supports FTPS natively.
  • Zero ops tolerance — if you truly cannot manage a container, Transfer Family is fully managed.

But for the vast majority of SFTP use cases — partners dropping files that need processing — the S3 Files approach is cheaper, more flexible, and gives you better observability.

Getting Started

Prerequisite: The aws s3files commands require AWS CLI v2.34.26 or later. You also need jq installed (used by the Terraform provisioner scripts). Update the CLI with brew upgrade awscli or see AWS CLI install guide.

  1. Create an S3 bucket with versioning enabled
  2. Create an IAM role for S3 Files with the required trust and permissions policies
  3. Create an S3 Files file system via the console or aws s3files create-file-system
  4. Create mount targets in your VPC subnets
  5. Create an EFS for persistent SSH host keys
  6. Deploy an ECS Fargate service with atmoz/sftp, mounting S3 Files at /home and EFS at /etc/ssh/
  7. Put an NLB in front, point your DNS at it
  8. Set up S3 event notifications on the bucket for downstream processing

Or just use the Terraform module — the whole thing deploys in under 10 minutes.

Testing it end-to-end

After terraform apply, the SFTP server is ready in about 8 minutes (most of the time is S3 Files mount targets becoming available). Here's a quick test:

# Upload a file
echo "Hello from S3 Files SFTP!" > test.txt
sshpass -p demo sftp -o StrictHostKeyChecking=no -P 22 demo@<sftp_endpoint> <<EOF
cd upload
put test.txt
bye
EOF

# Verify it landed in S3 (wait ~30-60s for sync)
aws s3 cp s3://<sftp_bucket_name>/demo/upload/test.txt -
# Output: Hello from S3 Files SFTP!

We also verified that SSH host keys persist across task restarts — the server fingerprint stays the same after a forced redeployment, thanks to the EFS volume mounted at /etc/ssh/.

Conclusion

S3 Files bridges the gap between file system and object storage in a way that makes a lot of expensive AWS services feel redundant. For SFTP specifically, the combination of atmoz/sftp + S3 Files on Fargate gives you:

  • ~8x lower cost than Transfer Family
  • Full control over the SFTP server
  • Native S3 event notifications for event-driven processing
  • S3 as the source of truth — lifecycle rules, replication, analytics all work
  • Infrastructure as Code with Terraform (even before native provider support)

The days of paying $216/month minimum for a managed SFTP endpoint are over for most teams. S3 Files is the missing piece that makes DIY SFTP on AWS not just viable, but ~8x cheaper.
