
Nhật Trường

Mount S3-Compatible Storage as a Local Filesystem

S3 object storage is scalable and cost-effective, but working with it directly can be challenging when your applications expect traditional filesystem access. This guide explores two tools, rclone and s3fs, that bridge this gap by mounting S3 buckets as local filesystems.

Visit my blog here

Prerequisites

Before getting started, ensure you have installed the required third-party software:

  • rclone: A versatile command-line tool for managing files on cloud storage

  • s3fs: A FUSE-based filesystem specifically designed for S3
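
Both tools are available through standard channels. As a rough sketch, on a Debian/Ubuntu system (an assumption; adjust for your distribution) installation could look like this:

# s3fs is packaged in the Debian/Ubuntu repositories
sudo apt-get update
sudo apt-get install -y s3fs

# rclone's official install script fetches the latest stable release
curl https://rclone.org/install.sh | sudo bash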

Configuring Your S3 Mount Tools

Step 1: Setting Up Configuration Files

Each tool requires specific configuration to connect to your S3 bucket:

rclone Configuration

Create a configuration file at /etc/rclone.conf:

[s3-mount]
type = s3
provider = AWS
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = YOUR_ENDPOINT_URL
acl = private

See rclone.config.example for a complete template.
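
Before mounting anything, you can confirm the remote is reachable by listing the buckets it can see (this assumes the [s3-mount] remote name from the config above):

rclone lsd s3-mount: --config /etc/rclone.conf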

s3fs Configuration

Create a credentials file at /etc/passwd-s3fs with the following format:

ACCESS_KEY_ID:SECRET_ACCESS_KEY

Set appropriate permissions:

chmod 600 /etc/passwd-s3fs

See s3fs-passwd.example for reference.
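
To sanity-check the credentials before creating scripts and services, a one-off foreground mount will print errors straight to the terminal (the bucket, endpoint, and mount point here are the same placeholders used in the scripts below; press Ctrl+C or run fusermount -u to unmount):

mkdir -p /mnt/s3-bucket
s3fs your-bucket-name /mnt/s3-bucket -f \
  -o passwd_file=/etc/passwd-s3fs \
  -o url=https://your-endpoint.com \
  -o use_path_request_style \
  -o dbglevel=info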

Step 2: Creating Mount Scripts

Create shell scripts to manage the mounting process with proper parameters:

rclone Mount Script

Create /usr/local/bin/rclone-mount.sh:

#!/bin/bash

# Configuration variables
bucket="your-bucket-name"
url="https://your-endpoint.com"
mount_point="/mnt/s3-bucket"
config_file="/etc/rclone.conf"
log_file="/var/log/rclone-mount.log"
log_level="DEBUG"
provider="s3"  # Options: vstorage, s3, etc.

# Create mount point if it doesn't exist
mkdir -p "${mount_point}"

# Mount the bucket
rclone mount \
  --config "${config_file}" \
  --log-file "${log_file}" \
  --log-level "${log_level}" \
  --allow-other \
  --file-perms 0644 \
  --dir-perms 0755 \
  --vfs-cache-mode full \
  --vfs-cache-max-size 1G \
  --vfs-read-chunk-size 10M \
  --daemon \
  "${provider}:${bucket}" "${mount_point}"

exit 0

Make the script executable:

chmod +x /usr/local/bin/rclone-mount.sh

s3fs Mount Script

Create /usr/local/bin/s3fs-mount.sh:

#!/bin/bash

# Configuration variables
bucket="your-bucket-name"
url="https://your-endpoint.com"
mount_point="/mnt/s3-bucket"
passwd_file="/etc/passwd-s3fs"
log_file="/var/log/s3fs-mount.log"
log_level="debug"
region="HCM03"  # Your specific region

# Create mount point if it doesn't exist
mkdir -p "${mount_point}"

# Mount the bucket
s3fs "${bucket}" "${mount_point}" \
  -o passwd_file="${passwd_file}" \
  -o url="${url}" \
  -o use_path_request_style \
  -o allow_other \
  -o umask=0022 \
  -o dbglevel="${log_level}" \
  -o curldbg \
  -o endpoint="${region}" \
  > "${log_file}" 2>&1

exit 0

Make the script executable:

chmod +x /usr/local/bin/s3fs-mount.sh
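
Before handing control to systemd, it is worth running whichever script you plan to use once by hand and confirming the mount actually appears:

sudo /usr/local/bin/rclone-mount.sh   # or s3fs-mount.sh
df -h /mnt/s3-bucket
ls /mnt/s3-bucket

# Unmount again before enabling the systemd service
sudo fusermount -u /mnt/s3-bucket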

Step 3: Creating Systemd Service Units

To ensure your S3 bucket mounts automatically at boot and is properly managed by systemd:

rclone Systemd Service

Create /lib/systemd/system/rclone-mount.service:

[Unit]
Description=Mount S3 Bucket using rclone
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStart=/usr/local/bin/rclone-mount.sh
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
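
Note that, unlike the s3fs unit below, this unit does not define an ExecStop, so systemctl stop will not unmount the bucket on its own. If you want that behaviour, one option (mirroring the s3fs unit) is to add a line like the following to the [Service] section:

ExecStop=/bin/fusermount -u /mnt/s3-bucket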

s3fs Systemd Service

Create /lib/systemd/system/s3fs-mount.service:

[Unit]
Description=Mount S3 Bucket using s3fs
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/s3fs-mount.sh
RemainAfterExit=yes
ExecStop=/bin/fusermount -u /mnt/s3-bucket

[Install]
WantedBy=multi-user.target

Step 4: Enable and Start the Service

Choose which tool you prefer (rclone or s3fs) and enable its service:

# For rclone
sudo systemctl daemon-reload
sudo systemctl enable rclone-mount.service --now
sudo systemctl status rclone-mount.service

# For s3fs
sudo systemctl daemon-reload
sudo systemctl enable s3fs-mount.service --now
sudo systemctl status s3fs-mount.service
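
Once the service reports active, confirm the bucket is actually mounted and writable (the test file name is just an example):

findmnt /mnt/s3-bucket
echo "hello" | sudo tee /mnt/s3-bucket/mount-test.txt
ls -l /mnt/s3-bucket/mount-test.txt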

Performance Considerations

  • rclone:
    • Offers better performance for large files
    • More feature-rich with built-in caching
    • Uses more memory but provides better throughput
    • Excellent for backup/sync operations

  • s3fs:
    • Simpler, lighter resource footprint
    • Better for direct file access patterns
    • More POSIX-compliant but slower for metadata operations
    • Good for applications that need basic file access

Troubleshooting Common Issues

Mount Failure

If your mount fails to initialize:

  1. Check credentials: Verify your access keys are correct in the configuration files
   grep "auth" /var/log/rclone-mount.log
   # or
   grep "auth" /var/log/s3fs-mount.log
  2. Test connectivity: Confirm network access to your S3 endpoint
   curl -I https://your-endpoint.com
  3. Permissions: Ensure your mount scripts are executable
   ls -la /usr/local/bin/rclone-mount.sh
   ls -la /usr/local/bin/s3fs-mount.sh
  4. Bucket existence: Verify the bucket name is spelled correctly and exists
   # For AWS S3
   aws s3 ls s3://your-bucket-name

   # For other S3 providers, use their CLI tools
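
If the mount log files show nothing useful, the systemd journal for the unit is usually the next place to look:

sudo journalctl -u rclone-mount.service --no-pager -n 50
# or
sudo journalctl -u s3fs-mount.service --no-pager -n 50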

Performance Issues

If you experience slow access:

  1. Increase cache size: For rclone, modify the --vfs-cache-max-size parameter (see the sketch after this list)
  2. Adjust chunk size: Modify --vfs-read-chunk-size for your workload
  3. Check network latency: High latency to your S3 endpoint will impact performance
  4. Consider local caching: For frequently accessed files
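
As a rough sketch of the first two points, the rclone invocation from the mount script could be tuned like this; the sizes are assumptions and should be adjusted to your workload and available disk and memory:

rclone mount \
  --config /etc/rclone.conf \
  --allow-other \
  --vfs-cache-mode full \
  --vfs-cache-max-size 10G \
  --vfs-read-chunk-size 64M \
  --vfs-read-chunk-size-limit 512M \
  --buffer-size 32M \
  --daemon \
  s3-mount:your-bucket-name /mnt/s3-bucket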
