Attaching an S3 Bucket as NFS Volume in Docker
We can attach an S3 bucket as a mounted volume in Docker.
We need a volume plugin to achieve this.
The plugin we will use is rexray/s3fs.
We will first install the plugin:
docker plugin install rexray/s3fs:latest S3FS_REGION=us-east-2 S3FS_OPTIONS="allow_other,iam_role=auto,umask=000" --grant-all-permissions
We have to install the plugin with the options above, as they grant the plugin access to S3 (here via an IAM role, with the region set to us-east-2).
Once installed, we can verify it with:
docker plugin ls
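The plugin should now show up as enabled. The output will look roughly like the following; the ID and description text will vary:

ID            NAME                 DESCRIPTION                  ENABLED
<plugin-id>   rexray/s3fs:latest   REX-Ray FUSE driver for S3   true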
Now we can mount the S3 bucket using the volume driver to test the mount:
docker run -ti --volume-driver=rexray/s3fs -v <aws-bucket-name>:/data ubuntu sleep infinity
That's it: the volume has been mounted from our S3 bucket.
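As an alternative to the implicit volume that -v creates, we can pre-create a named volume with the driver. This is a minimal sketch, assuming rexray/s3fs maps the volume name to the bucket name; <aws-bucket-name> is a placeholder:

docker volume create --driver rexray/s3fs <aws-bucket-name>
docker run -ti -v <aws-bucket-name>:/data ubuntu sleep infinity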
We can inspect the container and check that the bucket has been mounted.
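The command below uses <container-id> as a placeholder for the container we started above:

docker inspect <container-id>

The Mounts section of the output should look like this: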
"Mounts": [
{
"Type": "volume",
"Name": "maps-openmaps-schools",
"Source": "",
"Destination": "/data",
"Driver": "rexray/s3fs:latest",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
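If we only want the mounts section rather than the full output, Docker's Go-template formatting can extract it (again, <container-id> is a placeholder):

docker inspect --format '{{json .Mounts}}' <container-id>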
We can also inspect the volume:
$ docker volume inspect maps-openmaps-schools
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "rexray/s3fs:latest",
        "Labels": null,
        "Mountpoint": "",
        "Name": "maps-openmaps-schools",
        "Options": null,
        "Scope": "global",
        "Status": {
            "availabilityZone": "",
            "fields": null,
            "iops": 0,
            "name": "maps-openmaps-schools",
            "server": "s3fs",
            "service": "s3fs",
            "size": 0,
            "type": ""
        }
    }
]
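To confirm that writes actually reach the bucket, a quick check is to write a file through the container and then list the bucket from the host. This sketch assumes the AWS CLI is configured on the host; <container-id> and <aws-bucket-name> are placeholders:

docker exec <container-id> sh -c 'echo hello > /data/hello.txt'
aws s3 ls s3://<aws-bucket-name>/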
Comments
Remember that S3 is NOT a file system, but an object store - while mounting IS an incredibly useful capability - I wouldn't leverage anything more than file read or create - don't try to append a file, don't try to use file system trickery (e.g. links, fs event listening, etc.).
Does anyone know why I get this error when installing the plugin? Error response from daemon: dial unix /run/docker/plugins/36e9d9fa7a4bc75983/rexray.sock: connect: no such file or directory
Thanks for that post.
How's the performance?
I remember making an s3fs-based system in Kubernetes some time ago, and the performance was pretty bad...
Will be keeping an eye on the performance. I am loading a pretty big file, but have not tested it yet.
How do you provide the AWS access key and secret key?
The rexray/s3fs plugin uses the parameters S3FS_ACCESSKEY/S3FS_SECRETKEY for the s3fs file system type, or EBS_ACCESSKEY/EBS_SECRETKEY for the ebs type.
An example that installs the plugin using values from environment variables:
docker plugin install rexray/s3fs:latest S3FS_REGION=us-east-2 S3FS_OPTIONS="allow_other,iam_role=auto,umask=000" S3FS_ACCESSKEY=$AWS_ACCESS_KEY_ID S3FS_SECRETKEY=$AWS_SECRET_ACCESS_KEY --grant-all-permissions