Steeve

S3 Client against disasters (hacks, fires, catastrophes)

Introduction

S3 is a major storage standard supported by many cloud providers; easy to set up and, most of the time, performant.

What if your storage region is no longer available? What if your cloud provider network or DNS is unavailable anymore due to a fire, hack, or natural disaster?

S3 clients exist for every programming language, but none of them keeps working through a total outage of one provider, which can bring your production application down.

That's why we developed tiny-storage-client for Node: no more S3 downtime!

"You said multi-region? Pfff you should go multi-cloud"

Tiny Storage Client

TSC (short for tiny-storage-client) is a smart HTTP client for S3 that switches cloud providers when something goes wrong: a 500 error, a network issue, a timeout, or a DNS failure.

Other advantages of TSC:

  • 🐞 Small codebase: Vanilla JS with only 2 dependencies: rock-req as the HTTP client (zero deps) and aws4 for signing S3 requests (zero deps too).
  • 🦄 Simple: only 10 methods: uploadFile, downloadFile, deleteFile, listFiles, listBuckets, headBucket, setFileMetadata, getFileMetadata, deleteFiles, and request for custom requests.
  • ⚡️ Supports both S3 and OpenStack Swift credentials: although the two storage systems have completely different APIs, TSC unifies them behind the same methods.
  • Fully tested: unit tests cover every function and error behaviour, and the package is battle-tested in production against hundreds of TBs of file uploads, downloads, and deletes.
  • 🍒 Cherry on top: TSC returns API results as JavaScript objects, not XML (for instance, when listing objects).

Install and Setup

  1. Create the same bucket on two or three cloud providers, for instance AWS, Google Cloud, and OVHcloud
  2. Synchronise the buckets with Sclone or Rclone
  3. Install tiny-storage-client in your Node project: npm install --save tiny-storage-client
  4. Initialise TSC with one or more storages: pass an array of S3 or OpenStack Swift credentials. In the following example, the client is initialised with credentials for two cloud providers: an OVHcloud S3 and an AWS S3.
const tsc = require('tiny-storage-client');

const s3 = tsc([{
  accessKeyId    : 'accessKeyId',
  secretAccessKey: 'secretAccessKey',
  url            : 's3.gra.io.cloud.ovh.net',
  region         : 'gra'
},
{
  accessKeyId    : 'accessKeyId',
  secretAccessKey: 'secretAccessKey',
  url            : 's3.eu-west-3.amazonaws.com',
  region         : 'eu-west-3'
}])

All requests go to the first S3 storage defined in the credential list. If something goes wrong, TSC switches to the next S3 credentials and retries the request. If no storage is available, the error All S3 storages are not available is returned, and subsequent requests start again from the first S3 in the list.
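The switch-and-retry behaviour described above can be sketched in a few lines of plain JavaScript. This is a simplified illustration of the technique, not TSC's actual code; `requestWithFailover`, `storages`, and `send` are hypothetical names:

```javascript
// Simplified illustration of failover across an ordered list of storages.
// Each entry's send() either returns a response or throws
// (network error, 500, timeout...).
function requestWithFailover(storages, payload) {
  for (const storage of storages) {
    try {
      return storage.send(payload); // first healthy storage wins
    } catch (err) {
      // fall through and try the next credentials in the list
    }
  }
  throw new Error('All S3 storages are not available');
}

// Usage: the first storage is down, the second one answers.
const storages = [
  { send: () => { throw new Error('ECONNREFUSED'); } },
  { send: () => ({ statusCode: 200, body: {} }) }
];
console.log(requestWithFailover(storages, {}).statusCode); // 200
```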

Upload Object example

To upload an object, use the uploadFile function: pass the destination bucket as the first argument, then the object name, and finally the file as an absolute path (or directly as a Buffer), such as:

const path = require('path');

s3.uploadFile('bucketName', 'file.pdf', path.join(__dirname, 'dir2', 'file.pdf'), (err, resp) => {
  if (err) {
    return console.log("Error on upload: ", err.toString());
  }
  /**
   * Request response:
   * - resp.body
   * - resp.headers
   * - resp.statusCode
   */
})

When the upload is done, the callback is executed with two arguments:

  • err: The only error returned to the callback is the fatal one: all storages are unavailable.
  • resp: The S3/Swift API response, with the following format:
{
  "body": {},
  "headers": {},
  "statusCode": 200
}

If an error occurs with the first storage, it will retry the upload with the second provider. Make sure you have a bi-directional synchronisation between all your buckets.

Download Object example

To download an object, use the downloadFile function: pass the bucket name and the object name; the callback returns the object as a Buffer:

s3.downloadFile('bucketName', '2023-invoice.pdf', (err, resp) => {
  if (err) {
    return console.log("Error on download: ", err);
  }
  /**
   * Request response:
   * - resp.body => downloaded file as Buffer
   * - resp.headers
   * - resp.statusCode
   */
})

If the first storage provider has an error, TSC will retry the download with the second storage provider. Make sure you have bi-directional synchronisation between your buckets.

List Objects example

To list the files of a bucket, use the listFiles function, providing the bucket name as the first argument:

s3.listFiles('bucketName', function(err, resp) {
  if (err) {
    return console.log("Error on listing files: ", err.toString());
  }
   /**
   * Request response:
   * - resp.headers
   * - resp.statusCode
   * - resp.body => list of files as JSON format:
   *    {
   *      "name": "bucketName",
   *      "keycount": 1,
   *      "maxkeys": 1000,
   *      "istruncated": false,
   *      "contents": [
   *        {
   *          "key": "file-1.docx",
   *          "lastmodified": "2023-03-07T17:03:54.000Z",
   *          "etag": "7ad22b1297611d62ef4a4704c97afa6b",
   *          "size": 61396,
   *          "storageclass": "STANDARD"
   *        }
   *      ]
   *    }
   */
});

When the HTTP request completes, resp.body.contents contains the list of objects.
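Because the listing arrives as a JavaScript object rather than XML, resp.body.contents can be processed directly with array methods. A sketch using sample data in the shape documented above (file names and sizes are made up):

```javascript
// Sample payload matching the documented listFiles response shape.
const body = {
  name: 'bucketName',
  keycount: 2,
  contents: [
    { key: 'file-1.docx', size: 61396, lastmodified: '2023-03-07T17:03:54.000Z' },
    { key: 'file-2.pdf',  size: 1024,  lastmodified: '2023-04-01T09:00:00.000Z' }
  ]
};

// Total storage used by the listed objects.
const totalBytes = body.contents.reduce((sum, obj) => sum + obj.size, 0);

// Most recently modified object.
const newest = body.contents
  .slice()
  .sort((a, b) => new Date(b.lastmodified) - new Date(a.lastmodified))[0];

console.log(totalBytes);  // 62420
console.log(newest.key);  // file-2.pdf
```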

Requests examples

Find all request examples in the GitHub repository.

Feedback from a real production usage

Tiny Storage Client saved us from network and server issues, sparing our clients bugs and downtime. For example, a couple of months ago, our main cloud provider returned status 500 for two days for unknown reasons (I'll not name the cloud provider 👀), and we did not notice the interruption because TSC switched storage automatically.

On an independent VPS, our S3 bucket synchronisation is handled by Sclone, which provides multi-cloud, bi-directional sync. I wrote a dedicated article about Sclone.
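If you use Rclone instead, the bi-directional sync step can be sketched like this (the remote names ovh and aws are placeholders you would configure with rclone config; rclone bisync requires Rclone v1.58+):

```shell
# First run: establish the baseline listing between the two remotes.
rclone bisync ovh:bucketName aws:bucketName --resync

# Subsequent runs (e.g. from cron): two-way sync of changes.
rclone bisync ovh:bucketName aws:bucketName
```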

Conclusion

Cloud providers sell "multi-region", which is useful; however, setting up your own "multi-cloud" with tiny-storage-client will make your production application even more resilient.

Thanks for reading, and have a great day! Cheers 🍻
