Shin-Young Jung

AWS SDK with Javascript: Multi-Files Download from S3

If you want to download multiple files from AWS S3 as a single zip archive, and you have your own server handling the file metadata, this article may help you understand how the process works. AWS doesn't provide a multi-file download out of the box, so to achieve it you can either add a Lambda function or use your own service. This article uses a custom service to support the multi-file download feature.

For the basic setup of the code (AWS keys, the meaning of Key in the code, bucket, etc.), please refer to another article that I wrote, AWS SDK with Javascript: Download File from S3.

import * as AWS from 'aws-sdk';
import archiver from 'archiver';
import path from 'path';
import { PassThrough } from 'stream';

const s3bucket = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_KEY,
  signatureVersion: 'v4',
  region: process.env.AWS_REGION, // ex) us-west-2
});

Here is the main function. It receives a list of file infos as an input parameter and creates a stream object that can be used to write or pipe the data in zipped format.

const multiFilesStream = (infos) => {
  // using the archiver package to create an archive object with zip settings -> zlib level from 0 (fast, no compression) to 9 (slow, best compression)
  const archive = archiver('zip', { zlib: { level: 5 } });

  for (let i = 0; i < infos.length; i += 1) {
    // using pass through stream object to wrap the stream from aws s3
    const passthrough = new PassThrough();
    s3bucket
      .getObject({
        Bucket: process.env.AWS_BUCKET_NAME,
        Key: path.join(infos[i].path, infos[i].filename),
      })
      .createReadStream()
      .pipe(passthrough);
    // the name option sets the file name this entry will have inside the zip
    archive.append(passthrough, { name: infos[i].filename });
  }
  return archive;
};
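
One caveat worth noting: the read stream returned by getObject().createReadStream() emits an error event if, for example, a key doesn't exist, and without a handler the archive can fail without a clear message. Below is a minimal sketch of the same function with error propagation added; the name multiFilesStreamSafe is only for illustration.

const multiFilesStreamSafe = (infos) => {
  const archive = archiver('zip', { zlib: { level: 5 } });

  for (let i = 0; i < infos.length; i += 1) {
    const passthrough = new PassThrough();
    const s3Stream = s3bucket
      .getObject({
        Bucket: process.env.AWS_BUCKET_NAME,
        Key: path.join(infos[i].path, infos[i].filename),
      })
      .createReadStream();

    // forward S3 read errors (e.g. a missing key) to the archive stream so the caller can handle them
    s3Stream.on('error', (err) => archive.emit('error', err));
    s3Stream.pipe(passthrough);

    archive.append(passthrough, { name: infos[i].filename });
  }
  return archive;
};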

Below is an example of how to use the function; the resulting archive stream is piped into the HTTP response.
const files = [
   {path: '/drive', filename: 'file1.jpg'},
   {path: '/drive', filename: 'file2.jpg'},
   {path: '/drive', filename: 'file3.jpg'},
   {path: '/drive', filename: 'file4.jpg'},
];

const mfStream = multiFilesStream(files);

// res is the response object of the HTTP request. You could instead create your own write stream to save the file on your local machine (see the sketch below)
mfStream.pipe(res);

// call finalize() to start the archiving process
mfStream.finalize();
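
If you want the zip saved on the server's disk instead of being streamed to the HTTP response, you can pipe the archive into a file write stream. Here is a minimal sketch, where ./files.zip is just an example output path:

import fs from 'fs';

const mfStream = multiFilesStream(files);

// write the archive to a local file instead of the HTTP response
const output = fs.createWriteStream('./files.zip'); // example path
mfStream.pipe(output);

// the zip is complete once the write stream has been closed
output.on('close', () => console.log('files.zip written'));

mfStream.finalize();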

Top comments (6)

Rodrigo Schreiner

Hey, thanks a lot for the example. I was able to run the code in my own app, but the only issue I'm having is that I can't find where the zip file is being saved on my local machine.

I tried to add
.createReadStream(__dirname + '/example.zip') but I couldn't find it.

Please look at my code below and see if you can help me; it is the last part I need to finalize my project. Thanks a lot for the help.

I call the "listTst" on my node app from my react app.

I can see the 2 files being downloaded on the network tab of my browser but no zip is saved.


const multiFilesStream = (infos) => {
    // using archiver package to create archive object with zip setting -> level from 0(fast, low compression) to 10(slow, high compression) 
    const archive = archiver('zip', { zlib: { level: 5 } });

    for (let i = 0; i < infos.length; i += 1) {
      // using pass through stream object to wrap the stream from aws s3
      const passthrough = new PassThrough();
      s3
        .getObject({
          Bucket: S3_BUCKET,
          Key: infos[i].filename
        })
        .createReadStream(__dirname + '/example.zip')
        .pipe(passthrough);
      // name parameter is the name of the file that the file needs to be when unzipped.
      archive.append(passthrough, { name: infos[i].filename });
    }
    return archive;
};

app.post('/listTst', (req, res) => {
    const packageName = req.body.packageName;
    const userId = req.body.userId;
    const packageAddress = req.body.packageAddress;

    const s3 = new AWS.S3({
        accessKeyId: 'key',
        secretAccessKey: 'secret'
    });
    var bucketParams = {
        Bucket : S3_BUCKET,
        Prefix : `still-pictures/22f84fbd-460b-cfc0-c716-245732f1887e/`
    };

    // Call S3 to obtain a list of the objects in the bucket
    s3.listObjects(bucketParams, function(err, data) {
        if (err) {
            console.log("Error", err);
        } else {
            const files = [];
            const noOfObjects = data.Contents;
            for (let i = 0; i < noOfObjects.length; i++) {
                files.push(
                    {"filename": noOfObjects[i].Key}
                );
            }
            const mfStream = multiFilesStream(files);

            // res is the response object in the http request. You may want to create your own write stream object to write files in your local machine
            mfStream.pipe(res);

            // use finalize function to start the process
            mfStream.finalize();
        }
    });
});
Mayowa Dabiri

Thank you for this great article. It is really helpful for me.

Navninder Benipal

Hey, thanks for posting this amazing solution. I was getting an error: (0, archiver_1.default) is not a function. Would you care to help me resolve it?

Yuriy
import * as archiver from "archiver";
Navninder Benipal

Hey thanks, I figured it out long ago. I was developing an app and needed this file archiving and download feature. I used this archiver utility for my app dockefy. Take a look and let me know if you like it at dockefy.com

Yuriy

Hi, I had a feeling that might be the case :)
I left that comment in case someone else runs into the same issue.
dockefy.com looks awesome, great job!