
just a martian

How do you merge millions of small files in an S3 bucket into fewer, larger files in a separate bucket daily?

We have a situation where we're currently uploading events to S3 in real time. The result is roughly 30 million tiny JSON files (<1 KB) per day. These files sit in a raw-layer bucket with the following folder format: "#{folder_name}/#{year}/#{month}/#{day}/#{hour}/#{minute}/#{System.os_time()}-#{file_name}.#{file_ext}". We want to send this data to a data warehouse for analytics, but we need the files to be much larger (150-200 MB). What solutions are there for merging JSON files from one S3 bucket into a separate S3 bucket? I tried developing a Lambda to tackle this problem, but it wasn't enough, since all the files had to be downloaded to /tmp and the Lambda ran out of memory. Please help :)

Top comments (5)

Thomas Werner

I think it mostly depends on what the expected target output format is. If you simply need to concatenate all events into a list (JSON array), then it could probably be done by opening an output stream for a file in the target S3 bucket and writing each JSON file to it one after the other. Memory consumption should be constant, given that all input JSON files are the same size. You only need to make sure that the list of event file paths / handles is not loaded into a collection all at once, so you don't run out of memory.
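
For illustration, a minimal sketch of that streaming approach, assuming Python with boto3 and the smart_open library, newline-delimited JSON (JSON Lines) output rather than a single JSON array, and made-up bucket and prefix names:

import boto3
from smart_open import open as s3_open  # streams writes to S3 using multipart upload

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# Hypothetical source prefix (one day of raw events) and target object
src_bucket = "raw-events"
src_prefix = "events/2021/01/01/"
dst_uri = "s3://merged-events/2021/01/01/merged.jsonl"

# Append every tiny JSON file to the output stream one after the other.
# smart_open buffers and uploads parts behind the scenes, so memory use
# stays roughly constant no matter how many input files there are.
with s3_open(dst_uri, "wb", transport_params={"client": s3}) as out:
    for page in paginator.paginate(Bucket=src_bucket, Prefix=src_prefix):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=src_bucket, Key=obj["Key"])["Body"].read()
            out.write(body.rstrip(b"\n") + b"\n")  # one JSON document per line

Listing and fetching 30 million objects sequentially is the slow part, so in practice you would probably fan this out per hour or minute prefix and run several merges in parallel.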

But it sounds like you need to apply more complicated merge logic? What does an example event file look like, and what's the expected result format?

peterb • Edited

Redshift Spectrum does an excellent job of this: you can read from S3 and write back to S3 (Parquet etc.) in one command, as a stream.

e.g. take lots of JSONL event files and make some 1 GB Parquet files.
First:
create external table mytable (....)
row format serde 'org.openx.data.jsonserde.JsonSerDe'
stored as inputformat 'org.apache.hadoop.mapred.TextInputFormat'
outputformat 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
location 's3://bucket/folderforjson/path/year/month/day ...'

Then:
unload ('select columns from mytable where ...')
to 's3://bucket/folderforparquet/year/month/day...'
iam_role 'arn:aws:iam::123456789:role/prod....-role'
format parquet
partition by (year, month, day) include
cleanpath

You can buy Redshift by the hour, and Redshift Spectrum is $5 per TB scanned.

Kishan B

whitfin.io/quickly-concatenating-f...

This tool does the job of concatenating within S3 without requiring a download. It uses the multipart API of S3 to achieve this.
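
For reference, the S3 call that makes this kind of server-side concatenation possible is upload_part_copy. A rough Python/boto3 sketch (bucket and key names are made up), with the caveat that every part except the last must be at least 5 MiB, so sub-1 KB source files first have to be grouped into larger chunks (which is what the linked tool takes care of):

import boto3

s3 = boto3.client("s3")
dst_bucket, dst_key = "merged-events", "2021/01/01/merged.json"
chunk_bucket = "raw-events"
# Keys of intermediate chunks that are already >= 5 MiB each
chunk_keys = ["chunks/part-0000.json", "chunks/part-0001.json", "chunks/part-0002.json"]

mpu = s3.create_multipart_upload(Bucket=dst_bucket, Key=dst_key)
parts = []
for i, key in enumerate(chunk_keys, start=1):
    # Copies the source object into the target as part i, entirely inside S3 (no download)
    resp = s3.upload_part_copy(
        Bucket=dst_bucket,
        Key=dst_key,
        UploadId=mpu["UploadId"],
        PartNumber=i,
        CopySource={"Bucket": chunk_bucket, "Key": key},
    )
    parts.append({"ETag": resp["CopyPartResult"]["ETag"], "PartNumber": i})

s3.complete_multipart_upload(
    Bucket=dst_bucket,
    Key=dst_key,
    UploadId=mpu["UploadId"],
    MultipartUpload={"Parts": parts},
)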

Tal Rofe

Why do you need the entire data set in your Lambda? You should use streaming and multipart uploading to S3.
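
A rough sketch of that idea in Python/boto3, with illustrative bucket names and part size (parts are buffered in memory and uploaded as they fill, so /tmp is never used):

import io
import boto3

s3 = boto3.client("s3")
SRC_BUCKET, SRC_PREFIX = "raw-events", "events/2021/01/01/"
DST_BUCKET, DST_KEY = "merged-events", "2021/01/01/merged.jsonl"
PART_SIZE = 100 * 1024 * 1024  # ~100 MB per part; S3 minimum is 5 MiB, Lambda memory must cover it

mpu = s3.create_multipart_upload(Bucket=DST_BUCKET, Key=DST_KEY)
parts, buf, part_no = [], io.BytesIO(), 1

def flush_part(data, number):
    # Upload one buffered chunk as a multipart part and remember its ETag
    resp = s3.upload_part(Bucket=DST_BUCKET, Key=DST_KEY, UploadId=mpu["UploadId"],
                          PartNumber=number, Body=data)
    parts.append({"ETag": resp["ETag"], "PartNumber": number})

for page in s3.get_paginator("list_objects_v2").paginate(Bucket=SRC_BUCKET, Prefix=SRC_PREFIX):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=SRC_BUCKET, Key=obj["Key"])["Body"].read()
        buf.write(body.rstrip(b"\n") + b"\n")
        if buf.tell() >= PART_SIZE:       # part is full: ship it and start a new buffer
            flush_part(buf.getvalue(), part_no)
            part_no += 1
            buf = io.BytesIO()

if buf.tell():                            # final, possibly smaller, part
    flush_part(buf.getvalue(), part_no)

s3.complete_multipart_upload(Bucket=DST_BUCKET, Key=DST_KEY, UploadId=mpu["UploadId"],
                             MultipartUpload={"Parts": parts})

A single invocation won't get through 30 million keys within the 15 minute limit, so this would typically be sharded by prefix, e.g. one invocation per minute or hour folder.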

just a martian

1 minute of files exceeds 500 MB, breaking /tmp.