A quick tip on how to use the cloud to transfer humongous files between different cloud services using cURL!
For the past few years, I have been co-organizing several meetups, including JavaScript Israel and IoT Makers Israel. I am a firm believer in recording all the talks and sharing them on YouTube/Vimeo.
For some reason, the video guys sometimes like to share the raw video files through random file sharing services such as sendgb.com, filemail.com, and similar. These files are usually several gigabytes in size, and these services will only keep them for a few days.
Since my home upload speed is not very fast, I was looking for a quick solution that would let me transfer these files to a different location, such as Google Cloud Storage or Vimeo, without downloading them to my machine first and then re-uploading them to their new home.
After some fiddling, I found a nice trick that makes this super-simple:
- I go to the file sharing service website in Chrome on my machine, and open the developer tools. Then I click on "Download".
- I look at the "Network" tab of Chrome's developer tools, and find the specific line that was triggered by my download request. I right click on it, choose "Copy" and "Copy as cURL (bash)". The copied command includes all the relevant cookies, headers, and any submitted form data, making it easy to run the same HTTP request on a different machine and get the same file.
- Finally, I SSH to one of my cloud machines, paste the cURL command that I just copied, and pipe its output to a second command that will upload the file to the desired service. See some examples below.
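To give you an idea of what that looks like, here is a rough sketch of a copied command (the URL, cookie, and header values are made-up placeholders - your actual command will contain whatever the file sharing service needs to authorize the download):

curl 'https://fileshare.example.com/download/abc123/video.mp4' -H 'cookie: session=PLACEHOLDER' -H 'user-agent: Mozilla/5.0' --compressed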
Copying the file to Google Cloud Storage or AWS
For copying the file to GCS, I use the gsutil command from any Google Cloud VM (the VM needs to have write permissions for GCS), or, for smaller files (<3GB), I just fire up the Cloud Shell. The result looks like:
curl ... | gsutil cp - gs://bucket-name/filename.mp4
Where curl ... is the command you copied from Chrome DevTools, bucket-name is the target GCS bucket you want to copy the file to, and filename.mp4 is what you want to call the file.
If you work on AWS, you can use the aws s3 command to achieve a similar result:
curl ... | aws s3 cp - s3://bucket-name/filename.mp4
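One caveat worth mentioning: when aws s3 cp streams from stdin it cannot know the file size in advance, so for really huge streams (larger than about 50GB, if I remember the AWS CLI docs correctly) you need to tell it the expected size in bytes, roughly like this:

curl ... | aws s3 cp - s3://bucket-name/filename.mp4 --expected-size 64424509440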
Uploading the file directly to Vimeo
I recently found out that Vimeo supports FTP uploads on their paid plans. Thus, I use a similar method, but this time I pipe the output of curl into a different curl process that uploads it through FTP:
curl ... | curl -T - ftp://user:PASS@ftp-3.cloud.vimeo.com/video.mp4
You can use this method to upload the file to any FTP server - just replace user, PASS, and ftp-3.cloud.vimeo.com with the relevant values for your FTP account.
Uploading to other places - Google Drive, Dropbox, etc.
You can easily extend this method to upload the files to your favorite cloud provider. For instance, for Google Drive, you can use the gdrive CLI and pipe the cURL output to gdrive upload - <path>.
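Assuming gdrive is installed and authenticated on the cloud machine, the full pipeline would look something like this (filename.mp4 here is just the name the file will get in Drive):

curl ... | gdrive upload - filename.mp4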
For Dropbox, you can follow the steps in this StackOverflow answer to create a cURL command that will upload the given file to Dropbox. You only need to change --data-binary @matrices.txt into --data-binary @- so that the command gets its input from stdin, that is, from the output of the other curl command you pipe into it.
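I won't copy the whole answer here, but the end result is roughly a call to Dropbox's upload endpoint along these lines (ACCESS_TOKEN and the target path are placeholders, and the exact headers may differ from the linked answer):

curl ... | curl -X POST https://content.dropboxapi.com/2/files/upload --header 'Authorization: Bearer ACCESS_TOKEN' --header 'Dropbox-API-Arg: {"path": "/filename.mp4"}' --header 'Content-Type: application/octet-stream' --data-binary @-

Note that, as far as I know, this single-call endpoint is limited to files of about 150MB, so for the multi-gigabyte videos discussed here you would need Dropbox's upload-session endpoints instead.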
Summary
The trick presented here is quite straightforward: thanks to Chrome's DevTools, you can easily convert any network request your browser makes into a cURL command that you can run anywhere and that, in most cases, will produce the same result. I leveraged this method to transfer large files using Linux commands on the cloud, but I am sure you can find even more creative use cases. When you do, please share them with me :-)