DEV Community

Stefan Sundin for AWS Community Builders


Stable versions of shrimp and s3sha256sum

It has been almost a year since I introduced shrimp and s3sha256sum, and I think it is time to announce the changes that have been made since then.

First a quick recap of what these tools do:

  • shrimp is an interactive multipart uploader that excels at uploading very large files to S3. What differentiates shrimp from the aws cli is that it allows the user to dynamically change the bandwidth limit during the upload, and it lets the user pause the upload and resume it later. It can easily resume an upload that was interrupted for any reason.
  • s3sha256sum is basically sha256sum for S3 objects. I created it to help me validate that shrimp is uploading files correctly.
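The core idea behind s3sha256sum — stream the object in chunks and hash incrementally, so the whole file never has to fit in memory — can be sketched in a few lines of Python. This is illustrative, not the tool's actual code; the list of byte chunks stands in for an S3 GetObject stream:

```python
import hashlib

def sha256_of_stream(chunks):
    """Hash a stream of byte chunks incrementally, without
    buffering the whole object in memory."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

# Simulate streaming an object in three parts; the digest matches
# hashing the whole payload at once.
parts = [b"hello ", b"s3 ", b"world"]
print(sha256_of_stream(parts) == hashlib.sha256(b"hello s3 world").hexdigest())  # True
```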

Since releasing the source code last year, I have been incrementally adding features to both programs. shrimp now supports pretty much all of the features of the aws cli, and its command line syntax is now much closer to the aws cli's. Beyond feature parity, shrimp has also gained features that set it apart from the aws cli: a scheduler that can automatically adjust the bandwidth limit based on the day and time, and an MFA feature that can automatically generate TOTP codes, which is useful when an upload takes longer than your allowed session duration (12 hours at most).
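TOTP codes like the ones shrimp generates presumably follow RFC 6238, the standard used by AWS virtual MFA devices (an assumption about shrimp's internals, but it is the only scheme AWS accepts). A minimal Python sketch of that algorithm, using the RFC's own test key:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_base32, at=None, digits=6, period=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second
    time step, dynamically truncated to a short numeric code.
    Illustrative sketch, not shrimp's actual implementation."""
    key = base64.b32decode(secret_base32.upper())
    counter = int((at if at is not None else time.time()) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: key "12345678901234567890" (base32 below),
# time 59 seconds, 8 digits.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # 94287082
```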

In addition to adding new features, I have also been uploading a lot of large files to S3 using shrimp, all of them completed successfully and without any consistency problems. I now consider shrimp battle tested, and I can more confidently vouch for its stability.

With these improvements, I thought it was time to start publishing versioned releases with prebuilt binaries. You no longer have to compile the programs from source. Please visit the releases section of each GitHub repository to download: shrimp and s3sha256sum.

In my last blog post I wrote that shrimp is for "slow internet connections". I have since used shrimp on very fast internet connections and it is indeed capable of uploading very quickly as well (> 50 MB/s). The limiting factor is that it uploads a single part at a time, and greater speed could potentially be gained by parallelizing this process. However, I do not think this would help most people, and it would make the code much more complicated, which in turn would make it a lot harder to verify that shrimp is error-free. If you need parallel part uploading, then I recommend building a custom solution tailored to your own requirements.
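One nice consequence of uploading a single part at a time is that a dynamically adjustable bandwidth limit is simple to reason about: there is only one stream to pace. A hypothetical pacing sketch in Python (not shrimp's implementation; `rate_bps` could be changed between chunks and would take effect immediately):

```python
class RateLimiter:
    """Pace chunk sends to a target bytes-per-second rate by
    computing how long to wait before each send. A deterministic
    sketch: the caller supplies the clock and performs the sleep."""
    def __init__(self, rate_bps):
        self.rate_bps = rate_bps  # can be changed mid-transfer
        self.next_send = 0.0      # earliest time the next chunk may go out

    def delay_for(self, nbytes, now):
        """Seconds to wait before sending nbytes at the current limit."""
        wait = max(0.0, self.next_send - now)
        self.next_send = max(self.next_send, now) + nbytes / self.rate_bps
        return wait

# 1 MB chunks at 2 MB/s: each chunk is paced 0.5 s after the previous one.
lim = RateLimiter(rate_bps=2 * 1024 * 1024)
print(lim.delay_for(1024 * 1024, now=0.0))  # 0.0
print(lim.delay_for(1024 * 1024, now=0.0))  # 0.5
```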

In my next blog post I will announce a new tool that complements these two very nicely. Stay tuned! (edit: here's the blog post about s3verify)
