DEV Community

Casper

I built a file sharing service because modern ones broke wget and curl

I built https://wgetl.ink, a minimal file-sharing service designed for terminals, automation, and people who just want a real file link.

Most modern file-sharing services have turned a simple file transfer into a browser workflow: accounts, JavaScript download pages, redirect tokens, and links that don't work with tools like wget or curl.

wgetl.ink does the opposite.

You upload a file and get a real HTTP URL that behaves like a file. The helper app just saves you from remembering the wget/curl flags for things like resuming transfers; you don't have to use it, though, plain wget or curl work fine.

upload
wgetl file.zip

download
wget https://wgetl.ink/running-dog-swims

or simply

wgetl running-dog-swims
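With the plain tools, resume is `wget -c` or `curl -C -` (these are the kinds of flags the helper remembers for you). Conceptually, resuming just means re-requesting from the byte offset you already have on disk. A no-network sketch of that idea, using dd to stand in for the transfer:

```shell
# Local illustration of resume: pick up at the byte offset already on disk.
set -eu
printf 'ABCDEFGHIJ' > remote.bin                         # the full file on the "server"
dd if=remote.bin of=local.bin bs=1 count=4 2>/dev/null   # transfer cut off after 4 bytes
have=$(wc -c < local.bin)                                # how much arrived before the cut
dd if=remote.bin of=local.bin bs=1 skip="$have" seek="$have" \
   count=$((10 - have)) conv=notrunc 2>/dev/null         # fetch only the missing tail
cmp remote.bin local.bin                                 # byte-identical after resume
```

This is exactly what `wget -c` does over HTTP: it checks the partial file's size and asks the server for the remaining range.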

The goal is simple:

upload → get link → download

No accounts, no email verification, no browser-only flows.

The service is designed for developers, sysadmins, CI/CD pipelines, researchers, and anyone moving files between machines, scripts, SSH sessions, or headless servers.

There is also a cross-platform CLI helper that works on Linux, macOS, Windows, and Android.

Key features:

• direct HTTP download links that work with wget, curl, browsers, and scripts
• resumable uploads and downloads
• BLAKE3 integrity verification
• password protection and expiry policies
• human-readable share URLs (three-word slugs)
• folder uploads (automatically archived)
• stdin piping support for shell workflows
• content-addressed storage with deduplication
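The three-word slugs are what make a link like running-dog-swims typeable over SSH. A toy sketch of how such a slug could be minted; the word list here is made up, and wgetl's actual list and scheme are its own:

```shell
# Hypothetical slug minting: three random words joined by hyphens.
# The word list is illustrative only, not wgetl's.
set -eu
words="running dog swims happy cloud river stone quick blue maple"
slug=$(printf '%s\n' $words | shuf -n 3 | paste -sd- -)
printf '%s\n' "$slug"
```

With a modest word list this still gives a huge namespace (a 1,000-word list yields a billion three-word combinations) while staying far easier to read aloud or retype than a UUID.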

Example workflows:

upload a file
wgetl build.tar.zst

upload a folder
wgetl project/

upload from stdin
cat logs.txt | wgetl

download
wgetl running-dog-swims
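Re-running an upload (say, `wgetl build.tar.zst` twice in a CI job) doesn't store the file twice, because storage is content-addressed: the storage key is the hash of the bytes. The idea in miniature, with sha256sum standing in for BLAKE3 (b3sum isn't always installed) and a directory standing in for the object store:

```shell
# Content-addressed dedup in miniature: the storage key is the content hash.
set -eu
mkdir -p store
put() {
  key=$(sha256sum "$1" | cut -d' ' -f1)   # storage key = hash of the bytes
  cp "$1" "store/$key"                    # same content, same key: dedup
  printf '%s\n' "$key"
}
printf 'same bytes\n' > a.bin
printf 'same bytes\n' > b.bin             # different name, identical content
k1=$(put a.bin)
k2=$(put b.bin)                           # second "upload" adds nothing new
[ "$k1" = "$k2" ]
[ "$(ls store | wc -l)" -eq 1 ]
```

A nice side effect is that the integrity check comes for free: if the downloaded bytes hash to the storage key, the file is intact.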

The service was built because moving files between systems should still be simple and scriptable. A file link should behave like a file, not like an application page.

If you work in terminals, automation, CI pipelines, or remote servers, this might be useful.

Feedback is welcome. Thanks for reading; I hope someone finds it useful.

Top comments (1)

Apex Stack

The "a file link should behave like a file" philosophy is spot on. I run a data platform that pulls financial data across thousands of tickers, and the number of times I've needed to move generated CSV exports or database dumps between headless servers over SSH is absurd. Every time I reach for a quick file transfer, I end up fighting some service's JavaScript-heavy download page that doesn't work in a terminal session.

The three-word slug system is a really nice touch too — much easier to type over SSH than a UUID or base64 hash. The BLAKE3 integrity verification is a smart default for anyone piping files through automation scripts where silent corruption would be a nightmare to debug.

Curious about the content-addressed storage with deduplication — are you using BLAKE3 for both the integrity check and the dedup key? And is there a max file size or retention policy, or is it purely expiry-based? For CI/CD artifact sharing this could be incredibly useful if the retention window is predictable.