We built a file transfer service that keeps peer-to-peer transfers free. Not a "free tier with limits" free. Not "free until we get acquired" free. Just free, as long as two browsers can talk to each other directly.
The catch is that we had to make browsers do things they were never designed to do.
This is a write-up of what we learned pushing WebRTC and browser APIs far beyond their comfortable limits.
## TL;DR
- Browsers assume you will buffer files in memory, but that breaks fast
- Blob-based approaches OOM around ~2–4GB
- Chromium works best thanks to File System Access API
- Large fsyncs stall "completed" transfers for many minutes (~30 minutes for a 128GB file)
- Service Worker streaming avoids that entirely
- SCTP congestion control will silently kill throughput unless paced
- Tracking millions of pieces naively explodes memory
- Safari and Firefox impose real, unavoidable limits
## The problem: browsers were not built for this
Peer-to-peer file transfer sounds deceptively simple: establish a WebRTC data channel, read file chunks, send them across.
If you have never tried this at scale, you might assume the hard parts are signaling, NAT traversal, or TURN servers.
They are not.
The real problem is that browser APIs assume you have RAM to spare. A lot of it. They assume you are sending a few megabytes, maybe a video file. They do not assume you are streaming a 500GB LLM model between two machines.
A naive implementation looks like this:
```javascript
// Naive approach: read the file and send, buffering every chunk
const chunks = [];
for await (const chunk of file.stream()) {
  channel.send(chunk);
  chunks.push(chunk); // every chunk stays alive in memory
}
```
This works until around 2GB, when Chrome helpfully kills your tab for excessive memory usage. Firefox lasts a bit longer. Safari will not let you try.
The architecture itself does not require a size ceiling. Browsers do.
## Constraints we accepted
Before touching architecture, we set non-negotiable constraints:
- Browser-only. No native apps.
- No accounts required. Drop files, share a code, done.
- Any file size. No baked-in assumptions.
- No full buffering in memory.
- Must survive flaky networks.
- Memory usage must be bounded.
- Fail fast when things are broken.
These constraints eliminated most standard approaches immediately.
## The problems that almost killed us

### 1. RAM explosions with Blob-based approaches
Every tutorial suggests assembling a Blob at the end. This is catastrophic for large files.
```javascript
// Never do this for large files
const chunks = [];
// ...push every incoming piece:
chunks.push(piece);
const blob = new Blob(chunks);
```
Blob construction copies everything into a contiguous buffer. Chrome tabs die around ~4GB regardless of system RAM.
Solution: stream directly to disk.
Chromium's File System Access API allows positional writes:
```javascript
const writable = await handle.createWritable();
await writable.write({
  type: "write",
  position: offset,
  data: piece,
});
```
No accumulation. No heap growth. Pieces go straight to disk.
This works well up to ~10GB, then a new problem appears.
### 2. The fsync problem at >10GB
File System Access writes are atomic: nothing is durably committed until close().
For very large files, that final commit can take many minutes.
Users see "100% complete" and then nothing.
Solution: Service Worker streaming.
For large files, we bypass File System Access entirely and stream through a Service Worker into the browser's native download manager.
The download UI shows progress. Completion is instant when the last byte arrives. No fsync stall.
This also gives us natural pull-based backpressure: the browser only requests data as fast as it can write it.
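The pull-based shape is easy to sketch with a ReadableStream. In production the stream lives inside a Service Worker and gets wrapped in a Response so the browser's download manager becomes the consumer, but the backpressure mechanics are the same; `makeDownloadStream` and `nextPiece` are illustrative names, not our actual API:

```javascript
// Pull-based delivery: the consumer asks for data; we never push ahead
// of it. In the real path a Service Worker wraps this stream in a
// Response so the browser's download manager is the consumer.
function makeDownloadStream(nextPiece) {
  return new ReadableStream({
    async pull(controller) {
      // Called only when the consumer has drained its internal queue.
      const piece = await nextPiece();
      if (piece === null) controller.close();
      else controller.enqueue(piece);
    },
  });
}
```

Because pull() fires only when the consumer wants more, a slow disk on the receiving side automatically throttles how fast we request pieces from the network.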
### 3. Safari does not support the File System Access API

Safari has chosen not to support File System Access.
That leaves two fallbacks:
- Firefox: OPFS (sandboxed filesystem, ~10GB limit)
- Safari: hard limits
We are explicit in the UI. Safari tops out at ~4GB. Users can switch browsers or use the cloud relay.
### 4. Firefox's ~300MB Service Worker streaming bug

Firefox streaming downloads via Service Worker stall around ~300MB.
After a lot of debugging, we stopped fighting it: on Firefox, OPFS is the more reliable path.
Solution: Firefox always streams to OPFS, then triggers a single download when complete.
### 5. Piece tracking explodes memory
A 1TB file split into 64KB pieces produces ~16 million pieces.
Tracking each piece individually is not viable:
```javascript
const received = new Set(); // explodes
```
Solution: water-level tracking.
We track the highest contiguous piece index plus a small bounded set for out-of-order pieces.
Memory stays O(1), independent of file size.
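A minimal sketch of water-level tracking (class and field names are illustrative, not from our codebase):

```javascript
// Water-level piece tracker: bounded memory regardless of file size.
// Only out-of-order pieces above the contiguous "water level" are stored.
class PieceTracker {
  constructor() {
    this.highWater = -1;      // highest index such that 0..highWater all arrived
    this.pending = new Set(); // out-of-order pieces above the water level
  }

  markReceived(index) {
    if (index <= this.highWater) return; // duplicate, ignore
    this.pending.add(index);
    // Advance the water level through any now-contiguous pieces.
    while (this.pending.has(this.highWater + 1)) {
      this.pending.delete(this.highWater + 1);
      this.highWater++;
    }
  }

  has(index) {
    return index <= this.highWater || this.pending.has(index);
  }
}
```

With in-order delivery the pending set stays empty; it only grows by the handful of pieces currently in flight out of order, not by the 16 million a naive set would hold.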
### 6. SCTP congestion control kills throughput
WebRTC uses SCTP. SCTP has congestion control.
Burst too hard and throughput collapses over time.
Solution: adaptive pacing based on channel buffer pressure.
Smooth, boring, predictable throughput beats fast-then-slow every time.
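A simplified, fixed-threshold version of the pacing loop, built on the data channel's `bufferedAmount` and the `bufferedamountlow` event. The real implementation adapts its thresholds to buffer pressure; the numbers here are illustrative:

```javascript
// Pace sends against the data channel's internal buffer instead of
// blasting pieces as fast as the file can be read.
// Thresholds are illustrative, not tuned values from our codebase.
const HIGH_WATER = 1 * 1024 * 1024; // pause sending above 1MB buffered
const LOW_WATER = 256 * 1024;       // resume once the buffer drains below 256KB

async function sendPaced(channel, pieces) {
  channel.bufferedAmountLowThreshold = LOW_WATER;
  for (const piece of pieces) {
    if (channel.bufferedAmount > HIGH_WATER) {
      // Wait for SCTP to drain instead of growing the buffer unboundedly.
      await new Promise((resolve) =>
        channel.addEventListener("bufferedamountlow", resolve, { once: true })
      );
    }
    channel.send(piece);
  }
}
```

Keeping the buffer in a narrow band means SCTP's congestion window never sees the bursts that trigger its collapse-and-recover cycle.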
### 7. ACK storms on large transfers
Per-piece ACKs do not scale.
Solution: batched water-level ACKs.
"I have received everything up to X, plus these few gaps."
Message size stays constant.
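One way to build such an ACK message (field names and the gap cap are illustrative):

```javascript
// Batched water-level ACK: constant-size message no matter how many
// pieces have arrived. "upTo" is the highest contiguous index; "gaps"
// lists the few missing indices above it, capped so the message
// never grows unboundedly.
function buildAck(highWater, pending, maxGaps = 64) {
  const seen = [...pending].sort((a, b) => a - b);
  const gaps = [];
  let expected = highWater + 1;
  for (const index of seen) {
    // Everything between the last seen piece and this one is missing.
    for (let i = expected; i < index && gaps.length < maxGaps; i++) gaps.push(i);
    expected = index + 1;
  }
  return { upTo: highWater, gaps };
}
```

The sender only needs `upTo` to advance its retransmit window and `gaps` to know what to resend, so one small message replaces millions of per-piece ACKs.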
### 8. Receiver disk slower than network

If the receiver's disk writes slower than the network delivers, pieces pile up in memory until the tab OOMs.
Solution: explicit backpressure signaling.
When the receiver buffer exceeds threshold, it signals the sender to pause. Reads stop. Network naturally drains.
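A sketch of the signaling side, with illustrative thresholds and message names:

```javascript
// Receiver-side backpressure: count bytes queued for disk and tell the
// sender to pause/resume over the control channel. Thresholds and
// message shapes are illustrative, not from our codebase.
const PAUSE_ABOVE = 128 * 1024 * 1024; // 128MB queued: ask the sender to pause
const RESUME_BELOW = 32 * 1024 * 1024; // 32MB queued: ask the sender to resume

class BackpressureGauge {
  constructor(sendControl) {
    this.sendControl = sendControl; // e.g. (msg) => channel.send(JSON.stringify(msg))
    this.queuedBytes = 0;
    this.paused = false;
  }

  onPieceQueued(byteLength) {
    this.queuedBytes += byteLength;
    if (!this.paused && this.queuedBytes > PAUSE_ABOVE) {
      this.paused = true;
      this.sendControl({ type: "pause" });
    }
  }

  onPieceFlushed(byteLength) {
    this.queuedBytes -= byteLength;
    if (this.paused && this.queuedBytes < RESUME_BELOW) {
      this.paused = false;
      this.sendControl({ type: "resume" });
    }
  }
}
```

The gap between the two thresholds is deliberate: hysteresis keeps the transfer from flapping between pause and resume on every write.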
## Memory bounds per transfer
| Component | Max |
|---|---|
| Piece queue | 128MB |
| Overflow buffer | 256MB |
| Out-of-order tracking | 128MB |
| Service Worker buffer | 16MB |
| Browser internal buffers | ~4MB |
| Total max | ~532MB |
The key point is not the absolute numbers — it is that memory usage is bounded and independent of file size.
In practice, usage is usually under 50MB.
## What actually worked
- Streaming to disk, never memory
- Bounded buffers everywhere
- Positional writes (idempotent)
- Adaptive pacing instead of bursts
- Water-level tracking
- Pull-based Service Worker downloads
- Failing fast when networks are broken
## Limits that still exist
- Mobile browsers aggressively suspend background tabs; we have not gotten mobile working at all yet
- Safari has a hard ~4GB limit
- Firefox OPFS caps around ~10GB
- TURN relaying is expensive
- "Unlimited" does not bypass physics
## Why there is also a cloud path
P2P requires both peers online. Async transfers do not.
For that case, we upload to encrypted distributed storage (Storj) and deliver when the receiver is ready. That is the paid part.
P2P stays free because it costs us almost nothing.
## Open questions
- Real resumable sessions
- QUIC / WebTransport vs SCTP
- Mobile execution
- Browser vendor cooperation
## Closing
The code that makes this work is roughly:
- ~1,200 lines for streaming + disk handling
- ~2,300 lines for pacing, ACKs, retries, and backpressure
About 3,500 lines of JavaScript to make browsers do something they were not designed to do.
If you want to see the result: perkoon.com/create. Drop a file, share the code, watch it stream. No signup, no limits, no drama.
If you're building something similar and want to compare notes on WebRTC cursed knowledge, we wrote a deeper dive on the WebRTC protocol layer and we're always around on Discord.
Built Perkoon — P2P file transfer, free forever. Discord if you want to talk about it.