TL;DR: I built a P2P file transfer tool that runs entirely in the browser. No install, no server relay, no account. Here's what I learned about WebRTC data channels, resumable transfers, and the browser storage mess along the way.
Most file transfer tools follow the same pattern: upload to a server, get a link, the recipient downloads from that server. Your file sits on someone else's infrastructure, at least for a while.
I wanted to try a different approach: what if the file never leaves the browser at all?
That question turned into TransP2P — a browser-based P2P file transfer tool built on WebRTC. In this post I'll walk through the technical decisions, the parts that were harder than expected, and the two browser storage paths that almost no one talks about.
The basic architecture
The flow is straightforward in theory:
- Both devices open the site (no install, no account).
- A WebSocket signaling server helps them exchange SDP offers/answers and ICE candidates.
- A direct WebRTC data channel is established.
- The file is chunked and sent over the data channel.
- The receiver reassembles the chunks and saves the file.
The signaling server never touches the file data. Once the connection is established, its only remaining job is connection housekeeping, such as helping the peers find each other again after a drop.
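To make the handshake concrete, here's a minimal sketch of the sender side. This is not TransP2P's actual code: `signaling` is a hypothetical wrapper around the WebSocket connection, and error handling is omitted.

```ts
// Hypothetical signaling wrapper (an assumption, not TransP2P's API).
declare const signaling: {
  send(msg: unknown): void;
  onMessage(cb: (msg: any) => void): void;
};

const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});

// Create the data channel before the offer so it's negotiated in the SDP.
const channel = pc.createDataChannel("file-transfer");
channel.binaryType = "arraybuffer";

// Relay ICE candidates to the peer as they're gathered.
pc.onicecandidate = (e) => {
  if (e.candidate) signaling.send({ type: "ice", candidate: e.candidate });
};

// Offer/answer exchange over the signaling server.
await pc.setLocalDescription(await pc.createOffer());
signaling.send({ type: "offer", sdp: pc.localDescription });

signaling.onMessage(async (msg) => {
  if (msg.type === "answer") await pc.setRemoteDescription(msg.sdp);
  else if (msg.type === "ice") await pc.addIceCandidate(msg.candidate);
});
```

The receiver does the mirror image: it listens for the offer, sends back an answer, and picks up the channel via `pc.ondatachannel`.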
Chunking everything — even when you don't need to
One early decision that paid off: every file is chunked during transfer, regardless of whether resumable transfer is enabled.
The alternative, sending the file as a single message, seems simpler but creates problems fast: data channel messages have a maximum size (commonly around 256 KB), reading a large file into memory blocks the main thread, and any network hiccup means starting over.
With chunking, each piece is typically 64 KB. The sender streams chunks over the data channel; in the default (non-resumable) mode the receiver buffers them in memory and writes the assembled file to disk when the download completes. That's the fastest path because there's no intermediate disk I/O during the transfer itself: the bottleneck is data channel throughput, not your disk.
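One detail worth calling out: `RTCDataChannel.send()` queues data in an internal buffer, and pushing chunks as fast as a loop can run will overflow it. Here's a sketch of chunked sending with backpressure; the thresholds are illustrative, not TransP2P's actual values.

```ts
const CHUNK_SIZE = 64 * 1024;        // 64 KB, as above
const HIGH_WATER_MARK = 1024 * 1024; // pause at 1 MB queued (illustrative)

async function sendFile(channel: RTCDataChannel, file: File): Promise<void> {
  channel.bufferedAmountLowThreshold = HIGH_WATER_MARK / 2;
  for (let offset = 0; offset < file.size; offset += CHUNK_SIZE) {
    // Backpressure: if the channel's internal send buffer is full, wait
    // for the browser to drain it before queueing the next chunk.
    if (channel.bufferedAmount > HIGH_WATER_MARK) {
      await new Promise<void>((resolve) =>
        channel.addEventListener("bufferedamountlow", () => resolve(), { once: true }),
      );
    }
    const chunk = await file.slice(offset, offset + CHUNK_SIZE).arrayBuffer();
    channel.send(chunk);
  }
}
```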
The resumable transfer problem (and why there are two solutions)
Resumable transfer was where things got interesting — and where browser differences forced two completely separate implementations.
The goal
If a transfer is interrupted (network drops, user closes the tab), the receiver should be able to resume from where it left off, not start from zero.
The core idea is simple: track which chunks have been received, and tell the sender to start from chunk N on reconnect.
But where you store those chunks is the hard part, because browsers don't give you raw disk access. And they definitely don't all give you the same APIs.
Path 1: OPFS + File System Access API (Chrome, Edge, Firefox)
Modern Chromium-based browsers and newer Firefox versions support the Origin Private File System (OPFS). OPFS lets you write files in a sandboxed origin-private directory, and it's fast — writes happen off the main thread when used with Web Workers.
My implementation:
- File chunks are written directly into OPFS using a writable stream from a Web Worker; a sketch of the worker follows this list. This bypasses the browser's memory limit entirely: you can handle files that are gigabytes in size.
- File metadata (chunk count, received chunk indices, file name, total size) is stored in IndexedDB, because it's small, structured, and queryable.
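Here's a rough sketch of the receiving worker. It uses OPFS synchronous access handles, which are only available inside workers; the per-chunk message shape is an assumption, not TransP2P's actual protocol.

```ts
// receiver-worker.ts: runs in a Web Worker. createSyncAccessHandle()
// is only available in workers. The message shape is hypothetical.
const CHUNK_SIZE = 64 * 1024;
let handle: FileSystemSyncAccessHandle | undefined;

self.onmessage = async (
  e: MessageEvent<{ fileName: string; index: number; data: ArrayBuffer }>,
) => {
  const { fileName, index, data } = e.data;

  // Lazily open the OPFS file on the first chunk. (A production version
  // would serialize this open against concurrent messages.)
  if (!handle) {
    const root = await navigator.storage.getDirectory(); // OPFS root
    const fileHandle = await root.getFileHandle(fileName, { create: true });
    handle = await fileHandle.createSyncAccessHandle();
  }

  // Write the chunk at its absolute offset; out-of-order arrival is fine.
  handle.write(new Uint8Array(data), { at: index * CHUNK_SIZE });
  handle.flush();
};
```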
On reconnection, the receiver queries IndexedDB to check whether the transfer was completed for that file. If not, it sends the last successfully received chunk index to the sender via the signaling channel. The sender skips ahead and starts from that chunk.
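The resume check itself is small. A sketch, assuming a metadata record shaped like the one described above and a hypothetical `getMeta` helper over IndexedDB:

```ts
interface TransferMeta {
  fileName: string;
  totalSize: number;
  totalChunks: number;
  receivedChunks: number[]; // indices already written to disk
  complete: boolean;
}

// Hypothetical IndexedDB helper; any wrapper works here.
declare function getMeta(fileId: string): Promise<TransferMeta | undefined>;

async function resumeFrom(fileId: string): Promise<number> {
  const meta = await getMeta(fileId);
  if (!meta || meta.complete) return 0; // nothing to resume: start fresh
  // With an ordered channel the received indices are contiguous, but
  // scanning for the first gap is cheap and more robust.
  const have = new Set(meta.receivedChunks);
  let next = 0;
  while (have.has(next)) next++;
  return next;
}

// On reconnect, e.g.:
// signaling.send({ type: "resume", fileId, fromChunk: await resumeFrom(fileId) });
```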
Path 2: IndexedDB only (Safari / iOS)
Safari does not support OPFS access from Web Workers, and the File System Access API is entirely unavailable on iOS. That means you can't stream to disk off the main thread.
For Safari users, the fallback is IndexedDB for everything — both the file chunks and the metadata.
This works, but it has a ceiling. IndexedDB buffers writes in memory before they're committed to disk, so very large files eventually hit Safari's memory limit. On iOS in particular the limits are strict: file size is effectively capped, so the tool degrades gracefully by showing a size warning before the transfer starts.
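For completeness, here's what the chunks-in-IndexedDB fallback can look like. The store name and composite key are illustrative, not TransP2P's actual schema.

```ts
// Open (or create) the chunk store. Runs on the main thread in Safari.
function openChunkDB(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open("transfers", 1);
    req.onupgradeneeded = () => {
      // One record per chunk, keyed by [fileId, chunkIndex].
      req.result.createObjectStore("chunks", { keyPath: ["fileId", "index"] });
    };
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// Persist a single received chunk.
function storeChunk(
  db: IDBDatabase,
  fileId: string,
  index: number,
  data: ArrayBuffer,
): Promise<void> {
  return new Promise((resolve, reject) => {
    const tx = db.transaction("chunks", "readwrite");
    tx.objectStore("chunks").put({ fileId, index, data });
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}
```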
It's not ideal, but it's honest. The tool tells you what your browser can and can't do, rather than failing silently.
Signaling: keep it minimal
The signaling server's only job is to relay SDP and ICE candidates between peers until the WebRTC connection is established. I use WebSocket for this — it's simple, widely supported, and low-latency.
The server doesn't authenticate users (no account needed), doesn't store messages beyond the active session, and doesn't know what's being transferred. Once the data channel opens, the signaling server becomes irrelevant to the transfer.
For production, you do need STUN/TURN servers so peers behind NAT can connect. I use a combination of public STUN servers and a self-hosted TURN server for scenarios where UDP is blocked.
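The configuration side of that is small. An illustrative setup with placeholder hostnames and credentials (a real deployment would typically issue short-lived TURN credentials instead of static ones):

```ts
const config: RTCConfiguration = {
  iceServers: [
    // Public STUN: enough when both peers can do UDP hole punching.
    { urls: "stun:stun.l.google.com:19302" },
    // Self-hosted TURN: relays traffic when a direct path fails;
    // the turns:/tcp variant covers networks that block UDP.
    {
      urls: [
        "turn:turn.example.com:3478?transport=udp",
        "turns:turn.example.com:5349?transport=tcp",
      ],
      username: "user",
      credential: "secret",
    },
  ],
};
const pc = new RTCPeerConnection(config);
```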
Bypassing browser memory limits
This is the part that surprised me most: the data channel itself doesn't have a memory problem — the receiver's storage strategy does.
If you receive a 2 GB file and try to reassemble it in an ArrayBuffer, you'll crash the browser. The key is streaming the incoming chunks directly to storage (OPFS or IndexedDB) without holding the whole file in memory.
With OPFS + a Web Worker, this works beautifully — the main thread stays responsive, and the file never exists as a single in-memory object. This is how TransP2P can handle multi-gigabyte files on supported browsers.
Safari and iOS Safari can't do this, which brings us back to the two-path problem. It's a good reminder that "works in the browser" and "works equally across all browsers" are two different statements.
Async mode: when P2P isn't possible
P2P is great when both peers are online simultaneously. But sometimes you want to send a file to someone who isn't online yet, or the connection keeps failing.
For this, TransP2P has an optional server cache mode. You upload the file to the server (encrypted, with a randomly generated key that's never sent to the server), get a download link or code, and the recipient retrieves it later. Files expire automatically — either after a set time or after a download count is reached.
This is a paid feature because it consumes server resources, but the encryption design means the server never sees the file contents. The encryption key is derived client-side and embedded in the download link fragment (the part after #), so it never reaches the server at all.
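A sketch of that design using the Web Crypto API, simplified to a single-buffer encrypt (a real implementation would encrypt chunk by chunk for large files; the link format is a placeholder):

```ts
// Client-side encryption sketch. FILE_ID and the domain are placeholders.
async function encryptForUpload(file: File) {
  const key = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    true, // extractable, so it can be exported into the link
    ["encrypt", "decrypt"],
  );
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    await file.arrayBuffer(),
  );

  // Export the key and embed it after '#'. The IV is not secret and can
  // be uploaded alongside the ciphertext.
  const raw = new Uint8Array(await crypto.subtle.exportKey("raw", key));
  const fragment = btoa(String.fromCharCode(...raw)); // URL-safe encoding preferred in practice
  return { ciphertext, iv, link: `https://example.com/d/FILE_ID#${fragment}` };
}
```

On the download page, the key is read back from `location.hash`; browsers never include the fragment in requests, so the server only ever sees ciphertext.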
Things I wish I knew earlier
ICE candidate gathering takes time. Don't assume the connection is stuck — it often just needs a few more seconds. Show feedback to the user.
Mobile browsers are not desktop browsers. iOS Safari has the tightest restrictions of any modern browser. Test on it early, not after launch.
Data channel reliability vs. speed is a tradeoff. `ordered: true` gives you guaranteed ordering but can stall on packet loss. For file transfer, I went with `ordered: true` and chunk-level checksumming; the stall is rare in practice, and ordering bugs are worse.
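As a sketch, the channel setup plus a per-chunk integrity check might look like this. The hash-per-chunk scheme is illustrative; DTLS already protects against on-the-wire corruption, so a check like this mainly catches application-level bugs (bad offsets, reordering mistakes).

```ts
// `pc` is an established RTCPeerConnection, set up as earlier in the post.
declare const pc: RTCPeerConnection;

const channel = pc.createDataChannel("file-transfer", {
  ordered: true, // guaranteed ordering; may stall briefly on packet loss
});

// Receiver side: verify a chunk against the hash the sender advertised.
async function verifyChunk(data: ArrayBuffer, expectedHex: string): Promise<boolean> {
  const digest = await crypto.subtle.digest("SHA-256", data);
  const hex = [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
  return hex === expectedHex;
}
```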
People don't understand "P2P". The biggest UX challenge wasn't technical — it was explaining why the file transfers without uploading. If you build something similar, spend time on the "how it works" copy.
Try it / See the code
TransP2P is free to use (the P2P mode is entirely free; server cache is paid). No account, no install.
The site also has a few longer reads on how P2P file transfer actually works and the hidden privacy risks of cloud-based file transfer — written for a general audience if you're curious about the non-technical side of this.
If you've built something with WebRTC data channels, I'd love to hear what tradeoffs you made — particularly around large file handling. The browser storage landscape is still messy in 2026, and more field reports would help everyone.
This post was originally written by the author of TransP2P. If you're working on P2P tools or browser-based file transfer, feel free to reach out.
