Rubén Cruz Fuentes
How we cut game update bandwidth by 94% building our own delta sync protocol

I'm building Raccreative Games, an indie game platform with a desktop launcher. Developers upload builds via a CLI tool called Clawdrop (Rust), and players download them through an Electron app.

Early on, the update flow was simple and painful: developer changes two textures → uploads the entire 2.2 GB build → every player downloads 2.2 GB. The S3 egress bill didn't care that 98% of those bytes hadn't changed.

I needed differential sync. What I ended up building is rac-delta, an open protocol with SDKs in Rust and Node. This post is about why existing tools didn't fit, how the protocol works, and what the real integration looks like in both a Rust CLI and an Electron app.


Why existing tools didn't work

Before building anything, I looked at what already existed:

rsync needs SSH access to both ends. That's fine for server-to-server, but completely unworkable when one end is a developer's local machine and the other is S3 with temporary credentials.

bsdiff / xdelta produce binary patches between two versions of a single file. They don't understand directories, and they require you to have both the old and new version available at the same time to generate the patch — which means you need to store full previous builds somewhere.

S3 versioning and Azure snapshots track object versions, but at the whole-object level. Downloading "just what changed" isn't a primitive they expose.

Steam / Epic patching is proprietary, closed, and inseparable from their distribution ecosystems.

The gap was clear: there was nothing storage-agnostic, open, and directory-aware. So I designed rac-delta around those three constraints.


How the protocol works

The core idea is chunking with content-addressed storage.

Every file in a directory is split into fixed-size chunks (1 MB by default, configurable). Each chunk is hashed with Blake3, and the hashes are collected into a manifest file called rd-index.json that describes the entire directory:

{
  "files": [
    {
      "path": "assets/textures/atlas.png",
      "hash": "a3f2...",
      "size": 4194304,
      "chunks": [
        { "hash": "b1c3...", "size": 1048576 },
        { "hash": "d4e5...", "size": 1048576 },
        { "hash": "f6a7...", "size": 1048576 },
        { "hash": "9b2c...", "size": 1048576 }
      ]
    }
  ]
}

To sync, you compare two rd-index.json files - local vs remote - and get back a DeltaPlan:

DeltaPlan {
  newAndModifiedFiles: FileEntry[];
  deletedFiles: string[];
  missingChunks: ChunkEntry[];   // only these get transferred
  obsoleteChunks: ChunkEntry[];
}

Only missingChunks travel over the network. Everything else already exists on the other end.
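The comparison itself is plain set logic over the two manifests. Here's a minimal sketch in TypeScript - the interfaces mirror the DeltaPlan above, but the function and its exact shape are illustrative, not the SDK's real API:

```typescript
// Illustrative sketch of the index comparison - hypothetical code,
// not the actual rac-delta SDK implementation.
interface ChunkEntry { hash: string; size: number; }
interface FileEntry { path: string; hash: string; size: number; chunks: ChunkEntry[]; }
interface RdIndex { files: FileEntry[]; }

function computeDeltaPlan(local: RdIndex, remote: RdIndex) {
  const remoteFiles = new Map(remote.files.map(f => [f.path, f]));
  const localPaths = new Set(local.files.map(f => f.path));

  // Files present locally whose whole-file hash differs remotely (or is absent)
  const newAndModifiedFiles = local.files.filter(
    f => remoteFiles.get(f.path)?.hash !== f.hash
  );
  // Files present remotely but deleted locally
  const deletedFiles = remote.files
    .map(f => f.path)
    .filter(p => !localPaths.has(p));

  // Chunk-level diff: only chunks the remote doesn't already have travel
  const remoteChunks = new Set(remote.files.flatMap(f => f.chunks.map(c => c.hash)));
  const localChunks = new Set(local.files.flatMap(f => f.chunks.map(c => c.hash)));

  const missingChunks = newAndModifiedFiles
    .flatMap(f => f.chunks)
    .filter(c => !remoteChunks.has(c.hash));
  const obsoleteChunks = remote.files
    .flatMap(f => f.chunks)
    .filter(c => !localChunks.has(c.hash));

  return { newAndModifiedFiles, deletedFiles, missingChunks, obsoleteChunks };
}
```

Note that a changed file contributes only its changed chunks to missingChunks - unchanged chunks inside a modified file are already on the remote and are skipped.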

Why Blake3? For file integrity verification, hashing speed matters more than cryptographic hardness. Blake3 is significantly faster than SHA-256 on modern hardware - relevant when you're scanning multi-gigabyte directories on every sync operation.

Why fixed chunk size? Dynamic chunking (like rsync's rolling checksum) adapts chunk boundaries to file content, which gives better deduplication when bytes are inserted or deleted in the middle of a file. Fixed chunking is simpler, more predictable, and fast enough for the use case here - game builds, where files are typically replaced wholesale rather than surgically edited. The chunk size is configurable (you might want 2 MB for large binary assets, 512 KB for smaller files), but it doesn't vary within a run - and you should keep the same size across every operation, since changing it shifts every chunk boundary and invalidates all previously stored chunk hashes.

Chunk deduplication across files: if two files share an identical region, that chunk is stored once. The rd-index.json can reference the same chunk hash from multiple files.
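Chunking plus content addressing is only a few lines. A sketch of the idea - using Node's built-in SHA-256 purely as a stand-in, since the standard library has no Blake3 (the actual protocol uses Blake3):

```typescript
import { createHash } from "node:crypto";

// Sketch: fixed-size chunking into a content-addressed store.
// SHA-256 stands in for Blake3 here only because Node's stdlib
// lacks Blake3; any content hash illustrates the mechanism.
const CHUNK_SIZE = 1024 * 1024; // 1 MB, configurable

function chunkFile(data: Buffer, store: Map<string, Buffer>) {
  const chunks: { hash: string; size: number }[] = [];
  for (let off = 0; off < data.length; off += CHUNK_SIZE) {
    const slice = data.subarray(off, off + CHUNK_SIZE);
    const hash = createHash("sha256").update(slice).digest("hex");
    // Identical regions across files hash to the same key,
    // so the chunk bytes are stored exactly once.
    if (!store.has(hash)) store.set(hash, Buffer.from(slice));
    chunks.push({ hash, size: slice.length });
  }
  return chunks;
}
```

If two files begin with the same 1 MB region, both manifests reference one stored chunk - the store holds three chunks for two 2 MB files, not four.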


The upload side: Rust CLI (Clawdrop)

The Clawdrop CLI uses the Rust SDK. Here's the core of what a push operation does - I'll show the relevant parts without the surrounding API auth logic:

// 1. Generate local rd-index.json by scanning the directory
let local_rd_index = rac_delta_client
    .delta
    .create_index_from_directory(
        &Path::new(&args.path),
        1024 * 1024,   // 1MB chunks
        Some(6),       // concurrency
        Some(args.ignore),
    )
    .await?;

// 2. Fetch the remote rd-index.json (from S3 via delete-scoped credentials)
let remote_rd_index = rac_delta_client_delete
    .storage
    .get_remote_index()
    .await?;

// 3. Compare - produces the DeltaPlan
let changes = rac_delta_client
    .delta
    .compare_for_upload(&local_rd_index, remote_rd_index.as_ref())
    .await?;

// 4. Upload only missing chunks
pipeline
    .upload_missing_chunks(
        &changes,
        Path::new(&args.path),
        args.force,
        Some(UploadOptions {
            on_progress: Some(Arc::new(move |_phase, progress, speed| {
                // progress reporting to CLI UI
            })),
            ..Default::default()
        }),
    )
    .await?;

// 5. Delete obsolete remote chunks
pipeline.delete_obsolete_chunks(&changes, None).await?;

// 6. Push the new rd-index.json (via presigned URL)
upload_extra_files(&http_client, rd_index_url, rd_index_json).await?;

One detail worth noting: Clawdrop uses two separate S3 clients with different credential scopes - one for uploading new chunks, one for deleting obsolete ones. The upload client has write-only access to a staging prefix; the delete client has read/delete access to the live prefix. This means a compromised upload credential can't touch existing files.
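As a rough sketch, the two scopes might look like the following IAM policy statements. Bucket name and prefixes are made up for illustration, and in Clawdrop each scope belongs to a separate credential - they're shown in one policy document here only for brevity:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "UploadScopeWriteOnlyStaging",
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::game-builds/staging/*"
    },
    {
      "Sid": "DeleteScopeLivePrefix",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::game-builds/live/*"
    }
  ]
}
```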


The download side: Electron launcher (Node SDK)

On the player side, the Electron main process handles downloads via IPC. Here's how the rac-delta client is initialized and the download executed:

const racDeltaClient = await RacDeltaClient.create({
  chunkSize: 1024 * 1024,
  storage: {
    type: 's3',
    bucket: credentials.bucket,
    region: credentials.region,
    pathPrefix: credentials.prefix,
    // Credential refresh: if STS token is expiring soon, 
    // ask the renderer process for fresh credentials via IPC
    credentials: () => {
      const expiration = new Date(credentials.expiration).getTime();
      const oneMinute = 60 * 1000;

      if (expiration - Date.now() > oneMinute) {
        return Promise.resolve({ ...credentials });
      }

      // Token about to expire — request fresh ones from the Angular renderer
      return new Promise((resolve, reject) => {
        event.reply('request-fresh-credentials', { game, os });
        ipcMain.once('fresh-credentials', (_event, freshCreds) => {
          freshCreds ? resolve(freshCreds) : reject(new Error('No credentials'));
        });
      });
    },
  },
});

await racDeltaClient.pipelines.download.execute(
  gamePath,
  UpdateStrategy.StreamFromNetwork,  // reconstruct files as chunks arrive
  undefined,
  {
    useExistingIndex: false,
    signal: controller.signal,  // supports abort and pause
    onProgress: (type, progress, diskUsage, speed, bytesDownloaded) => {
      if (type === 'download') {
        event.reply('download-status', {
          progress: Math.round(progress),
          speed: speed / (1024 * 1024),  // MB/s
          downloadedBytes: bytesDownloaded,
        });
      }
      if (type === 'reconstructing') {
        event.reply('disk-status', { progress, diskUsage });
      }
    },
  }
);

A few things happening here worth calling out:

Credential rotation mid-download: S3 STS tokens expire. If a download takes longer than the token lifetime, the credentials callback fires, sends an IPC message to the renderer (Angular), and waits for fresh credentials - all without interrupting the download.

StreamFromNetwork strategy: chunks are written directly to their final file position as they arrive, without buffering to RAM or a temp directory. For players with limited memory this matters - a 2 GB game doesn't require 2 GB of free RAM to update.
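The mechanics behind that are just positional writes: every chunk's final offset is known from the manifest, so it can go straight to its place in the file, in whatever order chunks arrive. A hypothetical sketch, not the SDK's internal code:

```typescript
import { open } from "node:fs/promises";

// Sketch of stream-to-final-position reconstruction: each chunk is
// written at its precomputed offset as soon as it arrives, so peak
// memory usage is one chunk, not the whole file.
async function writeChunkAt(
  path: string,
  offset: number,
  chunk: Buffer
): Promise<void> {
  // "r+" updates an existing file in place; fall back to "w+" on first write
  const fh = await open(path, "r+").catch(() => open(path, "w+"));
  try {
    await fh.write(chunk, 0, chunk.length, offset); // positional write
  } finally {
    await fh.close();
  }
}
```

Chunks can land out of order - writing offset 3 before offset 0 still yields the correct file once both arrive.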

There are also two other strategies, DownloadAllFirstToMemory and DownloadAllFirstToDisk, which do exactly what their names say before reconstructing the files.

Pause and resume: the AbortController signal lets the UI cancel mid-download. Clawdrop on the upload side has the same mechanism. Resuming picks up from the last known byte position.
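Resume works because chunk transfers are ordinary ranged GETs (S3 supports the Range header on GetObject). A tiny illustrative helper - not part of the SDK's public API - shows the idea:

```typescript
// Sketch of byte-range resume: given how many bytes are already on
// disk, build the HTTP Range header for the remainder. Hypothetical
// helper - the SDK tracks this position internally.
function resumeRange(totalSize: number, bytesOnDisk: number): string | null {
  if (bytesOnDisk >= totalSize) return null; // nothing left to fetch
  return `bytes=${bytesOnDisk}-${totalSize - 1}`;
}
```

For a 1 MB chunk interrupted after 256 KB, this yields `bytes=262144-1048575`, so only the remaining 768 KB travel on retry.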


The results

Benchmarks run on real S3 infrastructure (eu-central-1) with a 2.2 GB directory, updating approximately 5% of chunks:

| Action | rac-delta | Raw S3 |
| --- | --- | --- |
| Download transferred | 116 MB | 2,219 MB |
| Upload transferred | 115 MB | 2,210 MB |
| Download time | 35.5 s | 671.2 s |
| Upload time | 53.3 s | 268.9 s |
| Egress cost / 1,000 users | €9.66 | €184.27 |

The ~95% reduction holds as long as the actual content delta is small relative to total build size - which is the common case for incremental updates. First-time downloads are unaffected, as expected.
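As a quick sanity check on the headline number:

```typescript
// Transferred bytes under rac-delta vs a full re-download,
// using the benchmark figures from the table above.
const fullDownloadMB = 2219; // the whole 2.2 GB build
const deltaDownloadMB = 116; // only the missing chunks

const reduction = 1 - deltaDownloadMB / fullDownloadMB;
console.log(`${(reduction * 100).toFixed(1)}% less data transferred`);
// About 5.2% of the build travels, a ~94.8% reduction
```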


Protocol is open, SDKs are MIT

rac-delta is a documented open protocol - the spec describes the rd-index.json structure, the sync algorithm, and the three reconstruction strategies in enough detail to implement in any language. The Rust and Node SDKs are the reference implementations.

npm install rac-delta
cargo add rac-delta

If you're distributing large binaries - desktop app installers, ML model weights, firmware OTA, simulation assets - and "re-upload everything" is your current answer, the protocol is worth a look. The ROI calculator at racdelta.com lets you plug in your own numbers.

Rust SDK: GitHub repo
Node.js SDK: GitHub repo


If you liked this post and want to know more about rac-delta, let me know and I will write more posts about the "guts" of the library :)
