Learn how I made software feel fast even on an extremely slow 1 KB/s network.
I share practical strategies for optimizing frontend, API calls, and data handling.
📹 Real Vessel Network Test
Recorded on a vessel (using VSAT) while testing the real network speed for an old project (~1 KB/s).
The Problem
The network was extremely slow, and even small amounts of data took a long time to load.
At this speed, the system I had built for a normal internet connection could not work properly:
- Frontend loaded very slowly.
- API requests took a long time or sometimes failed.
- User interactions felt unresponsive and laggy.
Most parts of the system assumed a normal network speed.
I realized I needed to think carefully about how the frontend requests data and how the API responds, so the system could still feel usable even at 1 KB/s.
Speeding Up the Software
After seeing how the software struggled at 1 KB/s, I focused on improving every layer that affects speed and responsiveness.
Here’s how I approached it:
Frontend
I built the frontend with Preact and Vite, prioritizing fast loading and responsiveness even on extremely slow networks.
1. Why Preact?
On a 1 KB/s connection, every kilobyte matters. Using a large framework like React would make even a simple page feel heavy. That’s why I switched to Preact: it offers the same React-style API (hooks, JSX, context) but with a core size of only ~3 KB gzipped.
My project uses a microfrontend architecture, where each part of the system is built and deployed independently.
That means every app ships its own JavaScript runtime — and with React, those runtimes quickly add up to hundreds of kilobytes.
By using Preact, that overhead shrinks to just a few kilobytes per app.
This makes the overall system load faster and reduces memory usage on low-end devices, while still keeping the same React-style development experience.
Comparing Bundle Size
To see how much the choice of framework really matters, I ran a simple test.
I created three minimal “Hello World” apps (one with React, one with Next.js, and one with Preact), all stripped of CSS, assets, and icons. Then I built each project using their respective analyzers:
- Next.js with next/bundle-analyzer
- React and Preact with nonzzz/vite-bundle-analyzer
The results below show how much smaller Preact’s bundle is compared to React and Next.js.
2. Advanced Frontend Optimizations
Choosing Preact gave me a smaller starting point, but that was only the beginning.
To make the app truly usable on a 1 KB/s network, I applied several architectural and build-level optimizations.
2.1. Bundle Splitting (Dynamic Import + Suspense)
Instead of shipping one large JavaScript bundle, I split the app into smaller chunks and load only what’s needed, using dynamic import(), lazy(), and Suspense to coordinate loading and fallback UI.
import { lazy, Suspense } from 'preact/compat';

// The chunk for HeavyComponent is fetched only when it is first rendered.
const Heavy = lazy(() => import('./HeavyComponent'));

export default function App() {
  return (
    // The fallback UI is shown while the HeavyComponent chunk downloads.
    <Suspense fallback={<div>Loading...</div>}>
      <Heavy />
    </Suspense>
  );
}
2.2. Server-Side Rendering (SSR)
SSR speeds up perceived performance by sending ready-to-render HTML directly from the server.
Users see content immediately, even long before JavaScript finishes loading.
However, that HTML is static at first: the page still needs to download its JavaScript and hydrate before it becomes interactive.
That’s why the size of the client bundle still matters, and this is where Preact shines.
In short: SSR gives instant content, and Preact makes it interactive faster.
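As a minimal sketch (assuming Express, with App standing in for the root component), server-side rendering with preact-render-to-string looks like this:

import express from 'express';
import { h } from 'preact';
import renderToString from 'preact-render-to-string';
import { App } from './App.js'; // placeholder: the app's root component

const server = express();

server.get('/', (req, res) => {
  // Render the component tree to an HTML string on the server,
  // so users see content before any client JavaScript arrives.
  const html = renderToString(h(App, null));

  res.send(`<!DOCTYPE html>
<html>
  <body>
    <div id="app">${html}</div>
    <!-- The client bundle hydrates this static HTML to make it interactive -->
    <script type="module" src="/assets/main.js"></script>
  </body>
</html>`);
});

server.listen(3000);

On the client, hydrate(h(App, null), document.getElementById('app')) from preact then takes over the server-rendered markup instead of re-creating it.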
2.3. Asset Optimization & Caching
Even with SSR and small bundles, the network is often the slowest part of the system.
To minimize loading time, I focused on reducing asset size and making browser caching more effective.
All static assets (JavaScript, CSS, and images) are deployed with hashed filenames (e.g., main.[hash].js) so browsers can cache them indefinitely.
When a new build is deployed, only the changed files are downloaded again.
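Vite hashes output filenames by default; as a sketch, the pattern can also be made explicit in the build config (the exact patterns here are illustrative):

// vite.config.js
import { defineConfig } from 'vite';
import preact from '@preact/preset-vite';

export default defineConfig({
  plugins: [preact()],
  build: {
    rollupOptions: {
      output: {
        // Content-hashed names: only files that actually changed get a new
        // URL, so everything else stays in the browser cache.
        entryFileNames: 'assets/[name].[hash].js',
        chunkFileNames: 'assets/[name].[hash].js',
        assetFileNames: 'assets/[name].[hash][extname]',
      },
    },
  },
});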
Assets are served through a CDN, which delivers them from the nearest edge location. This greatly improves latency, which matters most on slow or unstable connections.
Each asset is served with:
Cache-Control: public, max-age=31536000, immutable
while HTML remains lightly cached so users always get the latest version.
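A sketch of that split with an Express static server (paths are illustrative):

import express from 'express';

const app = express();

// Hashed assets never change, so they can be cached for a year and marked immutable.
app.use('/assets', express.static('dist/assets', {
  immutable: true,
  maxAge: '1y',
}));

// HTML is revalidated on every request, so users always pick up the latest build.
app.get('*', (req, res) => {
  res.set('Cache-Control', 'no-cache');
  res.sendFile('index.html', { root: 'dist' });
});

app.listen(3000);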
Images are converted to WebP to reduce file size without visible quality loss, and lazily loaded images use placeholders so the layout appears immediately.
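In markup form, a sketch of that image pattern (paths and dimensions are illustrative):

<picture>
  <source srcset="/assets/photo.webp" type="image/webp" />
  <!-- JPEG fallback for browsers without WebP; loading="lazy" defers off-screen images -->
  <img src="/assets/photo.jpg" loading="lazy" width="640" height="480" alt="Vessel deck" />
</picture>

The explicit width and height reserve the layout space up front, acting as a placeholder until the image actually arrives.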
Backend
Even with a lightweight frontend, the app can still feel slow if the API is heavy.
To optimize both speed and payload size, I focused on several key areas:
1. Zstd Compression
The backend uses Zstd at level 21 for maximum compression, but only if the client requests it.
Compressed responses include a header indicating the compression method (Content-Encoding: zstd), while clients that don’t advertise zstd support still receive plain data.
This dramatically reduces payload size while remaining compatible.
Example: The client sends the following request:
GET /api/data HTTP/1.1
Host: example.com
Accept-Encoding: gzip, zstd
The Accept-Encoding header tells the server that the client can accept responses compressed with gzip or zstd, and prefers compressed data when possible.
Example: The server sends the following response:
HTTP/1.1 200 OK
Content-Encoding: zstd
Content-Length: 12345
<compressed data in zstd>
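A minimal sketch of that negotiation in Express, assuming a hypothetical compress(buffer, level) helper backed by a Node Zstd binding:

import express from 'express';
import { compress } from './zstd.js'; // hypothetical wrapper around a Zstd binding

const app = express();

app.get('/api/data', async (req, res) => {
  const payload = Buffer.from(JSON.stringify({ hello: 'world' }));

  // Only compress when the client advertised zstd in Accept-Encoding.
  if (req.acceptsEncodings('zstd')) {
    res.set('Content-Encoding', 'zstd');
    res.send(await compress(payload, 21)); // level 21 trades CPU for minimum size
  } else {
    res.send(payload); // clients without zstd support get plain data
  }
});

app.listen(3000);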
2. Binary Serialization with Protobuf
To minimize payload size and speed up parsing, API responses are encoded using Protobuf instead of JSON.
Protobuf is compact, fast, and easy to maintain, making it ideal for low-bandwidth scenarios.
I also considered alternatives:
- JSON: human-readable, but verbose and slower to parse.
- FlatBuffers: extremely compact and fast, but complex to maintain and harder to evolve over time.
Protobuf hits the sweet spot: small payloads, fast parsing, and maintainable schema evolution.
syntax = "proto3";

message User {
  string name = 1;
  repeated Address address = 2;
}

message Address {
  string line = 1;
}
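As a sketch, encoding and decoding this schema with protobuf.js (the values are illustrative):

import protobuf from 'protobufjs';

const root = await protobuf.load('user.proto');
const User = root.lookupType('User');

// Build and encode a message: the result is a compact binary Uint8Array.
const message = User.create({ name: 'Alice', address: [{ line: 'Pier 4' }] });
const buffer = User.encode(message).finish();

// Decoding restores a plain object that mirrors the schema.
const decoded = User.decode(buffer);
console.log(decoded.name); // "Alice"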
For real-world performance benchmarks, see protobuf.js performance.
3. HTTP/3 Transport
All API requests use HTTP/3, taking advantage of QUIC’s multiplexed streams and improved loss recovery.
Compared to HTTP/2 or HTTP/1.1, HTTP/3 reduces latency, avoids head-of-line blocking, and handles packet loss more efficiently, all of which is especially beneficial on unstable or slow networks.
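On the server side, enabling it is largely a configuration change; a sketch for nginx 1.25+ (certificate paths are illustrative):

server {
    listen 443 quic reuseport;  # HTTP/3 over QUIC (UDP)
    listen 443 ssl;             # HTTP/2 / HTTP/1.1 fallback (TCP)
    http2 on;

    ssl_certificate     /etc/ssl/example.com.pem;
    ssl_certificate_key /etc/ssl/example.com.key;

    # Advertise HTTP/3 so clients upgrade on subsequent requests.
    add_header Alt-Svc 'h3=":443"; ma=86400';
}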
Summary
Optimizing for slow networks requires attention to every layer of the stack, from frontend rendering to API transport.
Frontend optimizations:
- Preact + Vite for minimal bundle size and fast hydration
- Bundle splitting with lazy() + Suspense for component- and route-level lazy loading
- SSR to deliver instant HTML, with Preact’s small runtime enabling much faster hydration than Next.js
- Asset optimization & caching with hashed filenames, CDN delivery, and WebP images
API optimizations:
- Zstd compression (level 21) applied conditionally for maximum payload reduction
- Protobuf for compact, fast, and maintainable serialization
- HTTP/3 transport for low-latency, reliable delivery on unstable networks
Combined, these strategies allowed the app to remain fast, responsive, and usable even at 1 KB/s, while keeping the total payload minimal.
Thanks for reading!
I hope these insights give you practical ideas for building fast and responsive apps, even on extremely slow networks.