How intentional loading decisions keep your app fast at scale.
Frontend performance is not a late-stage cleanup task. It’s not tech debt. It’s a set of decisions we make every day while we code: what we load, when we load it, and how we render it. The right choice depends on how important the code is, how large it is, and when the user actually needs it.
Get that wrong, and the browser pays for everything upfront — bytes, main thread, network — whether the user ever sees it or not.
I’m Liana, a frontend engineer at Manychat. In this article we cover five patterns we use to get this right — four import strategies and compression — and how we measure whether it’s working.
Import Patterns
Manychat’s codebase contains 24 import patterns, grouped into 7 categories: static, dynamic, type, asset, style, re-exports, and legacy tooling. In practice, two dominate: static imports make up 99.6% of the codebase’s imports, dynamic imports the remaining 0.4%.
Pattern 1 — Static Imports
Static imports are required for the first meaningful paint. Layout, router, core UI — everything that must exist before the user sees anything. If it’s there on first load, it’s a static import.
Static imports also follow a strict order, enforced by our formatter: Node.js imports at the top, then React core, then other external packages, then internal dependencies, modules, and relative imports.
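As a sketch, that ordering might look like this (all module names and paths here are illustrative, not our actual files):

```tsx
// 1. Node.js imports
import path from 'node:path';

// 2. React core
import { useState } from 'react';

// 3. Other external packages
import { createSelector } from 'reselect';

// 4. Internal dependencies, modules, and relative imports
import { Layout } from 'app/components/Layout';
import { formatDate } from './utils/formatDate';
```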
Pattern 2 — Dynamic Imports
0.4% of the codebase. Small number, high impact.
They stand out in the code: a dynamic import lives in a separate file that defines the module path and declares what should be loaded lazily.
Where do they appear in the UI? On every heavy screen: each route becomes a separate chunk that is loaded only on navigation.
A good example is the Flow Player page: you reach it by creating an automation, sharing it from the CMS list, and handing the link to someone outside Manychat. It’s heavy, and there’s no reason to pay for it on app load.
That is the point of a dynamic import: the user doesn’t pay for the code until they actually go there.
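A minimal sketch of such a lazy route, assuming React.lazy with react-router-dom (component names and paths are hypothetical):

```tsx
import { lazy, Suspense } from 'react';
import { Routes, Route } from 'react-router-dom';

// The bundler turns this into a separate chunk, fetched only on navigation.
const FlowPlayerPage = lazy(() => import('./pages/FlowPlayerPage'));

export function AppRoutes() {
  return (
    <Suspense fallback={<div>Loading…</div>}>
      <Routes>
        <Route path="/flow-player/:id" element={<FlowPlayerPage />} />
      </Routes>
    </Suspense>
  );
}
```

Until the user navigates to `/flow-player/:id`, the page’s chunk never leaves the CDN.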
Pattern 3 — Import on Interaction
This is used for optional UI — modals, popovers, or similar things triggered by the user.
We don’t actually use this pattern in our codebase, but we use something very similar: import on render, which lazy-loads on mount rather than on interaction. You can see this in our modals, all of which work the same way. Why? Because our modals are very lightweight, so there’s no need for import-on-interaction specifically. Every modal renders immediately inside Suspense and loads its chunk lazily, avoiding the cost of features many users never open.
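An import-on-render modal can be sketched like this (component names are hypothetical, not our actual code):

```tsx
import { lazy, Suspense, useState } from 'react';

// The modal's code is a separate chunk, fetched the first time it mounts.
const ExportModal = lazy(() => import('./ExportModal'));

export function ExportButton() {
  const [open, setOpen] = useState(false);
  return (
    <>
      <button onClick={() => setOpen(true)}>Export</button>
      {open && (
        <Suspense fallback={null}>
          <ExportModal onClose={() => setOpen(false)} />
        </Suspense>
      )}
    </>
  );
}
```

Users who never click Export never download the modal’s chunk.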
Pattern 4 — Import on Visibility
The component loads when it enters the viewport. This avoids competing with the initial render and keeps the initial payload smaller.
A good example is infinite scroll in TikTok and Instagram automation. When you want to pick a post or reel, a modal opens — and if you have a lot of them, you get an infinite scroll. It avoids loading chunks the user hasn’t scrolled to yet. We have a reusable sentinel component that handles this across the app.
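A sketch of such a sentinel, built on the browser’s IntersectionObserver (the component and its API are illustrative, not our actual implementation):

```tsx
import { useEffect, useRef, useState, type ReactNode } from 'react';

// Hypothetical reusable sentinel: renders children only once it scrolls into view.
export function VisibilitySentinel({ children }: { children: ReactNode }) {
  const ref = useRef<HTMLDivElement>(null);
  const [visible, setVisible] = useState(false);

  useEffect(() => {
    const el = ref.current;
    if (!el) return;
    const observer = new IntersectionObserver(([entry]) => {
      if (entry.isIntersecting) {
        setVisible(true);      // mount the lazy children
        observer.disconnect(); // we only care about the first appearance
      }
    });
    observer.observe(el);
    return () => observer.disconnect();
  }, []);

  return <div ref={ref}>{visible ? children : null}</div>;
}
```

The first render mounts only the lightweight sentinel; the heavy children (and their chunks) appear once the user scrolls them into view.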
Why not just one import pattern?
One pattern doesn’t fit all. Our goal is a right-sized chunk for each piece of code, because every chunk costs bytes, main-thread time, and network round trips. We classify code the same way we classify error severity:
- Critical — load immediately (static)
- Heavy — load on navigation (dynamic)
- Optional — load on interaction
Can we combine them?
Yes — in layers, not as one mega-pattern. And you don’t literally need every technique. That’s overengineering. Define what you want to optimize and why.
Why don’t we use some patterns — hover/focus, prefetch, idle prefetch?
It’s always a question of cost versus benefit, and prefetching interacts with caching and CDN behavior in ways that are easy to get wrong. Sometimes absence is a deliberate priority, not a disagreement with the idea.
Will it actually change performance?
Like other optimization patterns — it depends on what you measure and where it hurts. Treat them as targeted experiments. They’re useful when you want a specific interaction to feel immediate.
Compression pattern
Imports decide what code loads and when — compression decides how much it weighs when it gets there. Where imports operate at the application bundle layer, compression operates at the origin server and CDN — how you finally deliver bytes to the client.
At Manychat we use two compression algorithms: Gzip and Brotli. Both shrink files before they travel over the network, and browsers decompress them transparently. Both are lossless encodings for text-like content (JS, CSS, HTML, JSON, SVG), not for already-compressed binaries like most JPEG/PNG files.
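A small Node sketch makes both claims concrete: each encoding shrinks repetitive text dramatically, and decompression restores the exact original bytes (the payload here is synthetic):

```typescript
import { gzipSync, gunzipSync, brotliCompressSync, brotliDecompressSync } from 'node:zlib';

// A repetitive, text-like payload: the kind of content that compresses well.
const payload = Buffer.from(
  JSON.stringify({ users: Array(500).fill({ name: 'chatbot', active: true }) })
);

const gz = gzipSync(payload);
const br = brotliCompressSync(payload);

console.log(payload.length, gz.length, br.length); // compressed sizes are far smaller

// Lossless: decompression restores the exact original bytes.
console.log(gunzipSync(gz).equals(payload));           // true
console.log(brotliDecompressSync(br).equals(payload)); // true
```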
You can check this in the Network tab: open any JS file and look for content-encoding: br or content-encoding: gzip in the response headers.
Gzip is the classic option, supported by every browser. Brotli is a little slower to compress but produces smaller files. Browsers and CDNs pick the best mutually supported algorithm via the Accept-Encoding header, so supporting both gives a solid baseline with Gzip and better sizes where Brotli is available.
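That negotiation can be sketched as picking the first mutually supported algorithm from a server-side preference list (a simplified model; real servers also honor the quality values in Accept-Encoding):

```typescript
// Server-side preference: Brotli first, Gzip fallback, identity as a last resort.
const SERVER_PREFERENCE = ['br', 'gzip', 'identity'];

function pickEncoding(acceptEncoding: string): string {
  // Parse "gzip, deflate, br;q=0.9" into ['gzip', 'deflate', 'br'],
  // ignoring quality values for this sketch.
  const offered = acceptEncoding
    .split(',')
    .map((token) => token.trim().split(';')[0].toLowerCase());

  for (const encoding of SERVER_PREFERENCE) {
    if (offered.includes(encoding)) return encoding;
  }
  return 'identity';
}

console.log(pickEncoding('gzip, deflate, br')); // 'br'
console.log(pickEncoding('gzip, deflate'));     // 'gzip'
```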
Rule: prefer Brotli for static assets where supported; keep Gzip as fallback where needed.
How do we know the patterns are working?
With user-centric metrics and repeatable team habits.
We rely on Core Web Vitals (loading experience, responsiveness, and visual stability) via Google’s web-vitals library. When a user stays on a page long enough, an analytics event fires: app_metrics user_interactive_performance. The central place in the codebase is log_web_vitals.
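Wiring that up with the web-vitals package looks roughly like this (the reporter and analytics client are hypothetical stand-ins for the actual log_web_vitals code):

```typescript
import { onCLS, onINP, onLCP, type Metric } from 'web-vitals';

// Hypothetical analytics client; the real pipeline lives in log_web_vitals.
declare const analytics: { track(event: string, payload: object): void };

function logWebVitals(metric: Metric) {
  analytics.track('app_metrics', {
    event: 'user_interactive_performance',
    name: metric.name,     // e.g. 'LCP', 'CLS', 'INP'
    value: metric.value,
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
  });
}

onCLS(logWebVitals);
onINP(logWebVitals);
onLCP(logWebVitals);
```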
Alongside the headline Core Web Vitals scores, we track two signals closely:
- INP (Interaction to Next Paint) — real-world interaction responsiveness
- Long Tasks — moments where the user can feel that the app is stuck
Both are collected only for logged-in users and are visible in Grafana dashboards. For Flow Builder specifically, the heaviest part of Manychat, we track INP on both desktop and mobile. In an ideal world every major component would have its own dashboard; for now, we start where it hurts most.
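Long Tasks come straight from the browser’s PerformanceObserver API; a minimal sketch (the console reporting is illustrative, real code would feed the analytics pipeline):

```typescript
// A "long task" is any main-thread task over 50 ms: work the user can feel.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.duration is how long the main thread was blocked.
    console.warn(`Long task: ${Math.round(entry.duration)} ms`);
  }
});

// buffered: true also delivers long tasks that happened before we subscribed.
observer.observe({ type: 'longtask', buffered: true });
```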
For audits and payload analysis we use Lighthouse — a built-in Chrome tool that generates a detailed performance report for any page. It’s useful for catching issues before they reach real users.
For day-to-day development we use browser DevTools — the network and performance tabs show what’s happening in real time while we code.
Performance is not a one-time fix. Every import decision, every byte that travels over the wire — these are choices that compound over time. Get them right consistently, and users never notice. Get them wrong, and they leave.
The patterns we covered — imports and compression — are not exotic optimizations. They’re the baseline. The metrics are what keep you honest: if you can’t see it, you can’t improve it.
If you want to learn more about how we build Manychat and who we’re currently looking for, check out Manychat Careers.