This is the second article in a series about migrating a production healthcare platform from AngularJS to Next.js. The first article covered how we got here — 10 years of technical debt, ecosystem decay, and a failed AI migration attempt. This one is about the approach that actually works.
Why Big-Bang Rewrites Were Never an Option
When you run a healthcare SaaS platform used by clinical staff daily, you can't just flip a switch and hope for the best. You can't take the system down for a migration weekend. You can't risk broken workflows for users who are tracking quality events in hospitals. And you definitely can't pause feature development for months while you rewrite everything.
We knew this from experience. Our designer attempted a Metronic UI framework upgrade twice over the years. Both times followed the same pattern: months of isolated work, then an attempt to merge against a codebase that had moved on without him. The first time, weeks of merge conflicts. The second time — years later — he couldn't even get the project to start. Both attempts were abandoned entirely.
These weren't failures of effort or skill. They were proof that any migration approach requiring a "stop the world" phase is fundamentally broken for a living product with active development.
The Inspiration
The idea came from an unexpected place. I remembered how PrivatBank — one of the largest Ukrainian banks — migrated their web interface sometime in the 2010s. When you logged in, some pages would open in the old design, others in the new. The most common user flows were on the new UI first. Over time, more pages switched over until the old interface simply disappeared.
It took me a couple of months to fully articulate this concept to management. The abstract idea — "we'll run both frontends in parallel and switch pages gradually" — made sense in theory, but it wasn't clicking. Would users see two different apps? How would authentication work? How would we know which pages are ready?
Once we actually built a working prototype and deployed it, management immediately saw how powerful it was. Migration progress became visible and tangible. Not some abstract "we're working on it" buried in a backlog, but real screens that real users could interact with, with instant rollback to the old UI if anything was wrong.
We called it simply "the switch."
Why Our Backend Made This Possible
Before diving into the switch implementation, it's worth explaining why swapping the frontend was feasible at all: our backend was effectively frontend-agnostic from day one.
Over the years, several clients asked for direct API access to build their own integrations. I provided them with API structure documentation and even sample JavaScript snippets with plain AJAX calls — how to authenticate, how to pull data, how to submit records. They built their own tools on top of our backend, completely independent of our UI.
This meant our backend wasn't coupled to AngularJS in any meaningful way. It was a standalone API that happened to have an AngularJS consumer. Adding a Next.js consumer was architecturally no different from adding another client's custom integration. The frontend is just one of potentially many consumers.
On top of that, we had enforced backend-first security from the very beginning — every API call validates access levels server-side, regardless of what the frontend claims. This decision, made in 2015, turned out to be one of the most consequential of the entire project. When we swapped the frontend, the security model didn't need to change at all.
The Architecture
Modular nginx Configuration
The foundation was already in place thanks to our sysadmin's approach to nginx configuration. Instead of monolithic config files, he had structured everything modularly: the main nginx.conf includes all files from conf.d/, where each domain gets its own .conf file. Over 10 years of domain changes, mirror setups, and SSL certificates, this modularity proved invaluable — a programmer's approach to server configuration.
For the switch, we needed two server blocks: one for the old frontend, one for the new. Both proxying /backend/ to the same application server. The critical difference: the old frontend serves static files from a directory (classic AngularJS), while the new one proxies to a Next.js process on port 3000.
Here's the simplified structure of the new frontend's server block:
```nginx
server {
    listen 443 ssl http2;
    server_name new.app.example.com;

    # The switch configuration — a single JSON file
    # that both frontends consume
    location /frontend-redirect-config.json {
        add_header Access-Control-Allow-Origin "*";
        default_type application/json;
        alias /srv/nextjs/config/frontend-redirect-config.json;
    }

    # Backend API — same upstream as the old frontend
    location /backend/ {
        proxy_pass http://localhost:18640/;

        # WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_cookie_path / "/; secure; HttpOnly; SameSite=none";
    }

    # Next.js application
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_cookie_path / "/; secure; HttpOnly; SameSite=none";
    }
}
```
The old frontend's server block looks similar for /backend/, but serves static files directly and carries a decade of accumulated rewrite rules, SAML integration paths, and legacy routing logic. The contrast is striking — and a good reminder of why we're migrating.
Shared Authentication
Both frontends talk to the same backend, and authentication is handled through HTTP-only cookies — a decision we made years ago during security reviews, long before migration was on the table. Because both server blocks set the same cookie parameters (Secure; HttpOnly; SameSite=none), a user logged into one frontend is automatically authenticated on the other. No duplicate login, no token juggling.
There's one catch that cost us some time. The old frontend had been using HTTP-only cookies for authentication for years — this was established during security reviews long ago. But the team building the new frontend somehow implemented authentication differently, bypassing HTTP-only cookies entirely. It worked, so nobody questioned it initially. When I had them align with the established approach — which was non-negotiable for our security standards — authentication between the two frontends broke.
The problem: our old frontend lived on app.example.com and the new one on a completely different domain. HTTP-only cookies don't share across different domains — something that wasn't visible with the previous non-standard auth approach. The fix was straightforward: move the new frontend to a subdomain of the same domain (e.g., new.app.example.com) and set COOKIE_DOMAIN=.example.com so cookies are shared across subdomains. Simple in retrospect, but it only surfaced when we unified the authentication mechanism properly.
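The essence of the fix fits in one Set-Cookie header. Here is a minimal sketch (the `buildSessionCookie` helper is illustrative, not our actual code; the attribute list mirrors the cookie parameters described above):

```typescript
// Build the Set-Cookie header value for the shared session cookie.
// Domain=.example.com makes the cookie visible to app.example.com
// and new.app.example.com alike; without it the cookie is host-only
// and the two frontends cannot share a login.
function buildSessionCookie(name: string, value: string, domain: string): string {
  return [
    `${name}=${value}`,
    `Domain=${domain}`, // e.g. ".example.com", taken from COOKIE_DOMAIN
    "Path=/",
    "Secure",
    "HttpOnly",
    "SameSite=None",
  ].join("; ");
}
```

Calling `buildSessionCookie("session", "abc123", ".example.com")` yields a header that both subdomains accept; the same value with no Domain attribute would pin the cookie to whichever host set it.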
The Single Configuration File
From the beginning, I required one architectural constraint: both frontends must consume a single configuration file. Whoever controls the routing shouldn't need to touch either frontend's code, rebuild anything, or redeploy. Just edit the JSON and the behavior changes.
The implementation, built by Michael Balakhon, a key team member who deserves significant credit for making the entire switch mechanism work, uses a JSON configuration file served from the nginx layer:
```json
{
  "version": "0.6-test",
  "old-to-new": [
    { "source": "/events", "destination": "/events/list" },
    { "source": "/events/:id", "destination": "/events/detail/:id" },
    { "source": "/submit", "destination": "/submit/event", "force": true },
    { "source": "/dashboard", "destination": "/dashboard" },
    { "source": "/settings", "destination": "/settings" }
  ],
  "new-to-old": [
    { "source": "/legacy-settings", "destination": "/settings" }
  ]
}
```
Two sections: old-to-new (routes that should redirect from the old frontend to the new one) and new-to-old (routes where the new frontend admits "I don't have this yet" and sends the user back). Both frontends fetch this file on startup and apply it to their routing.
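A rule with a `:id` parameter has to be matched against the current path and the captured segment substituted into the destination. A minimal sketch of that matching, assuming the config format above (`matchRoute` is an illustrative name, not the production implementation):

```typescript
interface RouteRule {
  source: string;       // pattern, may contain ":param" segments
  destination: string;  // target path on the other frontend
  force?: boolean;      // always redirect, regardless of user toggle
}

// Match a concrete path against ":param"-style patterns and, on
// success, substitute the captured segments into the destination.
function matchRoute(path: string, rules: RouteRule[]): string | null {
  for (const rule of rules) {
    const srcParts = rule.source.split("/");
    const pathParts = path.split("/");
    if (srcParts.length !== pathParts.length) continue;

    const params: Record<string, string> = {};
    const matched = srcParts.every((part, i) => {
      if (part.startsWith(":")) {
        params[part.slice(1)] = pathParts[i]; // capture ":id" etc.
        return true;
      }
      return part === pathParts[i];
    });
    if (!matched) continue;

    // Fill captured params back into the destination pattern.
    return rule.destination
      .split("/")
      .map((part) => (part.startsWith(":") ? params[part.slice(1)] : part))
      .join("/");
  }
  return null; // no rule matched: stay on the current frontend
}
```

With the config above, `matchRoute("/events/42", rules)` resolves to `/events/detail/42`, while an unmapped path returns `null` and the user stays where they are.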
Two Levels of Control
The redirect service on the old AngularJS frontend checks two conditions before redirecting:
- `force: true` — This page is always served by the new frontend, regardless of user preference. Used for pages that are fully stable and tested.
- User toggle — A "Turn on new UI (Beta)" setting in the user's profile. When enabled, all non-forced routes in the config become active for that user.
This gives us surgical control:
- For production clients: Only `force: true` pages redirect. These are the pages we're confident about — submission forms, core event screens. The majority of users interact primarily with these screens and may never even realize there's an old UI behind them.
- For test environments: Everything redirects to the new frontend. Even half-finished screens. This keeps the team honest about the real state of things.
- For individual users: If someone hits a bug or needs a feature that hasn't been migrated yet, they flip the toggle off and they're back on the fully functional old UI instantly.
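Put together, the redirect decision on the old frontend reduces to a few lines (a sketch only; `userPrefersNewUI` stands in for reading the profile toggle, which is an assumed name):

```typescript
// Decide whether the old frontend should hand this path to the new one.
// Rules with force: true always win; everything else requires the user's
// "Turn on new UI (Beta)" toggle to be enabled.
function shouldRedirect(
  rule: { force?: boolean } | null,
  userPrefersNewUI: boolean
): boolean {
  if (!rule) return false;     // page has no mapping: stay on the old UI
  if (rule.force) return true; // stable page: everyone gets the new UI
  return userPrefersNewUI;     // otherwise honor the beta toggle
}
```

The asymmetry is deliberate: a forced route overrides the toggle, but the toggle can never drag a user onto a page we haven't declared ready.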
Graceful Degradation
On both frontends, the redirect logic is designed to fail silently. If the configuration file doesn't load — due to a network issue, a misconfiguration, or anything else — the application simply doesn't redirect. The old frontend works exactly as it always has. The new frontend works standalone. No crashes, no error screens, just a log entry.
This was a deliberate design choice. In healthcare, "it doesn't work perfectly yet, we're fixing it" is acceptable. "It doesn't work at all" is not.
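That fail-silent loading can be sketched like this (illustrative only; the function and fallback names are assumptions, though the config URL matches the nginx location shown earlier):

```typescript
interface SwitchConfig {
  version: string;
  "old-to-new": Array<{ source: string; destination: string; force?: boolean }>;
  "new-to-old": Array<{ source: string; destination: string }>;
}

// An empty config means "no redirects": both frontends keep working standalone.
const EMPTY_CONFIG: SwitchConfig = { version: "none", "old-to-new": [], "new-to-old": [] };

async function loadSwitchConfig(url: string): Promise<SwitchConfig> {
  try {
    const res = await fetch(url);
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return (await res.json()) as SwitchConfig;
  } catch (err) {
    // Fail silently: log and fall back to "no redirects" rather than crash.
    console.warn("Switch config unavailable, redirects disabled:", err);
    return EMPTY_CONFIG;
  }
}
```

Any failure mode, from a 404 to malformed JSON to a dead network, collapses into the same harmless outcome: the user stays on whichever frontend they are already using.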
What We Migrated First
The migration order followed user impact, not technical complexity. The core event tracking module — submission forms, event lists, event detail screens — went first. These are the screens that 95% of users interact with daily. For most of them, the new UI is the only UI they'll ever see.
Power users, administrators, and super users will encounter the old UI for a while longer. Configuration screens, user management, advanced settings — there are hundreds of these, and they'll migrate gradually. But the critical path is already on the new frontend.
We maintain different configurations for test and production environments. On test, the version string is ahead (e.g., 0.6-test) with more routes pointing to the new frontend. On production (0.4-prod), we're more conservative — fewer old-to-new routes, more new-to-old fallbacks. As screens are tested and stabilized, we promote routes from test to production configuration.
When Is a Page "Ready"?
Our approach to readiness is deliberately quality-driven, not deadline-driven. The switch gives us the luxury of not rushing — the old UI works perfectly fine, so there's no pressure to push half-baked screens to production.
The criterion is simple: when we're fully satisfied with the result, we switch. Not "when it mostly works," not "when we've hit a deadline," but when the page would make a good impression as a finished product. We want users switching to the new UI to feel like they're getting an upgrade, not participating in a beta test.
This is partly why we're "about to launch any day now" and have been for a little while. We could have switched weeks ago if we were willing to ship something rough. We're choosing not to. In healthcare, first impressions with clinical users matter — if the new UI feels buggy or incomplete, users will switch back and be reluctant to try again.
What You'll Forget to Migrate
When you plan a page-by-page migration, you think about routes, components, API calls, and styling. You probably won't think about:
Session timeout logic. Our old frontend had years of accumulated behavior around tracking inactive sessions — idle detection, tab close handling, automatic logout after configurable inactivity periods, warning dialogs before timeout. Some of these were added to satisfy specific client security requirements. None of them existed on the new frontend.
The result: a user on the new UI with strict session settings enabled would get logged out after two minutes of normal use, because the new frontend wasn't sending the keepalive signals that the old one had been sending for years.
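A client-side keepalive that respects idle detection might look roughly like this (an illustrative sketch, not our implementation; the endpoint path, event list, and function name are all assumptions):

```typescript
// Periodically ping the backend so the server-side session stays alive
// while the user is actually active. No activity means no pings, so a
// configurable inactivity logout still fires as intended.
function startKeepalive(endpoint: string, intervalMs: number): () => void {
  let lastActivity = Date.now();
  const markActive = () => { lastActivity = Date.now(); };

  const events = ["mousemove", "keydown", "click", "scroll"];
  for (const event of events) {
    window.addEventListener(event, markActive);
  }

  const timer = setInterval(() => {
    // Only ping if the user was active since the last tick.
    if (Date.now() - lastActivity < intervalMs) {
      fetch(endpoint, { method: "POST", credentials: "include" }).catch(() => {
        // Fail silently: a missed keepalive must never break the UI.
      });
    }
  }, intervalMs);

  // Return a cleanup function for component unmount.
  return () => {
    clearInterval(timer);
    for (const event of events) {
      window.removeEventListener(event, markActive);
    }
  };
}
```

The point is not this particular code; it is that nothing in a route-by-route migration plan tells you this function needs to exist at all.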
These invisible behaviors — things that aren't features, aren't in any ticket, and aren't in any documentation — are the real migration risk. They live in the old code and in the heads of people who built them. The strangler fig approach at least gives you a safety net: when an invisible behavior surfaces as a bug, the user switches back to the old UI while you fix it.
The Page Reload Trade-Off
One thing the switch doesn't do is provide a seamless transition. When a user navigates from an old-UI page to a new-UI page (or vice versa), there's a full page reload. A brief loading flash as the browser navigates between the two subdomains.
This bothered me initially. I remember thinking during the early shell development that any URL change triggering a full reload felt wrong. Then I realized we'd been living in AngularJS's hash-based routing world for so long that I'd forgotten the real web had moved on — modern frameworks handle URL changes without reloads natively.
Within each frontend, navigation is smooth. The Next.js shell maintains a persistent layout container, and only the page content swaps. But crossing the boundary between old and new? Full reload. It's not ideal, but it's a trade-off we can live with. Users see a brief "Loading" state, and they're in the other UI. Far better than the alternative of waiting months for a complete rewrite.
Where We Are Now
The switch is live in production. Core event tracking screens run on the new Next.js frontend. Hundreds of secondary screens still live on the old AngularJS application. Users with the beta toggle can explore more of the new UI, and anyone can fall back to the old UI at any time.
We're days away from opening the new UI to all users. The core workflows are on Next.js, the switch is stable, and the fallback to the old UI is always one toggle away. Hundreds of screens still need to migrate, and that will take months. But the mechanism is proven, the risk is contained, and every new page we switch over makes the next one easier.
Next in the series: what we learned about AI-assisted development during the migration — commercial AI migration services, vibe coding pitfalls, and the real distinction between AI as a tool and AI as a shortcut.
I'm a Technical Lead and Software Architect with 17 years of experience building and maintaining healthcare SaaS platforms. The switch mechanism described in this article was implemented by Michael Balakhon, who turned an architectural concept into a working production system. Connect with me on LinkedIn if you're going through something similar.