We were maintaining 3 Git branches for the same white-label SaaS app, one per platform. Every feature meant 3 PRs, every bug fix meant 3 cherry-picks. Here's how we collapsed it into a single branch with runtime identity detection and a build-once CI pipeline.
It started as a practical decision. We white-label our SaaS dashboard for multiple enterprise clients: same core product, different branding, domain, and feature set per customer. The obvious solution? Keep a branch per platform and merge features into each one when they're ready.
Simple. Reasonable. And quietly, over months, one of the most painful parts of our workflow.
## The Ritual Nobody Wanted to Own
Picture this: you've just finished a feature. Tests pass. Code review done. You're ready to ship.
Now do it three times.
```
merge to launchpad → CI runs → deployed ✓
cherry-pick to nexus → resolve conflicts → deployed ✓
cherry-pick to orion → resolve conflicts → deployed ✓
```
Every single release. Every bug fix. Every dependency bump. Three PRs, three CI runs, three opportunities for the branches to quietly drift apart. And they always do, because entropy is patient and developers are busy.
The worst part wasn't the volume of work. It was the invisible divergence. Orion misses a commit. Nobody notices until a customer reports a bug that was fixed two weeks ago on Launchpad. Now you're doing archaeology in three git histories trying to figure out what's in sync and what isn't.
## The 11pm Problem
Normal feature work is just annoying overhead. Emergency fixes are where this setup becomes genuinely dangerous.
A production bug doesn't care that it's late. It doesn't care that you're tired, that the cherry-pick has a conflict, or that you're not sure if that related refactor from last week landed on all three branches.
You fix it. You cherry-pick it. You hope.
We've all been in that Slack thread: "Nexus is fixed, Orion still looks wrong, checking..."
That's not a process problem. That's a system problem, and you can't discipline your way out of a bad system.
## The Real Antipattern: Using Git as a Config File
Here's the thing we eventually understood: the branches weren't the problem. The reason we needed the branches was.
Every platform difference (a feature that only two of three clients get, a logo, a product name in the nav) lived inside a Git diff between branches. The branch was the configuration. And the moment you use Git history as a config layer, you've committed yourself to manual synchronization forever.
No config file. No feature flag. Just three versions of the truth, slowly walking away from each other.
## The Fix: Make the App Know Where It Is
The solution wasn't a better branching strategy. It was making the platform identity a runtime concern instead of a deploy-time one.
### Step 1: Read the hostname, derive everything else
```js
const HOSTNAME = window.location.hostname;

const isLaunchpad = HOSTNAME.includes("launchpad.io");
const isNexus = HOSTNAME.includes("nexus.io");
const isOrion = HOSTNAME.includes("orion.io");
```
One build artifact. The app figures out where it's running the moment it boots. No environment variables, no separate bundles, no deploy-time injection.
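One tweak worth considering (our suggestion, not part of the original snippet): pull the matching into a pure function, so it can be unit-tested and so dev builds can inject a fake hostname instead of relying on `window.location`. A minimal sketch:

```javascript
// Hypothetical refactor: a pure resolver with no window access.
// The real module would call it with window.location.hostname at boot.
function resolvePlatform(hostname) {
  if (hostname.includes("orion.io")) return "orion";
  if (hostname.includes("nexus.io")) return "nexus";
  // Everything else (including localhost in dev) falls back to Launchpad.
  return "launchpad";
}

// At the app entry point (browser only):
// const PLATFORM = resolvePlatform(window.location.hostname);
```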
### Step 2: Feature flags that know your identity
Instead of hiding features by deleting them from a branch, we gate them with flags derived from the detected platform:
```js
export const FEATURE_FLAGS = {
  CONTENT_INSPIRATION: !isOrion,
  DIRECTORY: !isOrion,
  PLATFORM_CUSTOMIZATION: !isOrion,
};
```
These flags control both route registration and sidebar visibility: a platform that doesn't support a feature never mounts the route and never renders the nav item. It's as if the feature doesn't exist on that platform, without it actually being absent from the codebase.
Want to enable a feature for Orion next quarter? One line. No branch merge, no conflict, no archaeology.
### Step 3: Structured assets instead of overwritten paths
Each platform's branding assets used to live at the same path across branches: a `logo.png` that meant different things depending on which branch you were on. We replaced that with an explicit identity directory:
```
public/identities/
  launchpad/big-logo.png
  launchpad/small-logo.png
  nexus/big-logo.png
  nexus/small-logo.png
  orion/big-logo.png
  orion/small-logo.png
```
A `PLATFORM_CONFIG` lookup table maps the detected hostname to the right asset paths at runtime. All logos coexist in one repo: no collisions, no branch-specific overrides.
### Step 4: Build once, deploy everywhere in parallel
This is where it all comes together. The CI pipeline now has one job that builds the app and three jobs that deploy the same artifact simultaneously:
```
push to main
  └── detect-changes
        └── build ──────────── (one artifact, uploaded once)
              ├── deploy → Launchpad (S3 + CDN invalidation + health check)
              ├── deploy → Nexus     (S3 + CDN invalidation + health check)
              └── deploy → Orion     (S3 + CDN invalidation + health check)
```
Here's the actual GitHub Actions structure:
```yaml
jobs:
  detect-changes:
    outputs:
      dashboard: ${{ steps.filter.outputs.dashboard }}

  build:
    needs: detect-changes
    if: needs.detect-changes.outputs.dashboard == 'true'
    steps:
      - run: pnpm build
      - uses: actions/upload-artifact@v4
        with:
          name: dashboard-build
          path: apps/dashboard/build

  deploy-launchpad:
    needs: build
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dashboard-build
      - name: Sync S3
        # sync to launchpad bucket
      - name: Invalidate CloudFront
        # invalidate /index.html only
      - name: Health check
        run: |
          for i in $(seq 1 10); do
            STATUS=$(curl -s -o /dev/null -w "%{http_code}" --max-time 15 https://app.launchpad.io)
            [ "$STATUS" = "200" ] && { echo "Health check passed"; exit 0; }
            echo "Attempt $i: HTTP $STATUS, retrying in 15s"
            sleep 15
          done
          exit 1

  deploy-nexus:
    needs: build
    # same pattern, different bucket + distribution

  deploy-orion:
    needs: build
    # same pattern, different bucket + distribution
```
A few details worth calling out:
- **Path-based change detection.** The build only triggers when something in `apps/` or `packages/` actually changed. Updating a doc or a workflow comment doesn't burn CI minutes rebuilding the app.
- **Pinned action versions.** Every third-party GitHub Action is pinned to a commit SHA, not `@v2`, not `@master`. Floating tags are someone else's broken deploy waiting to become yours.
- **Surgical CDN invalidation.** Only `/index.html` gets invalidated after each deploy. Every other asset is content-hashed by Vite, so it effectively invalidates itself when its content changes. Invalidating `/*` on every deploy wastes quota and slows cache warm-up for no benefit.
- **Health checks that actually mean something.** After each deploy, CI curls the platform's own URL and retries for up to 2.5 minutes. If the app doesn't respond healthy, the failure shows up loudly on the right platform's deploy job, not silently 20 minutes later in a support ticket.
## The Migration: Step by Step
This wasn't a weekend rewrite; we moved incrementally. Here's the exact sequence we followed so nothing broke in production while the migration was in progress.
### Step 1: Audit what was actually different across branches
Before touching any code, we listed every platform-specific difference:
- Logo and favicon paths
- Product name in the nav, page titles, and meta tags
- Features enabled/disabled per platform
- Sidebar items and routes that shouldn't exist on certain platforms
This became the spec for what PLATFORM_CONFIG and FEATURE_FLAGS needed to cover.
### Step 2: Build the platform detection layer
Added a single `platform-config.js` module that derives everything from `window.location.hostname`. All downstream code reads from here; nothing else checks the hostname directly.
```js
const HOSTNAME = window.location.hostname;

export const isLaunchpad = HOSTNAME.includes("launchpad.io");
export const isNexus = HOSTNAME.includes("nexus.io");
export const isOrion = HOSTNAME.includes("orion.io");

const identity = isOrion ? "orion" : isNexus ? "nexus" : "launchpad";

export const PLATFORM_CONFIG = {
  name: isOrion ? "Orion" : isNexus ? "Nexus" : "Launchpad",
  bigLogo: `/identities/${identity}/big-logo.png`,
  smallLogo: `/identities/${identity}/small-logo.png`,
};

export const FEATURE_FLAGS = {
  CONTENT_INSPIRATION: !isOrion,
  DIRECTORY: !isOrion,
  PLATFORM_CUSTOMIZATION: !isOrion,
};
```
### Step 3: Replace all hardcoded identity values
We went through every file that referenced a platform name, logo path, or brand color and replaced it with a `PLATFORM_CONFIG` lookup. Components that rendered conditionally based on the branch now read from `FEATURE_FLAGS`.
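As one concrete illustration of the pattern (the helper and the title format are invented for this example, not lifted from the codebase):

```javascript
// PLATFORM_CONFIG is inlined here so the snippet is self-contained;
// in the app it would be imported from platform-config.js.
const PLATFORM_CONFIG = { name: "Nexus" };

// Before: `document.title = "Dashboard | Nexus"` hardcoded per branch.
// After: one helper, identical on every platform.
function pageTitle(section) {
  return `${section} | ${PLATFORM_CONFIG.name}`;
}
```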
### Step 4: Migrate assets into `public/identities/`
Moved all per-platform logos into the structured directory and deleted the old shared `logo.png` that was being overwritten per branch. We verified each platform rendered the correct logo locally by temporarily overriding the detected hostname in dev.
### Step 5: Gate routes and sidebar items behind feature flags
Routes that shouldn't exist on Orion were wrapped:
```jsx
// Before: hard-deleted from the branch
{ path: "content-inspiration", element: <ContentInspiration /> }

// After: present in all branches, conditionally registered
...(FEATURE_FLAGS.CONTENT_INSPIRATION
  ? [{ path: "content-inspiration", element: <ContentInspiration /> }]
  : [])
```
The same pattern applies to the sidebar data: flags guard both the nav item and the route, so there's no way to reach a disabled page via URL either.
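The sidebar half can be sketched as a plain filter over the nav data (item shape and labels are illustrative, not the app's actual data):

```javascript
// Illustrative sketch: nav items carry an optional `flag` key; the
// sidebar only renders items whose flag is enabled or that have no flag.
// These flag values mimic what Orion would compute at runtime.
const FEATURE_FLAGS = { CONTENT_INSPIRATION: false, DIRECTORY: false };

const NAV_ITEMS = [
  { label: "Home", path: "/" },
  { label: "Content Inspiration", path: "/content-inspiration", flag: "CONTENT_INSPIRATION" },
  { label: "Directory", path: "/directory", flag: "DIRECTORY" },
];

function visibleNavItems(items, flags) {
  return items.filter((item) => !item.flag || flags[item.flag]);
}
```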
### Step 6: Rewrite the workflow from scratch
The original workflow was a single job: build → deploy to one bucket. We replaced it with:
`detect-changes` uses dorny/paths-filter to check whether anything in `apps/` or `packages/` actually changed. If not, the entire pipeline skips, which stops CI from rebuilding the app on every doc or config commit.
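For reference, a `detect-changes` job built on dorny/paths-filter typically looks something like this (filter name and paths adapted to the repo layout described above; treat it as a sketch, not the team's exact workflow):

```yaml
detect-changes:
  runs-on: ubuntu-latest
  outputs:
    dashboard: ${{ steps.filter.outputs.dashboard }}
  steps:
    - uses: actions/checkout@v4
    - uses: dorny/paths-filter@v3   # pin to a SHA in a real workflow
      id: filter
      with:
        filters: |
          dashboard:
            - 'apps/**'
            - 'packages/**'
```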
One gotcha: GitHub Actions outputs are always strings. Comparing `outputs.dashboard == true` silently fails; it needs to be `== 'true'`. We hit this and added a comment in the workflow so the next person doesn't spend an hour debugging it.
`build` runs once and uploads the artifact with `actions/upload-artifact@v4`. Retention is set to 1 day; it only needs to survive the deploy jobs that run immediately after.
`deploy-*` (×3, in parallel): each job downloads the same artifact and runs:

- S3 sync with the `--delete` flag to remove stale files
- CloudFront invalidation on `/index.html` only, not `/*`
- A health check with 10 retries at 15s intervals
### Step 7: Pin every action to a commit SHA
Replaced all `@v2` / `@master` action references with pinned SHAs:
```yaml
# Before: dangerous
uses: jakejarvis/s3-sync-action@master
uses: chetan/invalidate-cloudfront-action@v2

# After: pinned
uses: jakejarvis/s3-sync-action@be0c4ab89158cac4278689ebedd8407dd5f35a83
uses: chetan/invalidate-cloudfront-action@cacab256f2bd90d1c04447a7d6afdaf6f346e7b3
```
Floating tags mean a third-party maintainer can silently change what your workflow runs. Pinning to a SHA is the only way to guarantee reproducibility.
### Step 8: Change CDN invalidation from `/*` to `/index.html`
The old workflow invalidated everything on every deploy. With Vite, every asset (JS, CSS, images) gets a content hash in its filename, so it's automatically busted from caches whenever its content changes. The only file that keeps a stable name is `index.html`. Invalidating `/*` was burning CDN invalidation quota and slowing cache warm-up for no reason.
### Step 9: Set up isolated test infrastructure
Before merging any of this to main, we validated the entire pipeline end-to-end using:
- Separate S3 buckets per platform (`workflow-testing-1/2/3`)
- Separate CloudFront distributions pointing to those buckets
- A separate trigger branch so the test workflow fires independently of production
Health checks in the test workflow point to the test CloudFront domains, not the production URLs. This was a critical detail: if the health checks hit production, a passing check tells you nothing about whether your test deploy actually worked.
### Step 10: Merge, delete the platform branches, never look back
Once the workflow passed end-to-end on the test infrastructure (all three platforms deployed correctly from a single artifact, health checks green), we merged to main and retired the per-platform branches.
## What Shipping Looks Like Now
Before, shipping a feature meant:
| Platform | Work |
|---|---|
| Launchpad | Open PR → merge → wait for CI |
| Nexus | Open PR → cherry-pick → resolve conflicts → merge |
| Orion | Open PR → cherry-pick → resolve conflicts → merge |
| Total | 3 PRs, ~30–45 min overhead, hope nothing diverged |
After:
| Platform | Work |
|---|---|
| All three | Open 1 PR → merge |
| Total | CI handles the rest |
Before, a fix at 11pm meant:
- Fix it on the main branch
- Cherry-pick to Nexus (conflicts likely)
- Cherry-pick to Orion (more conflicts)
- Deploy all three
- Manually verify all three are healthy
- Update the Slack thread with "ok they're all good I think"
After:
- Fix it
- Merge
- Watch CI confirm all three are healthy
- Go to sleep
## What This Unlocked
**For developers:** CI/CD went from something nobody wanted to touch to something you can actually iterate on. We built an isolated test path (separate infrastructure, separate trigger branch, dedicated CDN distributions) so pipeline changes can be validated without touching production. The mental model shifted from "three codebases that happen to look similar" to "one codebase that knows where it's deployed."
**For the team:** New engineers can onboard to the entire deployment model by reading one workflow file and one config module. No tribal knowledge about branch state, no institutional memory required, no "ask someone who knows which cherry-picks landed where."
**For the business:** Every platform ships the same tested artifact at the same time. No platform running a version that's two fixes behind. No inconsistent feature availability that support has to explain to enterprise clients. Adding a fourth platform is a config entry and a deploy job, not a new branch to maintain forever.
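To make "a fourth platform is a config entry" concrete, the detection can be table-driven; "atlas" below is a hypothetical fourth client, not a real one:

```javascript
// Sketch of a table-driven platform registry. Onboarding a platform is
// one new entry here plus a deploy job in CI; "atlas" is invented.
const PLATFORMS = {
  "launchpad.io": { name: "Launchpad", logoDir: "/identities/launchpad" },
  "nexus.io": { name: "Nexus", logoDir: "/identities/nexus" },
  "orion.io": { name: "Orion", logoDir: "/identities/orion" },
  "atlas.io": { name: "Atlas", logoDir: "/identities/atlas" },
};

function platformFor(hostname) {
  const match = Object.keys(PLATFORMS).find((domain) => hostname.includes(domain));
  // Unknown hostnames (e.g. localhost in dev) fall back to the original default.
  return PLATFORMS[match ?? "launchpad.io"];
}
```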
## The Takeaway
Three branches for three platforms is one of those decisions that feels obviously correct when you make it and obviously wrong in hindsight. It's not a bad call; it's one that defers complexity into a form that's invisible until it's actively hurting you.
The underlying question is: where does your platform identity live? If the answer is "in Git history," you've made humans responsible for synchronization that machines should own. And humans are bad at synchronization, not because they're careless, but because they have better things to do.
Move the identity into runtime config. Build the artifact once. Let the pipeline own distribution. The complexity doesn't disappear; it moves somewhere that doesn't require three PRs to manage.
One branch. One build. One merge to ship everywhere.