I was recently sat in a cafe in Putney, migrating my personal website, jxd.dev, into my increasingly large vntg monorepo. It currently contains 7 deployable applications and even more packages, handling everything from UI to billing, authentication and emails. In the future I might write up the motivation and strategy behind this setup, but this post is all about how I deploy those applications to Vercel.
Problem
Turborepo, Vercel and PNPM workspaces are core to my monorepo, which has the following structure:
.
├── apps/
│   ├── jxd          <- Astro
│   ├── vrpc/
│   │   ├── web      <- Next
│   │   └── api
│   └── jot/
│       ├── web      <- Next
│       └── mobile   <- React + Capacitor
├── packages
└── deploy.ts        <- Magic happens here!
The problem here is that 7 is greater than 3, and "A Git Repository cannot be connected to more than 3 Projects". So for a while I had 3 projects that were automatically deployed by Vercel and 4 that were being built and deployed manually from my machine. Not ideal!
Implementation
So let's fix it so that all the apps deployed to Vercel are deployed automatically on every push… I think there's a two-letter acronym for that 🤔…
We start off with a simple GitHub Actions workflow. Nothing crazy here: checkout, build, deploy:
# .github/workflows/cicd.yml
name: ci/cd
on:
  push:
    branches: ["main"]
jobs:
  build:
    name: build and deploy
    timeout-minutes: 15
    runs-on: ubuntu-latest
    env:
      DATABASE_URL: ${{ secrets.DATABASE_URL }}
      VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
      VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
      TURBO_TOKEN: ${{ secrets.VERCEL_TOKEN }}
      TURBO_TEAM: vntg
    steps:
      - name: checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 2
      - uses: pnpm/action-setup@v3
        with:
          version: 8
      - name: setup node
        uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'pnpm'
      - name: install vercel cli
        run: pnpm install --global vercel@latest
      - name: deps
        run: pnpm install
      - name: build
        run: pnpm build
      - name: deploy
        run: pnpm run deploy
The magic happens in that last step, pnpm run deploy, which is defined as tsx deploy.ts:
// deploy.ts
import util from "util";
import { exec } from "child_process";

const asyncExec = util.promisify(exec);

// Each deployable app, mapped to its Vercel project
const projects = [
  {
    path: "apps/jxd",
    projectId: "...",
  },
  {
    path: "apps/vrpc/marketing",
    projectId: "...",
  },
  // ...
];

async function run() {
  const token = process.env.VERCEL_TOKEN;
  if (!token) {
    throw new Error("missing vercel token");
  }
  const orgId = process.env.VERCEL_ORG_ID;
  if (!orgId) {
    throw new Error("missing org id");
  }
  for (const { path, projectId } of projects) {
    // Pull the project's settings and production env into .vercel/
    await asyncExec(
      `VERCEL_PROJECT_ID=${projectId} vercel pull --yes --environment=production --token=${token}`
    );
    // Build the app in CI, producing .vercel/output
    await asyncExec(
      `VERCEL_PROJECT_ID=${projectId} vercel build --prod --token=${token} ${path}`
    );
    // Upload the prebuilt output as a production deployment
    await asyncExec(
      `VERCEL_PROJECT_ID=${projectId} vercel deploy --prod --prebuilt --token=${token}`
    );
  }
}

run()
  .then(() => console.log("✅ complete"))
  .catch((e) => {
    console.error(e);
    process.exit(1);
  });
This script iterates over my projects configuration and, for each one, runs vercel pull, vercel build and vercel deploy. With that, all 7 applications are continuously deployed without using the Vercel-GitHub connection, therefore sidestepping the connection limit.
Each project is still configured on Vercel in the usual way.
Performance
I'm now going to attempt a feat of prediction and answer some questions you might be asking…
Why not run every single build in parallel?
The Vercel CLI should always be invoked from the monorepo root rather than from the app's subdirectory (per Vercel's monorepo docs), so if I ran vercel pull for each project in parallel, every invocation would overwrite the project settings of the others.
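To make that concrete, here's a sketch of the naive parallel version. This is not in my deploy.ts, it's only here to show the pitfall: vercel pull writes the pulled project settings into .vercel/ relative to where it runs, so these concurrent pulls would clobber each other's .vercel/project.json.
// parallel-pull.ts <- sketch only, don't do this!
import util from "util";
import { exec } from "child_process";

const asyncExec = util.promisify(exec);
const token = process.env.VERCEL_TOKEN;

// Same shape as the projects array in deploy.ts
const projects = [
  { path: "apps/jxd", projectId: "..." },
  // ...
];

async function naiveParallelPull() {
  // All of these write .vercel/project.json at the repo root,
  // so whichever pull finishes last silently wins.
  await Promise.all(
    projects.map(({ projectId }) =>
      asyncExec(
        `VERCEL_PROJECT_ID=${projectId} vercel pull --yes --environment=production --token=${token}`
      )
    )
  );
}

naiveParallelPull();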
Running the build n+1 times, sequentially, must be slow!
The builds themselves are actually very fast thanks to Turborepo's remote caching, but the deployments are slow. Currently the whole job runs in ~5 mins…
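If I ever want to check exactly where those ~5 minutes go, a small timing wrapper around each step would do it. A sketch: timed is a hypothetical helper, not something in deploy.ts today.
// timing sketch for deploy.ts
async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    console.log(`${label} took ${((Date.now() - start) / 1000).toFixed(1)}s`);
  }
}

// Usage inside the loop in deploy.ts, e.g.
// await timed(`${path} deploy`, () =>
//   asyncExec(`VERCEL_PROJECT_ID=${projectId} vercel deploy --prod --prebuilt --token=${token}`)
// );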
Conclusion
I'm aware that this solution may not be optimal. I could use a matrix in GitHub Actions to parallelise the deployments, or use turbo-ignore to only deploy the packages that changed (sketched below)… but right now I don't much care. This is good enough for now! However, in a few months' time, as 7 turns to 17, there might be a follow-up to this post showing a much improved version, plus some snide comments about how stupid past-Jamie was. So stick around for that 👋
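For the curious, the turbo-ignore idea would look roughly like this. It's a sketch, not what runs today, and it assumes turbo-ignore's documented convention: exit code 0 means the workspace can be skipped, exit code 1 means it (or one of its dependencies) changed since the comparison commit.
// deploy-changed.ts <- hypothetical, not in the repo yet
import util from "util";
import { exec } from "child_process";

const asyncExec = util.promisify(exec);

async function shouldDeploy(path: string): Promise<boolean> {
  try {
    // Run turbo-ignore from the app's directory; exit code 0 = nothing relevant changed
    await asyncExec("npx turbo-ignore", { cwd: path });
    return false;
  } catch {
    // exec rejects on non-zero exit codes. turbo-ignore exits 1 when the
    // workspace changed (any other failure also lands here, which
    // conservatively means "deploy anyway").
    return true;
  }
}

// Inside the loop in deploy.ts:
// if (!(await shouldDeploy(path))) continue;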