Vercel is a great platform for easily hosting any web project. Using it with their own product, NextJS, makes it even smoother. It lets any developer be fully production-ready in no time, and for free: plug in your GitHub repository, do some quick setup and boom, you're live 🚀
However, as your project grows and scales, you start to hit Vercel's limitations, or at least the free-tier ones.
Context
Here at Fluum we are building the next-gen AI Co-Founder for solopreneurs, allowing anyone to run their business and sell their services seamlessly. Focus on what matters to you, your AI Co-Founder handles the rest.
We offer this whole set of features at a very competitive price. Stop juggling between many tools: Fluum is the all-in-one tool you need.
As we rapidly gained users and traction, the codebase grew, the number of modules grew with it, and the tooling we need to monitor everything became critical.
Where our builds used to take around 4 to 5 minutes, they quickly became unpredictable, ranging from 5 to 25 minutes, with "Out of Memory" errors from time to time. After a quick check, we found the culprit: Sentry.
Sentry is a tool that lets us monitor our production frontend errors by alerting us whenever something bad happens. To resolve those errors properly, it needs to gather the sourcemaps produced by the NextJS build and upload them to Sentry, and that's where it starts eating up all the RAM.
Initial setup
Following any documentation online, Sentry is added at build time, in next.config.js, as follows:
const { withSentryConfig } = require('@sentry/nextjs');

const nextConfig = {
  // ... your usual NextJS config
};

module.exports = withSentryConfig(nextConfig, {
  // ... Sentry build options (org, project, auth token, ...)
});
Since this Sentry step runs inside Vercel's build container, we are quickly limited by the resources on offer and start to hit OOM errors:
Error: Command "pnpm run build" exited with SIGKILL
▲ Build system report
▲ To always completely log this report, add VERCEL_BUILD_SYSTEM_REPORT=1 as an Environment Variable to your project.
• At least one "Out of Memory" ("OOM") event was detected during the build.
• This occurs when processes or applications running during the build completely fill up the available memory (RAM) in the build container. When this happens, the build container terminates one of the processes during the build with a SIGKILL signal.
• Read this troubleshooting guide for more information: https://vercel.link/troubleshoot-build-errors
Remediations
There are two solutions to this problem:
- Remove Sentry and become blind
- Remove Sentry... from the NextJS build, and handle it outside
We are of course going to opt for choice 2 :)
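Concretely, on the NextJS side, choice 2 means dropping the withSentryConfig wrapper from the build and keeping only the runtime SDK. Here is a minimal sketch of what next.config.js could look like, assuming the Sentry.init calls stay in the usual sentry.client.config.js / sentry.server.config.js files, and with productionBrowserSourceMaps enabled so that .next still contains the maps we will upload from CI:

// next.config.js - sketch: no Sentry wrapper at build time anymore
const nextConfig = {
  // emit client sourcemaps so the Sentry CLI can pick them up from .next
  productionBrowserSourceMaps: true,
  // ... your usual NextJS config
};

module.exports = nextConfig;

Error reporting keeps working as before; only the sourcemap generation and upload move out of the Vercel build.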
The key component here becomes GitHub Actions: we prebuild the NextJS project in a GitHub Actions workflow, then run the Sentry CLI to create a new release with the sourcemaps, and finally deploy the prebuilt output to Vercel using the Vercel CLI.
On Vercel, we unlink the GitHub repository and switch the framework preset from "NextJS" to "Other", since the build no longer happens there.
Here is the final GitHub Actions workflow file:
name: Vercel Production Deployment

env:
  VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
  VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID_PROD }}
  NODE_OPTIONS: '--max_old_space_size=4096' # 4 GB

on:
  push:
    branches:
      - master

jobs:
  notify-slack-starting-production-deployment:
    name: Notify Slack
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Get commit message
        id: commit_message
        run: |
          # Escape quotes and newlines in commit message for JSON
          COMMIT_MSG=$(git log -1 --pretty=%B | sed 's/"/\\"/g' | tr '\n' ' ' | sed 's/[[:space:]]*$//')
          echo "message=$COMMIT_MSG" >> "$GITHUB_OUTPUT"

      - name: Get commit hash
        id: commit_hash
        run: |
          echo "hash=$(git rev-parse --short HEAD)" >> "$GITHUB_OUTPUT"

      - name: Send Slack notification
        uses: slackapi/slack-github-action@v1.24.0
        with:
          payload: |
            {
              "text": "Deploying Production Frontend | ${{ steps.commit_hash.outputs.hash }} : ${{ steps.commit_message.outputs.message }}"
            }
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_PROD_WEBHOOK_URL }}

  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Install pnpm
        uses: pnpm/action-setup@v2
        with:
          version: latest

      - name: Use Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '22'
          cache: 'pnpm'

      - name: Cache pnpm store & node_modules
        uses: actions/cache@v3
        id: pnpm-cache
        with:
          # Cache both the global store and node_modules
          path: |
            ~/.pnpm-store
            node_modules
          key: pnpm-${{ runner.os }}-${{ hashFiles('pnpm-lock.yaml') }}
          restore-keys: |
            pnpm-${{ runner.os }}-

      - name: Install Dependencies
        run: pnpm install --frozen-lockfile

      - name: Install Vercel CLI
        run: pnpm install --global vercel@latest

      - name: Pull Vercel Environment Information
        run: vercel pull --yes --environment=production --token=${{ secrets.VERCEL_TOKEN }}

      - name: Build Project Artifacts
        run: vercel build --debug --prod --token=${{ secrets.VERCEL_TOKEN }}

      - name: Install Sentry CLI
        run: pnpm install --global @sentry/cli@latest

      - name: Upload Sentry Source Maps
        # only on push to master
        if: github.event_name == 'push'
        run: |
          npx @sentry/cli releases new $GITHUB_SHA
          npx @sentry/cli releases files $GITHUB_SHA upload-sourcemaps .next --rewrite
          npx @sentry/cli releases finalize $GITHUB_SHA
        env:
          SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
          SENTRY_ORG: ${{ secrets.SENTRY_ORG }}
          SENTRY_PROJECT: ${{ secrets.SENTRY_PROJECT }}

      - name: Deploy Project Artifacts to Vercel
        run: vercel deploy --prebuilt --prod --token=${{ secrets.VERCEL_TOKEN }}
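One detail to keep in mind: the sourcemaps are uploaded to a release named after $GITHUB_SHA, so the runtime SDK has to report the same release for Sentry to resolve stack traces against them. A small sketch, assuming the commit SHA is exposed to the app through a hypothetical NEXT_PUBLIC_SENTRY_RELEASE environment variable (names are ours, adapt them to your setup):

// sentry.client.config.js - sketch
import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  // must match the release created by `sentry-cli releases new $GITHUB_SHA`
  release: process.env.NEXT_PUBLIC_SENTRY_RELEASE,
});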
Using this flow, our production builds are now back to approximately 5 minutes, and free of any Out of Memory errors 🎉
Additional win: you can save on your organization costs by keeping only 1 seat in Vercel, now that GitHub Actions is the one responsible for deployments 💸
Hoping this solution helps many folks out there and saves some time - keep growing 🚀