Vercel Pro gives me unlimited cron invocations with a 60 second function cap
Five crons handle blog sync, sitemap, link checks, affiliate health, and analytics rollup
I moved off GitHub Actions cron after two months of silent skipped runs
Every handler uses a CRON_SECRET header check and is safe to run twice
Vercel cron jobs are the reason I stopped paying for a 6 EUR VPS that did nothing except run five shell scripts on a timer. My studio runs on Vercel Pro anyway, so the crons come included, and the functions that execute them share the same deployment as the rest of the site. No separate server. No SSH keys to rotate. No forgotten machine in a Hetzner rack quietly filling its disk with logs.
I run five of them right now. They handle the boring maintenance that used to happen when I remembered, which was roughly never. Blog syndication reconciliation, sitemap regeneration, broken link scanning, affiliate link health checks, and an analytics rollup that writes a single JSON blob the dashboard reads. None of it is clever. All of it used to slip.
How Vercel cron actually works
Cron on Vercel is a crons array in vercel.json that points at a route inside your app. The route is a normal serverless function. Vercel hits it on the schedule you define, using standard cron syntax, always in UTC. There is no timezone setting. If you want 3 AM Berlin time, you do the math.
On Hobby you get two crons, daily frequency max, and a 10 second execution cap. On Pro you get up to 40 crons, any frequency down to one per minute, and 60 seconds of execution time per invocation. I am on Pro. The 60 seconds matters because the analytics rollup takes about 18 seconds on a warm function and closer to 35 on a cold start.
The one pattern you actually need is the CRON_SECRET. Vercel sends an Authorization: Bearer header on every cron invocation, and you compare it in the handler. Without this, anyone who guesses your route path can trigger the function. Here is the check I paste into every cron handler:
// app/api/cron/sitemap/route.ts
export async function GET(request: Request) {
  const auth = request.headers.get('authorization')
  if (auth !== `Bearer ${process.env.CRON_SECRET}`) {
    return new Response('Unauthorized', { status: 401 })
  }
  // actual work
}
CRON_SECRET lives in Vercel env vars. I generate it with openssl rand -hex 32 and forget about it.
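If you want to go one step further, a strict string comparison can in principle leak timing information. A hedged variant using Node's crypto.timingSafeEqual (this assumes a Node runtime, not Edge; the helper name is mine):

```typescript
import { timingSafeEqual } from 'node:crypto'

// Constant-time comparison of the incoming Authorization header
// against the expected Bearer token. A length mismatch returns
// early, which is fine: the length is not the secret part.
export function isAuthorized(header: string | null, secret: string): boolean {
  const a = Buffer.from(header ?? '')
  const b = Buffer.from(`Bearer ${secret}`)
  if (a.length !== b.length) return false
  return timingSafeEqual(a, b)
}
```

Drop it in once and reuse it across all five handlers instead of repeating the inline check.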
The 5 Vercel cron jobs I actually run
Here is the vercel.json block that registers all five:
{
  "crons": [
    { "path": "/api/cron/blog-sync",       "schedule": "0 */6 * * *" },
    { "path": "/api/cron/sitemap",         "schedule": "0 3 * * *" },
    { "path": "/api/cron/broken-links",    "schedule": "0 5 * * 1" },
    { "path": "/api/cron/affiliate-check", "schedule": "0 5 * * 2" },
    { "path": "/api/cron/analytics-roll",  "schedule": "0 6 * * *" }
  ]
}
Five lines. That is the entire scheduling layer. What each one does:
1. Blog syndication sync, every 6 hours. I cross-post to Dev.to and Hashnode from my Shopify blog. The syndication engine used to run on my laptop, which meant it only ran when my laptop was open. Now a cron polls both APIs every six hours, pulls the current article list, diffs it against a tracker JSON in blob storage, and re-runs the sync for anything that failed silently. Dev.to has a habit of returning a 202 and then swallowing the article. The reconciliation catches that.
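The reconciliation step boils down to a set difference. A minimal sketch, with an assumed tracker shape (the real schema lives in my blob store and looks different):

```typescript
// Articles the tracker believes are live but the platform API does
// not return: these are the silently swallowed posts to re-sync.
// trackerSlugs = what my tracker JSON says was published;
// remoteSlugs = what the Dev.to/Hashnode API actually lists.
export function findSwallowed(trackerSlugs: string[], remoteSlugs: string[]): string[] {
  const live = new Set(remoteSlugs)
  return trackerSlugs.filter(slug => !live.has(slug))
}
```

Anything this returns gets pushed back through the normal syndication path on the next run.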
2. Sitemap rebuild, daily at 3 AM UTC. Regenerates sitemap.xml from the Shopify blog API and product API, writes it to the public blob store, then pings Google's sitemap endpoint. It sounds like something Shopify should do. Shopify does do it, but only for products and pages that live inside the Shopify site itself. My blog articles also get syndicated and deserve their own canonical entries. The cron handles the full merged sitemap.
// app/api/cron/sitemap/route.ts
export async function GET(request: Request) {
  if (request.headers.get('authorization') !== `Bearer ${process.env.CRON_SECRET}`) {
    return new Response('Unauthorized', { status: 401 })
  }
  const [articles, products] = await Promise.all([
    fetchShopifyArticles(),
    fetchShopifyProducts(),
  ])
  const xml = buildSitemap([...articles, ...products])
  await put('sitemap.xml', xml, { access: 'public', contentType: 'application/xml' })
  await fetch(`https://www.google.com/ping?sitemap=${process.env.SITE_URL}/sitemap.xml`)
  return Response.json({ urls: articles.length + products.length })
}
3. Broken link scan, Monday 5 AM UTC. Fetches every blog URL, parses the HTML, extracts outbound `href` values, and does a HEAD request on each one. Anything that returns 4xx or 5xx gets written to a `link-issues.json` file with a timestamp. I check it during Friday debriefs. Most of the hits are false positives from sites that block HEAD requests, so the script falls back to a GET with a 5 second timeout before flagging.
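The HEAD-then-GET probe looks roughly like this. A sketch, not my exact handler; the status threshold and 5 second timeout match what I described above, everything else is illustrative:

```typescript
// Check one outbound URL: try HEAD first, fall back to a GET for
// sites that block HEAD. Returns true if the link looks alive.
export async function linkAlive(url: string): Promise<boolean> {
  const probe = async (method: 'HEAD' | 'GET') => {
    const res = await fetch(url, {
      method,
      redirect: 'follow',
      signal: AbortSignal.timeout(5000), // abort slow responders
    })
    return res.status < 400
  }
  try {
    // A blocked HEAD often shows up as 405 or a timeout
    if (await probe('HEAD')) return true
  } catch {
    // fall through to the GET attempt
  }
  try {
    return await probe('GET')
  } catch {
    return false
  }
}
```

Only URLs that fail both probes land in the issues file.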
4. Affiliate link health, Tuesday 5 AM UTC. Same idea as broken link scan, but only for the affiliate registry. I have maybe 20 affiliate partners with rewritten tracking URLs. If any of those go dark, I am sending readers to a 404 and losing partner commissions. The cron HEAD-requests each one, stores the result, and only flags a link after three consecutive non-200 responses. One failure is usually the affiliate network being grumpy. Three in a row is a dead link.
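The three-strikes logic is a tiny state machine per link. Sketched with an assumed stored shape (the real registry carries more fields):

```typescript
// Per-link state persisted between cron runs.
type LinkState = { failures: number }

// A success resets the counter; a failure increments it. Only the
// third consecutive failure flips the link to flagged.
export function updateLinkState(
  prev: LinkState,
  ok: boolean,
): LinkState & { flagged: boolean } {
  const failures = ok ? 0 : prev.failures + 1
  return { failures, flagged: failures >= 3 }
}
```

The counter lives alongside the link in the registry blob, so state survives across invocations.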
5. Analytics rollup, daily at 6 AM UTC. This is the one that took the longest to get right. It pulls Shopify store data, Vercel Web Analytics via the REST API, blog view counts, and Buffer post metrics, then merges them into a single JSON file at analytics/daily-{date}.json. The dashboard reads that file. Without this cron, the dashboard used to hit four different APIs on every page load, which was slow and burned rate limits.
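The shape of the rollup, minus the four API fetches, is a key builder plus a merge. The source names here are stand-ins for the real Shopify, Vercel, blog, and Buffer fetchers:

```typescript
type Metrics = Record<string, number>

// Blob key matching the analytics/daily-{date}.json path above.
export function rollupKey(date: Date): string {
  return `analytics/daily-${date.toISOString().slice(0, 10)}.json`
}

// Merge per-source metrics into the single blob the dashboard reads.
export function mergeSources(date: Date, sources: Record<string, Metrics>) {
  return { date: date.toISOString().slice(0, 10), ...sources }
}
```

The handler runs the four fetches with Promise.all, passes the results to the merge, and writes the blob at the key. One read for the dashboard instead of four API calls per page load.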
Gotchas I hit in the first month
UTC, always. My first sitemap cron was scheduled for 0 3 * * * because I wanted 3 AM Berlin time. It ran at 4 AM Berlin in winter and 5 AM in summer. Write your schedules in UTC, accept that they will drift one hour relative to your local clock twice a year, and move on.
Cold starts. A cron that hasn't run in 6 hours will cold-start. For my analytics rollup that added 10 to 15 seconds. On Hobby's 10 second cap that would have killed it. On Pro with 60 seconds I have headroom, but I still warm-start the heavier function by making the blog sync cron (which runs every 6 hours on the same deployment) touch the same dependencies.
Idempotency. Vercel does not retry failed crons automatically as far as I can tell, but it will absolutely invoke the same cron twice if a deployment overlaps with a scheduled run during propagation. I saw it happen twice in the first week. Every handler I write now assumes it might be called twice in quick succession. Database writes use upserts. File writes are content-addressed or overwrite-safe. The sitemap cron always produces the same output for the same input, so running it twice is harmless.
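Content-addressed in practice means the blob key is derived from the payload, so a duplicate invocation writing identical data lands on the same key instead of creating a second artifact. A sketch (the prefix and key format are illustrative):

```typescript
import { createHash } from 'node:crypto'

// Derive the blob key from a hash of the payload. Writing the same
// payload twice overwrites the same key: safe under double invocation.
export function contentKey(prefix: string, payload: string): string {
  const hash = createHash('sha256').update(payload).digest('hex').slice(0, 12)
  return `${prefix}/${hash}.json`
}
```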
Logs disappear fast. Vercel's function logs roll off quickly unless you pay extra. I pipe anything I actually want to keep into the same blob store the crons write to, under a logs/ prefix. Costs almost nothing and survives.
The 60 second cap is per invocation, not per day. I forgot this once and tried to process 200 affiliate links sequentially in a single handler. It timed out at link 47. Now the affiliate check paginates: the cron processes 25 links per invocation, stores a cursor, and the next run picks up where the last one left off. Slower, but it finishes.
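The cursor pattern is simple enough to sketch in full. Each invocation slices off a batch, and the stored cursor wraps to zero once the end is reached (the batch size of 25 matches what I run; the function name is mine):

```typescript
// One invocation's slice: process `size` items starting at the stored
// cursor, then advance it, wrapping to 0 when the list is exhausted
// so the next cycle starts over from the top.
export function nextBatch<T>(
  items: T[],
  cursor: number,
  size = 25,
): { batch: T[]; nextCursor: number } {
  const batch = items.slice(cursor, cursor + size)
  const end = cursor + batch.length
  return { batch, nextCursor: end >= items.length ? 0 : end }
}
```

The handler reads the cursor from the blob store, processes the batch, and writes nextCursor back before returning.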
What I tried first and dropped
GitHub Actions cron. My first instinct, because I already had a workflow file for deploys. Works fine for ten minutes. Then GitHub's scheduler starts silently skipping runs when the platform is under load, which is most weekday mornings. I lost two weeks of blog syncs before I noticed. GitHub even documents this behavior in a sentence that is easy to miss. I moved off.
A 6 EUR Hetzner VPS with systemd timers. Reliable. Boring. Required me to SSH in every few months to update Node, rotate a Let's Encrypt cert that did nothing useful, and clean up log files. Also required me to keep a .env file in sync between my laptop and the VPS. The whole thing was a drag. I killed it the week I activated Vercel Pro.
Cloudflare Workers cron triggers. Genuinely good product. The free tier is generous. The reason I dropped it: everything else already lives on Vercel, and splitting cron across two platforms meant two sets of secrets, two dashboards, and two places to check when something broke. One platform, one set of logs, one deploy. Worth the Pro subscription.
Bottom Line
Five Vercel cron jobs replaced a VPS, a GitHub Actions workflow that silently skipped runs, and about 40 minutes a week of manual maintenance. The whole setup is 60 lines of JSON config and five handler files, each under 80 lines. The Pro plan pays for itself in time I no longer spend wondering if anything is actually running.
If you are on Vercel Pro and not using cron yet, the shortest path is: pick the one background task you forget to run most often, write a handler, add it to vercel.json, put CRON_SECRET behind an auth check, deploy. You will have your first cron in under an hour. The other four will follow once you realize how much quiet failure was happening while you weren't looking.