TL;DR: Open-source maintainers leave thousands of dollars in free credits and tools on the table because nobody aggregates them. I built OSS Perks — a website + CLI that lists 15+ vendor perk programs and checks your repo's eligibility with one command.
## The problem
Vercel, Sentry, JetBrains, Cloudflare — they all give free stuff to OSS maintainers. $3,600 in hosting credits. 5 million error events. Unlimited IDE licenses.
Nobody knows about half of them.
It's all scattered across different websites, buried in marketing pages, each with their own eligibility rules and application steps. I spent a weekend Googling "free tools for open source" and jumping between tabs trying to figure out what my projects qualified for.
Then I saw getfirstcheck.com by @5harath — a curated directory of startup programs offering founders free credits and grants. The launch tweet made it click: founders had firstcheck, but OSS maintainers had nothing like it.
So I built OSS Perks.
## What it is
Two things:
- A website — searchable directory of OSS perk programs, available in 9 languages
- A CLI — run `ossperks check` in any repo and it tells you what you qualify for
## Architecture

pnpm monorepo, three packages:

```
ossperks/
├── packages/
│   ├── data/            # JSON programs + Zod schemas
│   └── cli/             # CLI that checks eligibility
├── docs/                # Next.js 16 + Fumadocs website
└── pnpm-workspace.yaml
```
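For reference, the workspace file that glues these together would be tiny — something like this (assumed from the layout above; check the repo for the actual contents):

```yaml
packages:
  - "packages/*"
  - "docs"
```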
The main idea: @ossperks/data is the single source of truth. Every program is a JSON file:
```json
{
  "slug": "vercel",
  "name": "Vercel for Open Source",
  "provider": "Vercel",
  "url": "https://vercel.com/open-source-program",
  "category": "hosting",
  "description": "Vercel provides platform credits, community support, and an OSS Starter Pack.",
  "perks": [
    {
      "title": "$3,600 Platform Credits",
      "description": "$3,600 in Vercel platform credits distributed over 12 months."
    },
    {
      "title": "OSS Starter Pack",
      "description": "Credits from third-party services to boost your project."
    }
  ],
  "eligibility": [
    "Must be an open-source project that is actively developed and maintained.",
    "Must show measurable impact or growth potential.",
    "Must follow a Code of Conduct.",
    "Credits must be used exclusively for open-source work."
  ],
  "duration": "12 months",
  "tags": ["hosting", "deployment", "serverless", "credits"]
}
```
This one file feeds three things:
- The website renders it as a program page
- The CLI uses it for `list`, `show`, and `search`
- A build script converts it to MDX, then lingo.dev translates it into 8 languages
Change the JSON, everything updates.
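To make "one source of truth" concrete, here is a minimal sketch of how both the CLI and the website can consume the same array. The types and sample entry are illustrative, not the actual exports of `@ossperks/data`:

```typescript
// Illustrative shape of a program record, mirroring the JSON above.
type Perk = { title: string; description: string };
type Program = {
  slug: string;
  name: string;
  category: string;
  perks: Perk[];
  eligibility: string[];
  tags?: string[];
};

// In the real package this array is built by validating every JSON file.
const programs: Program[] = [
  {
    slug: "vercel",
    name: "Vercel for Open Source",
    category: "hosting",
    perks: [{ title: "$3,600 Platform Credits", description: "Credits over 12 months." }],
    eligibility: ["Must be an open-source project that is actively developed and maintained."],
    tags: ["hosting", "credits"],
  },
];

// The CLI's `search` command can then be a plain filter over that array.
const search = (query: string): Program[] => {
  const q = query.toLowerCase();
  return programs.filter(
    (p) =>
      p.name.toLowerCase().includes(q) ||
      p.category.includes(q) ||
      (p.tags ?? []).some((t) => t.includes(q))
  );
};
```

The website does the same thing at build time instead of at run time, which is why a single JSON edit propagates everywhere.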
## The core trick: eligibility checking
This is the part I like most. `ossperks check` reads your repo metadata from GitHub/GitLab and runs every eligibility rule through a chain of matchers.

The key function is `matchRule`. It takes a human-readable eligibility string like "Must be an open-source project that is actively developed and maintained" and tries to verify it against your repo:
```ts
const matchRule = (rule: string, ctx: RepoContext): RuleVerdict =>
  checkSubjective(rule) ??
  checkProvider(rule, ctx) ??
  checkStars(rule, ctx) ??
  checkActivity(rule, ctx) ??
  checkLicense(rule, ctx) ??
  checkRepoAttrs(rule, ctx) ?? { reason: rule, verdict: "unknown" };
```
Each checker uses regex on the eligibility text to figure out what kind of rule it is, then validates against the repo. Here's the license checker:
```ts
const checkLicense = (rule: string, ctx: RepoContext): RuleVerdict | null => {
  const label = ctx.license ?? "no detected license";
  if (/permissive\s+(?:open[\s-]?source\s+)?licen[sc]e/i.test(rule)) {
    return isPermissive(ctx.license)
      ? { verdict: "pass" }
      : {
          reason: `requires a permissive license (detected: ${label})`,
          verdict: "fail",
        };
  }
  if (/open[\s-]?source\s+licen[sc]e|recognized\s+licen[sc]e/i.test(rule)) {
    return isOsiApproved(ctx.license)
      ? { verdict: "pass" }
      : {
          reason: `requires an OSI-approved license (detected: ${label})`,
          verdict: "fail",
        };
  }
  return null;
};
```
No program-specific if statements. The eligibility rules are strings in JSON. The engine pattern-matches the intent and checks the repo. Add a new program? Just add a JSON file. The checker handles it.
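For a second example of a matcher in the chain, here is my sketch of what a star-count checker in the same style could look like — the repo has a real `checkStars`, but this shape (and the types it uses) are my guess, not the actual implementation:

```typescript
// Minimal local stand-ins for the CLI's context and verdict types.
type RepoContext = { stars: number; license?: string };
type RuleVerdict = { verdict: "pass" | "fail" | "unknown"; reason?: string };

// Hypothetical star matcher: pull a numeric threshold out of the rule text,
// compare it to the repo's star count, or return null if the rule isn't
// about stars so the next matcher in the ?? chain gets a turn.
const checkStars = (rule: string, ctx: RepoContext): RuleVerdict | null => {
  const m = rule.match(/(\d[\d,]*)\+?\s+(?:GitHub\s+)?stars/i);
  if (!m) return null; // not a star rule
  const threshold = Number(m[1].replace(/,/g, ""));
  return ctx.stars >= threshold
    ? { verdict: "pass" }
    : {
        verdict: "fail",
        reason: `requires ${threshold}+ stars (you have ${ctx.stars})`,
      };
};
```

Each matcher returning `null` for rules it doesn't recognize is what makes the `??` chain in `matchRule` work.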
Output looks like this (for a small repo):

```
✔ my-project — MIT · 42 stars · last push today

Eligibility across 15 programs — 8 eligible, 5 need review, 2 ineligible

✔ vercel          eligible
✔ sentry          eligible
✔ github-copilot  eligible
✔ jetbrains       eligible
⚠ cloudflare      needs review
  • non-commercial requirement cannot be auto-verified
✖ browserstack    ineligible
  • requires 500+ stars (you have 42)
```
## Data pipeline: JSON to 9 languages
```
packages/data/src/programs/*.json        (source of truth)
          │
          ▼
docs/scripts/generate-programs-mdx.mjs   (JSON → MDX)
          │
          ▼
docs/content/programs/en/*.mdx           (English)
          │
          ▼  lingo.dev
docs/content/programs/{es,fr,de,ja,ko,zh-CN,pt-BR,ru}/*.mdx
```
The generation script builds structured Markdown from each JSON program:
```js
const buildMarkdownBody = (p) => {
  const sections = [
    buildMetaSection(p),
    buildPerksSection(p),
    buildEligibilitySection(p),
    buildRequirementsSection(p),
    buildApplicationProcessSection(p),
    buildTagsSection(p),
  ].filter(Boolean);

  return sections.join("\n\n");
};
```
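One of those section builders could look like this — a hypothetical `buildPerksSection` (the real one lives in the repo's generate script) that emits the `- **Title**: description` bullets the website's `parsePerks` later splits back apart:

```typescript
type Perk = { title: string; description: string };

// Hypothetical section builder: returns null when there is nothing to emit,
// so buildMarkdownBody's .filter(Boolean) drops the section entirely.
const buildPerksSection = (p: { perks: Perk[] }): string | null => {
  if (p.perks.length === 0) return null;
  const bullets = p.perks.map(
    (perk) => `- **${perk.title}**: ${perk.description}`
  );
  return ["## Perks", "", ...bullets].join("\n");
};
```

Returning `null` rather than an empty string is what lets `.filter(Boolean)` do the pruning.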
On the website, the translated MDX gets parsed back into structured data for rendering:
```ts
const parsePerks = (
  section: string
): { title: string; description: string }[] =>
  section
    .split("\n")
    .filter((l) => /^-\s+\*\*/.test(l.trim()))
    .map((line) => {
      const match = line.match(/\*\*(.+?)\*\*\s*[::]\s*(.*)/);
      return match
        ? { description: match[2], title: match[1] }
        : { description: "", title: line };
    });
```
Yes, JSON → MDX → parse back to structured data is a round-trip. But lingo.dev translates MDX files, not JSON. It was the pragmatic call.
## Why this stack

| Approach | Verdict |
|---|---|
| Static site + JSON API | No i18n, no docs framework |
| Astro | Thinner i18n ecosystem at this scale |
| Docusaurus | Heavier, and React 18-only at the time |
| Fumadocs + Next.js 16 (chosen) | i18n for free, MDX, OG images, search |
Fumadocs gave me locale-prefixed routes, language switching, OG image generation, and content sources per locale out of the box. Commander for the CLI because it's the standard.
## Validation with Zod
Every program goes through Zod at import time. Bad JSON fails immediately:
```ts
import { z } from "zod";
// categoryEnum, perkSchema, and contactSchema are defined alongside this file.

export const programSchema = z.object({
  applicationProcess: z.array(z.string()).optional(),
  applicationUrl: z.string().url().optional(),
  category: categoryEnum,
  contact: contactSchema.optional(),
  description: z.string(),
  duration: z.string().optional(),
  eligibility: z.array(z.string()),
  name: z.string(),
  perks: z.array(perkSchema),
  provider: z.string(),
  requirements: z.array(z.string()).optional(),
  slug: z.string(),
  tags: z.array(z.string()).optional(),
  url: z.string().url(),
});

export const programs: Program[] = raw.map((p) => programSchema.parse(p));
```
## Trade-offs

Being honest:

- MDX round-trip — JSON → MDX → parse back is awkward. Did it because lingo.dev translates MDX, not JSON.
- Regex eligibility matching — works for 15 programs but it's brittle. Structured rules would be better long-term.
- No auth by default — the CLI hits the GitHub API unauthenticated, so you'll hit rate limits quickly. Set `GITHUB_TOKEN` to fix it.
- 15 programs — there are dozens more (DigitalOcean, AWS, MongoDB, Datadog). Just need someone to add the JSON files.
## Try it

Website:

```sh
git clone https://github.com/Aniket-508/ossperks.git
cd ossperks
pnpm install
pnpm --filter docs dev
```
CLI:

```sh
npx @ossperks/cli

ossperks check                        # check current repo
ossperks check --repo vercel/next.js  # check a specific repo
ossperks list                         # list all programs
ossperks search hosting               # search
ossperks show vercel                  # program details
```
## What to build next

If you fork this:

- Add programs — create a `{slug}.json`, submit a PR. That's it.
- Structured eligibility rules — replace free-text with `{ "type": "min-stars", "value": 500 }` so the checker doesn't need regex.
- GitHub Action — run `ossperks check` in CI, post results as a PR comment.
- Expiry tracking — many perks expire after 12 months. Reminders would help.
- Community submissions — the API route (`/api/submit-program`) already exists.
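The structured-rules idea could look something like this — a discriminated union plus a tiny evaluator. The rule shapes and field names here are my own sketch, not anything the repo defines today:

```typescript
// Hypothetical structured eligibility rules: typed variants instead of
// free-text strings, so the checker needs no regex at all.
type Rule =
  | { type: "min-stars"; value: number }
  | { type: "license"; allowed: string[] }
  | { type: "manual"; text: string }; // still needs human review

type RepoContext = { stars: number; license?: string };
type Verdict = "pass" | "fail" | "unknown";

const evaluate = (rule: Rule, ctx: RepoContext): Verdict => {
  switch (rule.type) {
    case "min-stars":
      return ctx.stars >= rule.value ? "pass" : "fail";
    case "license":
      return ctx.license !== undefined && rule.allowed.includes(ctx.license)
        ? "pass"
        : "fail";
    case "manual":
      return "unknown"; // surfaced as "needs review" in the CLI output
  }
};
```

The `manual` variant keeps the current "needs review" behavior for rules that genuinely can't be auto-verified.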
If you maintain an open-source project, run `npx @ossperks/cli check`. You might be surprised what you qualify for.
Know a missing program? Open an issue or add a JSON file. Takes 5 minutes.
