I shipped a small side project this year: sweepbase.net, a comparison site for crypto debit and credit cards. 139 cards, no DB; the whole dataset is one CSV file in the repo.
Here are the things I'd actually tell another dev about it.
CSV beats a DB more often than people admit
The whole catalog is data.csv, parsed at boot, validated with Zod. Reads outnumber writes by something like 10,000 to 1, and most "writes" are me fixing a number once a month.
For that load profile, a database is theatre. CSV in a public repo gives me:
- One source of truth, version controlled
- Diff-able commits when I change a number
- No admin UI to build
- An auditable timeline anybody can inspect
When somebody asks "why did you change the Crypto.com APY?", I link the commit. That answer is more reassuring than any dashboard.
Zod earns its rent
Zod's schema does double duty: it validates at boot, and it generates the TypeScript type via z.infer. One source for shape, no drift between runtime and compile time.
import { z } from 'zod';

const CardSchema = z.object({
  service: z.string().min(1),
  fxMargin: z.number().min(0).max(10),
  atmFee: z.number().min(0),
});
export type Card = z.infer<typeof CardSchema>;
If a row in the CSV is malformed, the build fails. I never ship broken data without knowing.
ISR is the right default for content sites
Next.js 15.1 App Router with revalidate: 3600 on every page. The data changes a few times a week. There is no reason to re-render on every request. Lighthouse stays at 100 across the catalog because the rendered HTML is essentially static, and the framework refreshes it every hour.
I had to fight the urge to reach for SSR or client-side fetching. Neither belongs here.
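The setup is a one-line route segment config. A sketch of the page shape, assuming hypothetical `getCards` and `CardList` names (this fragment is not the site's actual file):

```typescript
// app/cards/page.tsx (sketch — path and helper names are assumptions)
// Route segment config: Next.js re-renders this page at most once per hour.
export const revalidate = 3600;

export default async function CardsPage() {
  const cards = await getCards(); // re-parsed only when the page revalidates
  return <CardList cards={cards} />;
}
```

Every request in between is served the cached HTML, which is why Lighthouse treats the pages as static.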
React.cache() is underrated
Multiple components in a single page render call the same getCards() function. Without React.cache(), the CSV gets parsed once per call site. Wrapped in React.cache(), it parses once per request. Easy 10x latency win that I almost missed.
Filters as predicates beats SQL for small data
37 category pages (USA, no-KYC, self-custody, travel, and so on), all rendered from the same Server Component. The category-specific logic lives in lib/filters.ts:
export const isSelfCustody = (card: Card) => card.custody === 'self';
export const isUSACompatible = (card: Card) => card.regions.includes('USA');
Adding a new category page is a 6-line PR: filter, slug, name. No migration, no index to remember.
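The category routes then reduce to a lookup table from slug to predicate. A sketch with hypothetical slugs, a trimmed Card type, and an assumed `categories` name:

```typescript
// Trimmed Card type for illustration; the real one has more fields.
type Card = { service: string; custody: "self" | "custodial"; regions: string[] };

const isSelfCustody = (card: Card) => card.custody === "self";
const isUSACompatible = (card: Card) => card.regions.includes("USA");

// One table drives every category page: adding a category is one
// entry here plus a slug and display name — no migration, no index.
const categories: Record<string, (card: Card) => boolean> = {
  "self-custody": isSelfCustody,
  usa: isUSACompatible,
};

const cards: Card[] = [
  { service: "A", custody: "self", regions: ["EU"] },
  { service: "B", custody: "custodial", regions: ["USA"] },
];

const usaCards = cards.filter(categories["usa"]);
// → one card: "B"
```

Each Server Component just looks up its predicate by slug and filters the same in-memory array.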
What I would do differently
- Start the public CSV from day one. I used Notion for the first month and lost a week porting it.
- Set up Sentry before shipping, not after the first ghost bug report.
- Write the report-error button in week 1. Real user reports caught more bad data than my own auditing.
Where to look
- Live: sweepbase.net
- Dataset: /datasets/data.csv
- Calculator: /calculator
If you want to see the schema or argue with one of my ratings, both are public. The CSV is the source of truth.