Last month I open-sourced awesome-crypto-cards — a curated list of 136 crypto debit and credit cards. This post is about the boring infrastructure: why I run awesome-lint in CI, how I keep the list synced with the dataset behind sweepbase.net, and where I underestimated effort.
## Why a flat README, not a database
The list lives as a single README.md. No JSON, no YAML, no static site. People who land on a GitHub awesome-list expect to scan markdown, not click into an interactive viewer.
Trade-offs I accepted: no programmatic queries, no filtering UI, no auto-generated content.
Trade-offs I avoided: an extra build step, broken links from generator bugs, and the friction of "wait, where do I edit this?"
## The awesome-lint CI
Every push runs awesome-lint via GitHub Actions. It catches:
- Duplicate URLs (you'd be surprised)
- Links missing https://
- Markdown formatting that breaks GitHub's renderer
- Broken anchor references in the contents section
- Categories that don't sort alphabetically
```yaml
# .github/workflows/main.yml
name: Awesome Lint
on: [push, pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: '22' }
      - run: npm ci
      - run: npx awesome-lint
```
The lint runs at its strictest setting (including the no-emoji rule). I keep it that way because the goal is eventual acceptance into other awesome-list registries, and they reject any list that fails their own awesome-lint pass.
## Keeping it synced with the source
The dataset behind sweepbase.net is a CSV of 141 rows. Five of those are pre-launch products (waitlist, "in development" custody, "TBA" network) — the README rule is "shipping only," so the README count is 136.
The diff between CSV and README runs as a small Node script:
```js
const fs = require('fs');

const readme = fs.readFileSync('README.md', 'utf8');
// `cards` is the parsed CSV as an array of row objects (e.g. via csv-parse)
const csvNames = new Set(cards.map(c => c['Card Service'].trim()));

// Collect card names from README list items linking to sweepbase.net
const readmeNames = new Set();
const re = /- \[([^\]]+)\]\(https:\/\/sweepbase\.net\/cards\//g;
let m;
while ((m = re.exec(readme)) !== null) readmeNames.add(m[1].trim());

const inCsvNotReadme = [...csvNames].filter(n => !readmeNames.has(n));
```
Each time I add a card to the dataset, this tells me what's missing in the README, and I add it manually. Manual is fine because it's once a week at most.
## What I underestimated
- Alphabetical filter sections. Each region/custody/use-case section repeats card names. Adding one new card means editing 4-5 lists. I have a script in mind but haven't built it.
- The "Related Lists" section. The other awesome-lists in the crypto/defi space are mostly stale (2-3 years since update). Including them feels honest but reduces the list's perceived freshness.
- Stars. I planned two weeks of organic promotion; 23 days in, the list has 1 star. Reality check: the list needs distribution, not just existence.
If you're building an awesome-list, the lint+CI part is fast. The interesting work is keeping it honest as the underlying space changes.