FeedLog turns GitHub issues into publish-ready changelog entries without leaving your repo. You drop @feedlog publish in an issue comment, an AI draft appears for review, you approve it, and it shows up in your public changelog. Simple concept — but shipping it involved a handful of deliberate architecture decisions I want to write down while they're fresh.
## Three repos, one product
The codebase lives across three repos:
- `feedlog-api` (private) — the Node backend: webhooks, AI processing, the public API your customers call.
- `feedlog-app` (private) — the web dashboard: OAuth, settings, changelog management.
- `feedlog-toolkit` (public, MIT) — the embeddable SDK that customers drop into their own site to render the changelog widget.
Splitting into three was a deliberate choice. The toolkit is the only piece customers integrate directly, so it makes sense for it to be public, independently versioned (we use Changesets), and separately releasable without touching internal code. The API and app ship independently too — a frontend deploy doesn't force an API restart, and vice versa.
The toolkit is a Stencil-based monorepo that outputs true web components plus auto-generated React and Vue wrappers. One component source, three framework targets.
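On the consumer side, integration is meant to be a drop-in. A hypothetical snippet using the React wrapper (the package export path, component name, and props here are assumptions for illustration, not the toolkit's documented API):

```tsx
// Hypothetical usage of the auto-generated React wrapper.
// Package path, component name, and props are illustrative assumptions.
import { FeedlogChangelog } from 'feedlog-toolkit/react';

export function ChangelogPage() {
  // The same component also ships as a plain web component and a Vue wrapper.
  return <FeedlogChangelog project="your-public-project-id" />;
}
```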
## The API stack
The API is Node with Fastify as the HTTP framework. Fastify's plugin system and built-in schema validation are a good fit for a small team: we use `fastify-type-provider-zod` so every route is typed end-to-end from the Zod schema to the handler — no separate OpenAPI spec to keep in sync.
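To make "typed end-to-end" concrete, here is a minimal sketch of the pattern; the route, schema fields, and response shape are illustrative, not FeedLog's actual API:

```ts
import Fastify from 'fastify';
import { z } from 'zod';
import {
  serializerCompiler,
  validatorCompiler,
  type ZodTypeProvider,
} from 'fastify-type-provider-zod';

const app = Fastify().withTypeProvider<ZodTypeProvider>();
app.setValidatorCompiler(validatorCompiler);
app.setSerializerCompiler(serializerCompiler);

app.get('/entries/:publicId', {
  schema: {
    params: z.object({ publicId: z.string() }),
    response: { 200: z.object({ publicId: z.string(), title: z.string() }) },
  },
}, async (req) => {
  // req.params is inferred as { publicId: string } straight from the schema,
  // and the return value is validated against the 200 response schema.
  return { publicId: req.params.publicId, title: 'Example entry' };
});
```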
For the database: Drizzle ORM on top of Neon Postgres. Neon gives us serverless Postgres with branching, which is useful for previewing migrations. Drizzle keeps the schema in TypeScript and generates SQL migrations via Drizzle Kit. We run migrations as a separate `tsx scripts/migrate.ts` step, not at startup.
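The migration step itself is a small script. A sketch of what it might look like with Drizzle's Neon HTTP driver (the folder path and env var name are assumptions):

```ts
// scripts/migrate.ts — run with `tsx scripts/migrate.ts`
import { neon } from '@neondatabase/serverless';
import { drizzle } from 'drizzle-orm/neon-http';
import { migrate } from 'drizzle-orm/neon-http/migrator';

const sql = neon(process.env.DATABASE_URL!);
const db = drizzle(sql);

// Applies any SQL migrations Drizzle Kit has generated but not yet run.
await migrate(db, { migrationsFolder: './drizzle' });
console.log('migrations applied');
```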
Beyond the request/response path:
- BullMQ + Redis handles async work — GitHub webhook events get queued immediately and processed by a separate worker, so the webhook endpoint always returns fast. The AI draft generation also runs through the queue (see the sketch after this list).
- Croner runs scheduled tasks in-process: a webhook recovery job that redelivers failed GitHub hook payloads every 15 minutes, plus Sentry cron monitor heartbeats for all three processes (API, events worker, external worker).
- `opossum` wraps the Postgres pool as a circuit breaker so a DB hiccup degrades gracefully instead of cascading into timeouts across all requests.
- Per-API-key rate limiting is stored in Redis via `@fastify/rate-limit`, so limits survive restarts and work across multiple instances.
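Here is a sketch of the webhook-to-queue handoff from the first bullet, reusing the `app` instance from the Fastify sketch above; the queue name and payload handling are illustrative:

```ts
import { Queue, Worker } from 'bullmq';

const connection = { host: 'localhost', port: 6379 };
const events = new Queue('github-events', { connection });

// Webhook endpoint: enqueue and acknowledge immediately.
app.post('/webhooks/github', async (req, reply) => {
  await events.add(String(req.headers['x-github-event']), req.body);
  return reply.code(202).send();
});

// Separate worker process: does the slow work, including AI draft generation.
new Worker('github-events', async (job) => {
  console.log(`processing ${job.name}`, job.id);
  // ...handle the payload
}, { connection });
```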
## The app stack
The dashboard is TanStack Start (React SSR) with TanStack Router and TanStack Query for data fetching. UI is Tailwind CSS v4 with Radix UI primitives following the shadcn pattern. It deploys to Cloudflare Workers via Wrangler — edge-deployed SSR with no cold start tax.
## DB design decisions
This is the part I spent the most time thinking through, and all three decisions have held up well.
### UUIDv7 as the primary key
Every table uses UUIDv7 as its primary key, generated by a Postgres extension (`uuidv7()` as the column default). UUIDv7 is time-ordered and monotonically increasing, which means:
- New rows always insert at the right edge of the B-tree index — no random page splits scattering writes across the index, no fragmentation.
- The UUID itself encodes the creation timestamp, so we don't need a separate `created_at` column on every table.
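In the Drizzle schema the whole pattern is a single column default. A sketch, assuming the extension exposes `uuidv7()` as described; the table itself is illustrative:

```ts
import { sql } from 'drizzle-orm';
import { pgTable, uuid, text } from 'drizzle-orm/pg-core';

export const issues = pgTable('issues', {
  // Time-ordered primary key, generated in Postgres by the uuidv7 extension.
  id: uuid('id').primaryKey().default(sql`uuidv7()`),
  title: text('title').notNull(),
  // Note: no created_at column; the id already encodes it.
});
```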
The one real downside: the Neon console and Drizzle Studio just show the UUID as a UUID; they don't decode it into a human-readable timestamp. It's a small operational annoyance: when you're scanning rows manually, you can't immediately see when a record was created. We handle this with an `extractCreatedAtFromUuid7` SQL helper that we call when we need the timestamp in a query.
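The decoding itself is trivial: per RFC 9562, the first 48 bits of a UUIDv7 are milliseconds since the Unix epoch. A hypothetical TypeScript equivalent of that SQL helper:

```ts
// Hypothetical TypeScript equivalent of the SQL helper: the first 48 bits
// (6 bytes) of a UUIDv7 are the Unix timestamp in milliseconds.
function extractCreatedAtFromUuid7(id: string): Date {
  const millisHex = id.replaceAll('-', '').slice(0, 12); // first 6 bytes
  return new Date(parseInt(millisHex, 16));
}
// Pass any UUIDv7 primary key to recover its row's creation time.
```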
### Prefixed public IDs
Internal primary keys are UUIDs and never leave the system. Every table that gets exposed through the API also has a `public_id` column: a short, URL-safe string with a meaningful prefix.
```
usr_a3b7kx9m2p1z   ← user
ins_q8tnrfw4j6yd   ← installation
rep_c2mh5vp0xk3a   ← repository
iss_e9rz1db7yt4n   ← issue
pk_lw6gc8nu0fqj    ← API key
```
The IDs are `prefix_` + 12 characters of base36 nanoid (`customAlphabet('0123456789abcdefghijklmnopqrstuvwxyz', 12)`). The prefix serves as an immediate type hint when you see an ID in a log, a support ticket, or a URL — you know instantly what kind of entity you're dealing with. Stripe popularized this pattern for good reason.
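Generation is a few lines on top of nanoid's `customAlphabet`. A sketch; the prefix map and function name are illustrative:

```ts
import { customAlphabet } from 'nanoid';

// Base36 alphabet, 12 characters, as described above.
const nanoid = customAlphabet('0123456789abcdefghijklmnopqrstuvwxyz', 12);

// Illustrative prefix map; extend with one entry per exposed table.
const PREFIXES = {
  user: 'usr',
  installation: 'ins',
  repository: 'rep',
  issue: 'iss',
  apiKey: 'pk',
} as const;

export function newPublicId(kind: keyof typeof PREFIXES): string {
  return `${PREFIXES[kind]}_${nanoid()}`;
}

// newPublicId('user') → "usr_a3b7kx9m2p1z"-style IDs
```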
### Soft deletes on every table
All tables have a `deleted_at` timestamp column, and every delete — no matter how trivial — goes through a soft delete. Even rows that could safely be nuked immediately get `deleted_at` set instead of being removed.
The pros:
- **Accidental recovery.** When something goes wrong in production and a record gets deleted that shouldn't have been, you can restore it with an `UPDATE`. No backup restore, no data archaeology.
- **Audit trail.** You can always see what existed and when it was removed.
- **Undo flows are free.** Upvotes are a good example: when a user un-upvotes something, we set `deleted_at`. When they re-upvote, we set `deleted_at = null`. The code for "toggle" is trivial — no insert/delete cycle, just a field flip (see the sketch after this list).
- **Safer debugging.** In production you can query soft-deleted rows alongside live ones to understand what happened, without racing against the data already being gone.
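A sketch of that toggle in Drizzle terms; the `upvotes` table and column names are illustrative, not FeedLog's actual schema:

```ts
import { and, eq, sql } from 'drizzle-orm';
import { pgTable, uuid, timestamp } from 'drizzle-orm/pg-core';
import { drizzle } from 'drizzle-orm/neon-http';
import { neon } from '@neondatabase/serverless';

const upvotes = pgTable('upvotes', {
  id: uuid('id').primaryKey().default(sql`uuidv7()`),
  userId: uuid('user_id').notNull(),
  entryId: uuid('entry_id').notNull(),
  deletedAt: timestamp('deleted_at'),
});

const db = drizzle(neon(process.env.DATABASE_URL!));

export async function toggleUpvote(userId: string, entryId: string) {
  const [existing] = await db.select().from(upvotes)
    .where(and(eq(upvotes.userId, userId), eq(upvotes.entryId, entryId)));

  if (!existing) {
    await db.insert(upvotes).values({ userId, entryId });
  } else {
    // The whole undo/redo is one field flip:
    // a timestamp means "un-upvoted", null means "live again".
    await db.update(upvotes)
      .set({ deletedAt: existing.deletedAt ? null : new Date() })
      .where(eq(upvotes.id, existing.id));
  }
}
```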
The obvious tradeoff is that tables accumulate soft-deleted rows over time. The plan for that: a per-table cleanup cron that runs periodically and hard-deletes rows where `deleted_at` is older than a configurable threshold. We already have `croner` running in-process and the infrastructure for scheduled work, so this is a straightforward addition — each table can configure its own retention window before a permanent delete runs.
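A sketch of that sweep with croner, reusing `db` and the `upvotes` table from the previous sketch; the schedule and retention window are assumptions:

```ts
import { Cron } from 'croner';
import { and, isNotNull, lt } from 'drizzle-orm';

const RETENTION_DAYS = 90; // hypothetical per-table window

// Nightly at 03:00: hard-delete rows soft-deleted past the retention window.
new Cron('0 3 * * *', async () => {
  const cutoff = new Date(Date.now() - RETENTION_DAYS * 86_400_000);
  await db.delete(upvotes)
    .where(and(isNotNull(upvotes.deletedAt), lt(upvotes.deletedAt, cutoff)));
});
```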
## What I'd do differently
Honestly, not much yet. The main thing I'd reconsider is whether `croner` running in-process in the API server is the right home for cleanup jobs long-term, or whether they should live in a separate scheduled job process. In-process is simpler to start with, but it means every API instance races to run the same cron, which requires a distributed lock. For now the jobs are idempotent enough that duplicate runs are harmless, but it's something to revisit as the system grows.