We repeatedly encounter "unprecedented" events that, on inspection, are anything but unprecedented. The problem is rarely lack of information - it's lack of structure.
Social platforms are chaotic and ephemeral, optimized for reaction rather than continuity. Media covers isolated incidents. Discussions fragment across threads. Anyone trying to track recurring events - political shifts, service outages, regulatory cycles - ends up maintaining spreadsheets or personal notes that go nowhere.
It Happened Again (IHA) is my attempt to solve that problem structurally. It's a platform for tracking recurring events through source-backed timelines rather than discussion threads - built from scratch as a solo project.
From Incidents to Patterns
The core problem is what I call pattern blindness. We observe events, but we rarely preserve their recurrence in a structured, referenceable form.
IHA is built around two domain primitives:
- Record - the pattern being tracked (e.g., "Bitcoin Crashes").
- Occurrence - a dated, source-backed instance of that pattern (e.g., "The 2022 Crash").
This separation drives the entire design. A Record is not a post. An Occurrence is not a comment. Each has its own lifecycle, ownership rules, verification logic, and visibility constraints.
The timeline is not a UI enhancement - it's the primary interface. Chronology is how patterns actually become visible.
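A minimal sketch of the two primitives and the timeline view makes the separation concrete. The field names here are illustrative assumptions, not IHA's actual schema, and the Record primitive is renamed to avoid clashing with TypeScript's built-in `Record` utility type:

```typescript
// Illustrative shapes for the two domain primitives - not IHA's real schema.
// "PatternRecord" stands in for the Record primitive (TS already has a
// built-in Record<K, V> type).
interface PatternRecord {
  id: string;        // UUID v7 primary key
  slug: string;      // short public identifier used in URLs
  title: string;     // e.g. "Bitcoin Crashes"
  ownerId: string;
}

interface Occurrence {
  id: string;
  recordId: string;      // every Occurrence belongs to exactly one Record
  title: string;         // e.g. "The 2022 Crash"
  occurredAt: Date;      // the dated instance - drives timeline ordering
  sourceUrls: string[];  // source-backed: at least one reference
  hiddenAt: Date | null; // visibility via timestamp, not hard delete
}

// A timeline is just the visible Occurrences in chronological order.
function timeline(occurrences: Occurrence[]): Occurrence[] {
  return [...occurrences]
    .filter((o) => o.hiddenAt === null)
    .sort((a, b) => a.occurredAt.getTime() - b.occurredAt.getTime());
}
```

Because chronology is the primary interface, the ordering logic belongs to the domain, not to a rendering component.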
Architecture
The system runs on Next.js 15 (App Router), TypeScript (strict mode), PostgreSQL with Drizzle ORM, and Redis + BullMQ for background processing.
The key architectural decision was enforcing a strict service layer. Route handlers stay thin - validate input, delegate to a service, return a response. Business logic lives in services like RecordService and ModerationService, and database access goes through providers so transaction handling never leaks into routing code.
This might seem like overhead for a solo project, but it paid off quickly by solving two problems that tend to creep in otherwise:
- Circular dependencies between domain areas.
- Permission logic scattered across the UI layer.
With authorization and business rules centralized in services, I could add features like reputation scoring and moderation workflows without destabilizing unrelated parts of the codebase.
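The layering can be sketched roughly as follows. All names here (`OccurrenceProvider`, the validation rule, the method signatures) are assumptions for illustration, not IHA's actual code:

```typescript
// Sketch of the strict service layer: routes validate and delegate,
// services own the rules, providers are the only layer touching the DB.

interface NewOccurrence {
  recordId: string;
  title: string;
  occurredAt: Date;
  sourceUrls: string[];
}

// Provider: database access lives behind this boundary.
interface OccurrenceProvider {
  insert(data: NewOccurrence & { createdBy: string }): Promise<{ id: string }>;
}

// Service: business rules and authorization live here, not in routes or UI.
class RecordService {
  constructor(private occurrences: OccurrenceProvider) {}

  async addOccurrence(userId: string, input: NewOccurrence) {
    if (input.sourceUrls.length === 0) {
      throw new Error("An Occurrence must be source-backed");
    }
    return this.occurrences.insert({ ...input, createdBy: userId });
  }
}
```

A route handler then reduces to parsing the request body, calling `recordService.addOccurrence`, and mapping the result (or error) to a response - which is what keeps permission logic out of the UI layer.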
Dual-Identifier Strategy
Every entity carries two identifiers - each serving a different purpose.
UUID v7 serves as the primary key: globally unique and time-ordered, well-suited for indexing. For URLs, I use short NanoIDs, giving clean paths like /rec/bitcoin-crash/occ/V1StGXR8 instead of opaque hex strings.
This keeps URLs readable and shareable while avoiding unnecessary exposure of internal identifiers. A small decision with long-term payoff.
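The split looks roughly like this. The real project would likely use the `nanoid` package; the stand-in generator below just shows the idea (and its modulo step introduces a slight character bias that a production generator avoids):

```typescript
import { randomBytes } from "node:crypto";

// NanoID-style short ID from a URL-safe alphabet. Illustrative stand-in
// for the nanoid package; the modulo mapping is slightly biased.
const ALPHABET =
  "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

function shortId(size = 8): string {
  const bytes = randomBytes(size);
  let id = "";
  for (let i = 0; i < size; i++) id += ALPHABET[bytes[i] % ALPHABET.length];
  return id;
}

// Public URLs are built from short IDs; the UUID v7 primary key
// stays internal and never appears in a path.
function occurrencePath(recordSlug: string, occurrenceShortId: string): string {
  return `/rec/${recordSlug}/occ/${occurrenceShortId}`;
}
```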
Moderation as Infrastructure
Most platforms treat moderation as an admin overlay. I wanted it to be part of the core architecture from the start.
IHA distinguishes between global and local moderators (scoped to specific Records). Authorization is enforced at the service layer - not conditionally hidden in the UI. All moderation actions are logged through an audit mechanism, and visibility changes use timestamped fields rather than hard deletes, keeping actions reversible and traceable.
Enforcing scope at the service level means moderation logic can't be bypassed through inconsistent entry points. Trust is hard to retrofit, so I made it foundational.
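A rough sketch of scope enforcement at the service boundary - role shapes, method names, and the audit entry format are assumptions, not IHA's actual model:

```typescript
// Global moderators act anywhere; local moderators are scoped to one Record.
type ModeratorRole =
  | { kind: "global" }
  | { kind: "local"; recordId: string };

interface Moderator {
  userId: string;
  roles: ModeratorRole[];
}

function canModerate(mod: Moderator, recordId: string): boolean {
  return mod.roles.some(
    (r) => r.kind === "global" || (r.kind === "local" && r.recordId === recordId)
  );
}

interface AuditEntry { actorId: string; action: string; targetId: string; at: Date; }

class ModerationService {
  private auditLog: AuditEntry[] = [];

  // Visibility change via timestamp: reversible (clear hiddenAt to restore),
  // and every action leaves an audit trail.
  hideOccurrence(mod: Moderator, recordId: string, occ: { id: string; hiddenAt: Date | null }) {
    if (!canModerate(mod, recordId)) throw new Error("Not authorized for this Record");
    occ.hiddenAt = new Date();
    this.auditLog.push({ actorId: mod.userId, action: "hide_occurrence", targetId: occ.id, at: new Date() });
  }
}
```

Because the check lives inside the service method, every entry point - API route, admin panel, background job - goes through the same authorization path.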
Background Processing and Reliability
Several processes run asynchronously - email delivery, badge recalculation, search indexing, visit aggregation - all through BullMQ workers.
Jobs are idempotent with retry policies and exponential backoff. Critical tasks use deterministic job keys to prevent duplicates. Search operations are wrapped with fallbacks so degraded dependencies fail gracefully instead of cascading.
Reputation scoring runs inside transactions to avoid race conditions and gaming exploits. Badge rarity stats are cached and periodically recomputed, balancing performance with accuracy.
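The deduplication idea can be sketched generically: a deterministic job key means enqueueing the same logical task twice yields one job. With BullMQ this maps to the `jobId` option on `queue.add`; the in-memory stand-in below (not real BullMQ code) just illustrates the pattern, along with the exponential-backoff delay formula:

```typescript
interface Job { key: string; name: string; payload: unknown; attempts: number; }

// In-memory stand-in for a queue with deterministic job keys.
class DedupQueue {
  private jobs = new Map<string, Job>();

  // e.g. key = `badge-recalc:${userId}` - same user, same job, no duplicate
  enqueue(name: string, key: string, payload: unknown): boolean {
    if (this.jobs.has(key)) return false; // duplicate suppressed
    this.jobs.set(key, { key, name, payload, attempts: 0 });
    return true;
  }

  size(): number { return this.jobs.size; }
}

// Exponential backoff: retry N waits base * 2^(N-1) milliseconds,
// the same shape BullMQ derives from backoff: { type: "exponential", delay }.
function backoffDelayMs(attempt: number, baseMs = 1000): number {
  return baseMs * 2 ** (attempt - 1);
}
```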
Not the most visible features, but they're what separates a prototype from something that actually holds up in production.
Data Sovereignty and GDPR
Infrastructure runs on EU-hosted servers (Hetzner) behind Cloudflare. More importantly, privacy requirements shaped the data model from the beginning rather than being added later.
Soft-deletion has configurable time windows. Audit logs have retention policies. Analytics are consent-based. Account deletion distinguishes between voluntary anonymization and GDPR erasure.
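The anonymization/erasure distinction can be sketched as follows - field names and the retention mechanics are illustrative assumptions, not the production model:

```typescript
interface Account {
  id: string;
  email: string | null;
  displayName: string;
  deletedAt: Date | null;
}

// Voluntary deletion: contributions survive, identity is scrubbed.
function anonymize(acc: Account): Account {
  return { ...acc, email: null, displayName: "Deleted user", deletedAt: new Date() };
}

// GDPR erasure: compute when the record must be hard-purged,
// given a configurable retention window in days.
function erasureDeadline(requestedAt: Date, retentionDays: number): Date {
  return new Date(requestedAt.getTime() + retentionDays * 86_400_000);
}
```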
Aligning legal constraints with the technical design early avoids the painful reality of retrofitting compliance into an already fragile system.
Deliberate Tradeoffs
I consciously avoided:
- BaaS - I wanted full control over transactions and data flow.
- Microservices - the domain is tightly coupled; splitting it would add complexity without real benefit.
- Event sourcing - no clear need for replay mechanisms, so it would be unnecessary overhead.
I'd rather have a monolith I can reason about than a distributed system that fights me. The codebase stays straightforward to navigate and extend.
Closing Thoughts
Building IHA meant working across the entire stack - schema design, background workers, moderation workflows, deployment - and keeping it all coherent as a solo developer. It's been a rewarding exercise in domain-first modeling, operational reliability, and making architectural decisions that hold up over time.
The platform is live at ithappenedagain.fyi.