Last week I shared SkillFade, a skill tracking app that prioritizes honesty over gamification. Today I want to walk through the architectural decisions and technical choices that shaped it.
The Stack
Backend: FastAPI + SQLAlchemy 2.0 + PostgreSQL
Frontend: React 18 + TypeScript + TailwindCSS
Auth: JWT + bcrypt
Deploy: Docker + Nginx
Boring? Yes. That's the point. Every piece is battle-tested, well-documented, and easy to hire for.
Why FastAPI?
I'm primarily a backend engineer with Python experience. When choosing a framework, I wanted something that would let me move fast without fighting the tooling.
FastAPI checked every box:
- Familiar language - Python is my comfort zone, so I could focus on product logic instead of learning new syntax
- Type hints as documentation - Pydantic schemas generate OpenAPI docs automatically, no extra work
- Async when needed - But sync works perfectly fine for database-bound operations
- Great ecosystem - Need anything? There's probably a Python library for it
- Developer experience - Hot reload, clear error messages, intuitive routing
Could I have used Django? Sure. Flask? Also fine. But FastAPI hit the sweet spot between simplicity and features. For a solo developer shipping a product, that matters more than benchmarks.
The honest truth: I picked the tool I knew best so I could ship faster. No regrets.
Architecture: Monolith by Choice
No microservices. No message queues. No Kubernetes. A single FastAPI process handles everything.
This wasn't a compromise—it was a deliberate decision.
Why monolith?
- Simpler debugging - One log stream, one deployment, one place to look when things break
- Faster development - No service orchestration, no network calls between components, no distributed tracing
- Cheaper hosting - Everything runs on a modest VPS, no complex infrastructure costs
- Easier mental model - The entire application fits in my head
What about scale?
SkillFade is a personal productivity tool. Even with thousands of users, a single well-optimized server handles the load easily. PostgreSQL can manage millions of rows without breaking a sweat.
The microservices question I ask myself: "Do I have a team of 50 engineers who need to deploy independently?" No. So monolith it is.
I'll split it when I have the problem. Not before.
The Core Algorithm: Freshness Decay
The heart of SkillFade is the freshness calculation. Here's the thinking behind it.
The concept:
- Every skill starts at 100% freshness after you practice it
- Each day without practice, freshness decays exponentially
- Recent learning activity adds a small boost (max 15%)
- Result is clamped between 0-100%
The math (simplified):
Default decay rate is 2% per day, compounding: freshness after d days without practice is 100 × 0.98^d. This means:
| Days since practice | Freshness |
|---|---|
| 0 | 100% |
| 7 | ~87% |
| 14 | ~75% |
| 30 | ~54% |
| 60 | ~30% |
| 90 | ~16% |
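The decay-plus-boost rule above can be sketched as a small function. This is a minimal illustration, not SkillFade's actual code; the parameter names (`daily_decay`, `learning_boost`) are assumptions:

```python
def freshness(days_since_practice: int,
              daily_decay: float = 0.02,
              learning_boost: float = 0.0) -> float:
    """Exponential decay from 100%, plus a capped learning boost, clamped to 0-100."""
    base = 100.0 * (1.0 - daily_decay) ** days_since_practice
    boosted = base + min(learning_boost, 15.0)  # recent learning adds at most 15%
    return max(0.0, min(100.0, boosted))        # clamp to the 0-100% range

print(round(freshness(30)))   # → 55 (about half gone after a month untouched)
print(round(freshness(90)))   # → 16 (matches the table above)
```
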
Why exponential decay?
Linear decay (lose 1% per day) doesn't match how memory works. In reality, forgetting is steep at first, then levels off. The exponential curve better models the Ebbinghaus forgetting curve that psychologists have studied for over a century.
Why does learning only "boost" and not reset?
This is the key philosophical choice. Watching a tutorial about React doesn't mean you can build a React app. Reading about Kubernetes doesn't mean you can debug a cluster.
Learning slows decay. Only practice resets the clock.
This forces users to confront an uncomfortable truth: passive consumption isn't the same as skill building.
Custom decay rates
Not all skills fade equally:
- Motor skills (cycling, typing) - slow decay, muscle memory persists
- Conceptual knowledge (design patterns, architecture) - medium decay
- Syntax and tooling (specific APIs, CLI flags) - fast decay, details fade quickly
Users can adjust decay rate per skill to match reality.
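To see how much the per-skill rate matters, compare 30 days of decay at three hypothetical rates (the example rates below are illustrative, not SkillFade defaults):

```python
def freshness(days: int, daily_decay: float) -> float:
    return max(0.0, min(100.0, 100.0 * (1.0 - daily_decay) ** days))

# Hypothetical per-skill rates: slow for motor skills, fast for syntax/tooling
rates = {"cycling": 0.005, "design patterns": 0.02, "CLI flags": 0.05}
for skill, rate in rates.items():
    print(f"{skill}: {freshness(30, rate):.0f}% after 30 days")
# cycling keeps most of its freshness; CLI-flag knowledge drops to roughly a fifth
```
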
Database Design Decisions
The Schema Philosophy
Six core tables. No more than necessary, no fewer than useful.
users
categories
skills
learning_events
practice_events
skill_dependencies
Every table has clear ownership. Users own categories. Categories organize skills. Skills have events. Simple hierarchy, predictable queries.
Separate Tables for Learning vs Practice Events
I debated this one. A single events table with a type column would be simpler, right?
I went with separate tables because:
- Query patterns differ - Dashboard shows practice frequency, detail pages show both. Separate tables = simpler queries for common operations
- Different subtypes - Learning events have types like "video, article, course, book". Practice events have "project, exercise, teaching, code-review". One enum would be confusing
- Future flexibility - Practice events might get "difficulty" or "output_url" fields that don't make sense for learning
- Schema as documentation - Two tables makes the learning/practice distinction explicit in the data model itself
The tradeoff is some duplication in the codebase. Worth it for clarity.
Hard Deletes Over Soft Deletes
When a user clicks "Delete my account," everything goes. CASCADE deletes across all tables. No deleted_at columns, no recovery period, no "we keep your data for 30 days."
This aligns with the privacy philosophy. If someone wants out, they're out. Fully.
It also simplifies queries—no WHERE deleted_at IS NULL everywhere.
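The cascade behavior is easy to demonstrate with SQLite in memory (PostgreSQL behaves the same given `ON DELETE CASCADE`); the table names here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this opt-in; Postgres doesn't
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE skills (
    id INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    name TEXT NOT NULL)""")
conn.execute("INSERT INTO users (id) VALUES (1)")
conn.execute("INSERT INTO skills (user_id, name) VALUES (1, 'React')")

conn.execute("DELETE FROM users WHERE id = 1")  # "Delete my account"
remaining = conn.execute("SELECT COUNT(*) FROM skills").fetchone()[0]
print(remaining)  # → 0: no orphaned rows, no deleted_at flags to filter on
```
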
Categories as First-Class Entities
Early version had category as just a string field on skills. Worked fine until I wanted:
- Category-level analytics ("How fresh are my Frontend skills overall?")
- Rename a category everywhere at once
- Ensure no duplicate category names per user
Refactored to proper categories table with foreign keys. More work upfront, cleaner long-term.
Frontend Architecture
Why React + Vite (Not Next.js)
Next.js is great. I considered it. Went with plain React SPA instead.
Reasons:
- No SEO needed - It's a dashboard behind login. Google doesn't need to crawl it
- Simpler mental model - No SSR vs CSR decisions, no hydration bugs, no server components confusion
- Clear separation - API is API (FastAPI), frontend is frontend (React). Clean boundary
- Deployment flexibility - Static files served by Nginx, API proxied separately
For a marketing site or blog, I'd pick Next.js. For an authenticated dashboard app, SPA is simpler.
State Management: Context Over Redux
SkillFade uses React Context for global state. No Redux, no Zustand, no MobX.
What's in global state?
- Current user (authentication)
- Theme preference (dark/light)
- That's it
Everything else?
Fetched fresh per page. Server is the source of truth. When you navigate to the dashboard, it fetches current data. No stale cache problems, no sync issues.
Why not Redux?
Redux solves problems I don't have:
- Complex state interactions across many components? Nope, state is simple
- Time-travel debugging? Never needed it
- Middleware for side effects? fetch() works fine
For this app, Redux would add complexity without adding value. Context + useState + useEffect covers everything.
Component Structure
src/
components/ # Reusable UI pieces (Button, Card, Modal)
pages/ # Route-level components (Dashboard, SkillDetail)
context/ # Global state (AuthContext, ThemeContext)
services/ # API calls (skills.ts, events.ts)
hooks/ # Custom hooks (useSkills, useFreshness)
types/ # TypeScript interfaces
Nothing fancy. Pages fetch data on mount, pass to components via props. Components are mostly presentational.
Real-time Updates Without WebSockets
When you log a practice event on the skill detail page, the dashboard freshness should update.
Options considered:
- WebSockets - Real-time push from server
- Polling - Fetch every N seconds
- Browser events - Signal between components in same tab
Went with option 3. It's a SPA—dashboard and detail page components exist in the same JavaScript context. When detail page logs an event, it dispatches a custom event. Dashboard listens and refetches.
No socket server to maintain. No unnecessary network requests. Works perfectly for single-tab usage.
For multi-device sync (phone updates, desktop sees it), you'd need WebSockets or polling. Not implemented yet. Manual refresh works for now.
The Alert System: Calm by Design
Philosophy
Most apps optimize for engagement. More notifications = more opens = better metrics.
SkillFade optimizes for calm. Alerts should be:
- Infrequent - Maximum 1 email per week, total
- Actionable - Only alert if user can do something about it
- Respectful - Plain text, no tracking pixels, instant unsubscribe
Three Alert Types
1. Decay Alert
Triggers when a skill drops below 40% freshness. But not immediately—waits for 14 days since last alert for that skill. Prevents nagging about the same thing repeatedly.
2. Practice Gap Alert
Triggers when a skill has learning events but no practice events for 30+ days. The message: "You've been learning X but not practicing. Theory without application fades fast."
3. Imbalance Alert
Triggers when overall learning/practice ratio stays below 0.2 for two consecutive months. The message: "You're consuming a lot but producing little. Consider more hands-on work."
Implementation Notes
Alerts run via scheduled job (cron), not real-time triggers. Once daily, batch process checks all users who have alerts enabled.
Each alert type has independent cooldowns. You might get a decay alert and an imbalance alert in the same week, but never two decay alerts for the same skill within 14 days.
API Design Principles
RESTful, But Practical
Pure REST says everything should be a resource. I follow that mostly, but bend it when convenient:
- `GET /analytics/balance` - Not really a "resource," but makes sense as an endpoint
- `POST /auth/login` - "login" isn't a noun, don't care
Pragmatism over purity.
Flat URL Structure
GET /skills
GET /skills/:id
POST /skills
PUT /skills/:id
GET /learning-events
POST /learning-events
PUT /learning-events/:id
DELETE /learning-events/:id
Not nested like /skills/:id/learning-events/:eventId. Flat is simpler to route, simpler to reason about, simpler to document.
Events reference skills via skill_id in the body/params, not URL hierarchy.
Consistent Response Patterns
Success: Return the data directly (object or array)
Error: Always { "detail": "Human readable message" }
Pagination: Offset-based with ?skip=0&limit=20, returns { items: [], total: number }
Frontend code stays simple when API is predictable.
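The offset-based pagination shape reduces to one small function. A sketch with assumed names, shown over an in-memory list rather than a real query:

```python
def paginate(rows: list, skip: int = 0, limit: int = 20) -> dict:
    """Return the { items, total } envelope described above."""
    return {"items": rows[skip:skip + limit], "total": len(rows)}

page = paginate(list(range(45)), skip=40, limit=20)
print(page["total"])       # → 45 (total count, so the client can render page links)
print(len(page["items"]))  # → 5  (last partial page)
```
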
Authentication: Keep It Simple
JWT Tokens
- Access token in Authorization header
- Stored in localStorage (yes, I know about XSS concerns—acceptable for this use case)
- 30-minute expiration
- No refresh tokens—expired means re-login
For a personal productivity tool used by one person at a time, this is plenty secure. No need for refresh token rotation, token families, or OAuth complexity.
Password Handling
- bcrypt with default work factor
- 72-byte input limit (bcrypt truncates silently, so I handle it explicitly)
- No password rules beyond minimum length—let users choose their own passwords
Deployment
Docker Compose
Three containers:
- PostgreSQL - Database
- FastAPI - Backend API
- Nginx - Serves frontend static files + proxies API requests
Single docker-compose up brings everything online. Environment variables for secrets, volumes for data persistence.
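The three-container layout might be wired up like this; service names, ports, and paths are assumptions about the shape, not the actual file:

```yaml
# Illustrative docker-compose sketch — not SkillFade's real config
services:
  db:
    image: postgres:16
    env_file: .env
    volumes:
      - pgdata:/var/lib/postgresql/data   # data persistence
  api:
    build: ./backend
    env_file: .env                        # secrets via environment variables
    depends_on:
      - db
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./frontend/dist:/usr/share/nginx/html:ro  # built static frontend
    depends_on:
      - api                               # nginx proxies /api to this service
volumes:
  pgdata:
```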
Why Not Serverless?
Lambda/Vercel functions would work for the API. Didn't go that route because:
- Cold starts matter for dashboard responsiveness
- PostgreSQL connection pooling is annoying in serverless
- I wanted full control over the environment
- Monthly cost is predictable with a VPS
For a side project, a $10-20/month VPS beats managing serverless complexity.
Lessons Learned
1. Start Without Auth
First two weeks, I built with a hardcoded user ID. No login, no registration, no password reset flow. Just the core features.
This prevented the classic trap: spending weeks on auth while the actual product stays unbuilt.
Added auth last, once everything else worked.
2. Migrations From Day One
Even solo, even on a side project: use database migrations (Alembic for Python).
Schema will change. "I'll just modify the table directly" becomes a nightmare when you have production data. Migrations saved me multiple times.
3. TypeScript Everywhere
Frontend and backend types should match. I define Pydantic schemas in Python, then manually keep TypeScript interfaces in sync.
Not ideal (would love auto-generation), but catching type mismatches at compile time is worth the maintenance.
4. Boring Tech Compounds
Every time I picked the "boring" option (PostgreSQL over MongoDB, REST over GraphQL, React over Svelte), I benefited from:
- Better documentation
- More Stack Overflow answers
- Easier debugging
- Smoother hiring (if I ever need help)
Novel tech is fun. Shipped products need boring tech.
5. Design for Deletion
GDPR requires "right to deletion." Even if you're not in the EU, it's good practice.
Plan for cascade deletes from day one. No orphaned data, no "soft delete" flags that accumulate forever.
What I'd Do Differently
E2E Tests Earlier
I relied on manual testing too long. Click through the app, check if it works. Tedious and error-prone.
Should have set up Playwright or Cypress from week two.
Design Alert System First
Retrofitting notifications into an existing schema was messy. Alert cooldowns, user preferences, email templates—all added later, all awkward.
If I started over, I'd design the alert data model upfront.
Use a Component Library
Built every button, input, modal, and dropdown from scratch with Tailwind. Educational, but slow.
Next time: Radix UI or Headless UI for primitives, custom styling on top.
The Anti-Complexity Manifesto
Every feature request, every "nice to have," every shiny technology gets filtered through one question:
"Does the added complexity justify the value?"
- Microservices? No. One process is fine until it isn't.
- GraphQL? No. REST handles every use case here.
- Redis caching? No. PostgreSQL is fast enough.
- Real-time sync? No. Manual refresh works.
- Mobile app? No. PWA covers it.
- AI recommendations? No. That's not what this product is.
The best code is code you don't write. The best infrastructure is infrastructure you don't maintain. The best feature is the one that solves the problem without adding moving parts.
That's the technical foundation of SkillFade. The theme throughout: simple, boring, maintainable. Complexity is easy to add. Simplicity is hard to maintain.
If you're building a side project, my advice: pick boring tech, ship fast, add complexity only when forced.
Questions about specific decisions? Drop them in the comments.