In 1885, Hermann Ebbinghaus sat in a room and memorized nonsense syllables. Then he measured how quickly he forgot them. The result — the forgetting curve — showed that memory decays exponentially. You lose the most in the first few days, then the decline slows.
140 years later, I used that same curve to build the core algorithm behind SkillFade, a skill tracking app. Here's exactly how it works, why I made the choices I did, and what I'd change.
## The Problem: Quantifying "Rustiness"
Every developer knows the feeling. You spent two weeks deep in Docker, then three months pass, and suddenly docker-compose feels like a foreign language. You know the skill decayed, but by how much? When did it start? How does it compare to your Python knowledge, which you use daily?
I needed a single number — a freshness score from 0 to 100 — that answers: "How sharp is this skill right now?"
The constraints:
- Must decay automatically over time without user input
- Practice should reset the clock; learning should only slow the decay
- Different skills should decay at different rates
- The math should be simple enough to explain in a sentence
## The Core: Exponential Decay
Linear decay (lose 1% per day) doesn't match reality. You don't forget at a constant rate — you forget fast at first, then slower. Exponential decay captures this:
```
freshness = 100 * (retention_rate ^ days_since_last_practice)
```
With a default retention rate of 0.98 (2% daily decay):
| Days Without Practice | Freshness |
|---|---|
| 0 | 100% |
| 7 | 87% |
| 14 | 75% |
| 30 | 55% |
| 60 | 30% |
| 90 | 16% |
| 120 | 9% |
This feels right. After a week, you're still sharp. After a month, things are fuzzy. After three months, you're starting over.
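The table above is just the one-line formula evaluated at a few points. A quick sketch to reproduce it, using the article's default 0.98 retention rate:

```python
RETENTION = 0.98  # default: 2% daily decay

def freshness_after(days: int) -> int:
    # Evaluate freshness = 100 * retention^days, rounded to a whole percent
    return round(100 * RETENTION ** days)

table = {d: freshness_after(d) for d in (0, 7, 14, 30, 60, 90, 120)}
# {0: 100, 7: 87, 14: 75, 30: 55, 60: 30, 90: 16, 120: 9}
```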
## Why Practice Resets, But Learning Doesn't
This was the most important design decision in the algorithm.
When you practice a skill (build a project, solve exercises, write code at work), you're actively retrieving knowledge from memory. Cognitive science calls this the testing effect — retrieval strengthens memory far more than re-exposure.
When you learn (read an article, watch a tutorial, go through documentation), you're consuming information. It helps, but it doesn't create the same neural pathways as active recall.
So in the algorithm:
- Practice resets the decay clock. Your freshness restarts from 100%.
- Learning adds a small boost. Each recent learning event adds 2% freshness, capped at 15%.
This means you can read 10 articles about Kubernetes and your freshness might go from 40% to 55%. But one afternoon actually deploying a cluster? Back to 100%.
## The Implementation
Here's the actual Python function, simplified from the production code:
```python
from datetime import date, timedelta


def calculate_freshness(
    practice_dates: list[date],
    learning_dates: list[date],
    skill_created: date,
    decay_rate: float = 0.02,
) -> float:
    today = date.today()

    # Find the anchor point: last practice, or skill creation
    if practice_dates:
        last_practice = max(practice_dates)
    else:
        last_practice = skill_created

    # Days since last practice (future-dated events count as today)
    days_elapsed = max(0, (today - last_practice).days)

    # Base decay: exponential
    retention_rate = 1.0 - decay_rate
    freshness = 100.0 * (retention_rate ** days_elapsed)

    # Learning boost: recent learning slows the fade
    thirty_days_ago = today - timedelta(days=30)
    recent_learning = sum(
        1 for d in learning_dates if d >= thirty_days_ago
    )
    learning_boost = min(recent_learning * 2, 15)
    freshness += learning_boost

    # Clamp to 0-100
    return max(0.0, min(100.0, freshness))
```
That's it. ~30 lines of actual logic. No machine learning. No complex statistical models. Just exponential decay with a learning modifier.
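As a sanity check, here's the same logic walked through inline on hypothetical sample data (last practice 14 days ago, two learning events in the past month), without calling the function:

```python
from datetime import date, timedelta

today = date.today()
practice_dates = [today - timedelta(days=45), today - timedelta(days=14)]
learning_dates = [today - timedelta(days=20), today - timedelta(days=5)]

last_practice = max(practice_dates)          # the 14-day-old event wins
days_elapsed = (today - last_practice).days  # 14
base = 100.0 * (0.98 ** days_elapsed)        # ~75.4

window = today - timedelta(days=30)
recent = sum(1 for d in learning_dates if d >= window)  # both events count
freshness = min(100.0, base + min(recent * 2, 15))      # ~79.4
```

Two learning events add 4 points on top of the decayed base; nowhere near a reset.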
## Making Decay Rates Configurable
Not all skills decay at the same speed. Spoken languages fade fast without immersion. Bicycle-riding-type motor skills barely fade at all. Programming languages sit somewhere in between.
I added per-skill decay rates with sensible presets:
```python
DECAY_PRESETS = {
    "slow": 0.01,       # 1% per day — motor skills, deep expertise
    "default": 0.02,    # 2% per day — most technical skills
    "fast": 0.03,       # 3% per day — languages, frameworks with churn
    "very_fast": 0.05,  # 5% per day — memorization-heavy, trivia
}
```
The difference is dramatic over 30 days:
| Decay Rate | Freshness at Day 30 |
|---|---|
| 1% (slow) | 74% |
| 2% (default) | 55% |
| 3% (fast) | 40% |
| 5% (very fast) | 21% |
Users can set custom rates per skill. Someone maintaining five programming languages might set Go (used daily) to slow decay and Rust (learning on weekends) to fast decay.
## The Anchor Point Problem
One subtle decision: what happens when a skill has never been practiced?
Option A: Start at 0% freshness. But that feels wrong — you just created the skill, you presumably know something about it.
Option B: Start at 100% and decay from the creation date. This is what I chose. When you add "Python" to your tracker, you start fresh. The clock begins ticking immediately. If you don't log a practice event, your freshness will naturally decline.
This creates a useful pressure: adding a skill is a commitment. You're saying "I want to track this," and the app immediately starts showing you the decay. It's honest from day one.
```python
# The anchor is either last practice or skill creation
if practice_dates:
    last_practice = max(practice_dates)
else:
    last_practice = skill_created  # Decay starts at creation
```
## Why I Capped the Learning Boost at 15%
Early versions had no cap on the learning boost. The problem: a user could watch 20 YouTube tutorials in a week and see their freshness jump from 30% to 70% without ever touching a keyboard.
That defeats the entire purpose. The app is supposed to show you that consumption isn't retention.
The 15% cap means learning can slow the bleeding, but it can't substitute for practice. The numbers:
- Base freshness (60 days, no practice): 30%
- +1 learning event in the last 30 days: 32%
- +3 learning events: 36%
- +5 learning events: 40%
- +8 or more learning events: 45% (capped)
You can read your way from 30% to 45%, but to get back above 70%, you need to actually practice.
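The ladder above is reproducible directly from the formula: base decay at 60 days plus a capped boost of 2 points per event.

```python
base = 100 * 0.98 ** 60  # ~29.8, displayed as 30%

def boosted(events: int) -> int:
    # Base freshness plus the learning boost, capped at 15 points
    return round(base + min(events * 2, 15))

results = [boosted(n) for n in (0, 1, 3, 5, 8)]
# [30, 32, 36, 40, 45]; 8 events hits the 15-point cap
```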
## Freshness History: Tracking the Curve Over Time
A single freshness number is useful, but the trend is more interesting. I built a history endpoint that reconstructs what the freshness would have been on each day over the past N days:
```python
def calculate_freshness_history(
    practice_dates: list[date],
    learning_dates: list[date],
    skill_created: date,
    days: int = 90,
    decay_rate: float = 0.02,
) -> list[dict]:
    history = []
    today = date.today()

    for day_offset in range(days, -1, -1):
        target_date = today - timedelta(days=day_offset)

        # Only consider events before or on this date
        practices_before = [d for d in practice_dates if d <= target_date]
        learnings_before = [d for d in learning_dates if d <= target_date]

        # Calculate what freshness was on that day
        if practices_before:
            anchor = max(practices_before)
        else:
            anchor = skill_created
        days_elapsed = (target_date - anchor).days

        retention = 1.0 - decay_rate
        freshness = 100.0 * (retention ** max(0, days_elapsed))

        # Learning boost (events within 30 days of target_date)
        window_start = target_date - timedelta(days=30)
        recent = sum(1 for d in learnings_before if d >= window_start)
        freshness += min(recent * 2, 15)

        freshness = max(0.0, min(100.0, freshness))
        history.append({
            "date": target_date.isoformat(),
            "freshness": round(freshness, 1),
        })

    return history
```
This produces a line chart where you can see the exponential decay curves, the sharp jumps when you practice, and the small bumps when you learn. It tells a story that a single number can't.
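A minimal sketch of what the reconstruction produces around a practice event (learning boost ignored, dates hypothetical): the day before the practice shows the accumulated decay, and the practice day snaps back to 100.

```python
from datetime import date, timedelta

today = date.today()
created = today - timedelta(days=10)    # skill added 10 days ago
practiced = today - timedelta(days=1)   # single practice event yesterday

def freshness_on(target: date) -> float:
    # Anchor on the last practice at or before target, else creation
    anchor = practiced if practiced <= target else created
    days = max(0, (target - anchor).days)
    return round(100.0 * 0.98 ** days, 1)

day_before = freshness_on(today - timedelta(days=2))    # 8 days of decay
practice_day = freshness_on(today - timedelta(days=1))  # resets to 100.0
```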
## Edge Cases I Hit in Production
**1. Future-dated events**
Users can log events in the past (backdating), but what about the future? A practice event dated tomorrow would make days_elapsed negative, pushing freshness above 100%.
Fix: max(0, days_elapsed) — treat future events as "today."
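A tiny illustration of the clamp, assuming a practice event accidentally dated tomorrow:

```python
from datetime import date, timedelta

today = date.today()
last_practice = today + timedelta(days=1)   # future-dated by mistake

raw_days = (today - last_practice).days     # -1
days_elapsed = max(0, raw_days)             # clamped to 0
freshness = 100.0 * (0.98 ** days_elapsed)  # stays at 100.0, not above
```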
**2. Same-day multiple events**
If you practice and learn on the same day, the order shouldn't matter. Since we only look at dates (not timestamps), this works naturally.
**3. Zero learning, zero practice**
A skill with no events at all just decays from 100% starting at creation. No division-by-zero, no special cases needed.
**4. Very old skills**
A skill created 365 days ago with no practice: 100 * 0.98^365 ≈ 0.06%. Effectively zero. Display rounding handles this — the UI shows 0%, not a tiny decimal.
## What I Considered But Didn't Build
Spaced repetition scheduling. Systems like Anki calculate optimal review intervals. I explicitly avoided this because SkillFade is a mirror, not a coach. It shows decay; it doesn't prescribe when to practice.
Weighted event types. Should a 3-hour project count more than a 15-minute exercise? Probably. But adding weights adds complexity and subjective tuning. I kept it simple: one practice event = one clock reset, regardless of duration.
Collaborative decay. In teams, you could model shared skill decay — if nobody on the team has touched Terraform in 60 days, flag it. Interesting, but it would require social features, which violates the design principles.
ML-based personal decay curves. With enough data, you could learn each user's actual forgetting rate per skill. But this violates the "no AI/ML" constraint, and honestly, the exponential model works well enough.
## The Takeaway for Your Own Projects
If you need to model something that degrades over time — cache staleness, content freshness, user engagement risk — exponential decay is almost always the right starting point:
```
value = initial * (retention_rate ** time_elapsed)
```
Three parameters. One line of math. Adjust the retention rate to match your domain.
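For example, the same pattern applied to a made-up cache-staleness score (names and rates are illustrative, not from SkillFade):

```python
def decayed(initial: float, retention_rate: float, time_elapsed: float) -> float:
    # Generic exponential decay: tune retention_rate to your domain
    return initial * retention_rate ** time_elapsed

# A cached value that loses 10% confidence per hour
confidence = decayed(1.0, 0.90, 10)  # after 10 hours: ~0.349
```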
The hard part isn't the algorithm. It's deciding what resets the clock, what slows the decay, and what you refuse to let cheat the system. Those are product decisions disguised as math.
You can see this algorithm in action at skillfade.website. Create an account, add a skill, and watch the decay begin.
Because the first step to retaining knowledge is measuring how fast it disappears.
Built with Python, FastAPI, React, and PostgreSQL. The algorithm runs on every page load — no caching, no background jobs, just real-time math.