When we started working on AI for learning, we made a mistake that now feels obvious.
We tried to build features.
Smarter summaries. Faster answers. Better prompts.
They all worked—individually. But none of them solved the actual problem students were facing.
The real issue wasn’t access to information.
It wasn’t even explanation quality.
It was cognitive load.
The Problem We Misdiagnosed at First
Students today are surrounded by high-quality material:
• lectures
• PDFs
• videos
• notes
• assignments
• revision guides
Most tools assume the problem is content delivery. But students are not failing because they can’t get information—they’re failing because they can’t integrate, retain, and revisit it over time.
What we saw repeatedly was this:
• Students understood concepts during study sessions
• They performed well in the short term
• They forgot most of it weeks later
The tools weren’t broken individually.
The system was missing.
Why “Smarter Tools” Still Failed
Most study tools—AI or not—share the same structural limitations:
1. They are stateless
Each session starts from scratch. The system doesn’t know:
• what the student struggled with last week
• what they almost understood
• what they forgot
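To make "state" concrete, here is a minimal sketch of what a system could persist between sessions. The names (LearnerState, record_attempt, the 0.6 accuracy threshold) are hypothetical, not taken from any particular product:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Hypothetical sketch: the minimal state a study system could keep
# between sessions, instead of starting from scratch every time.
@dataclass
class ConceptState:
    concept_id: str
    attempts: int = 0
    correct: int = 0
    last_seen: Optional[datetime] = None

@dataclass
class LearnerState:
    concepts: dict = field(default_factory=dict)

    def record_attempt(self, concept_id: str, was_correct: bool) -> None:
        state = self.concepts.setdefault(concept_id, ConceptState(concept_id))
        state.attempts += 1
        state.correct += int(was_correct)
        state.last_seen = datetime.now()

    def weak_concepts(self, threshold: float = 0.6) -> list:
        # "Almost understood": attempted, but accuracy below the threshold.
        return [c.concept_id for c in self.concepts.values()
                if c.attempts and c.correct / c.attempts < threshold]

state = LearnerState()
state.record_attempt("bayes-theorem", was_correct=False)
state.record_attempt("bayes-theorem", was_correct=True)
state.record_attempt("chain-rule", was_correct=True)
print(state.weak_concepts())  # ['bayes-theorem']
```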
2. They optimize for answers, not understanding
Fast responses feel productive, but they often bypass:
• reasoning
• explanation
• recall
3. They push responsibility back to the learner
The student becomes:
• scheduler
• memory tracker
• progress evaluator
• reviewer
Under cognitive load, humans are bad at all four.
This is not a motivation problem.
It’s a systems design problem.
A Shift in Perspective: From Tools to Systems
The breakthrough for us came when we stopped asking:
What feature should we add next?
and started asking:
What system would reduce cognitive overhead for a learner over time?
That reframed everything.
Instead of isolated capabilities, we focused on learning as a loop, not a session.
The Core Design Principles We Adopted
These principles weren’t theoretical. They came from watching real students struggle.
1. Learning is cyclical, not linear
Understanding degrades unless reinforced. Systems must:
• revisit concepts
• adapt timing
• respond to mistakes
Static notes can’t do this.
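As one deliberately simplified illustration of "adapt timing": a spaced-repetition-style interval update. The constants are placeholders, not tuned values:

```python
from datetime import datetime, timedelta

def next_interval(days: float, was_correct: bool) -> float:
    # Placeholder constants, not a tuned algorithm: success pushes the
    # concept further out; a miss pulls it back so it resurfaces soon.
    return days * 2.0 if was_correct else max(1.0, days * 0.25)

# A concept answered correctly at a 4-day interval is next due ~8 days out;
# a miss would instead pull it back to 1 day.
due = datetime.now() + timedelta(days=next_interval(4.0, was_correct=True))
print(due.date())
```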
2. State matters more than prompts
A good prompt can help once.
A system that remembers performance helps forever.
We learned that long-term learning requires memory at the system level, not just the human level.
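System-level memory can start embarrassingly small. A hypothetical sketch where per-concept performance simply outlives the session (the file name and schema are invented for illustration):

```python
import json
from pathlib import Path

STATE_FILE = Path("learner_state.json")  # invented name, illustration only

def load_state() -> dict:
    # On the very first session there is no history yet.
    return json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}

def save_state(state: dict) -> None:
    STATE_FILE.write_text(json.dumps(state, indent=2))

state = load_state()
state["bayes-theorem"] = {"attempts": 3, "correct": 1}
save_state(state)  # performance now survives the session boundary
```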
3. Recall beats summarization
Summaries feel efficient but create an illusion of competence.
We designed around:
• questioning
• explanation
• application
Not because it's harder, but because it works.
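A minimal sketch of what "recall first" means as an interaction loop. The exact-string grading is only a stand-in to keep the example self-contained; grading real free-form answers is the hard part:

```python
def recall_step(question: str, expected: str) -> bool:
    # The answer is withheld until the learner attempts retrieval.
    attempt = input(f"Q: {question}\nYour answer: ")
    correct = attempt.strip().lower() == expected.strip().lower()
    print("Correct." if correct else f"Not yet. A: {expected}")
    return correct

recall_step("What does a stateless study tool forget between sessions?",
            "everything")
```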
4. The system should do the bookkeeping
Humans should think.
Systems should:
• schedule
• track decay
• surface weak points
• reduce noise
When students manage the system themselves, the system fails.
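One way a system might "track decay" is an exponential forgetting-curve estimate. The seven-day half-life below is a made-up parameter, not a validated memory model:

```python
from datetime import datetime, timedelta

def estimated_retention(last_reviewed: datetime, now: datetime,
                        half_life_days: float = 7.0) -> float:
    # After one half-life, estimated retention has dropped to 50%.
    days = (now - last_reviewed).total_seconds() / 86400
    return 0.5 ** (days / half_life_days)

def surface_weakest(last_seen: dict, k: int = 3) -> list:
    # The system, not the student, ranks what has most likely faded.
    now = datetime.now()
    return sorted(last_seen,
                  key=lambda c: estimated_retention(last_seen[c], now))[:k]

last_seen = {"bayes-theorem": datetime.now() - timedelta(days=14),
             "chain-rule": datetime.now() - timedelta(days=2)}
print(surface_weakest(last_seen, k=1))  # ['bayes-theorem']
```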
Trade-Offs We Made (and Why They Matter)
Some of the hardest decisions were about what not to build.
We avoided:
• One-click “final answers”
• Static flashcards without feedback
• Over-automation that removes thinking
• Feature sprawl
Each of these increases short-term satisfaction but reduces long-term learning.
This was uncomfortable. It meant slower demos. More friction. More thinking required from users.
But friction isn’t the enemy—unproductive friction is.
Where CramX Fits (As a Case Study)
One system we built applying these principles is CramX.ai.
Not as a “tool”, but as an attempt to:
• treat studying as a system
• maintain learning state
• reinforce recall adaptively
• reduce cognitive overhead without removing agency
It’s not the only possible implementation—but it reflects the shift we believe is necessary.
What This Means for Builders and Educators
If you’re building AI for learning, the hardest problems are not:
• model quality
• speed
• UX polish
They are:
• state
• timing
• feedback
• cognitive load management
And if you’re teaching, the question isn’t whether students will use AI, but whether the systems they use support thinking or replace it.
The Real Opportunity
AI’s real contribution to education is not automation.
It’s orchestration.
When systems handle memory, timing, and structure, humans can focus on:
• reasoning
• synthesis
• judgment
• creativity
That’s not futuristic.
It’s overdue.
Final Thought
We didn’t need smarter students.
We needed better systems for human limits.
Designing AI-first study systems forced us to stop optimizing for features and start designing for cognition. Once we did, everything—from product decisions to pedagogy—changed.
The future of learning isn’t about replacing effort.
It’s about directing it where it actually matters.
One thing we struggled with while writing this was separating features from systems.
For builders here: what’s the hardest part you’ve faced when designing for cognitive load or long-term state, not just single interactions?