I used Kiro’s spec-driven patterns to turn EduQuest into an adaptive learning platform that personalizes study paths, tracks mastery, and recommends the next best resource. Below I break down architecture, key modules, and how the “Kiro” core drives recommendations, progress, and dynamic difficulty.
Links and Demo
GitHub Repository : https://github.com/kushal-narkhede/EduQuest.git
YouTube Live Demo : https://youtu.be/92PQPcWF3zE
What we built
We built EduQuest, a full adaptive learning environment with a clean UI and structured content. Students can personalize their experience with themes like Halloween, which gives the learning space a spooky UI that keeps studying fun, and they can track their own progress over time. The app offers multiple AP and IB courses for high school students, with SAT and ACT question banks coming next. The built-in QuestAI helps students understand concepts better by generating detailed explanations and practice questions.
Why Kiro for EduQuest
EduQuest isn't just a study platform with a lot of content. It goes a step further to meet learners where they are, starting at their current ability and raising it over time. Kiro's spec-driven approach let us define the learning system as data plus rules.
Adaptive personalization: Evaluates performance signals to configure the next step for each learner.
Spec-driven design: Content, mastery thresholds, and progression policies are defined as composable specifications (a hypothetical spec sketch follows this list).
Scalable logic: New subjects or difficulty tiers plug in seamlessly.
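To make "specs as data" concrete, here is a minimal sketch of what a composable lesson spec could look like in TypeScript. The shape and field names (masteryThreshold, prerequisites, tiers) are illustrative assumptions, not the exact schema in the repo:

// A minimal sketch of a composable lesson spec. Field names here are
// illustrative assumptions, not EduQuest's exact schema.
export type LessonSpec = {
  id: string
  topicId: string
  title: string
  // Score (0-1) a learner must hit before the topic counts as mastered
  masteryThreshold: number
  // Topics that should be proficient before this lesson is recommended
  prerequisites: string[]
  // Item pools per difficulty tier, so new tiers plug in without code changes
  tiers: Record<"easy" | "medium" | "hard", string[]>
}

export const apCalcLimits: LessonSpec = {
  id: "ap-calc-limits-01",
  topicId: "limits",
  title: "Limits and Continuity",
  masteryThreshold: 0.9,
  prerequisites: ["functions"],
  tiers: {
    easy: ["q-101", "q-102"],
    medium: ["q-201"],
    hard: ["q-301"],
  },
}

Because a spec like this is plain data, adding a new subject or difficulty tier means adding entries, not changing engine code.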
Architecture Overview
Core module (Kiro): Adaptive engine that evaluates learner events and updates mastery profiles.
Data layer: Question banks, lesson specs, difficulty tiers, and topic taxonomy.
UI: Learner dashboard showing personalized pathways and mastery indicators.
Integration: Event bus normalizes quiz submissions and activity events into signals Kiro can consume.
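As an example of that integration layer, here is a rough sketch of how a raw quiz-submission event could be normalized into the PerformanceSignal type defined under Implementation Highlights below. The QuizSubmittedEvent fields are assumptions for illustration, not the app's actual event schema:

// Hypothetical raw event from the quiz UI (illustrative fields only).
type QuizSubmittedEvent = {
  topicId: string
  correct: number
  total: number
  attempts: number
  startedAt: number // epoch ms
  submittedAt: number // epoch ms
}

// Normalize a raw event into the PerformanceSignal the adaptive engine consumes.
const toPerformanceSignal = (e: QuizSubmittedEvent): PerformanceSignal => ({
  topicId: e.topicId,
  score: e.total > 0 ? e.correct / e.total : 0,
  attempts: e.attempts,
  timeSpentSec: Math.round((e.submittedAt - e.startedAt) / 1000),
  lastSeenAt: new Date(e.submittedAt).toISOString(),
})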
Implementation Highlights
Recommendation Spec
export type PerformanceSignal = {
  topicId: string
  // Normalized 0-1 score for the most recent attempt
  score: number
  attempts: number
  timeSpentSec: number
  // ISO timestamp of the learner's last interaction with the topic
  lastSeenAt: string
}
export type MasteryState = "novice" | "developing" | "proficient" | "mastered"
export type Recommendation = {
  nextItemId: string
  // Plain-language rationale surfaced on the learner dashboard
  reason: string
  difficulty: "easy" | "medium" | "hard"
  targetTopicId: string
}
Mastery Update Logic
export const updateMastery = (prev: MasteryState, score: number): MasteryState => {
  // >= 90%: step up toward mastery, and keep "mastered" once reached
  // (rather than demoting a mastered learner on a strong score)
  if (score >= 0.9) return prev === "proficient" || prev === "mastered" ? "mastered" : "proficient"
  // 70-89%: the topic is still developing
  if (score >= 0.7) return "developing"
  // Below 70%: reset to novice so easier items are selected
  return "novice"
}
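To see the ladder in action, two strong scores in a row promote a learner one state at a time, and a weaker result pulls the state back down:

// Two strong scores promote step by step; a weaker one decays the state.
let state: MasteryState = "novice"
state = updateMastery(state, 0.92) // "proficient"
state = updateMastery(state, 0.95) // "mastered"
state = updateMastery(state, 0.75) // "developing"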
Difficulty Selection
export const selectDifficulty = (state: MasteryState): "easy" | "medium" | "hard" => {
  if (state === "novice") return "easy"
  if (state === "developing") return "medium"
  // "proficient" and "mastered" both get hard items
  return "hard"
}
Next-Item Resolution
export const recommendNext = (
  signals: PerformanceSignal[],
  profile: { mastery: Record<string, MasteryState> },
  contentIndex: { pick: (topicId: string, difficulty: "easy" | "medium" | "hard") => string },
): Recommendation => {
  // findWeakestTopic and recentScore are helpers defined elsewhere in the repo
  const weakest = findWeakestTopic(signals, profile)
  const mastery = updateMastery(profile.mastery[weakest], recentScore(signals, weakest))
  const difficulty = selectDifficulty(mastery)
  const nextItemId = contentIndex.pick(weakest, difficulty)
  return {
    nextItemId,
    reason: `Focusing on ${weakest}, where your recent performance suggests room to improve`,
    difficulty,
    targetTopicId: weakest,
  }
}
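findWeakestTopic and recentScore live elsewhere in the repo; as a rough sketch of the idea (my illustrative versions, not the actual implementations), they could look like this:

// Illustrative sketches of the helpers referenced above; the real
// implementations may weight signals differently.
const recentScore = (signals: PerformanceSignal[], topicId: string): number => {
  const forTopic = signals
    .filter((s) => s.topicId === topicId)
    .sort((a, b) => b.lastSeenAt.localeCompare(a.lastSeenAt)) // newest first
  return forTopic.length > 0 ? forTopic[0].score : 0
}

const findWeakestTopic = (
  signals: PerformanceSignal[],
  profile: { mastery: Record<string, MasteryState> },
): string => {
  // Pick the tracked topic with the lowest recent score
  const topics = Object.keys(profile.mastery)
  return topics.reduce(
    (weakest, topic) =>
      recentScore(signals, topic) < recentScore(signals, weakest) ? topic : weakest,
    topics[0],
  )
}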
Key Features Learners Feel
Personalized study paths: Each learner sees a custom sequence of lessons and reviews.
Mastery indicators: Clear status per topic (novice → mastered).
Smart reviews: Topics resurface based on elapsed time and performance decay (a possible decay heuristic is sketched after this list).
Actionable recommendations: Dashboard explains every suggestion in plain language.
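To illustrate what "time and performance decay" could mean in practice, here is one hedged heuristic for a review priority that rises as days pass and as the last score fades. The half-life constant and the formula are assumptions for illustration, not EduQuest's actual scheduler:

// One possible review-priority heuristic (an assumption, not the real scheduler):
// weaker recent scores and longer gaps both push a topic back into the queue.
const DECAY_HALF_LIFE_DAYS = 7 // assumed half-life for retained performance

const reviewPriority = (signal: PerformanceSignal, now: Date = new Date()): number => {
  const daysSinceSeen =
    (now.getTime() - new Date(signal.lastSeenAt).getTime()) / 86_400_000
  // Exponentially decay the last score toward 0 as days pass
  const decayedScore = signal.score * Math.pow(0.5, daysSinceSeen / DECAY_HALF_LIFE_DAYS)
  return 1 - decayedScore // higher = more urgent to resurface
}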
How to Run EduQuest Locally
Clone the repo: git clone https://github.com/kushal-narkhede/EduQuest
Install dependencies: Run npm install or yarn.
Configure content: Add/edit lesson and question specs in the data directory.
Start the app: Run npm run dev and open the local URL.
Test adaptivity: Complete quizzes with varied scores to watch recommendations shift (a quick synthetic-signal snippet follows below).
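If you want to poke at the engine directly, a quick script like this exercises the recommendation flow with synthetic signals. The contentIndex stub and the mastery profile are made up for the example:

// Sanity-check the flow with synthetic data (stubbed contentIndex).
const signals: PerformanceSignal[] = [
  { topicId: "limits", score: 0.55, attempts: 3, timeSpentSec: 400, lastSeenAt: "2025-01-10T10:00:00Z" },
  { topicId: "derivatives", score: 0.92, attempts: 1, timeSpentSec: 150, lastSeenAt: "2025-01-11T10:00:00Z" },
]
const profile = {
  mastery: { limits: "developing", derivatives: "proficient" } as Record<string, MasteryState>,
}
const contentIndex = { pick: (topic: string, d: string) => `${topic}-${d}-001` } // stub
console.log(recommendNext(signals, profile, contentIndex))
// -> an easy "limits" item, since that's the weakest recent topic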
What I learned
Spec‑first beats feature‑first: Easier to evolve logic without breaking the UI.
Signals matter: Attempts and time-on-task predict struggle better than raw scores (one way to combine them is sketched after this list).
Explainability builds trust: Learners engage more when they understand why they’re assigned something.
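As one hedged example of combining those signals, a struggle index could weight retries and excess time alongside the raw score. The weights and the expected-time constant below are illustrative assumptions, not tuned values:

// Illustrative struggle index: high attempts and long time-on-task raise
// the index even when the score looks decent. Weights are assumptions.
const EXPECTED_TIME_SEC = 120 // assumed nominal time per item

const struggleIndex = (s: PerformanceSignal): number => {
  const retryPenalty = Math.min((s.attempts - 1) * 0.15, 0.45)
  const overtime = Math.max(s.timeSpentSec / EXPECTED_TIME_SEC - 1, 0)
  const timePenalty = Math.min(overtime * 0.2, 0.4)
  // 0 = breezing through, 1 = clearly struggling
  return Math.min(1 - s.score + retryPenalty + timePenalty, 1)
}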
Roadmap
Advanced models: Item Response Theory or Bayesian Knowledge Tracing.
Content analytics: Measure item difficulty and discrimination.
Progress exports: Share mastery profiles with teachers or LMS systems.
Explainable AI: Richer rationale with links to prior performance.