DEV Community

LearnMateAI: Building an Intelligent Teaching Assistant Platform

By Jing Ng & Liuyi · Spring 2026


LearnMateAI: Making Online Learning Better

The Problem

Online education today is boring. Students scroll through endless PDFs and read flat text. Teachers don't see what students are actually learning—they only find out when they grade hundreds of assignments. It's like there's a wall between teaching and learning. Plus, students need real feedback fast, not days later.

The Solution

LearnMateAI fixes this. It's a smart app that helps both instructors and students:

For Instructors: A live dashboard shows exactly where the class is struggling. You can see patterns in mistakes across all students without invading anyone's privacy. It's like having radar for class confusion.

For Students: You get interactive flashcards that flip in 3D, quizzes that adapt to your speed, and instant feedback from an AI tutor. It's like having a personal teaching assistant grade your work while you're still thinking about it.

How We Built It: Two People, Two Jobs

We split the work so we could code without getting in each other's way.

Jing's Part (Backend):

  • Set up the server that handles all the data
  • Built the database that stores courses, students, and quizzes
  • Created the AI agents that generate study materials
  • Wrote strict tests to make sure the AI output is correct every time

Liuyi's Part (Frontend):

  • Built the interface students see
  • Built the interface instructors see
  • Made the 3D flashcards and quiz experience feel smooth and fun
  • Tested the whole system end-to-end

How We Used AI: Two Different Ways

Claude Web Chat — We used this for big-picture thinking:

  • Designing what the database should look like
  • Planning our testing strategy
  • Sketching out how the interface should flow

Antigravity & Claude Code (Terminal) — We used this for actual coding:

  • Writing and testing code in the IDE
  • Making sure the AI outputs are reliable
  • Preventing the AI from making stuff up (hallucinating)

The split was clean: Claude Web for strategy, Terminal Claude for execution.

Part 1: The Foundation

Authentication (Logging In Safely)

The app treats instructors and students completely differently. Once you log in, the system instantly knows what you can and can't see. Students can't peek at teacher dashboards. Teachers can't see individual student passwords. Everything is locked down.

Database Design

We designed the database to store:

  • User accounts (name, email, role)
  • Courses and modules
  • Student quiz submissions and scores
  • Learning goals for each module

The key insight: We store individual student data separately from class-wide statistics. This way, a teacher can see "the class average on this question is 62%" without knowing which student got it wrong.
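A toy version of that separation, using Python's built-in sqlite3 (table and column names here are made up for illustration): the dashboard query only ever returns per-question aggregates, so no `student_id` leaves the database layer.

```python
# Toy sketch: class-wide stats without exposing individual identities.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE submissions (
    student_id INTEGER,
    question_id INTEGER,
    correct INTEGER  -- 1 if the student answered correctly
);
INSERT INTO submissions VALUES
    (1, 101, 1), (2, 101, 0), (3, 101, 1),
    (1, 102, 0), (2, 102, 0), (3, 102, 1);
""")

# The dashboard query returns only aggregates: no student_id in the result.
rows = conn.execute("""
    SELECT question_id, ROUND(AVG(correct) * 100) AS pct_correct
    FROM submissions
    GROUP BY question_id
    ORDER BY question_id
""").fetchall()

for question_id, pct in rows:
    print(f"Question {question_id}: {pct:.0f}% correct")
```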

Teacher Control: "Audience Customization"

Before the AI generates anything (quiz, flashcard, summary), the teacher sets an "audience profile." This tells the AI who it's writing for. For example:

  • "High school biology, age 14-15"
  • "No religious references"
  • "Use analogies from sports"

This prevents the AI from making weird or inappropriate content.
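Concretely, the audience profile can just be folded into the prompt before the request goes out. A minimal sketch (field names like `description` and `rules` are illustrative, not the real schema):

```python
# Sketch: folding the teacher's audience profile into the AI prompt.
def build_prompt(task: str, material: str, audience: dict) -> str:
    constraints = "\n".join(f"- {rule}" for rule in audience.get("rules", []))
    return (
        f"Audience: {audience['description']}\n"
        f"Constraints:\n{constraints}\n\n"
        f"Task: {task}\n\n"
        f"Source material:\n{material}"
    )

profile = {
    "description": "High school biology, age 14-15",
    "rules": ["No religious references", "Use analogies from sports"],
}
prompt = build_prompt("Generate 5 flashcards.", "Photosynthesis converts...", profile)
print(prompt.splitlines()[0])  # → Audience: High school biology, age 14-15
```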

The Real-Time Dashboard

Instead of showing numbers in a boring table, we built a live dashboard that updates as students submit quizzes. It shows:

  • Where the class is strongest
  • Where the class is struggling most
  • How fast students are improving

The teacher can see this instantly and adjust their next lesson.

Part 2: The Brain (AI Features)

The AI Agents

We built three AI "workers" that each do one job:

  • Summary Agent — Reads the course material and writes a short study guide
  • Flashcard Agent — Creates question-and-answer pairs from the material
  • Quiz Agent — Generates multiple-choice and essay questions

These agents don't just output raw text. They output structured data (JSON) that the app can use immediately.
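The "structured data, not raw text" contract is the kind of thing Pydantic (which FastAPI already uses) handles well. A hypothetical sketch of what a flashcard agent's output schema might look like; the real field names may differ:

```python
# Sketch of a structured-output contract using Pydantic (v2).
from pydantic import BaseModel, ValidationError

class Flashcard(BaseModel):
    question: str
    answer: str

class FlashcardSet(BaseModel):
    module_id: int
    cards: list[Flashcard]

raw = '{"module_id": 3, "cards": [{"question": "What is ATP?", "answer": "The cell\'s energy currency."}]}'

try:
    deck = FlashcardSet.model_validate_json(raw)  # rejects malformed AI output
    print(len(deck.cards))  # → 1
except ValidationError as e:
    print("AI output rejected:", e)
```

Anything the model returns that doesn't match the schema raises a `ValidationError` instead of silently flowing into the app.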

Keeping AI Honest: Testing

AI can make things up. So we force every AI output through tests before it gets used:

  • Does the JSON look right?
  • Are there enough questions?
  • Is the answer actually in the source material?

If any test fails, the AI tries again. We even set up Git hooks that prevent code changes if tests are broken.
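The validate-then-retry loop above can be sketched like this. `generate_quiz` is a stub standing in for the real Claude API call (here it deliberately fails on the first attempt); the validation checks mirror the list above:

```python
# Sketch of the validate-then-retry loop around an AI generation call.
import json

def generate_quiz(attempt: int) -> str:
    # Stub for the real Claude API call; first attempt returns bad JSON.
    if attempt == 0:
        return "not json at all"
    return json.dumps({"questions": [{"q": "2+2?", "a": "4"}] * 5})

def is_valid(raw: str, min_questions: int = 5) -> bool:
    try:
        data = json.loads(raw)            # does the JSON look right?
    except json.JSONDecodeError:
        return False
    return len(data.get("questions", [])) >= min_questions  # enough questions?

def generate_with_retries(max_attempts: int = 3) -> dict:
    for attempt in range(max_attempts):
        raw = generate_quiz(attempt)
        if is_valid(raw):
            return json.loads(raw)
    raise RuntimeError("AI output failed validation on every attempt")

quiz = generate_with_retries()
print(len(quiz["questions"]))  # → 5
```

A source-grounding check (is the answer actually in the material?) would slot into `is_valid` the same way.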

Cool UI: 3D Flashcards & Smart Quizzes

Instead of just showing text, we made:

  • 3D Flashcards — Flip them over with a realistic 3D animation
  • Smart Quizzes — One question at a time. When you submit, the AI instantly tells you why you got it wrong (not just "that's incorrect")

Part 3: Quality & Testing

Automated Testing Pipeline (CI/CD)

Every time someone pushes code:

  • Tests run (Pytest for backend, Vitest for frontend)
  • The code is scanned for bugs (linting)
  • Browser tests run automatically (Playwright)
  • Security checks run (scan for passwords accidentally committed)
  • The app deploys automatically to the live server

All 9 stages run in parallel. If any stage fails, the deployment stops.

Checking if AI Is Good

When the AI generates a quiz, we ask another AI: "Is this a good quiz?" It scores it on:

  • Is it related to the lesson?
  • Is it clear and understandable?
  • Did the AI make up false information?

We track these scores over time to see if our AI is getting better or worse.
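A sketch of what that judge step might look like. `ask_judge_model` is a stub for the second AI call, and the rubric keys and threshold are illustrative, not the real scoring config:

```python
# Sketch of an LLM-as-judge quality gate.
import json

RUBRIC = ["relevance", "clarity", "groundedness"]

def ask_judge_model(quiz: str, lesson: str) -> str:
    # Stub: the real call would send quiz + lesson to a second model
    # and ask for a JSON rubric score.
    return json.dumps({"relevance": 5, "clarity": 4, "groundedness": 5})

def score_quiz(quiz: str, lesson: str, threshold: float = 4.0) -> tuple[float, bool]:
    scores = json.loads(ask_judge_model(quiz, lesson))
    avg = sum(scores[k] for k in RUBRIC) / len(RUBRIC)
    return avg, avg >= threshold  # log avg over time; gate releases on threshold

avg, passed = score_quiz("Q: ...", "Lesson: ...")
print(round(avg, 2), passed)  # → 4.67 True
```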

Part 4: How We Actually Used AI (The Workflow That Worked)

How MCP Solved the Context Problem

The biggest waste of time with traditional AI tools is copy-pasting. You paste your database schema into chat. Then your API routes. Then your data models. Back and forth, over and over.

We solved this with MCP (Model Context Protocol). A small configuration file tells Claude Code which parts of the project it can read on its own. Claude could see our database structure, our API routes, and our data models without us having to explain them.

When we asked Claude Code to build the teacher dashboard API, it already knew our database setup. It wrote correct code immediately. This cut our bugs and back-and-forth by about 70%.
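For reference, an MCP setup like this is a small JSON file. The sketch below uses the reference filesystem server; the server name (`project-files`) and path are illustrative, not our actual config:

```json
{
  "mcpServers": {
    "project-files": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./backend"]
    }
  }
}
```

Once this exists, Claude Code can list and read files under that path without any copy-pasting.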

Git Hooks: Stopping Bad Code Before It Happens

AI can make mistakes. Code might have bugs, or data might be formatted wrong. We didn't want broken code in our main branch.

So we set up Git hooks. When Claude Code finished writing code, the system automatically ran all tests. If any test failed, the code couldn't be saved. Claude had to read the error message and fix it. This forced Claude to debug itself until everything passed.

We never once merged broken code to main.

The Workflow Pattern That Worked

Instead of just saying "write code," we created a repeatable process: Explore → Plan → Implement → Commit

Here's what that means:

  • Explore: Claude searches through our project files to understand the structure
  • Plan: Claude writes out the steps before writing any code
  • Implement: Claude writes the actual code
  • Commit: Claude saves the work with clear messages about what changed

This meant Claude Code wasn't just guessing. It followed the same workflow every time, which made the code better and more predictable.

Two Different AI Jobs

We used Claude in two completely different ways:

Claude Web (the browser) handled big-picture thinking: database design, testing strategy, interface wireframes. We didn't need to be super technical with it.

Claude Code (the terminal) handled actual coding: writing functions, running tests, debugging. It worked inside our IDE with access to our actual files.

This separation was crucial. The browser Claude didn't need every detail. The terminal Claude focused only on writing and testing. When you split the work like this, both tools get better at their specific jobs.

Lessons Learned

The biggest one: AI doesn't fix bad architecture. If your database design is messy, or your tests are weak, AI code will be messy too. We won the game by building strong foundations first, then using AI to fill in the details.

The key insight: Don't use AI like a chatbot. Use it like a tool in your development pipeline. When you give it clear context (MCP), a repeatable process (the Explore → Plan → Implement → Commit pattern), and automatic quality checks (Git hooks), AI can move incredibly fast while staying reliable.

For a two-person team, this made the difference between a 12-week project being impossible and being doable.

Tech Used

Frontend: React, Vite, Tailwind CSS
Backend: FastAPI, Python
Database: PostgreSQL
AI: Claude API with structured outputs
Hosting: Vercel (frontend), Render (backend)

Links

Code: https://github.com/MelanieLLY/LearnMateAI
Live App: https://learn-mate-ai-zeta.vercel.app
API: https://learnmate-api.onrender.com

Key Features

✅ Teachers see live class confusion radar (no privacy invasion)
✅ Students get AI tutors that give instant feedback
✅ 3D interactive flashcards and quizzes
✅ AI-generated study materials tested for accuracy
✅ Automatic testing pipeline for every code change
