I Built an AI-Powered Interactive Textbook for Robotics (Full-Stack + RAG)

Syed Affan Ali

[Screenshots: RAG chatbot in action; chapter content with sidebar navigation; multilingual support (English vs Urdu)]

Building an Interactive AI-Powered Textbook: A Full-Stack Journey πŸš€

Introduction

What if textbooks could answer your questions in real-time, adapt to your learning style, and track your progressβ€”all while being accessible in multiple languages? That's exactly what I built: an interactive Physical AI and Humanoid Robotics textbook powered by RAG (Retrieval-Augmented Generation) technology.

In this article, I'll walk you through the architecture, key features, and implementation details of this full-stack educational platform that combines modern web technologies with cutting-edge AI.


🎯 The Problem

Traditional textbooks have limitations:

  • Static content that can't answer follow-up questions
  • One-size-fits-all approach that doesn't adapt to different skill levels
  • Language barriers limiting accessibility
  • No progress tracking or interactive learning features
  • Expensive to update and maintain

I set out to solve these problems by building a modern, AI-powered alternative.


✨ Key Features

1. RAG-Powered Chatbot

The crown jewel of this project is an intelligent chatbot that:

  • Answers questions based on the textbook content
  • Provides citations to source material
  • Maintains conversation context for follow-up questions
  • Prioritizes current chapter content for relevance

2. Multilingual Support (English & Urdu)

  • Full Urdu translation with RTL (right-to-left) support
  • Language toggle on every chapter
  • Persistent language preference

πŸ—οΈ Architecture Overview

The platform is built with a modern, scalable architecture:

Frontend Stack

  • Framework: Docusaurus 3 (React-based static site generator)
  • UI Library: React 18 with custom components
  • Authentication: Clerk for user management
  • Styling: CSS Modules for component-scoped styles
  • Deployment: Vercel

Backend Stack

  • API Framework: FastAPI (Python)
  • Database: Neon Serverless Postgres
  • Vector Store: Qdrant Cloud for embeddings
  • LLM: Groq (Llama 3.3 70B)
  • Authentication: JWT + Session management
  • Deployment: Render

Data Flow

  1. User asks a question in the chatbot
  2. Question is embedded using sentence transformers
  3. Qdrant performs semantic search on textbook content
  4. Retrieved context is sent to Groq LLM
  5. LLM generates answer with citations
  6. Response is returned to the user
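Wired up, the whole flow sits behind a single FastAPI route. Here's a minimal sketch; the request model mirrors the payload the frontend sends, and rag_service stands in for the RAG service covered in the deep dive below:

# Minimal sketch of the /chat/ask endpoint behind this flow
from typing import Dict, List, Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    question: str
    chapter_context: Optional[str] = None
    conversation_history: List[Dict] = []

@app.post("/chat/ask")
async def ask(req: ChatRequest):
    # Delegate to the RAGService instance shown later in this article
    return await rag_service.answer_question(
        req.question, req.chapter_context, req.conversation_history
    )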

πŸ”§ Implementation Deep Dive

Part 1: Setting Up the RAG Pipeline

The RAG service is the heart of the application. Here's how it works:

Step 1: Embedding the Textbook Content

# scripts/embed_content.py
import os

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

# Initialize Qdrant client (credentials come from the environment)
client = QdrantClient(
    url=os.environ["QDRANT_URL"],
    api_key=os.environ["QDRANT_API_KEY"],
)

# Create collection for textbook content; 384 dimensions matches the
# sentence-transformer model used for embedding
client.create_collection(
    collection_name="textbook_content",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE)
)

# Process markdown files and upload embeddings
# Each chunk contains: text, chapter, module, metadata
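The upload step continues the same script. Here's a sketch of the chunk upsert, assuming the all-MiniLM-L6-v2 model (a compact sentence-transformers model that produces the 384-dimensional vectors the collection expects):

# Sketch: embed pre-chunked markdown and upsert into Qdrant
from qdrant_client.models import PointStruct
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings

def upload_chunks(chunks):
    """chunks: list of dicts with 'text', 'chapter', and 'module' keys."""
    vectors = model.encode([c["text"] for c in chunks])
    points = [
        PointStruct(id=i, vector=vec.tolist(), payload=chunk)
        for i, (vec, chunk) in enumerate(zip(vectors, chunks))
    ]
    client.upsert(collection_name="textbook_content", points=points)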

Step 2: RAG Service Implementation
The RAG service handles query processing and response generation:

# backend/app/services/rag.py
from typing import Dict, List, Optional

class RAGService:
    async def answer_question(
        self,
        question: str,
        chapter_context: Optional[str] = None,
        conversation_history: Optional[List[Dict]] = None
    ) -> Dict:
        # Avoid Python's mutable-default-argument pitfall
        conversation_history = conversation_history or []

        # 1. Embed the question
        query_embedding = await self._embed_text(question)

        # 2. Search Qdrant for relevant content
        search_results = await self.qdrant_client.search(
            collection_name="textbook_content",
            query_vector=query_embedding,
            limit=5,
            score_threshold=0.7
        )

        # 3. Build context from retrieved documents
        context = self._build_context(search_results, chapter_context)

        # 4. Generate response using Groq
        response = await self._generate_response(
            question, context, conversation_history
        )

        # 5. Extract citations
        citations = self._extract_citations(search_results)

        return {
            "answer": response,
            "citations": citations,
            # Guard against an empty result set
            "confidence": search_results[0].score if search_results else 0.0
        }

Part 2: Building the Interactive Frontend

Chatbot Component
The chatbot UI is built with React and integrates seamlessly into each chapter:

// website/src/components/AdvancedChatbot.js
function AdvancedChatbot({ chapterContext }) {
  const [messages, setMessages] = useState([]);
  const [loading, setLoading] = useState(false);

  const sendMessage = async (userMessage) => {
    setLoading(true);

    // Add user message to chat
    setMessages(prev => [...prev, { role: 'user', content: userMessage }]);

    try {
      // Call backend API
      const response = await fetch(`${API_URL}/chat/ask`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          question: userMessage,
          chapter_context: chapterContext,
          conversation_history: messages
        })
      });

      const data = await response.json();

      // Add AI response with citations
      setMessages(prev => [...prev, {
        role: 'assistant',
        content: data.answer,
        citations: data.citations
      }]);
    } finally {
      // Reset the loading state even if the request fails
      setLoading(false);
    }
  };

  return (
    <div className={styles.chatContainer}>
      <MessageList messages={messages} />
      <ChatInput onSend={sendMessage} disabled={loading} />
    </div>
  );
}

Part 3: Personalization System

The personalization engine adapts content based on user profiles:

# backend/app/services/personalization.py
class PersonalizationService:
    async def adapt_content(
        self, 
        content: str, 
        user_level: str
    ) -> str:
        """Adapt content based on user's knowledge level"""

        prompt = f"""
        Adapt this technical content for a {user_level} learner:

        Original content:
        {content}

        Requirements:
        - Beginners: Use simple language, more examples
        - Intermediate: Balanced technical depth
        - Advanced: Full technical details, assume knowledge

        Maintain all code examples and key concepts.
        """

        response = await self.llm_client.generate(prompt)
        return response.content
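A hypothetical call site, assuming the user's level lives on their stored profile (the db helper and field names here are illustrative):

# Hypothetical wiring: adapt a chapter for the requesting user
async def get_adapted_chapter(user_id: int, chapter_md: str) -> str:
    profile = await db.get_user(user_id)  # e.g. profile.level == "beginner"
    service = PersonalizationService()
    return await service.adapt_content(chapter_md, profile.level)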

Part 4: Gamification System

The gamification system keeps learners engaged:

# backend/app/services/gamification.py
class GamificationService:
    async def award_xp(self, user_id: int, action: str) -> Dict:
        """Award XP for learning actions"""

        xp_rewards = {
            "chapter_complete": 100,
            "quiz_passed": 50,
            "flashcard_studied": 10,
            "daily_streak": 25,
            "discussion_post": 15
        }

        xp = xp_rewards.get(action, 0)

        # Update user XP and check for level up
        user = await self.db.get_user(user_id)
        new_xp = user.xp + xp
        new_level = self._calculate_level(new_xp)

        # Persist the new totals (db helper shape assumed)
        await self.db.update_user(user_id, xp=new_xp, level=new_level)

        # Check for new badges
        badges = await self._check_badges(user_id, action)

        return {
            "xp_earned": xp,
            "total_xp": new_xp,
            "level": new_level,
            "new_badges": badges
        }
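The _calculate_level curve isn't shown above; any monotone function of XP works. A minimal sketch, assuming a quadratic curve where each level costs progressively more XP:

import math

def _calculate_level(self, xp: int) -> int:
    # Level 1 at 0 XP, level 2 at 100, level 3 at 400, level 4 at 900, ...
    return int(math.sqrt(xp / 100)) + 1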

πŸ“Š Performance Optimizations

Backend Optimizations

  1. Connection Pooling: Reused database connections
  2. Caching: Redis for frequently accessed data (sketched after this list)
  3. Async Processing: FastAPI async endpoints for concurrent requests
  4. Vector Search: Qdrant's HNSW index for fast similarity search
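For the Redis layer, the cache sits in front of the RAG pipeline so repeated questions skip the embedding, search, and LLM steps entirely. A minimal sketch, assuming redis-py's asyncio client (URL and TTL are illustrative):

# Sketch: cache RAG answers in Redis, keyed by a hash of the question
import hashlib
import json

import redis.asyncio as redis

cache = redis.from_url("redis://localhost:6379")

async def cached_answer(rag_service, question: str, ttl: int = 3600) -> dict:
    key = "rag:" + hashlib.sha256(question.encode()).hexdigest()
    hit = await cache.get(key)
    if hit is not None:
        return json.loads(hit)  # cache hit: no embedding, search, or LLM call
    result = await rag_service.answer_question(question)
    await cache.set(key, json.dumps(result), ex=ttl)
    return result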

Frontend Optimizations

  1. Code Splitting: Lazy loading for components
  2. Static Generation: Docusaurus pre-renders pages
  3. Image Optimization: WebP format with responsive sizes
  4. CSS Modules: Scoped styles prevent bloat

Results

  • Chatbot Response Time: <3 seconds (95th percentile)
  • Page Load Time: <5 seconds on 3G
  • Lighthouse Score: 95+ on all metrics

🌍 Multilingual Implementation

Supporting Urdu required careful attention to RTL (right-to-left) text:

// website/docusaurus.config.js
i18n: {
  defaultLocale: 'en',
  locales: ['en', 'ur'],
  localeConfigs: {
    ur: {
      label: 'اردو',
      direction: 'rtl',
      path: 'ur',
    },
  },
}
/* website/src/css/custom.css */
[dir='rtl'] {
  text-align: right;
}

[dir='rtl'] .sidebar {
  order: 1; /* Move sidebar to right */
}

πŸ§ͺ Testing Strategy

Backend Tests

# backend/tests/test_rag.py
import pytest

from app.services.rag import RAGService

@pytest.mark.asyncio
async def test_rag_answer_quality():
    service = RAGService()

    result = await service.answer_question(
        "What is ROS2?",
        chapter_context="module1-ros2"
    )

    assert result["answer"] is not None
    assert len(result["citations"]) > 0
    assert result["confidence"] > 0.7

Frontend Tests

  • Component unit tests with Jest
  • E2E tests with Playwright
  • Accessibility tests with axe-core

πŸš€ Deployment Pipeline

Frontend Deployment (Vercel)

# vercel.json
{
  "buildCommand": "cd website && npm run build",
  "outputDirectory": "website/build",
  "framework": "docusaurus"
}

Backend Deployment (Render)

# render.yaml
services:
  - type: web
    name: book-api
    env: python
    buildCommand: "pip install -r requirements.txt"
    startCommand: "uvicorn main:app --host 0.0.0.0 --port $PORT"

πŸ“ˆ Lessons Learned

What Went Well

  1. RAG Architecture: Qdrant + Groq provided excellent performance
  2. Docusaurus: Perfect for documentation-style content
  3. FastAPI: Async support made backend development smooth
  4. Component Architecture: Reusable React components saved time

Challenges Overcome

  1. Context Window Limits: Had to chunk content intelligently (see the sketch after this list)
  2. RTL Support: Urdu translation required CSS adjustments
  3. Rate Limits: Implemented caching and request throttling
  4. State Management: Complex state for chat history and user progress
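On the chunking point, the fix was overlapping windows, so an answer-bearing passage is never split cleanly across a boundary. A minimal sketch (sizes are illustrative):

def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping windows of `size` words."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size]) for i in range(0, len(words), step)]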

What I'd Do Differently

  1. Use TypeScript for better type safety
  2. Implement server-side rendering for better SEO
  3. Add automated content ingestion pipeline
  4. Build mobile apps with React Native


πŸ—οΈ System Architecture

System Architecture


πŸ’­ Final Thoughts

Building this project taught me the power of combining traditional content with modern AI capabilities. The RAG approach provides accurate, grounded responses while the gamification keeps learners engaged.

The future of education is interactive, adaptive, and AI-powered. This project is just the beginning.

What would you add to an AI-powered textbook? Drop your ideas in the comments! πŸ’¬


Tags

#ai #machinelearning #rag #webdev #python #react #fastapi #docusaurus #education #edtech
