# Building an Interactive AI-Powered Textbook: A Full-Stack Journey

## Introduction

What if textbooks could answer your questions in real time, adapt to your learning style, and track your progress, all while being accessible in multiple languages? That's exactly what I built: an interactive Physical AI and Humanoid Robotics textbook powered by RAG (Retrieval-Augmented Generation) technology.
In this article, I'll walk you through the architecture, key features, and implementation details of this full-stack educational platform that combines modern web technologies with cutting-edge AI.
## The Problem
Traditional textbooks have limitations:
- Static content that can't answer follow-up questions
- One-size-fits-all approach that doesn't adapt to different skill levels
- Language barriers limiting accessibility
- No progress tracking or interactive learning features
- Expensive to update and maintain
I set out to solve these problems by building a modern, AI-powered alternative.
## Key Features

### 1. RAG-Powered Chatbot
The crown jewel of this project is an intelligent chatbot that:
- Answers questions based on the textbook content
- Provides citations to source material
- Maintains conversation context for follow-up questions
- Prioritizes current chapter content for relevance
### 2. Multilingual Support (English & Urdu)
- Full Urdu translation with RTL (right-to-left) support
- Language toggle on every chapter
- Persistent language preference
## Architecture Overview
The platform is built with a modern, scalable architecture:
### Frontend Stack
- Framework: Docusaurus 3 (React-based static site generator)
- UI Library: React 18 with custom components
- Authentication: Clerk for user management
- Styling: CSS Modules for component-scoped styles
- Deployment: Vercel
### Backend Stack
- API Framework: FastAPI (Python)
- Database: Neon Serverless Postgres
- Vector Store: Qdrant Cloud for embeddings
- LLM: Groq (Llama 3.3 70B)
- Authentication: JWT + Session management
- Deployment: Render
### Data Flow

1. The user asks a question in the chatbot
2. The question is embedded using a sentence-transformer model
3. Qdrant performs a semantic search over the textbook content
4. The retrieved context is sent to the Groq LLM
5. The LLM generates an answer with citations
6. The response is returned to the user
## Implementation Deep Dive

### Part 1: Setting Up the RAG Pipeline
The RAG service is the heart of the application. Here's how it works:
#### Step 1: Embedding the Textbook Content
```python
# scripts/embed_content.py
import os

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

QDRANT_URL = os.environ["QDRANT_URL"]
QDRANT_API_KEY = os.environ["QDRANT_API_KEY"]

# Initialize the Qdrant client
client = QdrantClient(url=QDRANT_URL, api_key=QDRANT_API_KEY)

# Create a collection for the textbook content
client.create_collection(
    collection_name="textbook_content",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

# Process markdown files and upload embeddings.
# Each chunk contains: text, chapter, module, metadata.
```
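The chunking step summarized in those comments can be sketched as follows. The chunk size and overlap values here are my assumptions for illustration, not the project's actual settings:

```python
def chunk_text(text: str, max_chars: int = 800, overlap: int = 100):
    """Split text into overlapping chunks so each embedding stays within
    the model's input limit while preserving context across boundaries.
    (Sizes are illustrative.)"""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # step back so adjacent chunks share context
    return chunks

# Each chunk would then be embedded and upserted to Qdrant together with
# its metadata (chapter, module, source file).
```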
#### Step 2: RAG Service Implementation
The RAG service handles query processing and response generation:
```python
# backend/app/services/rag.py
from typing import Dict, List, Optional

class RAGService:
    async def answer_question(
        self,
        question: str,
        chapter_context: Optional[str] = None,
        conversation_history: Optional[List[Dict]] = None,
    ) -> Dict:
        conversation_history = conversation_history or []

        # 1. Embed the question
        query_embedding = await self._embed_text(question)

        # 2. Search Qdrant for relevant content
        search_results = await self.qdrant_client.search(
            collection_name="textbook_content",
            query_vector=query_embedding,
            limit=5,
            score_threshold=0.7,
        )

        # 3. Build context from the retrieved documents
        context = self._build_context(search_results, chapter_context)

        # 4. Generate a response using Groq
        response = await self._generate_response(
            question, context, conversation_history
        )

        # 5. Extract citations
        citations = self._extract_citations(search_results)

        return {
            "answer": response,
            "citations": citations,
            "confidence": search_results[0].score if search_results else 0.0,
        }
```
### Part 2: Building the Interactive Frontend

#### Chatbot Component
The chatbot UI is built with React and integrates seamlessly into each chapter:
```js
// website/src/components/AdvancedChatbot.js
function AdvancedChatbot({ chapterContext }) {
  const [messages, setMessages] = useState([]);
  const [loading, setLoading] = useState(false);

  const sendMessage = async (userMessage) => {
    setLoading(true);
    // Add the user message to the chat
    setMessages(prev => [...prev, { role: 'user', content: userMessage }]);

    try {
      // Call the backend API
      const response = await fetch(`${API_URL}/chat/ask`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          question: userMessage,
          chapter_context: chapterContext,
          conversation_history: messages
        })
      });
      const data = await response.json();

      // Add the AI response with its citations
      setMessages(prev => [...prev, {
        role: 'assistant',
        content: data.answer,
        citations: data.citations
      }]);
    } finally {
      // Reset loading even if the request fails
      setLoading(false);
    }
  };

  return (
    <div className={styles.chatContainer}>
      <MessageList messages={messages} />
      <ChatInput onSend={sendMessage} disabled={loading} />
    </div>
  );
}
```
### Part 3: Personalization System
The personalization engine adapts content based on user profiles:
```python
# backend/app/services/personalization.py
class PersonalizationService:
    async def adapt_content(self, content: str, user_level: str) -> str:
        """Adapt content based on the user's knowledge level."""
        prompt = f"""
        Adapt this technical content for a {user_level} learner:

        Original content:
        {content}

        Requirements:
        - Beginner: use simple language and more examples
        - Intermediate: balanced technical depth
        - Advanced: full technical detail, assume prior knowledge

        Maintain all code examples and key concepts.
        """
        response = await self.llm_client.generate(prompt)
        return response.content
```
### Part 4: Gamification System
The gamification system keeps learners engaged:
```python
# backend/app/services/gamification.py
from typing import Dict

class GamificationService:
    async def award_xp(self, user_id: int, action: str) -> Dict:
        """Award XP for learning actions."""
        xp_rewards = {
            "chapter_complete": 100,
            "quiz_passed": 50,
            "flashcard_studied": 10,
            "daily_streak": 25,
            "discussion_post": 15,
        }
        xp = xp_rewards.get(action, 0)

        # Update the user's XP and check for a level-up
        user = await self.db.get_user(user_id)
        new_xp = user.xp + xp
        new_level = self._calculate_level(new_xp)

        # Check for new badges
        badges = await self._check_badges(user_id, action)

        return {
            "xp_earned": xp,
            "total_xp": new_xp,
            "level": new_level,
            "new_badges": badges,
        }
```
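The `_calculate_level` helper isn't shown in the article; here's a minimal sketch of one common scheme, where level *n* requires 100·*n*² total XP. The curve is my assumption, not the project's actual thresholds:

```python
def calculate_level(total_xp: int) -> int:
    """Map cumulative XP to a level with a quadratic curve:
    reaching level n+1 requires 100 * n**2 total XP.
    (Thresholds are illustrative, not the project's real values.)"""
    level = 1
    while total_xp >= 100 * (level ** 2):
        level += 1
    return level
```

A quadratic curve like this makes each level progressively harder to reach, which is a common choice in gamification systems.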
## Performance Optimizations

### Backend Optimizations
- Connection Pooling: Reused database connections
- Caching: Redis for frequently accessed data
- Async Processing: FastAPI async endpoints for concurrent requests
- Vector Search: Qdrant's HNSW index for fast similarity search
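The caching idea from the list above can be sketched with an in-process TTL cache as a stand-in for Redis. The decorator and function names here are illustrative, not the project's actual code:

```python
import functools
import time

def ttl_cache(ttl_seconds: float):
    """Cache a function's results for a limited time.
    An in-memory stand-in for the Redis-backed cache described above."""
    def decorator(fn):
        store = {}  # key -> (expires_at, value)

        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            entry = store.get(args)
            if entry and entry[0] > now:
                return entry[1]  # cache hit: skip the expensive call
            value = fn(*args)
            store[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

@ttl_cache(ttl_seconds=60)
def popular_chapters(limit):
    # Imagine an expensive database query here
    return [f"chapter-{i}" for i in range(limit)]
```

In production you would swap the dict for Redis so the cache is shared across worker processes, but the hit/miss logic stays the same.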
### Frontend Optimizations
- Code Splitting: Lazy loading for components
- Static Generation: Docusaurus pre-renders pages
- Image Optimization: WebP format with responsive sizes
- CSS Modules: Scoped styles prevent bloat
### Results
- Chatbot Response Time: <3 seconds (95th percentile)
- Page Load Time: <5 seconds on 3G
- Lighthouse Score: 95+ on all metrics
## Multilingual Implementation
Supporting Urdu required careful attention to RTL (right-to-left) text:
```js
// website/docusaurus.config.js
i18n: {
  defaultLocale: 'en',
  locales: ['en', 'ur'],
  localeConfigs: {
    ur: {
      label: 'اردو',
      direction: 'rtl',
      path: 'ur',
    },
  },
},
```
```css
/* website/src/css/custom.css */
[dir='rtl'] {
  text-align: right;
}

[dir='rtl'] .sidebar {
  order: 1; /* Move the sidebar to the right */
}
```
## Testing Strategy

### Backend Tests
```python
# backend/tests/test_rag.py
async def test_rag_answer_quality():
    service = RAGService()
    result = await service.answer_question(
        "What is ROS2?",
        chapter_context="module1-ros2",
    )
    assert result["answer"] is not None
    assert len(result["citations"]) > 0
    assert result["confidence"] > 0.7
```
### Frontend Tests
- Component unit tests with Jest
- E2E tests with Playwright
- Accessibility tests with axe-core
## Deployment Pipeline

### Frontend Deployment (Vercel)
`vercel.json`:

```json
{
  "buildCommand": "cd website && npm run build",
  "outputDirectory": "website/build",
  "framework": "docusaurus"
}
```
### Backend Deployment (Render)

```yaml
# render.yaml
services:
  - type: web
    name: book-api
    env: python
    buildCommand: "pip install -r requirements.txt"
    startCommand: "uvicorn main:app --host 0.0.0.0 --port $PORT"
```
## Lessons Learned

### What Went Well
- RAG Architecture: Qdrant + Groq provided excellent performance
- Docusaurus: Perfect for documentation-style content
- FastAPI: Async support made backend development smooth
- Component Architecture: Reusable React components saved time
### Challenges Overcome
- Context Window Limits: Had to chunk content intelligently
- RTL Support: Urdu translation required CSS adjustments
- Rate Limits: Implemented caching and request throttling
- State Management: Complex state for chat history and user progress
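The request-throttling fix mentioned above can be sketched as a token bucket, a standard rate-limiting pattern. This is a generic illustration; the project's actual throttling code isn't shown in the article:

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter: allow `rate` requests per second
    with bursts up to `capacity`. Illustrative, not the project's code."""

    def __init__(self, rate: float, capacity: int, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.clock = clock  # injectable for testing
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill tokens for the elapsed time, capped at capacity
        self.tokens = min(
            self.capacity, self.tokens + (now - self.last) * self.rate
        )
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A bucket per user (or per API key) smooths out bursts against upstream LLM rate limits while still allowing short spikes.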
### What I'd Do Differently
- Use TypeScript for better type safety
- Implement server-side rendering for better SEO
- Add automated content ingestion pipeline
- Build mobile apps with React Native
## Resources & Links
- Live Demo: View the textbook
- GitHub Repository: Source code
- API Documentation: FastAPI Swagger docs
## Final Thoughts
Building this project taught me the power of combining traditional content with modern AI capabilities. The RAG approach provides accurate, grounded responses while the gamification keeps learners engaged.
The future of education is interactive, adaptive, and AI-powered. This project is just the beginning.
What would you add to an AI-powered textbook? Drop your ideas in the comments!