<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tanvi P</title>
    <description>The latest articles on DEV Community by Tanvi P (@tanvi_penu).</description>
    <link>https://dev.to/tanvi_penu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3900671%2F273ec680-2802-4c10-9e4f-e43a206106c9.png</url>
      <title>DEV Community: Tanvi P</title>
      <link>https://dev.to/tanvi_penu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tanvi_penu"/>
    <language>en</language>
    <item>
      <title>Building AceExam: An AI-Powered Accessible Examination Platform with MongoDB</title>
      <dc:creator>Tanvi P</dc:creator>
      <pubDate>Mon, 27 Apr 2026 15:17:09 +0000</pubDate>
      <link>https://dev.to/tanvi_penu/building-aceexam-an-ai-powered-accessible-examination-platform-with-mongodb-5dha</link>
      <guid>https://dev.to/tanvi_penu/building-aceexam-an-ai-powered-accessible-examination-platform-with-mongodb-5dha</guid>
      <description>&lt;p&gt;By &lt;a href="https://dev.to/tanvi_penu"&gt;Tanvi&lt;/a&gt;, &lt;a href="https://dev.to/ranadeep_ponugoti_02cd7cb"&gt;Ranadeep&lt;/a&gt;, &lt;a href="https://dev.to/nachiketh"&gt;Nachiketh&lt;/a&gt;, Jeathraditya&lt;/p&gt;

&lt;p&gt;Developed under the guidance of Professor &lt;a href="https://dev.to/chanda_rajkumar"&gt;Chanda Raj Kumar&lt;/a&gt;; we are thankful for his valuable support throughout this project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/NachtFaust2150/AceExam" rel="noopener noreferrer"&gt;Check out AceExam here&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Online exams aren't built for everyone.
&lt;/h2&gt;

&lt;p&gt;Most digital examination platforms assume every student sees the screen clearly, types comfortably, and processes text without difficulty. The reality for students with visual impairments, dyslexia, or motor disabilities is entirely different: they are left to patch together workarounds such as browser plugins, bolted-on screen readers, or human invigilators typing on their behalf.&lt;br&gt;
The gap isn't just inconvenient. It's a systemic barrier to fair assessment.&lt;br&gt;
AceExam is our attempt to close it. We implement accessibility at the architecture level, train custom AI models for exam interaction, and use MongoDB as the data backbone for a platform that serves genuinely different kinds of users from a single codebase.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Accessibility is not a feature. It is a starting point. AceExam treats accommodation types as first-class citizens, not optional add-ons."&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  How AceExam Is Structured
&lt;/h2&gt;

&lt;p&gt;AceExam follows a clean three-tier hierarchy with clear permissions at each level:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Organisation (super admin) → Teacher / Admin → Student
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The system has three roles: Organisation, Teacher, and Student, each with its own dashboard. A teacher creates an exam, adds questions with difficulty ratings, and assigns it to students. When adding a student, the teacher specifies the student's disability type. That single field, &lt;code&gt;disability_type&lt;/code&gt;, is the key that unlocks everything.&lt;/p&gt;

&lt;p&gt;When a student logs in and starts their exam, the system reads their &lt;code&gt;disability_type&lt;/code&gt; from MongoDB, activates the right accessibility feature profile automatically, extends the exam timer, and transforms the UI — before the first question even loads. No manual setup. No settings menus.&lt;/p&gt;
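
&lt;p&gt;As a minimal sketch of that lookup, assuming a PyMongo client and the &lt;code&gt;users&lt;/code&gt; collection described later in this post (the profile dictionary and the timer multipliers are illustrative values, not our exact configuration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch: resolve a student's disability_type to a feature profile at
# login. Profile contents and timer multipliers are illustrative values.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["aceexam"]

FEATURE_PROFILES = {
    "blind":    {"tts": True, "voice_commands": True, "timer_multiplier": 1.5},
    "dyslexic": {"dyslexia_font": True, "overlays": True, "timer_multiplier": 1.25},
    "motor":    {"keyboard_nav": True, "eye_tracking": True, "timer_multiplier": 1.5},
    "standard": {"toolbar": True, "timer_multiplier": 1.0},
}

def activate_profile(student_id):
    """Read disability_type from MongoDB and pick the matching profile."""
    student = db.users.find_one({"_id": student_id, "role": "student"})
    disability = (student or {}).get("disability_type", "standard")
    # Unknown types fall back to the standard interface.
    return FEATURE_PROFILES.get(disability, FEATURE_PROFILES["standard"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;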

&lt;h2&gt;
  
  
  The Tech Stack
&lt;/h2&gt;

&lt;p&gt;Backend&lt;br&gt;
Python (Flask) with MongoDB — scoped queries and mutations per role, with MongoDB as the primary data store for flexible document structures across exam types.&lt;/p&gt;

&lt;p&gt;Frontend&lt;br&gt;
HTML, CSS, Vanilla JavaScript (Server-side rendering via Jinja2 templates) with a consistent light theme and a compact five-button accessibility toolbar that persists across all exam pages.&lt;/p&gt;

&lt;p&gt;AI / ML&lt;br&gt;
Custom PyTorch models for speech-to-text and eye tracking, plus Python NLP libraries for text simplification; text-to-speech is delivered through the browser's Web Speech API.&lt;/p&gt;

&lt;p&gt;Deployment&lt;br&gt;
Built as a web-based platform — no installation required for students. Accessibility features run client-side using browser APIs where possible.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why MongoDB?
&lt;/h2&gt;

&lt;p&gt;Exam data is not flat. A single student attempt bundles together the student's profile, accommodation type, question set, per-question responses, timestamps, and accessibility state, and these fields vary significantly between accommodation types. A blind student's record carries voice command logs; a dyslexic student's record carries font and overlay settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35phdri5541fjdwsnbh3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35phdri5541fjdwsnbh3.png" alt="mongo db" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Forcing that into a rigid relational schema means either sparse columns everywhere or a patchwork of JOIN tables that make every query more complex than it needs to be.&lt;br&gt;
MongoDB's document model handles this naturally. Each student attempt is one document. Loading an attempt for grading or review is one read — no assembling rows from multiple tables, no N+1 queries. The flexible schema also means adding a new accommodation type or a new accessibility field doesn't require a migration that touches every existing record.&lt;br&gt;
We use MongoDB as the primary data store across three collections:&lt;br&gt;
&lt;code&gt;users&lt;/code&gt; — organisation, teacher, and student accounts with roles and accommodation types&lt;br&gt;
&lt;code&gt;exams&lt;/code&gt; — question banks, assignment metadata, and per-exam settings created by teachers&lt;br&gt;
&lt;code&gt;attempts&lt;/code&gt; — one document per student attempt, with responses, scores, and accessibility state embedded&lt;br&gt;
The once-only constraint on attempts is enforced at the data layer. Once a student submits, the attempt document is locked — subsequent submission calls check the document's status before writing anything.&lt;/p&gt;
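
&lt;p&gt;For a concrete picture, one attempt document might look like the sketch below; the field names are representative of the structure described above, not a verbatim dump of the schema:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Representative shape of one attempts document (field names illustrative).
# Everything needed to render or grade the attempt lives in this document.
attempt = {
    "student_id": "stu_042",
    "exam_id": "exam_017",
    "disability_type": "blind",
    "status": "in_progress",   # flips to "locked" at submission
    "responses": [
        {"question_id": "q1", "answer": "option_b"},
    ],
    "accessibility_state": {
        # Varies by accommodation type: voice command logs for blind
        # students, font and overlay settings for dyslexic students.
        "voice_command_log": ["read", "option_b", "confirm"],
    },
    "score": None,             # written once, when the attempt is locked
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;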

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckwpq2019mer7zxwkmb1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckwpq2019mer7zxwkmb1.png" alt="mongo db" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Accessibility by Accommodation Type
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjc44645az3brr1370je0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjc44645az3brr1370je0.jpg" alt="website" width="800" height="387"&gt;&lt;/a&gt;&lt;br&gt;
When a student logs in, AceExam reads their registered accommodation type and automatically enables the relevant feature set. No manual configuration. No settings menu to navigate.&lt;br&gt;
🔵 Blind&lt;br&gt;
Text-to-speech reads every question aloud. Voice commands handle navigation and answer selection.&lt;br&gt;
🟣 Dyslexic&lt;br&gt;
Dyslexia-friendly font, increased letter spacing, and pastel background overlays to reduce visual stress.&lt;br&gt;
🟢 Motor Impairment&lt;br&gt;
Full keyboard navigation with single-key shortcuts. Eye tracking allows gaze-based option selection without any physical input device.&lt;br&gt;
⚪ Standard&lt;br&gt;
The default interface, with the five-button accessibility toolbar available if needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4u9a2xb559enmsolzu0n.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4u9a2xb559enmsolzu0n.jpg" alt="Attempting qs" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  The AI We Built: Custom Speech-to-Text
&lt;/h2&gt;

&lt;p&gt;Rather than dropping in a general-purpose speech API, we trained a custom keyword classification model — &lt;code&gt;exam_listener_v1.pt&lt;/code&gt; — specifically for exam interaction using PyTorch.&lt;br&gt;
The model classifies audio input into exactly 11 command categories:&lt;br&gt;
&lt;code&gt;cancel · confirm · next · option_a · option_b · option_c · option_d · previous · read · silence · submit&lt;/code&gt;&lt;/p&gt;
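
&lt;p&gt;To make the interaction concrete, here is a minimal inference sketch, assuming the model consumes a mel-spectrogram tensor for one audio window; the preprocessing and the confidence threshold are our assumptions for illustration, not the exact pipeline:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch: classify one audio window into one of the 11 exam commands.
# Loading and thresholding details are illustrative, not our exact code.
import torch

COMMANDS = ["cancel", "confirm", "next", "option_a", "option_b", "option_c",
            "option_d", "previous", "read", "silence", "submit"]

model = torch.load("exam_listener_v1.pt", weights_only=False)
model.eval()

def classify_command(spectrogram):
    """Map a mel-spectrogram window to a single command label."""
    with torch.no_grad():
        logits = model(spectrogram.unsqueeze(0))  # add a batch dimension
        probs = torch.softmax(logits, dim=-1).squeeze(0)
    conf, idx = probs.max(dim=0)
    # Treat low-confidence audio as silence so stray speech never
    # triggers an exam action.
    if conf.item() &lt; 0.8:
        return "silence"
    return COMMANDS[idx.item()]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
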
&lt;h2&gt;
  
  
  Why a narrow vocabulary?
&lt;/h2&gt;

&lt;p&gt;A general ASR model that understands free-form speech introduces significant surface area for misclassification during an exam. If a student says "I think it's option B" while thinking aloud, a general model might select option B unintentionally. Our model knows only the valid exam commands — and classifies only those.&lt;/p&gt;

&lt;p&gt;Accuracy Comparison&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1i7b5n4iuiay5solrwnb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1i7b5n4iuiay5solrwnb.png" alt="Accuracy Comparison" width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our model trades raw accuracy for domain specificity. The accuracy gap reflects limited training data and noise sensitivity — known drawbacks we're addressing through data augmentation and noise-robust training in the next iteration.&lt;/p&gt;
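
&lt;p&gt;As a sketch of the planned direction (this is the intended augmentation step, not our current training code), additive-noise augmentation mixes background noise into clean clips at a target signal-to-noise ratio:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch of additive-noise augmentation for noise-robust STT training.
# Planned direction, not the current pipeline.
import torch

def add_noise(clean, noise, snr_db):
    """Mix a noise clip into a clean waveform at a target SNR in dB."""
    noise = noise[: clean.numel()]                # trim to matching length
    clean_power = clean.pow(2).mean()
    noise_power = noise.pow(2).mean().clamp_min(1e-10)
    # Scale the noise so the mixture lands at the requested SNR.
    scale = torch.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
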
&lt;h2&gt;
  
  
  Eye Tracking for Motor Accessibility
&lt;/h2&gt;

&lt;p&gt;For students with motor impairments, AceExam includes a webcam-based eye tracking module built in Python using OpenCV. It maps pupil position to on-screen MCQ options — sustained gaze on an option triggers selection with no mouse or keyboard required.&lt;/p&gt;
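
&lt;p&gt;A high-level sketch of that loop, with pupil detection reduced to a darkest-point search and the dwell threshold an illustrative value:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch of gaze-based MCQ selection with OpenCV: approximate the pupil
# as the darkest point of the eye crop, map it to a screen quadrant, and
# select after a sustained dwell. Thresholds and layout are illustrative.
import time
import cv2

DWELL_SECONDS = 1.5
_dwell = {"option": None, "since": 0.0}

def find_pupil(eye_gray):
    """Approximate the pupil as the darkest point in a blurred eye crop."""
    blurred = cv2.GaussianBlur(eye_gray, (7, 7), 0)
    _, _, min_loc, _ = cv2.minMaxLoc(blurred)
    return min_loc  # (x, y) of the darkest pixel

def pupil_to_option(cx, cy, w, h):
    """Map a pupil position inside the eye crop to one of four options."""
    left = cx &lt; w / 2
    if cy &lt; h / 2:
        return "option_a" if left else "option_b"
    return "option_c" if left else "option_d"

def update_dwell(option):
    """Return the option once gaze has rested on it for DWELL_SECONDS."""
    now = time.time()
    if option != _dwell["option"]:
        _dwell.update(option=option, since=now)
        return None
    if now - _dwell["since"] &gt;= DWELL_SECONDS:
        return option
    return None
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;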

&lt;p&gt;Accuracy Comparison&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjnb7oy1dg0fpwd2j3v3h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjnb7oy1dg0fpwd2j3v3h.png" alt="Accuracy Comparison" width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The current implementation is computationally intensive and struggles under variable lighting — an acknowledged limitation. It demonstrates the feasibility of gaze-based exam interaction; production-grade accuracy would require a calibrated backbone like MediaPipe. That replacement is already on the roadmap.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Exam Attempt Flow
&lt;/h2&gt;

&lt;p&gt;Every student attempt follows the same path:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Login
  → Accommodation type read from MongoDB
    → Accessibility features auto-enabled
      → Exam loads
        → Attempt (once only)
          → Submit confirmation dialog
            → Score written to MongoDB (attempt locked)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;During the attempt, three accessibility channels run in parallel: the five-button toolbar, single-key keyboard shortcuts, and voice commands. Students can use whichever channel suits them without any mode-switching.&lt;/p&gt;

&lt;h2&gt;
  
  
  What MongoDB Enables in Practice
&lt;/h2&gt;

&lt;p&gt;Flexible document structure across accommodation types. A blind student's attempt document carries different fields from a motor-impaired student's — MongoDB handles both without schema pressure.&lt;br&gt;
Single-read exam loading. The full context needed to render an exam attempt — questions, student profile, accessibility settings, prior responses — lives in one document. One read to load the exam. No assembly required.&lt;br&gt;
Atomic attempt locking. Once a student submits, a single &lt;code&gt;$set&lt;/code&gt; on the attempt document marks it as locked (sketched below). No race conditions, no partial writes, no separate status table to consult.&lt;br&gt;
Scalable across teacher and student volumes. MongoDB's horizontal scaling means adding new schools, new exam sets, and new student cohorts doesn't require redesigning the data layer.&lt;/p&gt;
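
&lt;p&gt;A minimal sketch of that lock, assuming the &lt;code&gt;attempts&lt;/code&gt; collection described earlier. Filtering on status makes the lock-and-score write atomic, so a repeated submission matches nothing and writes nothing:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch of the once-only submit: one conditional update locks the attempt
# and records the score atomically. Field names are illustrative.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["aceexam"]

def submit_attempt(attempt_id, score):
    """Lock the attempt and write the score in a single atomic update."""
    result = db.attempts.update_one(
        {"_id": attempt_id, "status": "in_progress"},  # unlocked attempts only
        {"$set": {"status": "locked", "score": score}},
    )
    # modified_count == 0 means the attempt was already locked.
    return result.modified_count == 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;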

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;AceExam is a research prototype — a proof of concept that accessible digital examination is achievable with a focused stack and well-scoped AI models. The next directions:&lt;br&gt;
Automated question generation via NLP&lt;br&gt;
Semantic answer grading (not just keyword match)&lt;br&gt;
Text simplification for dyslexic users using transformer-based models&lt;br&gt;
MediaPipe-backed eye tracking to replace the current custom implementation&lt;br&gt;
Noise-robust STT training with data augmentation to close the accuracy gap&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;The decision to train domain-specific AI models rather than use off-the-shelf APIs was deliberate. A narrow-vocabulary STT model makes fewer exam-context errors even at lower raw accuracy. A gaze-based selection system, even at 40% accuracy in prototype form, demonstrates that the interaction paradigm is viable and gives us a concrete upgrade path.&lt;br&gt;
MongoDB's document model was the right call for a platform where user data shapes vary significantly by role and accommodation type. Flexible schema meant faster iteration, and the single-document-per-attempt design kept the data access patterns clean.&lt;br&gt;
The goal was never to build a finished product. It was to demonstrate that the gap between standard exam platforms and genuinely accessible ones can be closed — and to show exactly where the hard problems still live.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>mongodb</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
