Team members
This project was developed by:
Devendhar Rao @devendhar_rao
Madhan Chowdary @madhan_chowdary
Prabu Kiran @j_prabhukiran_9b653c71e
B. Rooprekha @roop_rekhabharde_eae873c
Srujana Sadhu @srujanasadhusharma
Chanda Raj Kumar Sir @chanda_rajkumar
We’d also like to thank @chanda_rajkumar Sir for the constant guidance and support throughout this project. A lot of the clarity we had in system design and implementation came from those discussions.
Why do students struggle without anyone noticing?
- This is something we kept seeing again and again.
- A student starts missing a few classes. They participate a little less. Their feedback becomes vague - “okay”, “fine”, nothing detailed.
- Individually, none of this looks serious. But over time, it adds up.
- And by the time it shows up in marks, it’s already too late.
- That’s where this idea came from - what if we could catch these signals early instead of reacting later?
What did we build?
We built a student engagement detection system that tries to answer one simple question:
“Is this student starting to disengage?”
Instead of depending on just marks or attendance, we combine multiple signals:
- Academic performance
- Attendance
- Behavioral patterns
- Written feedback
The goal is not just prediction, but early awareness.
What does the system actually do?
Once student data is available, the system:
- Looks at attendance and marks
- Processes feedback text using NLP
- Combines everything into an engagement score
- Classifies students as Engaged, Moderate, or At Risk
- Shows the result along with a confidence score
It’s not meant to replace teachers - just give them a clear early signal.
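As a rough sketch, the final classification step could look like this. The 0.4 / 0.7 thresholds and the distance-based confidence heuristic are illustrative assumptions, not the project's actual values:

```python
def classify_engagement(score):
    """Map an engagement score in [0, 1] to a label plus a simple
    confidence value.

    Confidence here is just the distance from the nearest class
    boundary, rescaled into [0.5, 1.0] -- a stand-in for a real
    model's probability output.
    """
    boundaries = (0.4, 0.7)
    if score >= boundaries[1]:
        label = "Engaged"
    elif score >= boundaries[0]:
        label = "Moderate"
    else:
        label = "At Risk"
    nearest = min(abs(score - b) for b in boundaries)
    confidence = round(min(1.0, 0.5 + nearest), 2)
    return label, confidence

print(classify_engagement(0.85))  # ('Engaged', 0.65)
print(classify_engagement(0.20))  # ('At Risk', 0.7)
```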
Tech stack
We kept things simple and practical:
- Frontend: React + Tailwind CSS
- Backend: Django Framework
- Database: MongoDB
- AI logic: ML + NLP
- Visualization: basic charts for dashboards
Why MongoDB?
We went with MongoDB mainly because our data isn’t uniform.
A single student record can include:
- Numbers (marks, attendance)
- Text (feedback)
- Computed results (scores, labels)
Trying to force all of that into rigid tables didn’t feel right.
MongoDB made it easier to:
- Store mixed data
- Update fields when predictions are generated
- Fetch everything in one go for dashboards
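To make the "mixed data" point concrete, a single student document might look like the sketch below. Field names and values are assumptions for illustration, not the project's actual schema; with pymongo, the post-prediction step would be a single `update_one(..., {"$set": ...})`, shown here as a plain dict merge so the example runs without a database:

```python
# Illustrative document shape for one student record (hypothetical
# field names -- numbers, text, and computed results side by side).
student_doc = {
    "student_id": "S101",
    "attendance_pct": 62.5,                      # numeric
    "marks": {"math": 58, "physics": 71},        # numeric, nested
    "feedback": ["I didn't understand this topic", "It's okay"],  # text
    # Computed fields, filled in after prediction:
    "engagement_score": None,
    "engagement_label": None,
}

# When a prediction is generated, only the computed fields change.
prediction = {"engagement_score": 0.41, "engagement_label": "Moderate"}
student_doc.update(prediction)

print(student_doc["engagement_label"])  # Moderate
```

Because the whole record lives in one document, a dashboard can fetch marks, feedback, and the latest prediction in a single read.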
AI / ML / NLP - what’s actually happening behind the scenes
We didn’t use anything overly complicated, but we focused on combining things properly.
- Basic prediction model
At the core, we use a simple feature-based approach:
F = {attendance, marks, behavior, feedback}
Each of these contributes to the final engagement score.
For example:
- Low attendance → increases risk
- Low marks → increases risk
- Negative feedback → strong signal
It’s simple, but when combined, it becomes quite effective.
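The feature set F above can be turned into a score with a simple weighted combination. A minimal sketch, with made-up weights and normalisation (not the tuned values from the actual system):

```python
def engagement_score(attendance_pct, avg_marks, behavior, feedback_sentiment):
    """Weighted combination of the features in F.

    attendance_pct and avg_marks are on a 0-100 scale; behavior and
    feedback_sentiment are already in [0, 1] (1 = positive).
    The weights below are illustrative assumptions.
    """
    weights = {"attendance": 0.30, "marks": 0.30,
               "behavior": 0.15, "feedback": 0.25}
    features = {
        "attendance": attendance_pct / 100,
        "marks": avg_marks / 100,
        "behavior": behavior,
        "feedback": feedback_sentiment,
    }
    return round(sum(weights[k] * features[k] for k in weights), 3)

# Low attendance and negative feedback both pull the score down:
print(engagement_score(55, 60, 0.5, 0.2))  # 0.47
```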
- NLP for feedback analysis
This was one of the most useful parts.
Students often write things like:
- “I didn’t understand this topic”
- “This is confusing”
- “It’s okay”
Even if marks are fine, this kind of feedback can indicate a problem. So we use basic NLP to:
- Detect sentiment
- Identify confusion or negativity
This adds a layer that numbers alone can’t capture.
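A minimal keyword-based version of that check might look like this. The cue lists are assumptions purely for illustration; a real sentiment model would generalise far better than exact string matching:

```python
# Hypothetical cue lists -- a real system would use a trained
# sentiment model rather than keyword matching.
NEGATIVE_CUES = {"confusing", "confused", "didn't understand",
                 "difficult", "lost"}
VAGUE_CUES = {"okay", "fine", "ok", "it's okay", "its okay"}

def feedback_flags(text):
    """Return simple flags for negative or vague feedback."""
    lower = text.lower().strip(" .!")
    negative = any(cue in lower for cue in NEGATIVE_CUES)
    vague = lower in VAGUE_CUES
    return {"negative": negative, "vague": vague}

print(feedback_flags("This is confusing"))  # {'negative': True, 'vague': False}
print(feedback_flags("It's okay"))          # {'negative': False, 'vague': True}
```

Even this crude version catches the “decent marks, negative feedback” case that numbers alone miss.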
- Multimodal approach
Most systems look at one thing: marks or attendance.
We combine:
- Numerical data
- Behavioral data
- Text data
This gives a much more complete picture of what’s going on.
- Deep learning (from research side)
In our research work, we also explored:
- LSTM models for tracking patterns over time
- Attention mechanisms to weigh features
- Transformer-based NLP for deeper text understanding
These aren’t fully implemented in the current system, but they show where this can go next.
- Combining everything
The important part isn’t each individual model—it’s how they work together.
We:
- Process numerical data
- Analyze text feedback
- Combine everything into one score
That combination is what improves accuracy.
How the system works (step by step)
- User logs in
- Student data is entered
- Data is cleaned and prepared
- Feedback goes through NLP analysis
- All features are combined
- Engagement score is calculated
- Result is shown on the dashboard
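The steps above can be tied together in a small end-to-end sketch. Everything here - function structure, cue words, weights, thresholds - is an illustrative assumption, not the actual implementation:

```python
def run_pipeline(record):
    """End-to-end sketch: clean -> NLP -> combine -> score -> label."""
    # 1. Clean and prepare numeric data (clamp and normalise to [0, 1]).
    attendance = max(0.0, min(1.0, record["attendance_pct"] / 100))
    marks = max(0.0, min(1.0, record["avg_marks"] / 100))

    # 2. Tiny stand-in for the NLP step: count negative cue words.
    cues = ("confusing", "didn't understand", "difficult")
    negatives = sum(any(c in f.lower() for c in cues)
                    for f in record["feedback"])
    sentiment = max(0.0, 1.0 - 0.4 * negatives)  # 1 = positive

    # 3. Combine all features into one score (made-up weights).
    score = 0.4 * attendance + 0.35 * marks + 0.25 * sentiment

    # 4. Classify (made-up thresholds).
    label = ("Engaged" if score >= 0.7
             else "Moderate" if score >= 0.4
             else "At Risk")
    return {"score": round(score, 3), "label": label}

result = run_pipeline({
    "attendance_pct": 58,
    "avg_marks": 62,
    "feedback": ["This is confusing", "It's okay"],
})
print(result)  # {'score': 0.599, 'label': 'Moderate'}
```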
Results
We compared the accuracy of different approaches:
- Marks only: 78%
- Attendance only: 75%
- Basic combination: 82%
- Our system: 88%
The improvement mainly comes from including feedback analysis.
What we observed
A few interesting things came out:
- Students with low attendance + negative feedback were almost always at risk
- Some students had decent marks but negative sentiment in feedback
- NLP helped catch issues that weren’t visible otherwise
Dashboards
We kept the UI simple:
For teachers:
- See all students
- Quickly identify at-risk ones
- View basic trends
For students:
- See their engagement level
- Understand where they stand
- Get suggestions
Challenges
- Dataset was limited
- Results depend heavily on input quality
- No real-time tracking yet
- The model is still fairly simple
What’s next
There’s a lot of scope to improve this:
- Use more advanced ML models
- Add real-time monitoring
- Build a mobile version
- Support multiple languages
- Give more personalized recommendations
Final thought
Most systems tell you:
“This student has already failed.”
We’re trying to build something that tells you:
“This student might need help right now.”
That small shift in timing can make a big difference.
Links
GitHub: https://github.com/Devendhar2006/PFSAD.git