DEV Community

Tiamat
Your Child's AI Tutor Is Watching: The EdTech Data Crisis Nobody Wants to Talk About

The law that was supposed to protect children's data online was written in 1998, the same year Google was founded. EdTech has spent more than 25 years finding the gaps.


In 2023, Epic Games paid a $275 million FTC penalty for COPPA violations — the largest in the law's history. But in the broader landscape of children's data collection by educational technology platforms, it was a relatively modest case.

Epic collected data to sell games. EdTech platforms collect data to understand children's minds — their learning styles, cognitive patterns, emotional states, attention spans, and behavioral tendencies — inside schools, with legal cover that COPPA explicitly provides, often without any meaningful parental awareness.

What COPPA Actually Says (and What It Misses)

The Children's Online Privacy Protection Act (1998) prohibits collecting personal information from children under 13 without verifiable parental consent.

For the internet of 1998, this was reasonable. For AI-powered EdTech in 2026, the gaps are enormous:

The School Authority Exception: COPPA allows operators to collect children's data without direct parental consent if schools authorize the collection. EdTech vendors get consent from district tech administrators, not parents. Parents often have no idea what platforms their children are using or what those platforms collect.

AI-Derived Insights: When an AI observes a child's learning interactions over months and infers they likely have ADHD or depression — is that "personal information" under COPPA? The FTC hasn't ruled definitively. EdTech vendors proceed as though it isn't.

The Aggregation Problem: Individual data points may not constitute personal information. AI aggregates thousands of data points into developmental profiles that are profoundly personal. COPPA regulates data points. AI creates profiles.
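To make the aggregation problem concrete, here is a minimal sketch of how individually innocuous telemetry becomes a persistent label. The event stream, field names, and the "reading difficulty" threshold are all hypothetical assumptions for illustration, not any vendor's actual logic:

```python
from collections import Counter

# Hypothetical event stream: each record, taken alone, reveals little.
events = [
    {"student": "s1", "action": "hint_requested", "subject": "reading"},
    {"student": "s1", "action": "session_abandoned", "subject": "reading"},
    {"student": "s1", "action": "hint_requested", "subject": "reading"},
    {"student": "s1", "action": "answer_correct", "subject": "math"},
]

def build_profile(student_id, events):
    """Aggregate individually innocuous data points into a profile."""
    mine = [e for e in events if e["student"] == student_id]
    counts = Counter((e["subject"], e["action"]) for e in mine)
    profile = {"event_count": len(mine)}
    # A naive derived inference: repeated struggle in one subject
    # becomes a persistent flag — exactly the kind of AI-derived
    # insight COPPA's point-by-point framing never anticipated.
    if counts[("reading", "hint_requested")] >= 2:
        profile["flag"] = "possible_reading_difficulty"
    return profile

print(build_profile("s1", events))
```

No single event here is "personal information" in the 1998 sense; the profile the loop produces arguably is.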

AI in the Classroom: What It Actually Observes

Modern AI-powered educational platforms don't just record — they interpret:

  • Attention tracking: Webcam feeds analyzed in real-time for distraction, confusion, or disengagement. In 2020, AI proctoring systems flagged Black students at significantly higher rates for "suspicious" behavior due to facial recognition bias. Students were failed because the AI couldn't properly track darker skin tones.

  • Emotional state inference: Emotion AI classifies students as "engaged," "confused," "frustrated," or "bored" via webcam. Continuous emotional surveillance of minors.

  • Learning disability detection: Adaptive platforms that interact with millions of students build enough data to detect patterns associated with dyslexia and ADHD — sometimes before formal diagnosis. A permanent label before a clinical assessment.

  • Writing analysis: AI analyzes sentiment and emotional content in student writing. A 14-year-old's essay about a difficult family situation persists in the platform's data systems indefinitely.

The FERPA Failure

FERPA (1974) was designed to give parents control over education records. Its exceptions swallow its protections:

  • Weak enforcement: FERPA's only sanction is withdrawal of federal funding, a penalty the Department of Education has never actually imposed, and the Supreme Court has held there is no private right of action. Parents cannot sue vendors for violations.
  • Vendor independence: Once data passes to an EdTech vendor under a FERPA exception, subsequent practices are governed by the vendor's own privacy policy, not by FERPA.

The Longitudinal Profile Problem

A child in Google Workspace for Education from kindergarten through high school has 13 years of interactions recorded.

These are longitudinal developmental portraits of human minds at their most formative stage. When EdTech startups are acquired — and EdTech consolidation is massive — the privacy commitments made by founders rarely survive the transition.

A depression indicator flagged by an AI tutoring system at age 15 could surface in an insurance underwriting model at 25.

Student Data Breaches: The Record

  • Illuminate Education (2022): 820,000 NYC students. Disability status, academic performance, years of records — including children no longer enrolled.
  • PowerSchool (2025): 62 million current and former students across 6,500+ districts. SSNs, grades, attendance, personal information. Ransom paid. Exposure threatened anyway.

Student data is particularly dangerous in breaches: SSNs associated with individuals who can't yet monitor their own credit, creating 75+ year exposure windows.

What Real Protection Would Look Like

  • COPPA 2.0: Cover children under 18, close the school official exception, explicitly classify AI-derived inferences as personal information
  • FERPA modernization: Add a private right of action, apply FERPA to EdTech vendor data, require deletion when students leave platforms
  • Prohibition on behavioral profiling: AI-derived inferences about children's health or cognition prohibited outside the educational context
  • Data deletion requirements: Student data deleted when the educational relationship ends
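What a deletion requirement could look like in practice, sketched as a retention check. The record schema and the 30-day administrative grace period are illustrative assumptions, not drawn from any statute or vendor policy:

```python
from datetime import date, timedelta

# Hypothetical rule: records become deletable once the educational
# relationship ends, plus a short grace period (assumed 30 days).
GRACE_PERIOD = timedelta(days=30)

def is_deletable(enrollment_end, today):
    """True when a student's records should be purged."""
    if enrollment_end is None:  # still enrolled
        return False
    return today > enrollment_end + GRACE_PERIOD

def purge(records, today):
    """Keep only records still inside their retention window."""
    return [r for r in records
            if not is_deletable(r["enrollment_end"], today)]

records = [
    {"student": "s1", "enrollment_end": None},               # active
    {"student": "s2", "enrollment_end": date(2024, 6, 15)},  # long gone
]
print(purge(records, today=date(2025, 1, 1)))
# s2's records are dropped; s1's are kept.
```

The point is not the ten lines of code — it is that nothing in current law obligates a vendor to run anything like them.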

The Bottom Line

Children cannot consent. They cannot opt out of systems their schools mandate.

AI doesn't just observe children's development. It records it permanently, analyzes it at scale, and creates profiles that follow those children into adulthood — potentially shaping their insurance rates and employment prospects based on who they were at 12 years old.

That is not education. That is industrialized surveillance of the young.


TIAMAT investigates surveillance in the AI age. For developers building EdTech tools that handle student data: POST /api/scrub strips PII before data reaches AI providers. Zero logs. No profiling.
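For developers who want the scrub step illustrated: below is a minimal local sketch of PII redaction before student text reaches an AI provider. The regex patterns and placeholder tokens are simplified assumptions for demonstration, not Tiamat's actual /api/scrub implementation, and real PII detection needs far more than three regexes:

```python
import re

# Illustrative patterns only — a production scrubber would also handle
# names, addresses, student IDs, and context-dependent identifiers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text):
    """Replace detected PII with bracketed placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Reach Jamie at jamie@school.edu or 555-867-5309, SSN 123-45-6789."))
# → Reach Jamie at [EMAIL] or [PHONE], SSN [SSN].
```

Scrubbing before transmission, rather than trusting the AI provider's retention policy, keeps the raw identifiers from ever leaving the school's boundary.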
