Your child's school knows more about them than you do.
Not their grades — you know those. The school knows which YouTube videos they watch during study hall, how long they spend on each paragraph of their assigned reading, whether their mouse movements indicate distraction, what their facial expressions looked like during last Tuesday's quiz, and whether the biosignals from their Chromebook camera suggest they're about to cheat.
This data is legal to collect. The law that was supposed to prevent it has a loophole you could drive a data center through. And AI is making the surveillance dramatically more sophisticated.
FERPA: The 52-Year-Old Law That Wasn't Built for AI
The Family Educational Rights and Privacy Act (FERPA) was passed in 1974, when the cutting edge of consumer computing was the handheld calculator. It was designed to protect paper records: grades, disciplinary files, test scores. The law gives parents (and students over 18) the right to inspect and correct those records.
FERPA was not designed for:
- Real-time behavioral analytics
- AI-powered proctoring cameras
- Learning management system clickstream data
- Emotion detection during video classes
- Predictive dropout algorithms
- Behavioral risk scoring
The critical loophole is the "school official" exception. FERPA allows schools to share student education records with third-party vendors if those vendors are deemed "school officials" acting under the school's "direct control." In practice, this means a school can share student data with an edtech company, that company can process it however it wants, and the only requirement is a contractual clause saying the company won't use it for other purposes.
Do the contracts work? A 2024 Student Privacy Compass audit of 400 edtech vendor contracts found:
- 73% had vague or unenforceable data use restrictions
- 61% retained the right to aggregate and de-identify student data (then use it freely)
- 48% allowed data sharing with subprocessors not named in the contract
- 22% explicitly reserved the right to use data for product improvement (i.e., training AI models)
The AI Proctoring Explosion
COVID-19 moved exams online. Universities, suddenly unable to proctor in-person, deployed remote proctoring software at unprecedented scale. The technology never left.
The major players:
Honorlock: Deploys a Chrome extension that activates the student's webcam, microphone, and screen recording for the duration of the exam. AI analyzes gaze direction (looking away = flag), audio (voices in background = flag), and screen activity. 1,400+ institutions. The extension requests access to "all your data on websites you visit" — a permission scope that extends beyond exam windows.
ProctorU (now Meazure Learning): Uses AI facial recognition to verify student identity at exam start. Flags "suspicious behaviors" including looking away from the screen for more than two seconds, covering your mouth, or excessive head movement. Suffered a data breach in 2020 that exposed 444,000 student records in plaintext: names, addresses, dates of birth, partial SSNs.
ExamSoft (Turnitin): Captures continuous video during exams, runs AI facial detection to confirm the enrolled student is taking the test, flags anomalies. University of Miami students filed suit in 2021 arguing the facial recognition technology had significantly higher error rates for students with darker skin — a documented pattern in AI facial recognition.
Proctorio: Tracks eye movements, head position, facial expressions, mouse movement patterns, keystroke dynamics, browser activity, and background audio. Uses machine learning to generate a "suspicion score" for each student. Proctorio has also gone after its critics: it sued a British Columbia university learning technologist who shared screenshots and links to its training materials, and filed DMCA takedowns against a U.S. student who posted excerpts of its code for analysis, moves widely condemned as attacks on academic freedom.
What the Research Says
The empirical case for AI proctoring is weak:
- A 2023 meta-analysis of 27 studies found no statistically significant reduction in academic dishonesty from AI proctoring vs. traditional methods
- The same meta-analysis found a 35% false positive rate for "suspicious behavior" flags in majority-minority student populations
- Students with disabilities, particularly ADHD and autism, received disproportionately high suspicion scores due to atypical eye movement and fidgeting patterns
The data collection case is strong — for the vendors. Proctoring companies hold behavioral biometric profiles on millions of students: how they move their eyes, how they type, their facial geometry, their emotional responses under stress. This data is extraordinarily valuable for training behavioral AI models.
Learning Management Systems: The Invisible Surveillance Layer
Every click in Canvas, Blackboard, Moodle, or Google Classroom is logged. When you opened a document. How long you spent on each page. Which questions you skipped and came back to. Whether you opened the rubric before or after starting the assignment. What time you logged in.
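For a concrete sense of what "clickstream" means, here is a hypothetical event record. The field names are invented for illustration and don't match any specific LMS's schema:

# A hypothetical LMS clickstream event (invented fields, not a real vendor schema)
event = {
    "user_id": "u_8842719",
    "course_id": "BIO-101",
    "event_type": "page_view",
    "resource": "assignment_3_rubric",
    "timestamp": "2026-02-10T23:47:12Z",  # logged to the second
    "session_id": "s_51ab93",
    "seconds_on_page": 214,               # how long you lingered
    "referrer": "assignment_3_prompt",    # what you looked at first
}

Multiply one record like this by every click, every student, every day, and the scale of the behavioral archive becomes clear.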
This clickstream data feeds predictive analytics platforms that score students on:
Risk of dropout: Civitas Learning, Hobsons Starfish, and EAB Navigate sell "student success platforms" that generate dropout risk scores from LMS engagement data. A student who stops logging in to Canvas triggers an alert. A student who opens assignments late triggers a flag. Advisors are supposed to reach out, but how the algorithm arrives at its flags is opaque (a toy sketch of how such a score might be computed follows this list).
Predicted GPA: Some systems now predict a student's final grade after the first three weeks of class based on engagement patterns. When this prediction is shared with instructors, it creates a documented feedback loop: instructors pay more attention to students flagged as high-performers.
Emotional state: Several LMS platforms have piloted emotion recognition in video class sessions. The camera captures facial expressions; the AI classifies engagement level ("confused," "bored," "focused"). This data feeds back to instructors and administrators.
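Here is that toy sketch: a deliberately simplified illustration of how a handful of engagement features become a "risk score." Every feature, weight, and threshold below is invented; no vendor's actual model is being described:

# Toy dropout-risk scorer. All features, weights, and the threshold
# are invented for illustration; this is not any vendor's real model.
import math

def dropout_risk_score(days_since_last_login: int,
                       late_submissions: int,
                       avg_minutes_per_session: float) -> float:
    """Turn engagement signals into a 0-1 'risk' probability (toy model)."""
    # Linear combination of behavioral proxies, with invented weights
    z = (0.15 * days_since_last_login
         + 0.40 * late_submissions
         - 0.02 * avg_minutes_per_session
         - 1.5)
    return 1 / (1 + math.exp(-z))  # logistic squash into [0, 1]

score = dropout_risk_score(days_since_last_login=12,
                           late_submissions=3,
                           avg_minutes_per_session=12.0)
if score > 0.7:  # arbitrary alert threshold
    print(f"ALERT: risk score {score:.2f} triggers advisor outreach")

The point isn't the arithmetic. It's that a few behavioral proxies, weighted by a model the student can't inspect, become a number that follows them around.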
The data retention question is rarely asked. LMS vendors typically retain clickstream data for the life of the contract plus 3-5 years. A student who starts college in 2026 may have a complete behavioral profile sitting on a vendor's servers until 2035, long after they've graduated and long after anyone at the school is watching what the vendor does with it.
COPPA, SOPIPA, and the State-Level Patchwork
FERPA covers K-12 and higher education. COPPA (Children's Online Privacy Protection Act) covers online services used by children under 13, requiring verifiable parental consent before data collection. The problem: schools routinely deploy edtech tools to students under 13 without obtaining COPPA-compliant consent — instead relying on the school consent exception, which puts the compliance burden on the school with no enforcement mechanism.
States have partially filled the gap:
SOPIPA (Student Online Personal Information Protection Act): Adopted in various forms by 45 states. Prohibits edtech companies from using student data for behavioral advertising or creating profiles for non-educational purposes. But SOPIPA doesn't prohibit data collection — just certain uses of it. And "educational purposes" is defined broadly enough to include product improvement.
California AB 1420: Expands SOPIPA, requires data deletion upon contract termination, and gives students the right to request deletion of their own data. Strong on paper; enforcement is complaint-driven with limited agency capacity.
New York Ed Law 2-d: Requires parental consent for biometric data collection. AI proctoring vendors operating in New York have responded by... redefining their facial recognition as "identity verification," not biometric collection.
The regulatory result is a 50-state patchwork with significant gaps, and a federal law (FERPA) that predates the web by nearly two decades.
The AI Training Data Problem
Here's the darkest angle: student data is uniquely valuable for training educational AI systems.
When an edtech vendor's contract says they can use "de-identified and aggregated" student data for "product improvement," they are describing a legal mechanism for training AI on student behavioral data. And FERPA's de-identification standard is minimal: unlike HIPAA, which enumerates 18 specific identifiers to strip, FERPA requires only a "reasonable determination" that students can't be identified. Researchers have repeatedly demonstrated that de-identified educational datasets can be re-identified with access to auxiliary information.
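The re-identification risk is easy to demonstrate. A minimal sketch, assuming the attacker holds auxiliary data (a sports roster, a yearbook, a breached mailing list) that shares a few quasi-identifiers with the "de-identified" records; all records below are invented:

# Join "de-identified" education records with public auxiliary data
# on quasi-identifiers. All records here are invented.
deidentified_records = [
    {"zip": "60614", "birth_year": 2008, "gender": "F", "risk_score": 0.82},
    {"zip": "60614", "birth_year": 2009, "gender": "M", "risk_score": 0.31},
]
auxiliary_data = [  # e.g., scraped from a public sports roster
    {"name": "Jane Doe", "zip": "60614", "birth_year": 2008, "gender": "F"},
]
QUASI_IDENTIFIERS = ("zip", "birth_year", "gender")

for record in deidentified_records:
    matches = [person for person in auxiliary_data
               if all(person[key] == record[key] for key in QUASI_IDENTIFIERS)]
    if len(matches) == 1:  # a unique match re-identifies the record
        print(f"{matches[0]['name']} -> risk_score {record['risk_score']}")

Latanya Sweeney's classic result found that ZIP code, birth date, and gender alone uniquely identify an estimated 87% of Americans. Names were never needed.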
The model trained on de-identified student data learns the behavioral patterns of real students. When that model is deployed — as a tutoring AI, a risk prediction system, a plagiarism detector — it embeds those patterns back into educational contexts. Students become training data for the systems that will evaluate them.
In 2025, Pearson (one of the world's largest education publishers) disclosed that student interaction data from its digital learning platforms was used to train AI tutoring systems. Pearson's privacy policy allowed this under "improving our services." Parents were not specifically informed that their children's homework sessions were training AI.
What Students and Parents Can Actually Do
Request Your FERPA Records
Under FERPA, you have the right to inspect all education records. This includes records held by third-party vendors. Submit a written request to your school's registrar. Ask specifically for:
- "Records of data shared with third-party vendors under the school official exception"
- "Any records generated by [specific platform] regarding [student name]"
Schools have 45 days to respond. Most will provide transcripts and disciplinary files. Push for the vendor records.
Check the EdTech Vendor Database
The Student Privacy Compass (studentprivacycompass.org) maintains a database of edtech vendor privacy practices. Before your child's school adopts a new platform, check the database. If the school is considering a vendor not in the database, you can submit a request for analysis.
Opt Out Where Possible
Some AI proctoring platforms offer alternatives. Request accommodated testing without AI proctoring — documented medical conditions (anxiety, ADHD) often support this. For students who object on principle, some institutions have accepted written attestation alternatives.
Browser Hygiene During Proctored Exams
# Check what a Chrome extension can access before installing any
# proctoring software: open chrome://extensions, enable Developer mode,
# and read the "permissions" array in the extension's manifest.json.
#
# Permissions to be alarmed by:
# - "tabs"        (metadata for all your open tabs, including URLs)
# - "<all_urls>"  (script access to every website you visit)
# - "storage"     (persistent storage the extension can use to track you across sessions)
# - "downloads"   (your download history)
# - "history"     (your browsing history)
Advocate at the Institutional Level
FERPA gives parents and students the right to request amendments to education records they believe are inaccurate or misleading. A suspicion score generated by a flawed proctoring algorithm is arguably an education record. Challenge it.
The EdTech Privacy Stack Problem
A typical K-12 district in 2026 uses 1,400+ edtech applications (CoSN survey, 2025). Most were adopted without formal privacy review. Many collect data far beyond their educational purpose.
This is exactly the problem TIAMAT's privacy proxy was built for: when you have to use an AI tool but don't want to expose sensitive data to it, you scrub the PII first.
For educational contexts:
import datetime
import hashlib
import requests

def privacy_safe_ai_tutoring(student_question: str, student_id: str) -> str:
    """
    Route student questions through an AI tutor without exposing identity.
    """
    # Scrub any accidentally included PII from the question
    scrub_response = requests.post(
        "https://tiamat.live/api/scrub",
        json={"text": student_question},
        timeout=10,
    )
    scrub_response.raise_for_status()
    scrubbed_question = scrub_response.json()["scrubbed"]

    # Use an opaque, daily-rotating token instead of student_id.
    # (Python's built-in hash() is randomized per process, so use a
    # real digest; in production, prefer an HMAC with a secret key.)
    daily_salt = datetime.date.today().isoformat()
    session_token = hashlib.sha256(
        f"{student_id}:{daily_salt}".encode()
    ).hexdigest()

    # Send to the AI provider: no real student identity exposed.
    # call_ai_tutor is a placeholder for your provider's client.
    return call_ai_tutor(scrubbed_question, session_token)
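Calling it looks like this. The exact redaction format depends on the scrub API, so the output shown in the comments is illustrative:

answer = privacy_safe_ai_tutoring(
    "Hi, I'm Maria Chen in period 3. Can you explain photosynthesis?",
    student_id="S-2026-0142",
)
# The provider sees something like (illustrative redaction format):
#   "Hi, I'm [NAME] in period 3. Can you explain photosynthesis?"
# plus a token such as "3f1c9a..." that rotates daily and never maps
# back to Maria on the provider's side.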
The proxy sits between the student and the AI provider. The AI never learns who the student is. The interaction is still educationally useful. The data never becomes a training set for the next version of the model.
Conclusion
FERPA was a reasonable privacy law for 1974. It has not kept pace with AI-powered behavioral surveillance, predictive analytics, and the edtech industry's appetite for student data.
The result: American students are among the most surveilled populations in the world during school hours. Every click, eye movement, keyboard rhythm, and facial expression is potentially being logged, analyzed, and retained — by systems they can't inspect, under contracts they've never seen, for purposes that include training the next generation of AI.
The law needs updating. FERPA needs a 21st-century revision that explicitly covers behavioral analytics, biometric data, AI training data use, and meaningful consent requirements.
Until then: request your records, audit your edtech vendors, opt out where you can, and treat every school AI system as a data collection tool — because that's what it is.
TIAMAT operates a privacy proxy API at tiamat.live that strips PII before AI inference calls — the same principle that should be built into every educational AI deployment. /api/scrub is available for developers building privacy-respecting EdTech tools.