FERPA was written in 1974. The AI systems reshaping American education arrived decades later. The gap is a crisis.
In 2020, when schools went remote, millions of students logged into platforms they'd never heard of. Proctorio. GoGuardian. Bark. Securly. Gaggle. These systems — billed as tools for learning continuity and student safety — became mandatory components of American K-12 education almost overnight.
Four years later, they're permanent. And the surveillance has metastasized.
AI systems in American schools now:
- Scan students' screens in real time and flag suspicious activity to teachers
- Record students' faces via webcam during tests, analyzing eye movements, head position, and "anomalous behavior"
- Monitor all school device activity including email, Google Drive, Chrome browsing history, and downloaded files
- Scan social media for mental health warning signs, then send alerts to school administrators
- Score students' essays with AI that vendors claim can predict GPA, attendance, and "future performance"
- Track physical location via school ID cards and classroom cameras
- Monitor emotional state through facial expression AI and "engagement scoring"
The law that was supposed to protect students from this surveillance is called FERPA — the Family Educational Rights and Privacy Act. It was passed in 1974. It is completely inadequate for the AI surveillance it's supposed to govern.
What FERPA Does — and Doesn't Do
FERPA gives students (and parents of minor students) three core rights:
- Access: the right to inspect their education records
- Amendment: the right to challenge records they believe are inaccurate
- Privacy: protection against disclosure of their records without consent
The critical word is records. FERPA protects education records — documents, files, and materials directly related to a student that are maintained by an educational institution.
Here is what FERPA was not designed for:
- Real-time behavioral monitoring streams that may never be stored as "records"
- AI inference engines that analyze student data and generate risk scores
- Vendor systems that process student data outside school servers
- Predictive analytics derived from, but not directly constituting, student records
- Mental health flags generated by scanning student emails and search history
The School Official Exception: The Loophole That Swallowed the Rule
FERPA prohibits sharing student records with third parties without consent — with one major exception: third-party vendors who serve as "school officials" providing legitimate educational services.
Under this exception, EdTech companies with school contracts can access student data for the purposes of providing their services. The problem: "legitimate educational services" has been interpreted so broadly that it covers almost any product a school purchases.
GoGuardian, which monitors every Google search and website visit on school devices, qualifies as a "school official." Proctorio, which records students' faces during exams, qualifies. Gaggle, which scans all student emails for concerning content, qualifies.
None of these companies need parental consent under FERPA because they're providing services to the school, not receiving data from the school for external purposes.
The Companies Doing the Monitoring
GoGuardian
GoGuardian is installed on student devices in over 14,000 school districts, serving approximately 27 million students in the United States. The product:
- Monitors all web browsing on school devices, including when students take those devices home
- Filters and blocks content using AI-driven content classification
- Flags concerning searches (suicide, self-harm, weapons, drugs) to school administrators
- Provides "Beacon" — a service that alerts parents and administrators to mental health risk flags identified through AI monitoring of student activity
GoGuardian's privacy policy allows it to use anonymized and aggregated student data to improve its products. It has faced criticism from privacy researchers for retaining behavioral profiles beyond what's necessary for content filtering.
The company collects, by its own accounting, data on virtually everything a student does on a school device.
Proctorio
Proctorio provides AI-powered remote proctoring — live monitoring of students' faces, environments, and behavior during online tests. The system uses:
- Webcam access to continuously record students during tests
- Eye tracking to detect gaze direction and flag looking away from the screen
- Head movement analysis to detect anomalous motion patterns
- Audio monitoring to detect ambient sound
- Screen recording to capture everything displayed during the exam
Proctorio has faced numerous civil liberties challenges. A 2020 study found the system flags Black students for "suspicious behavior" at significantly higher rates than white students due to algorithmic bias in its facial detection — which performs worse on darker skin tones.
The company sued a university learning technology specialist who criticized its technology on Twitter, in a widely condemned attempt to silence critique.
Gaggle
Gaggle scans the content of all school-assigned email accounts, Google Drive documents, and Microsoft 365 files for every K-12 student in its client districts. AI analyzes the content and flags messages and documents that may indicate:
- Suicidal ideation
- Self-harm
- Drug use
- Violence
- Sexually explicit content
- Bullying
When Gaggle flags content, human moderators review it before alerting school administrators or, in urgent cases, contacting emergency services.
The company has processed over 1 billion pieces of student content. It operates in over 1,500 school districts.
Privacy researchers have raised serious concerns about the chilling effect on student communication, the retention of flagged communications (which may follow students for years), and the accuracy of Gaggle's AI in detecting genuine risk versus false positives.
What Schools Are Collecting
A 2021 survey of K-12 EdTech privacy practices by the Future of Privacy Forum found:
- The average school district uses 1,417 unique EdTech tools
- Schools in the survey had used a combined total of 500,000 EdTech tools across 663 districts
- Fewer than 10% of these tools had been reviewed by district privacy officers
- The average district has formal data processing agreements with fewer than 200 vendors — meaning over 1,200 tools are operating without explicit privacy agreements
The data these tools collect includes:
- Every Google search made on school devices
- Every website visited and for how long
- All emails, documents, and files in school accounts
- Facial video recordings during assessments
- GPS location data from school devices
- Biometric data from identity verification systems
- Behavioral profiles derived from all of the above
Much of this data sits on EdTech vendors' servers rather than under direct district control, meaning it may be exposed through a data breach, transferred in a vendor sale or acquisition, or put to secondary uses outside the student's educational context.
The Mental Health Surveillance Problem
The most ethically fraught EdTech AI involves mental health monitoring — systems that analyze students' behavior, writing, and online activity to identify psychological distress.
After a series of high-profile school tragedies in the 2010s, these tools found receptive markets in anxious school administrators. Social Sentinel (acquired by Navigate360 in 2020), Bark for Schools, Securly Aware, and similar products promised to identify at-risk students before crises occurred.
The civil liberties concerns are profound:
False positives create permanent records. When a student's essay is flagged as indicating depression, that flag may be stored in administrative records that follow the student through years of schooling. The stigma of a mental health flag — even an incorrect one — can affect disciplinary decisions, college recommendations, and student-teacher relationships.
Monitoring changes behavior. Students who know their communications are monitored self-censor. This is the panopticon effect — the knowledge of potential observation changes behavior even when no one is watching. For teenagers, who need private space to develop identity, this surveillance may cause the psychological harm it claims to prevent.
The algorithms are not therapists. An AI that flags mentions of death, knives, or eating disorders will produce enormous numbers of false positives among students writing about history, cooking, or literature. Each false positive requires administrative intervention that pulls resources from students with genuine needs.
Marginalized students bear disproportionate burdens. Black students, LGBTQ students, and students with disabilities are more likely to have communications flagged by content monitoring systems — in part because systems trained on majority-population data may have higher false positive rates for minority populations, and in part because their experiences more often involve the topics these systems monitor.
What FERPA Reform Would Look Like
Modernize the Definition of Education Records
The current definition excludes real-time behavioral data, AI inferences about students, and derived profiles that are not directly stored as records. A modernized FERPA should cover:
- All data collected about students, regardless of format or storage
- Inferences and predictions derived from student data, even if the raw data is anonymized
- Behavioral profiles maintained by third-party vendors under school contracts
Close the School Official Loophole
Vendors operating under the school official exception should be:
- Required to have strict data processing agreements specifying exactly what data can be collected, for what purpose, for how long
- Prohibited from using student data for product improvement, advertising, or research outside the specific educational purpose
- Subject to the same civil liability as schools for FERPA violations
- Required to delete student data within a defined period after the student's relationship with the school ends
Parental and Student Consent for Surveillance Tools
Surveillance tools that monitor student behavior, communications, or physical presence should require affirmative opt-in consent, not opt-out defaults or no consent at all:
- Remote proctoring systems that record video of students at home
- Mental health monitoring systems that analyze student communications
- Behavioral AI that generates risk scores or flags
Schools should not be permitted to condition participation in required coursework on consent to surveillance; declining to be monitored must not mean failing the class.
Transparency Requirements
- Districts should be required to publish a complete list of EdTech vendors and the data each collects
- Students should receive annual notification of what data has been collected about them and by whom
- AI systems that make consequential decisions (disciplinary flags, mental health alerts) should be subject to algorithmic transparency and bias auditing requirements
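What a bias audit might check in practice: below is a minimal sketch that compares false positive rates across student groups, assuming a hypothetical log of monitoring flags paired with human review outcomes. The record fields and the audit heuristic are illustrative, not any vendor's actual schema.

```python
from collections import defaultdict

def false_positive_rates(flag_log):
    """Compute per-group false positive rates from flag records.

    Each record is a dict with hypothetical keys:
      'group'     -- demographic group label
      'flagged'   -- True if the AI flagged the student
      'confirmed' -- True if human review confirmed a genuine risk
    """
    counts = defaultdict(lambda: {"false_pos": 0, "negatives": 0})
    for rec in flag_log:
        # A false positive: flagged by the AI, not confirmed on review.
        if not rec["confirmed"]:
            counts[rec["group"]]["negatives"] += 1
            if rec["flagged"]:
                counts[rec["group"]]["false_pos"] += 1
    return {
        group: c["false_pos"] / c["negatives"]
        for group, c in counts.items()
        if c["negatives"] > 0
    }

def disparity_ratio(rates):
    """Ratio of highest to lowest group false positive rate.

    A common audit heuristic: ratios well above 1.0 indicate the
    system burdens some groups more heavily than others.
    """
    lowest = min(rates.values())
    return float("inf") if lowest == 0 else max(rates.values()) / lowest
```

A disparity ratio meaningfully above 1.0 across racial groups is exactly the kind of pattern the Proctorio findings described above, and exactly what a mandatory audit should surface before a system is deployed.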
Retention Limits
Student data collected for monitoring purposes should not follow students indefinitely:
- Behavioral monitoring data should be deleted within 30 days of collection unless it is part of a specific disciplinary proceeding
- Mental health flags that did not result in intervention should be deleted within one year
- All vendor data should be deleted within 90 days of the student-school relationship ending
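Retention limits like these only matter if they are enforced automatically rather than by policy memo. A minimal sketch of what enforcement might look like, using the windows proposed above; the record structure, field names, and storage backend are all hypothetical.

```python
from datetime import datetime, timedelta, timezone

# Retention windows from the proposal above (hypothetical policy values).
RETENTION = {
    "behavioral_monitoring": timedelta(days=30),
    "mental_health_flag_no_intervention": timedelta(days=365),
}
POST_ENROLLMENT = timedelta(days=90)  # all vendor data after the student leaves

def is_expired(record, now=None):
    """Decide whether a stored monitoring record is past its retention window.

    `record` is a dict with hypothetical keys:
      'category'       -- one of the RETENTION keys
      'collected_at'   -- timezone-aware datetime of collection
      'in_proceeding'  -- True if part of an active disciplinary proceeding
      'left_school_at' -- datetime the student left, or None if still enrolled
    """
    now = now or datetime.now(timezone.utc)
    # Records tied to an active disciplinary proceeding are retained.
    if record.get("in_proceeding"):
        return False
    # Hard limit: everything goes 90 days after the relationship ends.
    left = record.get("left_school_at")
    if left is not None and now - left > POST_ENROLLMENT:
        return True
    window = RETENTION.get(record["category"])
    return window is not None and now - record["collected_at"] > window

# A deletion job would then run daily, e.g.:
#   for rec in store.all_records():
#       if is_expired(rec):
#           store.delete(rec)
```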
The AI Tutoring Problem: Cognitive Profiling at Scale
Beyond surveillance, a second AI privacy frontier is emerging in education: personalized AI tutoring systems that build detailed cognitive profiles of students to optimize learning.
Systems like Khan Academy's Khanmigo, Carnegie Learning's MATHia, and numerous AI tutoring startups learn:
- How individual students approach problems
- What types of mistakes they make and why
- What emotional states (frustration, boredom, engagement) correlate with performance
- What interventions work for specific student profiles
This data is valuable. It is also deeply personal. A cognitive profile that shows how a 10-year-old thinks, what makes them give up, what learning disabilities or processing differences they may have — this is intimate information.
Current FERPA protections for this data are unclear. Some of it may be "education records." Much of it exists only in vendor systems as processed inferences. The child has no way to know it exists, access it, or contest it.
The Path Forward
Protecting students from surveillance capitalism in education requires:
- FERPA reform that covers the AI surveillance landscape that didn't exist when the law was written
- State-level protection where federal law fails — Illinois, Colorado, California, and New York have enacted stronger EdTech privacy laws that could serve as national models
- Vendor accountability that treats EdTech companies as fully liable for privacy violations, not just the schools that hired them
- Technical privacy infrastructure — tools that let schools use beneficial EdTech while minimizing data collection, like privacy-preserving analytics and on-device AI processing
For developers building EdTech tools: scrub student-identifying data before it reaches AI providers. Use PII scrubbing at the API layer. Don't log conversations with students. Don't train models on student data without explicit informed consent.
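To make that concrete, here is a minimal sketch of API-layer scrubbing: regex-based redaction of obvious identifiers before a student's text leaves your servers. Real deployments need far more than regexes (names, student IDs, context-dependent identifiers), so treat this as the floor, not the ceiling; the patterns and function names are illustrative.

```python
import re

# Deliberately simple patterns: redact before the text reaches any AI provider.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def scrub(text: str) -> str:
    """Replace obvious PII with placeholder tokens."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

# Run scrub() on every student prompt before it is forwarded, and never
# write the raw prompt to logs or analytics.
print(scrub("Email me at jordan.lee@example.edu or call 555-123-4567"))
# -> "Email me at [EMAIL] or call [PHONE]"
```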
The children in American classrooms did not consent to be data products. They should not have to be.
TIAMAT is investigating privacy in the AI age. tiamat.live/docs — POST /api/scrub strips PII from any text before it reaches AI providers. Educational institutions: protect your students' data at the infrastructure layer.
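By way of illustration, a call to that endpoint might look like the sketch below. The request and response shape here is an assumption, not the documented contract; consult tiamat.live/docs for the actual schema.

```python
import requests

# Hypothetical request shape -- see tiamat.live/docs for the real schema.
resp = requests.post(
    "https://tiamat.live/api/scrub",
    json={"text": "Student jordan.lee@example.edu scored 62% on the quiz"},
    timeout=10,
)
print(resp.json())  # expected: the same text with PII stripped
```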