An investigative series on the surveillance economy. Part III.
1. The School Chromebook
It is 7:52 a.m. on a Tuesday in a public middle school in suburban Ohio. A twelve-year-old named Marcus opens his school-issued Chromebook, logs into Google Classroom, and types a search into the browser: why do i feel sad all the time. He deletes it before hitting enter. He doesn't know that the deletion doesn't matter — that the keystrokes were already logged. He doesn't know that his school's Chromebook deployment includes a content monitoring extension that records every character typed in real time, regardless of whether the student submits it. He doesn't know that this data, along with his browsing history, his location during school hours, his reading pace on assigned texts, his app usage patterns, and a derived "engagement score" generated by his learning management system, is accumulating in servers he will never see, maintained by companies he has never heard of, subject to terms of service his parents never read.
Marcus is not an edge case. He is one of more than fifty million students worldwide who use Google Workspace for Education — formerly G Suite for Education — and tens of millions more who use Microsoft 365 Education. Together, these platforms have become the operating system of American childhood. Every assignment submitted, every document revised, every question asked of an AI-powered tutoring assistant, every search conducted during a research period: all of it flows into what are, functionally, behavioral intelligence systems with memory spans measured in years.
The data profile that follows a child through twelve years of public school is not a simple academic record. It is a longitudinal behavioral dataset of extraordinary granularity. Learning analytics platforms — Schoology, Canvas, Illuminate Education — track not just grades but time-on-task, click patterns, response latency, and what researchers call "off-task" behavior. Emotional state inference, once the province of speculative AI research, is now embedded in commercial EdTech products that claim to derive student engagement and mood from typing cadence, mouse movement, and facial expression analysis on camera-equipped devices. ClassDojo, which reaches roughly fifty million children globally, assigns "citizenship scores" based on teacher-reported behaviors like "being kind" and "staying on task" — a behavioral record maintained in a commercial database and shared with parents through an app whose third-party data sharing policies most families have never reviewed.
A child cannot opt out of a school's assigned device. A child cannot choose not to use the platforms their teacher mandates. And in most states, the parents of that child have no legal right to demand deletion of data that was collected with their child's compliance and their school's implicit authorization. The data persists. Into high school. Into college applications. Into adulthood.
This is not an accident. It is a business model.
2. COPPA: The Law That Was Too Little, Too Late
The Children's Online Privacy Protection Act was signed into law by President Clinton on October 21, 1998. The web browser had existed for five years. Google was two months old. The iPhone was nine years away. The TikTok algorithm was twenty years away. Congress wrote a law for a world that no longer exists and called it children's protection.
COPPA's core framework is conceptually sound: operators of websites and online services directed at children under thirteen must obtain verifiable parental consent before collecting personal information. They cannot condition a child's participation on disclosure of more information than necessary. They must provide clear privacy notices. They must delete data upon parental request. These were meaningful guardrails in 1998, when "personal information" meant a name and email address and the primary threat was predatory adults in chat rooms.
The gaps in COPPA are not bugs — they are the result of sustained, successful lobbying by the technology industry over nearly three decades. The most consequential gap is the age ceiling: COPPA protects children under thirteen. It says nothing about thirteen-to-seventeen-year-olds, a demographic that uses social media more intensively than any other age group, that generates some of the most sensitive behavioral data in existence — mental health signals, sexual identity exploration, political opinion formation — and that has exactly zero federal privacy protections specific to their age. In the eyes of COPPA, a thirteen-year-old is legally indistinguishable from a forty-year-old for data purposes.
The second major gap is the "actual knowledge" standard. Platforms are not required to verify user ages; they are only required to comply with COPPA when they have "actual knowledge" that a user is under thirteen. In practice, this standard has created a legal fiction so durable it borders on parody: if a platform asks users to enter their birth date and a child types 1998 instead of 2012, the platform has no "actual knowledge" and bears no liability. The platforms know this. Their age gates are designed to be crossed.
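To see how little the "actual knowledge" standard demands in practice, consider a minimal sketch of a self-declared age gate. This is hypothetical code, not any platform's actual implementation: the only age signal the service ever records is whatever year the user types, so "actual knowledge" of an under-thirteen user never attaches unless the child volunteers the truth.

```python
from datetime import date

COPPA_AGE_THRESHOLD = 13  # COPPA applies to children under 13

def register_user(self_reported_birth_year: int) -> dict:
    """Hypothetical sign-up flow: the birth year is whatever the user types.

    Nothing here verifies the claim against an ID, a parent, or any external
    source, so the service never acquires "actual knowledge" that a user is
    under thirteen unless the child enters a truthful year.
    """
    age = date.today().year - self_reported_birth_year
    if age < COPPA_AGE_THRESHOLD:
        # The COPPA path: block the account or route to parental consent.
        return {"status": "blocked", "reason": "under_13_self_reported"}
    # A twelve-year-old who types 1998 lands here, and data collection
    # proceeds exactly as it would for an adult.
    return {"status": "active", "tracking_enabled": True}

# A child born in 2012 who enters 1998 is treated as an adult:
print(register_user(1998))  # {'status': 'active', 'tracking_enabled': True}
```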
The FTC's enforcement record against this backdrop is both notable and deeply inadequate. In September 2019, the commission reached a $170 million settlement with YouTube — at the time the largest COPPA civil penalty in history — for knowingly collecting personal data from children and serving them targeted advertising without parental consent. Google had maintained for years that YouTube was not directed at children. Its revenue model for children's content had been built on behavioral advertising. The $170 million represented roughly four days of YouTube's annual advertising revenue at the time.
That same year, the FTC fined TikTok $5.7 million for COPPA violations related to its predecessor app Musical.ly, which had collected names, email addresses, and locations from children without parental consent. TikTok went on to reach a $92 million class action settlement in 2021 over allegations that it harvested personal and biometric data, including face prints and voiceprints, from its users, many of them minors. In May 2023, the FTC reached a $25 million settlement with Amazon over COPPA violations — Amazon's Alexa devices had retained children's voice recordings indefinitely, even after parents requested deletion, and had used those recordings to improve Alexa's speech recognition models. The same complaint covered Amazon's retention of geolocation data collected from children; a separate $5.8 million settlement announced the same day addressed employees' and contractors' unauthorized access to customers' video from its Ring cameras.
Twenty-five million dollars is a meaningful number. It is also less than what Amazon books in revenue in a single hour.
COPPA 2.0 — a set of proposed reforms that would expand protections to age sixteen, ban behavioral advertising to all minors, and create enforceable data minimization requirements — has been introduced in Congress multiple times. It has bipartisan support. It has not passed. The technology industry's lobbying expenditures in Washington reached approximately $100 million per year in the mid-2020s. The correlation is not subtle.
3. EdTech's Data Extraction Machine
To understand what is actually being collected from children in schools, you have to read the contracts — not the marketing materials, not the privacy pledge certifications, but the data processing agreements between school districts and EdTech vendors. Most parents have never seen these documents. In many cases, neither has the school board.
Google Workspace for Education occupies the center of this ecosystem. Its free tier, used by the majority of American public schools, includes prohibitions on using student data for advertising. What it does not prohibit, in granular terms, is using student data for "product improvement" — a category broad enough to encompass the training of machine learning models on student writing, reading behavior, search queries, and communication patterns. In 2020, New Mexico Attorney General Hector Balderas filed suit against Google, alleging that the company had illegally collected location data, voice recordings, and personal information from children using school-issued Chromebooks. The suit alleged that Google's data collection violated COPPA and state consumer protection law. Google settled, agreeing to pay $7.5 million to New Mexico and to implement additional data controls for student users — without admitting wrongdoing.
ClassDojo is worth examining in detail because it illustrates how normalization works. The platform is presented, and experienced, as a friendly classroom communication tool — teachers post photos, share updates, award points for good behavior. It is used by teachers in more than ninety percent of U.S. K-8 schools and reaches approximately fifty million students in 180 countries. The citizenship scores ClassDojo maintains — positive behaviors like "leadership" and "participation," negative behaviors like "not on task" and "disrespectful" — are presented as formative feedback. They are also behavioral records maintained in a commercial database by a company whose privacy policy permits sharing aggregated and de-identified data with third parties for research and product development. In 2023, ClassDojo was valued at over $1 billion. The data asset underlying that valuation is, substantially, the behavioral profiles of children ages five through twelve.
Clever is less visible to parents than ClassDojo but arguably more architecturally significant. The single sign-on platform is used by more than 95,000 schools and serves as the authentication backbone for hundreds of EdTech applications — when a student logs into a reading app or a math platform or a test-prep tool, Clever is often the system that authenticates them and routes their data. This means Clever has visibility across an individual student's entire digital curriculum stack. The company, acquired by Kahoot! in 2021, markets this to schools as convenience. From a data architecture perspective, it is a centralized behavioral data aggregation point sitting at the intersection of every EdTech application a student uses.
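The structural point, that whoever operates the sign-on layer can see every application a student touches, is easier to grasp with a toy illustration. The sketch below is hypothetical and does not describe Clever's actual systems; it simply shows how a central authentication log naturally becomes a cross-application behavioral record.

```python
from collections import defaultdict

# Hypothetical sign-on event log. Each entry exists only because
# authentication for every partner application flows through one service.
sso_events = [
    {"student_id": "s-4821", "app": "reading-tutor", "ts": "2025-09-08T08:03:00"},
    {"student_id": "s-4821", "app": "math-practice", "ts": "2025-09-08T09:15:00"},
    {"student_id": "s-4821", "app": "test-prep",     "ts": "2025-09-08T13:40:00"},
    {"student_id": "s-4821", "app": "reading-tutor", "ts": "2025-09-09T08:01:00"},
]

def usage_profile(events):
    """Aggregate per-student application usage from raw sign-on events."""
    profile = defaultdict(lambda: defaultdict(int))
    for event in events:
        profile[event["student_id"]][event["app"]] += 1
    return {sid: dict(apps) for sid, apps in profile.items()}

# One student's digital curriculum stack, reconstructed from nothing
# but login events that no single vendor could see on its own.
print(usage_profile(sso_events))
# {'s-4821': {'reading-tutor': 2, 'math-practice': 1, 'test-prep': 1}}
```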
The legal mechanism enabling this ecosystem is an exemption in FERPA — the Family Educational Rights and Privacy Act — known as the "school official" exception. Under FERPA, schools may share student educational records with outside parties that have a "legitimate educational interest" and are under the school's direct control. EdTech vendors have successfully argued that they qualify as "school officials" under this exception, which allows schools to share student data with them without individual parental consent. Once the data is transferred to a vendor under this exception, FERPA's protections effectively stop at the schoolhouse door: the law regulates schools, not vendors, so a vendor's own downstream data practices fall outside its direct reach.
CoSN, the Consortium for School Networking, surveyed school technology directors and found that 87 percent of districts could not accurately inventory the data their EdTech vendors were collecting from students. They had signed contracts. They had received privacy pledges. They did not know what was happening on the back end.
Remote proctoring tools added a new dimension to this surveillance apparatus during and after the pandemic. Proctorio, Respondus Monitor, and similar platforms were adopted by tens of thousands of schools and universities for online exam administration. These tools capture continuous video of the student during exams, analyze eye movement patterns (flagging gaze direction as potential cheating evidence), monitor keystrokes and clipboard contents, conduct room scans using the student's webcam, and generate suspicion scores that are stored in vendor databases. A student who used Proctorio for exams from 2020 through 2024 has a four-year archive of their physical environment, facial data, and behavioral patterns sitting in a commercial database — retained under terms the student almost certainly did not read and cannot modify.
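What a multi-year proctoring archive actually contains is easier to picture as a record schema. The sketch below is a hypothetical reconstruction based only on the data categories described above; it is not the actual schema used by Proctorio or any other vendor.

```python
from dataclasses import dataclass, field

@dataclass
class ProctoringSession:
    """Hypothetical per-exam record, illustrating the data categories
    named above; not any vendor's actual schema."""
    student_id: str
    exam_id: str
    video_uri: str                 # continuous webcam footage for the exam
    room_scan_uri: str             # webcam scan of the student's home environment
    gaze_flags: list = field(default_factory=list)      # timestamps where gaze left the screen
    keystroke_log: list = field(default_factory=list)   # every key pressed during the exam
    clipboard_events: list = field(default_factory=list)
    suspicion_score: float = 0.0   # vendor-generated score, stored indefinitely

session = ProctoringSession(
    student_id="s-4821",
    exam_id="bio-101-midterm-2021",
    video_uri="s3://proctor-archive/s-4821/bio-101-midterm-2021.mp4",
    room_scan_uri="s3://proctor-archive/s-4821/bio-101-roomscan.mp4",
    gaze_flags=["00:14:32", "00:41:07"],
    suspicion_score=0.62,
)
# Multiply by every exam over four years: that is the archive the
# student cannot inspect, correct, or delete.
```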
Turnitin, the dominant plagiarism detection platform used by American universities, maintains a database of student essays submitted through its system. These essays — written by students who were required to submit their work through the platform as a condition of enrollment — are retained indefinitely and used to train Turnitin's AI writing detection models. When Turnitin launched its AI detection feature in 2023, it did so using a corpus of student writing collected without explicit consent for that purpose. The student writing database is the product. The students are the suppliers.
4. FERPA: The Education Privacy Law That Protects Schools, Not Students
The Family Educational Rights and Privacy Act was signed in 1974 by President Ford. Its primary purpose was to give parents the right to review their children's educational records and to limit disclosure of those records to third parties without parental consent. It was a significant reform in its time.
What FERPA is not — and has never been — is a comprehensive privacy law for students. Its enforcement mechanism reveals the problem most directly: the only sanction available under FERPA is the withdrawal of federal funding from the educational institution found in violation. In the fifty-plus years since FERPA's enactment, this sanction has been applied exactly zero times. Not once has a school district lost federal funding for a FERPA violation. The law is unenforced because its enforcement would punish children.
The practical result is that FERPA functions as a compliance checkbox rather than a privacy shield. Schools obtain consent forms, post privacy notices, and sign vendor contracts that include FERPA-compliant language. Then they deploy EdTech platforms whose data practices they cannot audit, under contracts whose implications they do not fully understand, serving student populations who have no meaningful ability to consent or refuse.
The "legitimate educational interest" exception has expanded to the point where it provides cover for almost any commercial arrangement a school might enter. Naviance, now owned by PowerSchool, is used by millions of high school students for college planning — it tracks not just academic records but extracurricular activities, college interest lists, application behavior, and counselor assessments. This data is used to generate "Scattergrams" matching students to colleges, but it also represents a behavioral dataset about seventeen-year-old decision-making that is extraordinarily valuable to college admissions offices and, by extension, to the enrollment management industry.
The "directory information" trap operates with particular opacity. FERPA allows schools to designate certain categories of information — student name, address, phone number, date of birth, participation in school activities, degrees earned, enrollment status — as "directory information" that can be shared without individual consent. Parents must actively opt out of directory information sharing; the default is disclosure. Military recruiters receive directory information from public high schools under the No Child Left Behind Act. Data brokers acquire directory information from schools and aggregate it with commercial data sources. A student's name, school, grade level, and participation in varsity athletics are legally shareable without consent under a fifty-year-old provision that predates the data broker industry.
5. Social Media and the 13-17 Gap
In September 2021, Frances Haugen, a former Facebook product manager, provided internal company documents to the Wall Street Journal. Among the most damaging was a slide deck from an internal research project examining Instagram's effects on teenage girls. The research, conducted by Facebook's own teams, found that thirty-two percent of teen girls reported that when they felt bad about their bodies, Instagram made them feel worse. By the company's own internal assessment, Instagram's design features — endless scrolling, comparative social metrics, algorithmic amplification of aspirational beauty content — were making body image issues worse and were linked to depression and suicidal ideation among adolescent girls. Facebook had known this since at least 2019. It had not acted on it in any meaningful way because doing so would have reduced engagement.
TikTok's internal records, disclosed in subsequent litigation, revealed a similarly deliberate calculus. The platform's "For You" algorithm, which optimizes for time-on-app by surfacing content calibrated to individual emotional states, had been tested with content touching on suicide, self-harm, and eating disorders. Internal researchers found that the algorithm would serve increasingly extreme content in these categories to users who engaged with initial posts on the topics — a feedback loop that was most dangerous for users who were already in psychological distress. Minors were not excluded from these dynamics. Age verification on TikTok consists of a date-of-birth entry field.
Snapchat's "My AI" chatbot, introduced in 2023 and made default-on for all users including minors, was found by the UK's Information Commissioner's Office to have failed to adequately assess risks to children before launch. The ICO issued a £750,000 fine in 2023, finding that Snap had collected and processed children's data through My AI without adequately considering the potential for harm — including the chatbot's capacity to discuss sensitive topics with minors and the data it retained from those conversations.
These platforms know that their age verification systems are theatrical. A 2021 study by Thorn, a nonprofit focused on child protection, found that children under thirteen were present on platforms with COPPA age minimums at rates that made meaningful enforcement effectively impossible without genuine age verification infrastructure. Building that infrastructure — which would require verification against a government ID database or biometric comparison — would reduce underage engagement dramatically and create privacy risks of its own. The platforms have chosen engagement.
KOSA — the Kids Online Safety Act — passed the Senate 91-3 in July 2024, one of the most lopsided votes on technology legislation in recent memory. The bill would establish a "duty of care" standard requiring platforms to mitigate harms to minors, including addiction by design, exposure to harmful content, and data exploitation. The House did not take it up before the 118th Congress ended, and the reintroduced version remains contested, its enforcement mechanisms disputed and its passage incomplete as of 2026. The First Amendment objections raised by platforms — that any content moderation mandate is a speech restriction — have proven persuasive to enough legislators to maintain gridlock.
6. The Long Tail: Childhood Data into Adulthood
The conversation about children's data privacy almost always focuses on harm in the moment — the predator who finds a child's location, the algorithm that serves a teenager harmful content. These are real and serious. But they are not the only, or even the most durable, form of harm.
Data collected from an eight-year-old in 2018 is still in databases in 2026. It will be in databases in 2036. And increasingly, it is being used.
The insurance industry's adoption of behavioral and predictive data modeling has expanded significantly over the past decade. While direct use of educational records in insurance underwriting is legally fraught, the data broker ecosystem that aggregates behavioral signals from multiple sources has created pathways for childhood behavioral data — scraped, inferred, or acquired through intermediaries — to influence risk modeling. A 2023 investigation by The Markup found that background check companies routinely included data on individuals with no age floor, meaning records generated during childhood could appear in reports accessed by employers and landlords without the subject's knowledge.
College admissions represents a more documented and immediate channel. Enrollment management firms — including Ruffalo Noel Levitz, EAB, and others — sell "demonstrated interest" tracking services to colleges that monitor when prospective students visit institutional websites, open recruiting emails, click on digital advertisements, and engage with virtual tour platforms. This tracking begins when students are as young as fourteen or fifteen. A student who visited a college's website seventeen times in ninth grade and then stopped has generated behavioral data that may influence their admissions outcome — and they almost certainly do not know it.
The College Board — the nonprofit that administers the SAT and manages the Student Search Service — sells access to student data, including test scores, self-reported GPA, intended college major, and demographic information, to colleges, scholarship programs, and educational nonprofits. Students opt into the Student Search Service by checking a box when registering for the SAT. Approximately 3.5 million students per year participate. The data is sold at approximately forty-seven cents per student name. Students aged thirteen and older can participate. The fact that their test-preparation behavior and academic self-assessment have become a data product generating revenue for a nonprofit they trust is not prominently disclosed.
The permanence asymmetry between American and European children is stark. Under the GDPR, EU residents have a right to erasure — the "right to be forgotten" — that applies to data collected during childhood with particular force. The European Data Protection Board has issued guidance specifically addressing the need to apply heightened standards to children's data given their limited capacity to consent. In the United States, adults have no federal right to erasure. Children have even less. The California Consumer Privacy Act extended some erasure rights to California residents including minors, but enforcement has been inconsistent and the law's business-size thresholds exempt most EdTech vendors.
California's Age-Appropriate Design Code (AB 2273), signed into law in September 2022, represents the most significant US legislative advance in children's data protection in a generation. Modeled on the UK ICO's Children's Code, it requires online services likely to be accessed by minors to default to their most protective privacy settings and to complete data protection impact assessments, and it prohibits design features that nudge children into providing more data than necessary as well as uses of children's data that are detrimental to their wellbeing. NetChoice, the tech industry lobbying group, challenged the law on First Amendment grounds. In 2024, the Ninth Circuit blocked the law's impact assessment requirement as likely compelled speech while sending the rest of the law back to the lower court, leaving the privacy-by-default framework in legal limbo rather than striking it. It remains the most ambitious protection for minors enacted in US law.
7. The CCPA Gap and the State-Level Patchwork
California's Consumer Privacy Act provides specific protections for minor users: businesses cannot sell the personal information of consumers they know to be under sixteen without affirmative opt-in consent; for consumers under thirteen, the opt-in must come from a parent. This is more protective than COPPA's framework in some respects. It is also, like COPPA, full of gaps.
CCPA's applicability thresholds — businesses with annual gross revenue over $25 million, or those that buy, sell, or share the personal information of more than 100,000 consumers per year — exempt a significant portion of the EdTech market. Small learning apps, regional tutoring platforms, and specialized educational tools frequently fall below these thresholds while still collecting sensitive data from thousands of students. The regulation-by-size model creates a perverse outcome: the most under-resourced EdTech products, which are also the most likely to have inadequate security and privacy practices, are exempt from the strongest state privacy law.
As of 2026, seventeen states have enacted comprehensive consumer privacy laws: California, Virginia, Colorado, Connecticut, Texas, Montana, Indiana, Tennessee, Iowa, Delaware, New Jersey, New Hampshire, Kentucky, Maryland, Minnesota, Nebraska, and Rhode Island. Several of these — Colorado, Connecticut, Texas, and Maryland among them — include provisions specifically addressing children's or sensitive data. None approach the GDPR's standard for children's data protection. Most share CCPA's structural limitation of business-size thresholds that create regulatory safe harbors for smaller vendors.
The EU's framework under GDPR, interpreted through the UK ICO's Age-Appropriate Design Code and the European Data Protection Board's guidelines on children's data, establishes privacy-by-default for minors, prohibits profiling for behavioral advertising regardless of consent, requires genuine age verification rather than self-declaration for services directed at children, and mandates data protection impact assessments before any processing of children's data. The UK code has been operational since September 2021 and has resulted in documented changes to platform design for UK users — YouTube disabled autoplay for users under eighteen, Google disabled location history for users under eighteen, TikTok made accounts of users under sixteen private by default.
American children are not afforded equivalent protections. The gap is not a technical or practical one. It is a political one.
8. What Actually Needs to Change
The structural reforms required to meaningfully protect children's data in the United States are not complicated to enumerate. They are complicated to enact because the entities that benefit from the status quo are well-resourced, organized, and strategically patient.
COPPA 2.0 must extend federal protections to age sixteen at minimum, closing the 13-17 gap that has enabled a decade of unchecked behavioral profiling of teenagers. It must close the "actual knowledge" loophole by requiring platforms to implement age verification mechanisms commensurate with the sensitivity of their services. It must eliminate the school exception's utility as a commercial data transfer mechanism by requiring that data shared under the school official exception remain subject to COPPA restrictions even after transfer. It must create a federal right to erasure for data collected during childhood, enforceable by individuals rather than dependent on FTC discretion.
KOSA, in a form with real enforcement teeth, must establish a legal duty of care that changes the economic calculus for platforms. If the cost of designing an addictive product aimed at minors is legal liability rather than regulatory scrutiny, the product design changes. The "safety by default" standard — which requires platforms to configure their most protective settings as defaults for minor users rather than requiring opt-in — is the right architectural principle.
FERPA must be reformed to make its protections enforceable by students and parents, not just by the threat of funding withdrawal that has never materialized in half a century. The school official exception needs a data use limitation: vendor access to student data should be restricted to the specific educational purpose for which it was shared, with no secondary uses permitted without explicit consent.
The California Age-Appropriate Design Code provides the domestic model. Its core principle — that the burden of privacy should fall on the service provider, not the child or parent — is both ethically correct and practically workable. The UK has demonstrated that platforms will comply when required to.
For schools navigating the practical reality of AI tools entering their classrooms today — ChatGPT, Khanmigo, Synthesis, and dozens of others — there is an emerging category of infrastructure designed to provide a privacy boundary between students and these systems. Privacy proxy architectures, such as the TIAMAT Privacy Proxy, intercept student data before it reaches third-party AI platforms, scrubbing personally identifiable information and replacing it with anonymized equivalents. A student's name becomes a token; their school becomes a district identifier; their grade level is generalized. The AI tutoring platform receives the educational interaction without the student's identity. Schools can use modern AI tools while maintaining FERPA compliance. This is not a substitute for legislation — it is the kind of technical harm reduction that schools need now, while the legislative process moves at its own pace.
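As a rough illustration of the pattern (a sketch only, not TIAMAT's or any vendor's actual implementation), a proxy of this kind sits between the student's request and the AI provider, swapping identifying fields for stable tokens before anything leaves the district's control.

```python
import hashlib

# Hypothetical salt held only by the district; the AI vendor never sees it.
DISTRICT_SALT = "district-472-secret"

def pseudonymize(value: str) -> str:
    """Replace an identifying value with a stable, non-reversible token."""
    return hashlib.sha256((DISTRICT_SALT + value).encode()).hexdigest()[:12]

def scrub_request(request: dict) -> dict:
    """Strip direct identifiers before a tutoring request leaves the district.

    A sketch of the general pattern: the name becomes a token, the school
    becomes a district identifier, grade level is generalized to a band, and
    only the educational content itself is forwarded to the AI platform.
    """
    return {
        "student_token": pseudonymize(request["student_name"]),
        "district_id": "district-472",  # replaces the school name
        "grade_band": "6-8" if 6 <= request["grade"] <= 8 else "other",
        "prompt": request["prompt"],    # the educational interaction
    }

raw = {
    "student_name": "Jordan Lee",
    "school": "Lakeview Middle School",
    "grade": 7,
    "prompt": "Can you explain photosynthesis like I'm twelve?",
}
print(scrub_request(raw))
# The AI platform receives the question, a grade band, and an opaque token,
# never the student's name or school.
```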
The children in these systems did not agree to be research subjects. They did not consent to have their emotional states scored, their behavioral patterns catalogued, their teenage writing samples used to train commercial AI models, or their twelve years of educational engagement turned into a data asset that will follow them into adulthood. They were told to log in. They logged in. The architecture around that login has been built, over decades, to extract maximum value from that compliance.
Marcus closed his Chromebook at the end of the school day. The question he typed and deleted — why do i feel sad all the time — was not submitted. It was still logged. It is still in a database somewhere. He is twelve years old. He does not know. No one has told him. No law requires that anyone do so.
That is the system working as designed.
This article is Part III in an investigative series on the surveillance economy. Part I covered biometric surveillance in commercial spaces. Part II examined AI training data and consent. Part IV will address the data broker industry's acquisition of financial distress signals.