You thought switching to an AI search engine meant you were done with Google tracking you. That's what I thought too.
Turns out, according to a class-action lawsuit filed on April 1, 2026, Perplexity AI may have been sending your conversations to Google and Meta the entire time. The kicker? Even their "Incognito" mode allegedly didn't stop it.
What the Lawsuit Actually Claims
A Utah man filed a proposed class-action suit in the Northern District of California (Doe v. Perplexity AI Inc., 3:26-cv-02803). The allegations are pretty damning.
According to the complaint, Perplexity embedded "undetectable" tracking software into their search engine's code. These trackers allegedly download onto your device the moment you log into Perplexity's homepage. From there, they transmit your conversations -- complete transcripts of your chats -- to Meta and Google.
Not summaries. Not metadata. Full transcripts.
The plaintiff shared family financial information, tax details, investment portfolios, and financial strategies through Perplexity's chatbot. He thought it was a private conversation with an AI. Instead, that data allegedly got piped straight to the two largest advertising companies on the planet.
And the Incognito mode? The lawsuit claims the tracking worked through server-side mechanisms that bypassed private browsing entirely. So even if you thought you were being careful, the data still flowed out.
The suit alleges violations of the California Consumer Privacy Act (CCPA) and California's Electronic Communications Privacy Act (CalECPA).
Perplexity's response so far: "We have not been served any lawsuit that matches this description so we are unable to verify its existence or claims." Meta pointed to their policy page saying it's against their rules for advertisers to send them sensitive data. Make of that what you will.
Why This Should Bother You as a Developer
Think about what you ask AI tools every day. I use them constantly. Code reviews, debugging, architecture questions, sometimes even pasting in snippets from private repos to get help.
If the tool I'm using is quietly forwarding all of that to third-party ad networks, that's not just a privacy issue -- it's a security incident. You might be leaking proprietary code, API patterns, internal business logic, customer data structures. Stuff that has no business leaving your browser.
And this isn't just about Perplexity. It's the pattern that matters. How many AI tools have you signed up for this year without reading the privacy policy? I know my number, and it's embarrassing.
What You Can Actually Do About It
If this bothers you -- and it should -- here's what I did after reading the lawsuit.
I opened DevTools on the AI tools I use most: Network tab open, I started a conversation and filtered traffic for facebook, google-analytics, meta, and doubleclick. If you see requests hitting those domains while you're chatting with an AI, that's your data leaving. Check the page source for fbq( or gtag( while you're at it. If you want to go deeper, mitmproxy will show you the actual payloads being sent.
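If you'd rather script the page-source check than eyeball it, here's a rough sketch. The signature list is my own illustrative shortlist, not an exhaustive tracker database -- extend it with whatever you care about.

```python
import re

# Illustrative tracker signatures -- my own shortlist, not exhaustive.
TRACKER_PATTERNS = {
    "Meta Pixel": re.compile(r"fbq\s*\("),
    "Google Analytics": re.compile(r"gtag\s*\(|google-analytics\.com"),
    "DoubleClick": re.compile(r"doubleclick\.net"),
}

def find_trackers(page_source: str) -> list[str]:
    """Return names of tracker signatures found in raw HTML/JS source."""
    return [name for name, pattern in TRACKER_PATTERNS.items()
            if pattern.search(page_source)]

# Paste in whatever you copied from view-source:
sample = '<script>fbq("init", "123"); gtag("config", "G-XYZ");</script>'
print(find_trackers(sample))  # ['Meta Pixel', 'Google Analytics']
```

A regex scan like this only catches client-side embeds, of course -- which is exactly why the next test matters.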
The Incognito thing is worth testing separately. The lawsuit claims server-side tracking that bypasses private browsing entirely. So open Incognito, log in, send a query, and watch the network traffic. If the same third-party calls happen, your browser settings aren't helping.
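To compare the two runs without squinting at the Network tab, export each session as a HAR file and flag the third-party calls programmatically. The watchlist below is my own pick of ad/analytics domains; add more as you find them.

```python
from urllib.parse import urlparse

# Ad/analytics domains to watch for -- my own shortlist, extend as needed.
WATCHLIST = {
    "facebook.com", "facebook.net", "google-analytics.com",
    "googletagmanager.com", "doubleclick.net",
}

def flag_third_party(request_urls: list[str]) -> list[str]:
    """Return URLs whose host is (or is a subdomain of) a watchlist domain."""
    flagged = []
    for url in request_urls:
        host = urlparse(url).hostname or ""
        if any(host == d or host.endswith("." + d) for d in WATCHLIST):
            flagged.append(url)
    return flagged
```

Run it over the URLs from your normal-mode HAR and your Incognito HAR. If the flagged lists match, private browsing changed nothing.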
And yeah -- read the privacy policy. Nobody does. But searching for "third party" and "advertising" takes about 90 seconds and tells you most of what you need to know.
AI companies raise billions promising to be the "privacy-first" alternative. They attract users who are tired of Big Tech surveillance. Then they quietly embed the same tracking infrastructure from the companies their users were trying to escape. Same business model, different product.
And honestly? AI conversation data is way more useful for ad targeting than search queries. When you Google something, you type 3-5 words. When you talk to an AI, you share context, reasoning, personal details, business strategy. Stuff we'd never type into a Google search.
So What Now?
This lawsuit hasn't been proven yet. Perplexity might have a perfectly reasonable explanation.
But even if this specific case turns out to be overblown, the underlying problem is real. Most AI companies are burning cash and desperately need revenue. Your data is the obvious monetization path. Perplexity might just be the first to get caught.
I've started treating AI search the same way I treat public Slack channels -- don't type anything I wouldn't want forwarded. For anything actually sensitive, Ollama on a local GPU gives you a private AI that literally cannot phone home.
We build these systems. We know how easy it is to add a tracking pixel. Maybe take 10 minutes today and actually check what your favorite AI tool is sending out.
The case is Doe v. Perplexity AI Inc., 3:26-cv-02803, US District Court, Northern District of California.
How safely are you using AI to code? Take the free Vibe Code Risk Assessment -- 10 questions, 2 minutes, no signup required.