I keep seeing the same argument about AI making us dumber. It's the same argument people had about search engines, and before that books. The usual response is to point at history and say "every generation panics, every generation was wrong, relax." I think that response is half right, and the wrong half is what bothers me.
Tools change what we bother to remember. The people who'd trained their whole lives to memorize 10,000-line oral epics watched the craft die when writing showed up. Long arithmetic in your head used to be normal; calculators arrived and the payoff for keeping that skill sharp went away. Brains didn't shrink. The skills just stopped being worth practicing.
Search engines are the one I lived through. I was a kid when Google replaced AltaVista and went from "useful" to being a synonym for finding things. I still remember being amazed that I could search for a zebra and have a picture of one on my screen in only five minutes. Years later I ended up working on search engines as a developer in ecommerce, and I've even built one from scratch for Theca.

I don't memorize phone numbers anymore. I don't memorize directions. I don't even memorize the APIs of libraries I use every week. What I do instead is keep a fairly precise mental index of where things live and what query will retrieve them. That's a real cognitive trade. I gave up some recall and got back a much larger working set of pointers. Net positive, I think, but I notice the trade in a way I didn't when I was nine.
We usually keep teaching
AI tools push the same trade further. They don't just outsource recall, they outsource synthesis: the part where you actually work through a problem and end up with a model of it in your head. I notice this when I let an LLM write code I could have written myself. I get the output, but I didn't build the model, which is usually the part I wanted. The people who worry about atrophy here aren't wrong, and that worry deserves its own post.
One thing the prior cases got right is that society kept teaching the underlying skill anyway. Calculators didn't kill arithmetic class. Search engines didn't kill the library-science basics on how an index actually works. Some skills got canonized as core, worth practicing even after the tool that automated them arrived, because we collectively decided they mattered. Coding hadn't quite reached that status yet, but I think it would have, given another decade. AI may have shown up too early for that to happen.
So the historical pattern mostly holds: tools rewire priorities, some skills fade, others grow, the panic looks silly in retrospect. Where the "relax, every generation panics" crowd gets it wrong is in assuming AI is just the next entry in that list. It might be. But the environment AI is landing in is not the environment the printing press or the early search engine landed in.
The loop is the problem
Books don't optimize you. Calculators don't optimize you. Search engines, at the lookup layer at least, were mostly trying to give you the page you asked for and then get out of the way. Modern search has piled on ads and ranking incentives since, but the core "find it and leave" loop is still recognizable. The dominant information channel today is none of those things. It's a feed, and the feed is an optimizer. The target variable is engagement.
Earlier tools removed friction from a specific task and let you spend the saved effort somewhere else. A feed isn't trying to remove friction from anything you'd recognize as a task. It's trying to keep you in the loop. The reward signal it's chasing (what makes you click, stay, scroll, react) is not the same signal as "this was useful to me." It's often the opposite.
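As a caricature (every name and number here is mine, not any platform's actual system), the shape of that objective fits in a few lines of Python:

```python
# Toy feed ranker. The only signal the objective sees is predicted
# engagement; "useful" exists in the data but never enters the sort key.
items = [
    {"id": "rage-bait",   "p_click": 0.9, "useful": 0.1},
    {"id": "deep-dive",   "p_click": 0.2, "useful": 0.9},
    {"id": "cute-animal", "p_click": 0.7, "useful": 0.3},
]

def rank_by_engagement(feed):
    # Sort purely by predicted click probability, highest first.
    return sorted(feed, key=lambda item: item["p_click"], reverse=True)

feed_order = [item["id"] for item in rank_by_engagement(items)]
print(feed_order)  # the deep-dive lands last, by construction
```

Everything interesting about a real system lives in how `p_click` gets estimated, but the shape of the objective is exactly this one-liner.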
There's data on this now. Heavy social media use predicts elevated depression and anxiety in kids and young adults, and longitudinal studies find that the social media use comes first, not the depression.
And then you wire a generative model into the same loop. Generative AI doesn't change the objective, it just gives the loop a faster, cheaper supply tuned to whatever it already rewards.

Adding AI to the stack
My background is in optimization. The recurring question I work on is what a product should actually be optimizing for (a PhD on automating A/B testing, and Eignex, a side project still chasing the same question). So when I look at "LLMs plus a recommendation feed," it looks to me like the same loop with a much better content supply. Not really a new content medium.
The version running today doesn't even use generation in the loop. The recommender stacks at the big platforms (Meta, TikTok, YouTube) are still doing what they've done for a decade: ranking content other people uploaded. The supply pool was already effectively infinite after years of user-generated content. The change is that a growing share of what gets uploaded is now AI-made, and the existing optimizer ranks the synthetic stuff exactly like everything else.
The scarier version puts the generator inside the loop, per-user posts written for you on demand. That sounds like fiction, and we don't have it. The thing is, we don't need it. The pool of generated content is already absurd enough that something in it fits your viewing history, your current mood, and what you had for breakfast. The optimizer just has to find it. A pool that grows by millions of items a day, at near-zero cost per item, behaves a lot like an on-demand generator.
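A back-of-the-envelope way to see this: model one user's current taste and every item as points in an embedding space (the dimension and pool sizes below are made up for illustration). Because the pools are nested, the nearest item to the user can only get closer as the pool grows:

```python
import math
import random

random.seed(0)
DIM = 8  # toy embedding dimension

def rand_vec():
    # A random point in the toy embedding space.
    return [random.gauss(0.0, 1.0) for _ in range(DIM)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

user = rand_vec()  # stand-in for one user's taste at this moment

pool, nearests = [], []
for size in (100, 10_000, 100_000):
    # Grow the same pool rather than resampling, like a feed that
    # only ever accumulates uploads.
    pool.extend(rand_vec() for _ in range(size - len(pool)))
    nearest = min(dist(user, item) for item in pool)
    nearests.append(nearest)
    print(f"pool={size:>7}: nearest item at distance {nearest:.3f}")
```

That monotone shrinking is the whole point: past some pool size, "retrieve the closest existing item" and "generate one on demand" become hard to tell apart from the user's side.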

None of this is hypothetical. AI-generated music has already racked up millions of streams on Spotify before anyone noticed it wasn't human (the Velvet Sundown story last summer was the most visible example). Facebook is saturated with generative slop: fabricated heart-warming stories, sculptures supposedly carved by a 92-year-old grandpa nobody appreciates, content farms running cheap image generators to chase engagement[^slop], and the people reliably engaging with it skew much older. The TikTok-side version of the same dynamic is "Italian brainrot", absurd AI-generated creatures with names like Tralalero Tralala and Bombardiro Crocodilo, captioned with nonsense-Italian audio dubs, pulling hundreds of millions of views from a much younger audience.
Facebook's own VP described the dynamic in plain terms to Futurism earlier this year: "if you, as a user, are interested in a piece of content which happens to be AI-generated, the recommendations algorithm will determine that, over time, you are interested in this topic." None of this uses particularly sophisticated tech, and it's already running at scale.
This loop doesn't get out of the way like search did. It takes friction out of producing whatever the optimizer rewards. Right now that's engagement, so the system gets better at engagement. Nothing malicious has to happen for that to land badly; it's doing exactly what it was asked.
The objective is a choice
I'm not fully pessimistic about this, though.
The objective is a choice. Engagement isn't a law of physics. Somebody picked clicks or watch time because it was easy to measure and correlated with revenue. People also reach for banning AI-generated content here. That isn't it either: "the machine wrote it" isn't a stable category once the machines are this good. The thing to push on is the loss function itself (what the system is told to optimize for), and the loss function is written by people.
The irony's not lost on me that if you're reading this, it probably reached you through one of these feeds. As engineers we like to act like the loss function is handed down on stone tablets. It isn't. Somebody wrote it, and on the products I work on that somebody is me.
There is research on what "different" could look like: ranking for informational diversity, or ranking on whether users still endorse a piece of content a week later instead of whether they reacted in the first three seconds. None of it is mature, none of it has a business model behind it the way engagement does, and that's the real obstacle, not the technical side. The systems are perfectly capable of optimizing for something else. The question is whether anyone with the keys wants to. I'd rather sort it out before the next, much more capable generator gets wired into the same loop.
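To make "the loss function is a choice" concrete, here is a toy contrast over the same logged items (the field names are hypothetical; a real system would predict these quantities rather than read them off a table):

```python
posts = [
    {"id": "outrage-thread", "instant_reactions": 950, "week_later_endorse": 0.05},
    {"id": "howto-guide",    "instant_reactions": 120, "week_later_endorse": 0.80},
    {"id": "meme",           "instant_reactions": 600, "week_later_endorse": 0.30},
]

# Same data, two objectives: the only thing that changes is the sort key.
by_engagement = [
    p["id"] for p in sorted(posts, key=lambda p: p["instant_reactions"], reverse=True)
]
by_endorsement = [
    p["id"] for p in sorted(posts, key=lambda p: p["week_later_endorse"], reverse=True)
]

print(by_engagement)   # outrage-thread first
print(by_endorsement)  # howto-guide first
```

Nothing about the machinery prefers the first ranking over the second; the hard part is measuring something like `week_later_endorse` at all, and wanting to.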
No zebras were harmed in the making of this post.

