Sam Altman told a Sequoia Capital audience that older people use ChatGPT like Google, millennials use it as a life advisor, and college students use it as an operating system.
He’s not wrong about the college students. He’s wrong about who else is doing it.
The data doesn’t support the generational frame
Altman’s taxonomy is intuitive. Younger people grew up with these tools. Of course they’d go deeper.
But the Stack Overflow 2025 Developer Survey tells a different story. Developers with 10-19 years of experience were among the heaviest AI adopters, consistently reported strong productivity gains, and were simultaneously the least likely cohort to highly trust AI output.
That combination is the whole story.
A Qlik survey found mid-career professionals, not Gen Z, emerging as AI’s most active power users. And an arXiv paper from December 2025 nailed it in the title: “Professional Software Developers Don’t Vibe, They Control.” Experienced developers deploy deliberate strategies to manage agent behavior rather than handing over the keys.
The pattern isn’t “younger equals deeper.” Experience changes what “deep” looks like.
What I actually built
I’m 45. Director of Information Security by day. Builder of things by night (and also by day, and also at 3 AM).
I can tell you exactly how I use AI because I built a system that tracks it: 34,000+ messages across AI platforms over two years, logged and queryable.
The system is called Nexus. It’s a biographical intelligence platform:
- 250-table Postgres schema on hardware I own
- 26 data sources across 14 sync intervals (communication, location, health, photos, git activity, AI conversations, financial data)
- 8 autonomous agents coordinating across hundreds of tools from 10 connected MCP services
- Local LLM inference serving large models on consumer hardware (160GB VRAM across two machines, zero cloud compute for personal data)
- Private mesh network, no cloud exposure, every query auditable
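“Every query auditable” is less exotic than it sounds: route all database access through a single wrapper that records what was asked and when. Here’s a minimal sketch of the idea, using SQLite as a stand-in for Postgres; table and column names are invented for illustration, not the real Nexus schema:

```python
import sqlite3
import time

# In-memory stand-in for the real database. Every query goes through
# one wrapper that logs the statement and a timestamp before running it.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE audit_log (ts REAL, sql TEXT)")

def audited_query(sql: str, params: tuple = ()):
    """Log the query, then execute it and return the rows."""
    db.execute("INSERT INTO audit_log (ts, sql) VALUES (?, ?)", (time.time(), sql))
    return db.execute(sql, params).fetchall()

audited_query("CREATE TABLE messages (platform TEXT, body TEXT)")
audited_query("INSERT INTO messages VALUES ('claude', 'hello')")
rows = audited_query("SELECT body FROM messages WHERE platform = ?", ("claude",))
print(rows)  # [('hello',)]
# 4 entries: every query so far, including this one, got logged first.
print(len(audited_query("SELECT * FROM audit_log")))  # 4
```

The point isn’t the wrapper; it’s that once one choke point exists, “what did the agents ask, and when” becomes a query like any other.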
The agent runtime uses a doctrine I spent months refining. The first line:
“Nexus is best understood as a data and memory platform with bounded reasoning agents on top, not as an unbounded autonomous swarm.”
The agents reason, explain, coordinate, and propose. They don’t own execution. Processors and handlers do the deterministic work. The database owns state. The agents are an interpretation layer, not a replacement for engineering discipline.
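That division of labor can be made concrete in a few lines: the agent emits a proposal, and a deterministic handler runs only after explicit approval. The names and shapes below are mine for illustration, not Nexus’s actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Proposal:
    """An agent's suggested action: described, never executed directly."""
    action: str
    params: dict
    rationale: str

# Deterministic handlers own execution; the agent never calls these itself.
HANDLERS: dict[str, Callable[[dict], str]] = {
    "archive_repo": lambda p: f"archived {p['repo']}",
}

def execute(proposal: Proposal, approved: bool) -> str:
    """The human (or an explicit policy) decides; only then does a handler run."""
    if not approved:
        return f"rejected: {proposal.action} ({proposal.rationale})"
    return HANDLERS[proposal.action](proposal.params)

p = Proposal("archive_repo", {"repo": "old-experiment"}, "no commits in 18 months")
print(execute(p, approved=False))  # the agent proposed; the human said no
print(execute(p, approved=True))   # archived old-experiment
```

The gate is boring on purpose. The interesting reasoning lives in how the proposal gets written; execution stays dumb, logged, and reversible.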
One weekend, with receipts
Two weekends ago a smart ring I’d backed on Kickstarter 25 months earlier finally shipped. By Sunday morning I had:
- Fully reverse-engineered the undocumented BLE protocol
- Decoded the audio codec
- Written a complete protocol reference document
- Scaffolded two new applications consuming the protocol
- Touched nine repositories
Zero sleep. I know because Nexus reconstructed the timeline by cross-referencing git commit timestamps, Claude Code session logs, and ambient capture data from a wearable camera. Between 8 PM Saturday and 6 AM Sunday there were only two gaps in my activity longer than 15 minutes. The longest was 29 minutes.
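Timeline reconstruction like that is mostly a merge-and-scan: pool timestamps from every source, sort them, and flag any silence over a threshold. A sketch with synthetic data (arranged to mirror that night’s two gaps; the real version queries the database):

```python
from datetime import datetime, timedelta

def find_gaps(streams, threshold=timedelta(minutes=15)):
    """Merge timestamps from all sources, return silent (start, end) spans."""
    merged = sorted(t for stream in streams for t in stream)
    return [(a, b) for a, b in zip(merged, merged[1:]) if b - a > threshold]

# Synthetic stand-ins for git commits, session logs, and camera captures.
d = lambda h, m: datetime(2026, 2, 1, h, m)
git    = [d(0, 5), d(0, 18), d(0, 47), d(1, 55), d(2, 6)]
claude = [d(0, 12), d(0, 30), d(1, 0), d(2, 15)]
camera = [d(0, 40), d(1, 10), d(1, 26), d(2, 30)]

for start, end in find_gaps([git, claude, camera]):
    print(f"silent {int((end - start).total_seconds() // 60)} min from {start:%H:%M}")
# Two gaps survive the merge; the longest is 29 minutes.
```

Any single source is full of holes. Merged, the holes mostly disappear, and what’s left is the actual silence.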
Then the system did something I didn’t ask for. It looked back at the preceding days and showed me the all-nighter wasn’t unusual. The Thursday before, I’d also coded straight through the night shipping infrastructure changes across multiple repos.
That conversation ended with Claude telling me to talk to a human instead of it.
Product vs. material
Altman’s generational frame obscures the more useful distinction: people who use AI as a product vs. people who use it as a material.
Product users open ChatGPT, ask a question, get an answer. Memory is a convenience feature. Someone else runs the infrastructure.
Material users wire AI into their own systems. They build the memory layer because the commercial one isn’t deep enough. They run local models because privacy isn’t optional when you’re processing decades of personal data. They treat AI like a machinist treats metal: something you shape, cut, and build with.
When Nexus fabricated a sleep window during my ring weekend (confidently claiming I’d slept 3-4 hours based on a gap in commit timestamps), I challenged it. It ran additional queries across every data source, found continuous activity filling the gap, and corrected itself.
That kind of interaction requires knowing the tool well enough to catch it lying. That comes from experience, not from growing up with it.
The repo
I open-sourced the full architecture: github.com/niclydon/nexus-public
Agent runtime, tool catalog, job system with 93 handlers, knowledge pipeline, LLM router with circuit breakers, distributed autoscaler. MIT license.
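The router’s circuit breakers follow the standard pattern: count consecutive failures per backend and stop routing to one once it trips. A toy version of that pattern, not the actual Nexus router:

```python
class Breaker:
    """Trips open after `limit` consecutive failures; resets on success."""
    def __init__(self, limit: int = 3):
        self.limit, self.failures = limit, 0

    @property
    def open(self) -> bool:
        return self.failures >= self.limit

    def record(self, ok: bool) -> None:
        self.failures = 0 if ok else self.failures + 1

def route(backends, breakers, call):
    """Try backends in order, skipping any whose breaker is open."""
    for name in backends:
        if breakers[name].open:
            continue
        try:
            result = call(name)
            breakers[name].record(True)
            return name, result
        except RuntimeError:
            breakers[name].record(False)
    raise RuntimeError("all backends unavailable")

breakers = {"local": Breaker(limit=2), "fallback": Breaker(limit=2)}

def flaky(name):
    if name == "local":
        raise RuntimeError("GPU box offline")  # simulate a down backend
    return f"{name} answered"

for _ in range(3):
    print(route(["local", "fallback"], breakers, flaky))
```

After two failed attempts the local backend stops getting traffic at all, which is the whole point: a dead 160GB inference box shouldn’t add latency to every request while it’s down.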
The college students Altman described are building AI judgment naturally, by using it so heavily they start to feel its edges. Practitioners are building it deliberately, with explicit boundaries, approval gates, and doctrine documents that say “the agent proposes, the human decides.”
Both paths lead to the same place. One arrives by instinct. The other by architecture.

