Jason (AKA SEM)

Originally published at Medium

I Raised My Kids in the Game Boy Age. Here’s What Every AI Parent Is Getting Wrong.

And why this moment is categorically different — but not for the reasons you think.

I have been a software developer since 1994.

I have lived through every technology panic cycle that exists. Calculators would destroy mathematical thinking. The internet would rot children’s brains. Video games would produce a generation of violent, antisocial shut-ins. Smartphones would end human connection. Social media would collapse democracy.

My kids are 22 and 24. I raised them through the Game Boy era, the Napster era, the MySpace era, the smartphone era. I watched every single one of those panics play out — as a parent and as a developer building the infrastructure those technologies ran on.

They survived. They’re functioning adults. The panics didn’t kill them.

So when I tell you that the AI moment is genuinely different — categorically different from every previous technology transition — I want you to understand I am not panicking. I have 30 years of receipts. I know what a real inflection point looks like versus what a moral panic looks like.

This is a real inflection point.

But the parents and educators talking about it are almost universally getting the diagnosis wrong. And because they’re getting the diagnosis wrong, the prescriptions they’re reaching for — ban it, detect it, restrict it, embrace it uncritically — are all going to fail.

Let me tell you what’s actually different. And why it matters more than any of them realize.

Every Previous Panic Was Wrong for the Same Reason

Here’s the pattern I’ve watched play out five times now.

New technology arrives. Adults who didn’t grow up with it panic about what it will do to children. Schools ban it or restrict it. Parents argue about it at school board meetings. A decade passes. The kids who grew up with it are fine. The technology becomes infrastructure. Nobody talks about it anymore.

Calculators didn’t destroy mathematical thinking. They changed what mathematical thinking meant — and freed students from the mechanical to engage with the conceptual. The internet didn’t rot brains. It democratized access to information in ways that were net positive for almost everyone. Game Boys didn’t produce a generation of antisocial shut-ins. My kids have friends. Smartphones didn’t end human connection. They changed what connection looks like.

The pattern in every one of these cases: the technology was a tool. Tools extend human capability. The question was never whether the tool was dangerous — it was whether the person using the tool had the foundation to use it well.

A calculator in the hands of a student who understands arithmetic is a powerful extension of capability. A calculator in the hands of a student who never learned arithmetic is a crutch that quietly erodes the ability to estimate, to sanity-check, to know when the answer is wrong.

Same tool. Completely different outcomes. The difference is the foundation.

I got that right with my kids. Most parents got it roughly right with calculators and Game Boys and smartphones, even without thinking about it explicitly, because the stakes were low enough that rough was fine.

The stakes are not low enough for rough anymore.

Here’s What’s Actually Different About AI

I build multi-agent AI systems professionally. I architect intent — the structured expression of what an organization actually wants, translated into parameters that autonomous systems can act on. I have spent years thinking about the gap between what AI can do and what it does when you haven’t specified precisely enough what you want.

That professional vantage point is why I can tell you exactly what makes this moment different from every previous technology transition.

Every previous technology was a tool you picked up and put down.

AI is a system you collaborate with. And the quality of that collaboration is entirely determined by your ability to specify — to articulate your goal, your constraints, what done looks like, and what trade-offs you’re willing to make to get there.

That is a skill. A specific, learnable cognitive skill that improves with practice. And it is built on top of a foundation of domain knowledge that you cannot shortcut.

You cannot write a good specification for something you don’t understand. Not in software. Not in life. The gap between a great AI outcome and a disaster is the quality of human specification — and you cannot specify well in a domain where you have no real knowledge.

I’ve seen this play out at the enterprise level. Klarna gave its AI agent the goal: resolve tickets fast. Klarna’s actual organizational goal was: build lasting customer relationships that drive lifetime value. Those are profoundly different goals. A human agent with five years at Klarna knew the difference intuitively. The AI agent had a prompt. It did not have intent. The result was a $60 million “success” that preceded a frantic rehiring of the humans who’d been fired — because they’d taken with them the institutional knowledge that had never been documented.

That is not an AI failure. That is a specification failure. And it happened because the humans who deployed the system didn’t understand the domain deeply enough to specify what they actually wanted.
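To make the contrast concrete, here's a minimal sketch in Python of the difference between a prompt and an intent. The structure, field names, and Klarna-flavored values are my illustration, not anyone's actual system:

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    """A structured specification: the outcome we want, the limits we
    won't cross, and how we'll know it's done. All fields illustrative."""
    goal: str
    constraints: list[str] = field(default_factory=list)
    done_when: list[str] = field(default_factory=list)
    tradeoffs: str = ""

# The prompt-only version: a measurable proxy and nothing else.
prompt_goal = "Resolve support tickets as fast as possible."

# The intent version: the same task, specified against the real goal.
support_intent = Intent(
    goal="Resolve tickets in a way that preserves long-term customer trust",
    constraints=[
        "Never close a ticket the customer hasn't confirmed resolved",
        "Escalate to a human when confidence is low or the account is high-value",
    ],
    done_when=[
        "Customer confirms the resolution",
        "Repeat-contact rate on the issue stays at or below baseline",
    ],
    tradeoffs="Accept slower average handle time for fewer repeat contacts",
)
```

An agent optimizing the first specification will close tickets fast and wrong. Writing the second requires exactly the domain knowledge the fired humans walked out with, which is the point.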

Now scale that problem down to a 14-year-old asking an AI to write her history essay.

The AI will write a compelling essay. It will be organized, fluent, and factually defensible. And if the student doesn’t know enough history to evaluate it — to recognize when the argument is weak, when the evidence is cherry-picked, when the framing is off — she will submit work she cannot defend, cannot extend, and cannot build on.

She didn’t learn history. She outsourced it. And next time she needs to understand something that depends on historical context, the foundation won’t be there.

That’s not a new problem created by AI. That’s the calculator problem, at a scale that now covers every cognitive task AI can perform — which in 2026 is most of them.

The Cognitive Offloading Problem Is Already Showing Up

I’m not theorizing here. The data is coming in real time.

College professors are describing students arriving who can’t read a full chapter. Who can’t synthesize an argument from multiple sources. Who can’t sit with a difficult text long enough to extract meaning from it. High school teachers report that writing quality has collapsed — not just because students submit AI-generated work, but because even students who aren’t using AI have lost the habit of struggling through a draft.

The phrase I keep hearing from educators: they can’t do it anymore. Not won’t. Can’t.

There’s a concept in psychology called learned helplessness — where repeated experiences of effort not mattering cause a person to stop trying. Not laziness. A brain that has learned the effort doesn’t matter.

The AI version of this is cognitive offloading. You delegate a mental task to a tool. The tool handles it. Over time, the neural pathways that would have developed to handle that task don’t. The offloading becomes dependence. The dependence becomes helplessness. And it happens gradually — a quiet erosion of capability that comes from never needing to exercise the skill.

This is not what happened with calculators or Game Boys or smartphones. Those tools didn’t perform the core cognitive tasks we were trying to develop in students. AI does.

That’s the difference. That’s the real one.

What the Research Actually Says

Let’s be precise, because the popular reading of the AI-in-education research is wrong in both directions.

A Harvard study found that students using AI tutors learned more than twice as much material in less time than students in traditional settings. Khan Academy’s Khanmigo went from 68,000 users to 1.4 million in a single year. An AI tutoring collaboration between Google DeepMind and educational researchers showed AI outperforming human tutors on problem-solving tasks.

The popular reading: AI tutors are better than human tutors, let’s deploy them everywhere.

The accurate reading: The best outcomes came from human-AI collaboration, not replacement. The human needs to bring something to that collaboration. That something is the foundation — the domain knowledge, the ability to evaluate outputs, the judgment to know when the AI is wrong.

Benjamin Bloom established decades ago that one-on-one tutoring improves learning outcomes by roughly two standard deviations (his famous "2 sigma problem"). The constraint was never whether personalized tutoring works. The constraint was always that you can't give every child a personal tutor. AI is removing that constraint.

But a tutor only works if the student is engaged enough to be tutored. If the student’s model of learning is “ask the AI and accept the output,” the tutor is just a sophisticated vending machine.

One more data point that every parent and educator needs to hear, from Andrej Karpathy — Tesla’s former head of AI, one of the architects of the deep learning revolution:

“You will never be able to detect the use of AI in homework. Full stop.”

He’s right. The arms race between AI writing detection and AI writing generation was over before it started. Schools purchasing AI detection software are making a $60 million Klarna-style mistake — optimizing for a measurable proxy that has nothing to do with what they actually care about.

You cannot detect AI in homework. The educational response has to be a fundamental rethinking of what we’re measuring and why — not better detection.

The Seven Principles (From Someone Who’s Actually Built This)

I don’t have a 10-year-old doing long division at my kitchen table. My kids are adults. What I have is 30 years of watching technology transitions play out, and a professional understanding of what makes AI systems succeed or fail at the level of specification quality.

These principles aren’t parenting advice. They’re systems thinking applied to education. They hold whether you’re raising a 10-year-old today or managing a team of developers trying to get real value out of AI tools.

1. Foundation before leverage. You cannot evaluate AI output in a domain you don’t understand. This is not philosophy — it’s architecture. A system is only as good as the human’s ability to specify inputs and evaluate outputs. The foundation is what makes that possible. Don’t skip it because the tool can perform the task. The tool performing the task is exactly why the foundation matters more, not less.

2. Specification is the new literacy. The gap between a great AI outcome and a disaster is the quality of human specification. Teaching kids to say what they want — the goal, the constraints, what done looks like — is the same cognitive muscle as learning to write a coherent argument. It transfers everywhere. An 8-year-old who types “add enemies” and gets broken behavior, then learns to specify “spawn three enemies from the right side, move them left at medium speed, disappear on contact” — that child is learning something that will matter for the rest of their life. Not because they’ll always be building games. Because they’ll always need to translate a vague desire into a precise, executable specification. (There’s a short code sketch of exactly this translation after the list.)

3. Director, not passenger. When anyone — a student, an employee, a developer — uses AI, they should be defining the ask, evaluating the output, and deciding what to keep, revise, and reject. Passive consumption of AI output is not learning. It is outsourcing. The person who uses AI as a director gets smarter over time. The person who uses it as a passenger gets dumber. Same tool. Completely different trajectory.

4. Sequence the autonomy. Start with bounded tools that have guardrails. Graduate to open-ended tools with guidance. Arrive at agent-level autonomy only when judgment is genuinely ready. This is not age-gated — I know adults who are not ready for agent-level autonomy and I know teenagers who are. The readiness signal is not age. It is the demonstrated ability to specify clearly, evaluate critically, and catch the machine when it’s wrong.

5. Teach people to catch the machine. AI will be wrong. Confidently, fluently, convincingly wrong. The foundation is what lets you recognize it. When a student catches a Claude error — when they can say “that answer doesn’t pass a sanity check” — that is not a tool failure. That is the entire point. The ability to catch the machine is the most valuable skill of the AI age and it requires knowing the domain well enough to have ground truth.

6. Build, don’t browse. Making things with AI develops cognition in ways that consuming AI output does not. Vibe coding a game, designing a system, creating something that didn’t exist before — these are active. Asking AI to summarize a chapter is passive. Seymour Papert called this constructionism back in the 1980s: people build knowledge most effectively when actively making things in the world. He was right then. The principle scales to AI collaboration in ways he never imagined.

7. Attempt before augmenting. Try it yourself first. Then use AI to extend what you’ve started. The person who drafts before they prompt is learning. The person who prompts before they think is outsourcing. This is the most important habit to build and the easiest to erode — because AI is so seamlessly helpful that the temptation to reach for it first is constant. Resist it. Every time you attempt before augmenting, you’re strengthening exactly the cognitive infrastructure the AI is designed to extend.
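To ground principle 2: here's the 8-year-old's corrected ask, translated into parameters a program could act on. A minimal sketch; the structure and field names are my illustration, not any particular game engine's API:

```python
from dataclasses import dataclass

@dataclass
class EnemySpawnSpec:
    """The vague ask 'add enemies', made precise. Every field answers
    a question the vague version left the machine to guess."""
    count: int          # how many?     -> three
    spawn_edge: str     # from where?   -> the right side
    direction: str      # moving where? -> left
    speed: float        # how fast?     -> "medium", pinned to a number
    on_contact: str     # then what?    -> disappear

spec = EnemySpawnSpec(
    count=3,
    spawn_edge="right",
    direction="left",
    speed=2.5,          # units/second; tuning this is part of the skill
    on_contact="despawn",
)
```

Nothing here is game-specific. It's the same muscle as turning "make my essay better" into a goal, constraints, and acceptance criteria.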

The Readiness Model Nobody Is Building

Singapore has a national AI education framework: Learn about AI → Learn to use AI → Learn with AI → Learn beyond AI.

That last step — learn beyond AI — is the one nobody has figured out how to teach systematically. It’s where the student doesn’t just use the tool but transcends its limitations through their own judgment and creativity.

I don’t think that step gets solved in a classroom. I think it gets solved through practice, specificity, feedback, and gradually increasing the challenge — the same way every cognitive skill has always been developed.

What we need is a readiness model that treats AI autonomy the way I treat agent autonomy in production systems. You don’t deploy a fully autonomous agent into a live environment without validation. You test it. You run it in bounded contexts. You verify that it handles edge cases correctly before you expand its authority.

The same logic applies to how we introduce AI into education and work. Bounded tools with guardrails. Verified judgment. Expanding autonomy as the human’s ability to specify, evaluate, and correct demonstrates readiness.
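Here's a minimal sketch of what that gating could look like. The levels, metrics, and thresholds are illustrative assumptions, not an established standard:

```python
from enum import Enum

class AutonomyLevel(Enum):
    BOUNDED = 1   # guardrailed tools, narrow action space
    GUIDED = 2    # open-ended tools, every output reviewed
    AGENT = 3     # acts independently within the verified scope

def next_level(current: AutonomyLevel,
               trials_run: int,
               specs_accepted: float,   # share of specs accepted without rework
               errors_caught: float) -> AutonomyLevel:
    """Promote only when demonstrated judgment clears the bar.
    Thresholds are illustrative; the point is that autonomy is
    earned by evidence, not granted by age or schedule."""
    ready = (trials_run >= 20
             and specs_accepted >= 0.90
             and errors_caught >= 0.80)
    if current is AutonomyLevel.AGENT or not ready:
        return current
    return AutonomyLevel(current.value + 1)
```

The same function reads correctly whether the human in the loop is a student, a junior developer, or an operations team: the readiness signal is the pass rate on specify, evaluate, and correct.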

Nobody is building this. Everyone is improvising. The families who ban AI are making the same mistake as the schools that banned calculators in 1975 — pretending the technology isn’t there doesn’t make their kids better equipped to use it. The families who hand over unrestricted AI access without building the foundation first are making the opposite mistake.

The answer is sequencing. Foundation first. Then the tool. Then gradually expanding autonomy as judgment develops. And never stop exercising without the tool, so the muscles don’t atrophy.

What I Actually Do Differently Now

My kids are 22 and 24. I’m not doing homework with them at the kitchen table. What I am doing is watching how they — and my colleagues, and the developers I work with, and the organizations I consult for — navigate AI collaboration.

The developers I respect most are the ones who understand the domain deeply enough to evaluate AI output critically. They use Claude and GPT and Cursor aggressively — but they review everything. They catch the errors. They know when the architecture is wrong even when the code runs. The foundation lets them use the tool at full power without being misled by it.

The developers I worry about are the ones who can’t tell a good architecture from a bad one because they’ve never built anything without AI assistance. They’re productive in the short term. But they’re building on a foundation of sand — and when something goes wrong in a domain where the AI has no ground truth and neither do they, there’s no recovery.

The same pattern plays out at the organizational level. The companies getting real value from AI are the ones that understood their business deeply enough to specify what they wanted. The companies getting Klarna’d are the ones that deployed capable AI into an intent vacuum.

Foundation before leverage. Every time. At every level.

The Machines Turing Envisioned Have Arrived

The journal Nature said it. I’m saying it. The machines Turing envisioned 75 years ago are here.

And the single most important thing we can do — for students, for organizations, for anyone trying to build something real with AI — is make sure the human half of the collaboration is strong enough to be a real partner.

Not a passenger. A partner.

That requires foundation. It requires the ability to specify. It requires the willingness to attempt before augmenting, to build instead of browse, to catch the machine when it’s wrong.

Those are not technical skills. They are cognitive skills with technical application. They develop the same way every other cognitive skill develops — through practice, struggle, feedback, and gradually increasing the challenge.

The AI exoskeleton is here. It is extraordinary. It extends human capability in ways that were science fiction three years ago.

But an exoskeleton on a person who never built the underlying muscle doesn’t make them stronger. It makes them dependent on the exoskeleton — and helpless the moment it fails.

Build the muscle first.

Everything else follows from that.

Jason Brashear is a senior software developer and AI systems architect with 30 years of experience building production systems. He is the creator of ArgentOS, an intent-native multi-agent operating system, and a partner at Titanium Computing. He writes about the intersection of AI architecture, organizational design, and the future of agentic systems.

Follow him on GitHub: webdevtodayjason
