DEV Community

Discussion on: Fact-Checking the Fear Behind “The Dark Side of AI”: The Real Story

Ashley Childress

I think you might’ve completely missed the actual point of this post — and I say that with full confidence, not as an insult. This isn’t about AI. It’s about people. Specifically, people making assumptions and passing judgment on others without understanding the full picture. It’s about mental health, and blaming a tool like AI for a very real, very human condition isn’t just inaccurate — it’s outright irresponsible.

That said, I’ll bite.

Let’s say for the sake of argument that people are “using AI for everything.” What does it really mean when someone “can’t think for themselves”? Are we talking about not solving math problems by hand? Not writing essays without Google? Not making dinner without a recipe? At what point does assistance become dependency? And how exactly do you measure that?

If AI is just the latest tool in a long line of tools (calculators, search engines, spell check) then the question isn’t whether it’s used, but whether it's understood. And unless we’re suggesting that the very existence of tools makes people incapable, we might want to rethink that argument.

Let’s use a 100% real-life scenario as a comparison.

When my son was in the 3rd grade (or thereabouts), his teacher called me one day to say he had cheated on his math test and received a 0. But that didn’t make sense to me. Not only did he generally do well in math, but I had quizzed him myself just a few days prior, and he absolutely knew the material.

So I dug deeper.

"What do you mean, he cheated? How did he cheat?"

To sum it up: my son decided he was tired of the repetition after proving multiple times that he knew the work. So instead of spending 40 minutes answering the same questions again, he manually pulled jQuery into Chrome’s DevTools and wrote a script to answer them for him. The test flagged him because he spent 30+ minutes on the first question and fractions of a second on the rest.
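For anyone curious what that DevTools trick might have looked like, here is a hypothetical sketch. The actual quiz page’s markup and question format are unknown, so the parsing logic and every selector in the commented jQuery glue below are assumptions, not a reconstruction of his script:

```javascript
// Hypothetical reconstruction of the DevTools trick. The real quiz
// page's structure is unknown; the question format and selectors
// here are invented for illustration.

// Pure helper: parse a simple multiplication question like "7 x 8 = ?"
// and return the product, or null if the text doesn't match.
function solveQuestion(text) {
  const m = text.match(/(\d+)\s*[x×*]\s*(\d+)/);
  if (!m) return null;
  return Number(m[1]) * Number(m[2]);
}

// Browser-only glue, roughly what "pull jQuery into DevTools and
// script the answers" looks like (selectors are assumptions):
//
// $('.question').each(function () {
//   const answer = solveQuestion($(this).find('.prompt').text());
//   $(this).find('input.answer').val(answer);
//   $(this).find('button.submit').click();
// });
```

Messy and janky, sure, but writing even a throwaway parser like this means understanding the problem well enough to automate it.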

So what do you think — did he really deserve the 0? The instructions did say “no assistance,” and the school had a zero-tolerance cheating policy.

Spoiler: I fought that battle for him, for several reasons. But the biggest? As far as I’m concerned, he never cheated in the first place! He didn’t copy a pre-built solution. He wrote that code himself. It was messy, janky, not-at-all scalable — but it worked. And not only could he explain how it worked, he even recreated it at home.

So I brought him in to explain it to his teacher, along with why he did it that way. His logic was sound (as sound as an 8-year-old’s logic gets, anyway).

In that case, the "problem" was computers — not AI. But if we apply your logic to this situation, he wouldn’t have been able to solve the problem at all, right? Because the computer “did it for him,” so therefore no thinking happened?

Now let’s circle back to the start:

AI is not the problem here.

It’s no more a detriment to intelligence than the use of that computer was years ago. No more than the internet was for me growing up, Elvis was for my parents, or Louis Armstrong was for my grandparents.

It is now, has always been, and will always be a people problem.

Peter Planke

Thanks for the long and interesting reply. I partly agree. I just think the jump from calculators, computers, the internet, and so on to today's AI is too big. With better tools you can do the job faster, and often better, but you still need to know a little about what you're doing. Writing a piece of code to solve a math problem, for instance. Heck, sometimes you even learn more that way than by just using a calculator. Now you don't even have to read the paper. Take a picture of the math test and upload it to AI with the prompt "solve this test and add my name at the bottom". I'm afraid we'll just come to depend on it too much, and instead of thinking for ourselves we'll simply ask the AI. That part scares me a little.

Ashley Childress

And to that, I do not entirely disagree. However, I will challenge you to broaden your horizons on the subject.

The whole “vibe coding” wave? I could write a novella about my feelings there (not here, promise) — but let’s just say it’s done real AI adoption a major disservice. When folks with zero interest in development best practices start shipping “apps” they don’t care to understand, hey, knock yourself out if it’s just for your own entertainment. But selling that as a product? Entirely different conversation.

It’s a lot like the student who pastes a topic into ChatGPT to auto-generate a term paper — or as you put it:

“Take a picture of the math test and upload it to AI with the prompt ‘solve this test and add my name at the bottom.’”

That’s not “empowered by technology,” that’s just learning how to check out. Are there people happy to do the bare minimum, content not to understand the bigger picture (or why that even matters)? For sure. But I’m not comfortable lumping everyone — let alone an entire generation — into that bucket.

Is AI more accessible than ever? Of course. But is it fair to declare that “the next generation of adults won’t be able to think for themselves,” as if everyone born after 2000 is allergic to effort? Not remotely.

What about:

  • the ones using tools like ChatGPT to actually level up?
  • the learners using Study Mode to pick up skills they never had access to before?
  • the engineering students, the CS majors, the tinkerers hacking up workflows, learning APIs, and pushing boundaries?

I’d argue that those folks are thinking more — not less. The reality: today’s youth will be better equipped as adults. Not because everything is easier, but because they’ll have the tools (and mindset) to solve just about any problem that comes their way.
