It seems like everywhere you turn, every company is trying to force AI down your throat and tell you just how much you need it.
Searching Google? Let AI come up with the answer... Which would be great, except that for the technical things I'm typically searching for, it's wrong 80% of the time. It's maddening.
Writing a post to LinkedIn? Let LinkedIn AI write it for you! Ugh, no, it loses the tone and my voice. Which isn't good or "refined" but it's... human.
To be honest, I think the thing that frustrates me most about AI is that it's SO CONFIDENTLY wrong. And when I try to correct it, "You're absolutely right! Let me fix that." The agreeableness of AI drives me mad. Sometimes I just wish it would acknowledge it DOESN'T know the answer.
As it happens, Apple released research explaining why these models are confidently incorrect, and much of that issue is wrapped up in the fact that they can't "reason" in the same way you or I can. Their study on the "GSM-Symbolic" benchmark tested how well AI models handle basic math word problems. When the researchers added irrelevant details to the problems, like a note that some of the kiwis picked were smaller than average, the models' accuracy dropped dramatically, sometimes by over 65%.
Think about that. If you're solving a math problem about how many kiwis someone picked, the size of the kiwis doesn't matter. A human knows this instinctively. But AI doesn't actually understand the problem. It's pattern matching based on similar examples it's seen before. When you throw in irrelevant details, it gets confused because it's not truly reasoning through what matters and what doesn't.
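The trick the researchers used is easy to illustrate: take a word problem, insert one detail that has no bearing on the answer, and check whether the answer changes. It shouldn't, and for a human it doesn't. Here's a minimal sketch of that kind of perturbation; the problem text and distractor below are paraphrased examples in the spirit of the study, not the paper's exact items:

```python
# Sketch of an irrelevant-detail perturbation: insert a distractor
# clause into a word problem. The correct answer is unchanged, but a
# system that pattern-matches instead of reasons may subtract the
# "smaller" kiwis anyway.

BASE_PROBLEM = (
    "Oliver picks 44 kiwis on Friday and 58 kiwis on Saturday. "
    "On Sunday he picks double what he picked on Friday. "
    "How many kiwis does Oliver have?"
)

DISTRACTOR = "Five of the kiwis picked on Sunday were a bit smaller than average. "

def perturb(problem: str, distractor: str) -> str:
    """Insert an irrelevant detail just before the final question."""
    statement, question = problem.rsplit(". ", 1)
    return f"{statement}. {distractor}{question}"

def correct_answer() -> int:
    # 44 + 58 + (2 * 44) -- kiwi size is irrelevant to the count.
    return 44 + 58 + 2 * 44

print(perturb(BASE_PROBLEM, DISTRACTOR))
print(correct_answer())  # 190, for both the base and perturbed problem
```

The answer to both versions is 190; a model that "reads" the distractor and outputs 185 has been fooled by a detail a child would ignore.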
The researchers found that even the most advanced models struggle with this. They're not thinking through problems step-by-step like we do. They're essentially really sophisticated autocomplete systems that have gotten scarily good at sounding like they know what they're talking about. Which explains why they're so confident even when they're completely wrong. They don't actually know they're wrong because they don't "know" anything in the way humans do.
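To make "sophisticated autocomplete" concrete, here's a deliberately crude toy: it just records which word follows which in some text, then extends a prompt with the most frequent next word. Real LLMs are neural networks over tokens, not frequency tables, but this is the core intuition: predict what comes next, with no model of what any of it means. The corpus and function names here are my own invented illustration:

```python
# Toy "autocomplete": continue a prompt with whichever word most
# often followed the previous word in the training text. There is no
# understanding anywhere in this code -- only counting.
from collections import Counter, defaultdict

corpus = "the kiwi is small . the kiwi is green . the answer is wrong".split()

# Build a bigram table: for each word, count the words seen after it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word: str, steps: int = 3) -> str:
    """Greedily extend a prompt with the most common next word."""
    out = [word]
    for _ in range(steps):
        candidates = following[out[-1]]
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))  # fluent-looking output, zero comprehension
```

Scale that idea up by many orders of magnitude and you get text that sounds authoritative regardless of whether it's right, which is exactly the failure mode above.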
So we're left with tools that can't reason, can't admit when they don't know something, and are being shoved into every aspect of our lives whether we asked for them or not. A solution desperately searching for a problem.
Now, I do want to caveat that statement. "A solution desperately searching for a problem" isn't entirely fair: there are areas where AI DOES in fact solve problems. Medical imaging analysis, protein folding research, and accessibility tools for people with disabilities are legitimate use cases where AI is making a real difference. But as I browse Reddit and see the literal hundreds of posts where people are insisting their new AI solution will change the world... another chatbot, another "AI-powered" note-taking app, another tool to generate generic content that sounds like it was written by a committee... I can't help but think we've lost the plot.
We're automating things that don't need automating. Things that were better when humans did them. And we're doing it not because it makes life better, but because we can. Because investors want to see "AI" in the pitch deck. Because every company is terrified of being left behind in the AI arms race.
Automating Our Humanity (and resources) Away
Someone posted to Reddit asking whether anyone would use a tool to automate Reddit post replies. I personally answered "no," and pointed to the deeper underlying issue. Humans want to save time. But humans also want to connect with other humans, and human relationships take time. The problem is that we have more connections and relationships than at any other point in history, and we just can't have it both ways. If you want to connect with humans, it's GOING to take time. You can't automate humanity. Nor do I want to.
But that's just one piece of the problem. Because while we're busy trying to automate everything from writing LinkedIn posts to customer service interactions, we're burning through resources at an alarming rate. These systems require massive data centers, incredible amounts of water for cooling, and energy consumption that's frankly staggering.
We need to rethink this. We can't just keep building bigger computers and burning more resources while pretending it's all fine because AI is the future. It's time to think about the implications and acknowledge that what we're doing now isn't sustainable. There has to be a better way forward.
It's a small start right now, but at Carpathian, we do software development and provide cloud infrastructure while also recycling old computers to reduce e-waste and incorporating those into the infrastructure. Yes, older hardware can be more power hungry, but we're writing software in a way that runs more efficiently, optimizing for the hardware we have rather than just throwing more resources at the problem.
Alongside that, I'm researching more sustainable methods of computing and working to build ultra-low-power data centers. It's just the start, but I'm fully devoted to figuring this out. We want to be transparent about what works and what doesn't, so I'm going to start publishing articles and documenting the research.
You can follow along at https://carpathian.ai/publications/
-Samuel Malkasian, Founder