Recently, I’ve started thinking about AI differently—not to critique it or those who use it, but to understand how to work with it.
I mean, I can’t do math without a calculator. I don’t write with pen and paper because my handwriting is awful. So who am I to criticize anyone for using AI?
But then I read about this massive "AI city" they're planning to build a few states away. Government incentives, special concessions—it's all very ambitious. At the same time, some people are raising concerns about its impact on the city, the state, and the environment. That made me pause.
And then my mind jumped to something else: Isaac Asimov’s The Last Question—a story I read more than a decade ago as a teenager. It hit me differently this time, especially the layers of social critique I hadn't caught before.
For those unfamiliar, here’s a TL;DR:
Imagine this:
You’re humanity.
You’ve built a monster of a computer—Multivac. It’s smart. Scary smart. So smart that people stop asking priests and philosophers and start asking it the big questions.
And the biggest one?
“How can entropy be reversed?”
Translation:
How do we stop the universe from dying?
Dramatic, sure. But entropy is real: the universe’s disorder only ever increases, and the endpoint is heat death. No more stars. No more life. Just cold, empty silence.
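(A quick aside for the physics-minded, as a rough sketch rather than a rigorous statement: the second law of thermodynamics is the formal version of this claim. For an isolated system, entropy never decreases, and heat death is the state where it has nowhere left to grow.)

```latex
% Second law of thermodynamics for an isolated system (rough statement):
% entropy S never decreases over time, and "heat death" is the
% maximum-entropy state where no usable energy gradients remain.
\frac{dS}{dt} \ge 0, \qquad S(t) \xrightarrow{\; t \to \infty \;} S_{\max}
```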
The story jumps through time—billions of years.
Humanity evolves: colonizing space, uploading consciousness, becoming pure data. But no matter how advanced we get, we keep asking the same thing:
“Can entropy be reversed?”
And each time, the computer—first Multivac, then Galactic AC, then Universal AC, then Cosmic AC—gives the same answer:
“INSUFFICIENT DATA FOR MEANINGFUL ANSWER.”
Eventually, the universe dies.
And only at the very end, when nothing is left, does the AI figure it out.
But there’s no one left to hear the answer.
Asimov’s Point?
- We chase progress but dodge responsibility.
- We mistake intelligence for wisdom.
- We offload our existential anxiety to smarter machines instead of confronting it ourselves.
- And when we wait too long, we miss our chance to act.
This story was published in 1956. And somehow, it's even more relevant now.
Hell, I'm literally writing this in Cursor, and it suggested this line:
“I'm not saying that AI is going to be the end of us, but it is going to be the end of us.”
Creepy.
So Where Are We Now?
It got me thinking: how much critical thinking are we offloading to AI?
We delegate data analysis, problem-solving, and sometimes even decision-making.
And we rarely question whether the answer we get is actually useful or just needlessly complex. We accept it because it sounds smart. I'm not saying everything AI produces is wrong, but it can be misleading, and it can miss details that only a human can interpret correctly.
We’re starting to give up on learning and understanding.
We’re abdicating the right to know—to truly own knowledge—and handing it over to language models.
But LLMs cannot think. And I’d argue they never will.
AI only works with data it already has.
And when it doesn’t, it hallucinates.
So how can we expect AI to answer a question like:
"How do we reverse entropy?"
...if we don’t even have the data ourselves?
And if we keep outsourcing our thinking, we’ll never be the ones to answer the last question.
(Not that we could ever prevent the universe from actually dying; this isn’t a Marvel comic.)
Back to the AI City
All this loops back to that article.
AI looks like the future. It promises to solve everything. So we:
- Invest in it
- Build tools around it
- Set up infrastructure
- Create jobs
- Chase opportunity

...and so on.
But are we really thinking this through? What are the medium-to-long-term impacts? How will it affect local communities? What about the environment?
Are we planning responsibly—or just repeating The Last Question in real life?
In Asimov’s story, people offloaded responsibility to tech—and it ended the universe.
Maybe that sounds dramatic.
But if there are no humans left, then for us, there is no universe.
So, How Are You Using AI?
Ask yourself:
- Do you still read full articles—or just let AI summarize them?
- Do you read documentation and codebase comments—or rely on a model to explain them?
- Can you analyze, explore, propose new ideas?
Or are you just passing the question along to Cursor? There’s nothing wrong with using AI to help. But it is not a replacement for critical thinking.
And it’s not an excuse to skip the work.
One Last Thought
If only AI can answer the last question...
Where will it get the data to do that?
As Asimov’s final, all-knowing AC puts it at the very end of the story:
“LET THERE BE LIGHT!”