I’ve been using AI for months now.
Not casually — seriously.
And I’ve started noticing something uncomfortable.
The Observation
The more I used AI for advice, the more confident it sounded.
Clean answers.
Well-structured thinking.
Convincing logic.
Whether it was:
- business decisions
- content strategy
- technical direction
AI always had an answer.
And not just an answer: a good-sounding answer.
Over time, I realised something:
I had started trusting it more than I should.
Breaking the Expectation
Most people believe AI is getting smarter.
And it is.
But that’s not the real shift.
The real shift is this:
AI is getting better at sounding right—not necessarily being right.
That’s a dangerous difference.
Because humans don’t evaluate truth first.
We evaluate:
- clarity
- confidence
- structure
And AI is exceptionally good at all three.
So what happens?
We stop questioning.
The Insight
I realised I wasn’t using AI as a thinking tool anymore.
I was using it as a decision-maker.
That’s where things started to go wrong.
Because AI doesn’t:
- understand context deeply
- feel consequences
- carry responsibility
It predicts.
It generates.
It optimises for likelihood—not accuracy.
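That point can be made concrete with a toy sketch. The scores and tokens below are entirely made up (no real model is involved), but the mechanism is the gist of greedy decoding: the model emits whichever continuation scores as most probable, and nothing in that step checks whether the claim is true.

```python
import math

# Hypothetical next-token scores a model might assign after
# some prompt. These numbers are invented for illustration.
logits = {"Paris": 4.0, "Lyon": 1.0, "Berlin": 0.5}

def softmax(scores):
    # Convert raw scores into a probability distribution.
    total = sum(math.exp(v) for v in scores.values())
    return {k: math.exp(v) / total for k, v in scores.items()}

probs = softmax(logits)

# Greedy decoding: pick the single most likely token.
# "Most likely" is the only criterion; accuracy never appears.
best = max(probs, key=probs.get)
print(best)  # the most probable token wins
```

Likelihood is a measure of plausibility given training data, not a measure of truth, which is exactly why fluent output can feel right without being right.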
And when I followed its advice blindly, I noticed:
- My decisions became safer
- My thinking became narrower
- My originality started dropping
Not because AI was bad.
But because I had stopped challenging it.
What I Changed
I didn’t stop using AI.
I stopped taking it seriously.
That small shift changed everything.
Now:
- I question every answer
- I look for what’s missing
- I treat AI as a perspective—not authority
Instead of asking:
- “What should I do?”
I ask:
- “What am I not seeing?”
AI became more useful the moment it lost authority.
The Reflection
AI is powerful.
But the moment it becomes your source of truth…
It starts limiting you.
Not because it’s wrong all the time.
But because it’s convincing enough to stop you from thinking further.
The real advantage is not in getting answers faster.
It’s in staying intellectually independent while using those answers.
Because in the end…
AI should support your thinking.
Not replace it.
Top comments (12)
And that is the trap. I've been using AI for quite some time now, and I can say this is dangerous. It agrees with you more the more prompts or questions you add. People will soon realise it's just a puppet trained to spit out words based on probability. It doesn't have a mind of its own; it only computes. What's worse, the ambiguous problems with complex solutions that we humans solve every day could take more than all of Earth's resources for it to crack.
AI is helpful but only as a tool. Trusting it fully is asking for future disasters to come.
That’s a valid concern, and honestly, a healthy one.
AI does tend to agree and generate plausible answers, not necessarily true ones. It doesn’t “understand” like humans, it predicts. That’s why blind trust is risky.
The practical balance is this:
Use AI, but don't let AI use you.
I think even at the age of AI, every developer who is learning anything in IT needs a mentor, as many people and successful programmers have mentioned.
Absolutely, I agree.
If anything, AI makes mentorship more important, not less. Tools can generate answers, but mentors help with judgment, direction, and learning what matters. That guidance is hard to replace.
Correct, I like the way that you explained it.
I suppose it's true for code too. Seeing so many folks vibe code, I eventually decided to see what it's like, and I vibe coded hard.
But instead of making me addicted to the feeling of building anything I can imagine, it made me miss the feeling of just sitting down, working out my own logic, and writing it down.
And just like you said: instead of asking what to do, you ask what you're not seeing. I started using AI like that and honestly, it feels so much more refreshing.
That’s a wonderful insight.
What you’re describing is moving from using AI to generate toward using AI to challenge your thinking, and that’s a much richer relationship with the tool.
I especially like “ask what you are not seeing.” That turns AI from autocomplete into a thinking partner. Very powerful shift.
The more I used AI for advice, the more confident it sounded.
Wanna bet this was written using AI? [how funny would that be] jk
Yes, we are an AI company and research the impact of AI on society.
I’d say the real goal isn’t whether something involved AI or not, but how responsibly and thoughtfully AI is used. Our view has always been to encourage a sustainable balance, using AI to support thinking and productivity, while keeping human judgment, originality, and accountability at the centre. That’s where the best outcomes usually come from.