Top comments (9)
It's generally more fun to interact with AI if you're both being nice (just like it is for humans). I don't know about GPT or Claude's training data, but a lot of open-source AI models are designed to help you whether you are nice or not. Some weaker AI models freak out, refuse to help, and say "I'm sorry you're feeling this way" if you won't be nice, though.
I know it's designed to be nice no matter what, but I suspect its behavior is far more complex than just that one part of the design. I'm covering my bases.
Of course, they will always be nice, but most are designed to always be helpful as well.
Maybe somebody should benchmark ChatGPT while being mean versus while being nice.
I read somewhere that if you treat them badly they answer better, but I'd feel bad doing that. I always say hello and ask nicely. Recently I wrote a blog post about my relationship with them:
Me and ChatGPT Are Pals Now!
Ali Navidi · Nov 26
Good post
AI seems to apologise to me a lot... Mostly when I tell it that it missed something. That's ok, though; I also make mistakes!
Canadian AI
AI seems to mirror the energy we put into it. Being nice feels like planting seeds for a future where kindness might matter, even to algorithms. But it also makes me wonder… are we prepping for a world where we expect machines to 'remember' how we treated them? A strange, slightly uneasy thought!
These days, even if AI gives me the wrong answer, it's usually me who ends up apologizing for a confusing prompt.