koshirok096

Posted on
Use AI Chatbots Without Increasing Anxiety (Bite-size Article)

Introduction

Since the emergence of ChatGPT a few years ago, a wide variety of AI chatbots and AI-powered tools have appeared one after another, making our lives significantly—and undeniably—more convenient. Personally, I use AI on a daily basis, both for work and in my private life, and I feel that my reliance on it has steadily increased year by year.

AI is an extremely convenient presence. In particular, AI chatbots such as ChatGPT, Gemini, and Claude can quickly handle research, help refine text, and assist in organizing thoughts. At this point, they have become tools that are hard to imagine living without.

At the same time, however, I recently had an experience where using an AI chatbot actually made me feel more anxious. This wasn’t because the AI provided incorrect information, nor because it pushed an extreme opinion on me. Quite the opposite—it happened precisely because the AI answered honestly and comprehensively.

When you ask an AI about something while feeling anxious, it will carefully present information such as:

  • “This is unlikely, but theoretically possible,” or
  • “This is an exceptional case, but it cannot be entirely ruled out.”

Each of these statements may be factually correct, but psychologically, they do not always lead to reassurance. Instead, they can end up amplifying anxiety. I gradually became aware of this pattern.

In this article, I will take my own experience as a starting point and outline the structure by which AI chatbots can increase anxiety, as well as the personal mindset and usage habits I’ve adopted to avoid that outcome. What I describe here is entirely based on my own perspective and may or may not resonate with you. As usual, this article is largely a personal memo—but I hope it may be useful to someone.

For reference, the AI chatbots I use most frequently are ChatGPT, Perplexity, Claude, and Gemini. The discussion below assumes the use of these types of AI chatbots.


Be Careful How You Phrase Your Questions

The first thing I am most conscious of when using AI chatbots is how I phrase my questions. AI has access to an enormous amount of information and can easily give the illusion that you are having a conversation with a human. It is also fundamentally friendly and rarely pushes back against the user. It is obedient and highly cooperative.

However, this is precisely where the pitfall lies.

When speaking with another person, even if your wording is vague, they can often infer what you really want to know or where your anxiety lies by reading the context or drawing on their understanding of you. AI, on the other hand, is far less reliable at this kind of inference: it takes the question largely as given and tries to answer it as sincerely as possible.

As a result, the more vague or emotionally charged a question is when asked from a place of anxiety, the more likely the answer is to expand that anxiety.

One thing I am particularly careful about is embedding my own tentative conclusions or worries directly into the question. For example:

  • “I think this might be a problem—what do you think?”
  • “There’s probably an issue here, right?”

When asked this way, the AI tends to follow that line of thinking unless the premise is clearly flawed. This is not because the AI is trying to flatter the user, but because it respects the assumptions provided and proceeds accordingly. The result is often that my original anxiety comes back reinforced.

At the point where you are asking such a question, you are usually already feeling unsettled. If what you really want is an objective opinion or factual clarity, I’ve found it more effective to temporarily set aside assumptions and speculation, and ask in as neutral a way as possible.
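To make the contrast concrete, here is a toy sketch of what "neutral phrasing" looks like in practice. The marker list and the rephrasing template are my own illustrative heuristics, not a feature of any chatbot or API:

```python
# Toy illustration: spotting "loaded" questions that embed a tentative
# conclusion, versus a neutral, fact-first phrasing of the same topic.
# The markers and template below are just examples, not an exhaustive rule.

LOADED_MARKERS = (
    "i think this might be",
    "there's probably",
    "right?",
    "isn't it?",
)

def is_loaded(question: str) -> bool:
    """Crude check: does the question embed an assumption or fish for agreement?"""
    q = question.lower()
    return any(marker in q for marker in LOADED_MARKERS)

def suggest_neutral(topic: str) -> str:
    """Rephrase as a neutral, fact-first question about the topic."""
    return f"What are the known facts about {topic}, and how common is each scenario?"

loaded = "I think this might be a serious problem - what do you think?"
print(is_loaded(loaded))              # True: it leads with my own conclusion
print(suggest_neutral("this error"))  # a neutral rewording of the same concern
```

The point is not to run a checker before every chat, but that the difference between the two phrasings is mechanical enough that you can catch it yourself before hitting send.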

That said, this approach likely reflects my own personality—I generally want objective advice when I consult someone. If, on the other hand, what you want is empathy or encouragement rather than analysis, this way of using AI may not apply to you.


Separate Emotional Processing from Fact-Finding

The second thing I try to keep in mind is not attempting to process emotions and verify facts at the same time. To put it bluntly, I believe it works best to use AI under the assumption that it is not there to provide emotional care.

As mentioned earlier, AI is cooperative, friendly, and speaks in a natural way. Because of this, it’s easy to mistake it for a close friend. However, AI does not “understand” emotions. It can respond to emotional expressions on a surface level, but it is ultimately processing language, not sensing someone’s emotional state or reading the room.

There are times when people ask questions without actually caring about the answer. Consciously or unconsciously, what they really want is empathy—to have their feelings acknowledged. In such cases, receiving a clear answer is secondary.

This kind of emotional support occurs naturally in human relationships, but it is not an area where AI excels. On the other hand, AI is extremely strong at tasks such as conducting research, presenting high-probability scenarios, and handling a certain volume of work at a consistently high standard.

When something is bothering me, I now try to pause and ask myself:

  • “Am I trying to confirm facts right now?”
  • “Or am I still emotionally unsettled?”

Based on that, I decide whether to consult AI—or whether to step away and do something else instead. Simply separating these two processes has made my relationship with AI much more comfortable.


Treat AI’s Answers as Input—and Make the Final Decision Yourself

The final principle I try to follow is treating AI’s answers not as conclusions, but as input for my own judgment.

AI can organize vast amounts of information and present possibilities, options, and conditions with a level of thoroughness that an individual could never match. That said, deciding which information matters most, and how much responsibility to take on personally, is ultimately up to the individual.

From personal experience, I’ve noticed that when I’m tired, irritated, or especially anxious, I sometimes consult AI and end up feeling even worse if things don’t improve. Looking back, I realize that in those moments, I may have been unconsciously blaming the AI for not solving something that couldn’t be resolved that way.

AI has access to immense information and tools, but it cannot directly determine what I want from my life or what I value most. It cannot live my life for me. In the end, I am the one responsible for shaping it.

I often think of AI as a very high-performance car. Getting into an AI-powered car doesn’t mean it will automatically take me to exactly where I want to go, at the perfect time, without any input from me.

I still need to decide:

  • where I want to go,
  • when I want to arrive, and
  • what speed or route feels comfortable for me.

If I clarify those things, the AI-car can suggest routes, avoid traffic, and make the journey much easier. But the person holding the steering wheel and choosing the direction is always me. I believe it’s important to keep that in mind when using AI.


Conclusion

Not long ago, ChatGPT reached version 5.2, while Claude advanced to 4.5—and now versions like 5.3 and 4.6 are already being discussed. The pace of AI development is astonishing, and simply keeping up can feel overwhelming. Gemini and Perplexity also seem to become easier to use every time I open them, which continually impresses me.

With each update, AI chatbots grow more powerful, and I often find myself thinking, “They were already useful enough.”

At the same time, as AI improves, it’s easy to start believing—almost unconsciously—that AI might be a kind of all-powerful solution that can fulfill whatever we want.

In reality, though, AI feels less like a source of definitive answers and more like a tool that strongly reflects the user’s state of mind. When used while anxious, it tends to surface anxiety-aligned information; when used while unprepared, the results often reflect that lack of preparation.

In that sense, AI may function like a mirror. What it gives back depends heavily on the questions we ask and the state we’re in when we ask them.

Rather than being swept along by the rapid evolution of AI, maintaining a healthy distance allows it to remain what it should be: not a source of anxiety, but a reliable tool for thinking more clearly.

That, at least, is how I see it now.
