Konark Sharma
I Know It’s AI, But It Still Feels Real

Lately, I’ve been thinking about how we talk to AI.

Not just for code or answers, but for understanding, for comfort, for something that feels a little more human.

And that thought led me here.

Can LLMs finally understand emotions?

I recently came across Anthropic's latest research on LLMs and emotions, and it surprised me. It feels like this could change how LLMs respond to us. But does that mean psychiatrists and therapists are out of a job? Yes and no.

LLMs still don’t understand emotions the way humans do.

If someone scolds me for losing my favorite thing, I’ll feel angry and sad. That doesn’t happen with LLMs. They are still machines.

What has changed is their ability to understand patterns of emotions.

Because they are trained on a vast amount of human-written text like fiction, conversations, news, and forums, they start picking up how emotions are expressed.

This doesn’t mean they truly feel emotions.

But they are getting better at recognizing and responding to them, step by step.

What does this look like in practice?

If I tell an LLM: “I failed my exam.”

With an emotional pattern active, it might respond: “I’m sorry, that must feel really hard.”

Without that pattern, it might simply say: “You failed your exam.”

This creates a new kind of interaction: responses that feel emotionally aware.

Not because the model feels something, but because it has learned what that kind of response should look like.
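To make the contrast concrete, here is a toy sketch in Python. This is not how an LLM works internally; it is a deliberately simple rule-based responder where the cue words and replies are invented, just to show the difference between a literal reply and one shaped by recognized emotional cues.

```python
# Toy illustration only: a rule-based responder, not a real LLM.
# The cue words and canned replies below are invented for this example.

EMOTION_CUES = {"failed", "lost", "sad", "alone", "scared"}

def respond(message: str, emotion_aware: bool) -> str:
    # Normalize words by stripping punctuation and lowercasing.
    words = {w.strip(".,!?").lower() for w in message.split()}
    if emotion_aware and words & EMOTION_CUES:
        # "Emotional pattern active": acknowledge the feeling.
        return "I'm sorry, that must feel really hard."
    # Literal mode: just restate the fact.
    return message

print(respond("I failed my exam.", emotion_aware=False))  # I failed my exam.
print(respond("I failed my exam.", emotion_aware=True))   # I'm sorry, that must feel really hard.
```

A real model learns these associations statistically from text rather than from hand-written rules, but the observable effect on the reply is the same kind of shift.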

What’s actually happening?

The LLM is not feeling emotions. It is predicting them. During generation, it leans toward responses that match emotional patterns it has learned.

So instead of just predicting correct words, it predicts words that also fit the emotional context.

That small shift changes everything.
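The shift can be pictured as a change in the next-word probability distribution. The numbers below are made up purely for illustration; a real model assigns probabilities over tens of thousands of tokens, but the idea is the same: emotional context moves probability mass toward emotionally fitting words.

```python
# Illustrative sketch with invented probabilities: conditioning on
# emotional context shifts which next word the model prefers.

neutral_context = {"You": 0.40, "Okay.": 0.35, "Sorry": 0.25}
emotional_context = {"You": 0.15, "Okay.": 0.15, "Sorry": 0.70}

def next_word(dist: dict[str, float]) -> str:
    # Greedy decoding: pick the highest-probability candidate.
    return max(dist, key=dist.get)

print(next_word(neutral_context))    # You
print(next_word(emotional_context))  # Sorry
```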

What does the future hold for us?

I feel LLMs are becoming more and more advanced. But in some ways, this might also make us more dependent. We already rely on people in our lives to share emotions, to feel understood, to be comforted.

If LLMs become really good at this, we might start replacing those human connections.

We all have someone we talk to. Someone who listens, understands, and comforts us. Now imagine an AI that can do this perfectly every time.

Since these models keep improving, they will get better at predicting exactly what to say to make someone feel better. With voice interactions, it could feel even more real.

Like talking to someone who always understands you.

What’s happening behind the scenes?

The more emotional data LLMs learn from, the better they become at recognizing patterns. They don’t feel. But they get better at predicting emotional responses.

Because they already have context about what we say and how we say it, their responses can feel very personal.

Over time, it might become harder to distinguish whether you’re talking to a human or an AI.

Should we fear it?

Maybe yes. Maybe no.

But one thing is clear: LLMs are becoming more advanced, and their behavior is becoming more human-like.

First, it was intelligence. Now it’s emotions.

What’s next?


Maybe the real question is not whether AI understands emotions.

But whether we start treating it like it does. Because the moment something responds in a way that feels right, we start trusting it.

We start sharing more. We start depending on it. Not because it feels. But because it responds like it does.

And that might be enough.

Should AI be able to simulate emotions this well? Or should there always be a clear line between human and machine?
