Sang
Building an AI Chatbot That Learns From Human Edits (Not Just Feedback)

AI is getting smarter every day.

But somehow, it still feels… a little empty.

  • It can answer questions.
  • It can summarize books.
  • It can even write code better than most beginners.

And yet, when you’re having a rough day and ask something personal, the response often feels slightly off. Technically correct, emotionally distant.

So I started wondering: what if the problem isn’t the model, but how we train it?

The Problem: Intelligence Isn’t Equal to Empathy

Modern AI is trained on massive datasets and refined through techniques like reinforcement learning from human feedback (RLHF).

But there’s a subtle issue.

Most feedback systems are optimized for:

  • correctness
  • safety
  • general usefulness

Not necessarily for:

  • emotional nuance
  • relatability
  • “this feels right”

In other words, we’re training AI to be right, not to be understood.

And those are very different things.

A Different Thought

Instead of asking, “Is this answer correct?”

What if we asked, “Does this answer feel right to people?”

And more importantly: “What if people could directly rewrite AI responses, not just rate them?”

The Experiment: Letting Humans Edit AI

So I built a small experiment called Crowdians.

Here’s the idea:

  • You chat with your own AI character.
  • If you don’t like a response, you send it to an “Academy”.
  • Other users rewrite the answer.
  • The community votes on the best version.
  • The system collects these refined, human-approved responses.

Instead of just collecting feedback, we collect better answers.

Not “this is bad” — but “this is better.”
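The Academy loop above can be sketched as a small data model. This is not the actual Crowdians implementation, just a minimal illustration of the idea; every name here (`AcademyEntry`, `submit_rewrite`, `best`) is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Rewrite:
    """One community-submitted rewrite of an AI response."""
    author: str
    text: str
    votes: int = 0

@dataclass
class AcademyEntry:
    """A response a user disliked and sent to the 'Academy'."""
    prompt: str
    original_response: str
    rewrites: list = field(default_factory=list)

    def submit_rewrite(self, author: str, text: str) -> None:
        # Other users contribute a better version of the answer.
        self.rewrites.append(Rewrite(author, text))

    def vote(self, index: int) -> None:
        # The community upvotes the rewrite it finds most fitting.
        self.rewrites[index].votes += 1

    def best(self) -> str:
        # The community-approved version: the highest-voted rewrite,
        # falling back to the original if nobody submitted one.
        if not self.rewrites:
            return self.original_response
        return max(self.rewrites, key=lambda r: r.votes).text
```

The key design choice, under these assumptions, is that the stored artifact is a full replacement answer rather than a score: the system keeps the best rewrite itself, not just a rating of the original.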

Why This Might Matter

Most AI systems learn from:

  • labeled datasets
  • ranking signals
  • passive feedback

But Crowdians explores something slightly different: active co-creation of responses.

It treats users not just as evaluators, but as contributors.

Almost like Wikipedia, but for AI responses. Or GitHub, but for human empathy.

The assumption is simple: good answers aren’t just generated. They’re refined.
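If the collected edits were ever used for training, one natural shape for them is preference pairs: the model’s original answer as the rejected sample, the top-voted rewrite as the chosen one. This is the input format used by preference-tuning methods such as DPO. The post does not say Crowdians does this; the sketch below, with hypothetical field names, just shows how edit data maps onto that format:

```python
def to_preference_pairs(records):
    """Convert Academy-style records into (prompt, chosen, rejected)
    triples, the format expected by preference-tuning pipelines.

    Each record is assumed to look like:
      {"prompt": str, "original": str,
       "rewrites": [{"text": str, "votes": int}, ...]}
    """
    pairs = []
    for rec in records:
        if not rec["rewrites"]:
            continue  # nobody edited this one, so there is no signal
        best = max(rec["rewrites"], key=lambda r: r["votes"])
        pairs.append({
            "prompt": rec["prompt"],
            "chosen": best["text"],       # community-approved rewrite
            "rejected": rec["original"],  # the model's first attempt
        })
    return pairs
```

Note that this captures “this is better” rather than “this is bad”: every pair carries a concrete improved answer, not just a downvote.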

Why I’m Sharing This

I don’t know if this is the right approach.

  • It might be inefficient.
  • It might not scale.
  • It might completely fail.

But it also feels like something worth exploring.

Because right now, AI is incredibly capable — but still lacks something deeply human.

And maybe that missing piece can’t just be trained.

Maybe it has to be collaborated on.

I’d Love Your Thoughts

I’m curious what you think about this idea.

  • Would you trust human-edited AI responses more than model-generated ones?
  • Can empathy even be crowdsourced?
  • Is this direction promising, or fundamentally flawed?

If you want to try it out and break it (please do), here it is:
Crowdians

Any feedback, criticism, or wild ideas are more than welcome.
