
Moin Shaikh
I tried turning messy product signals into AI decisions. Here’s what broke.

I’ve been exploring a simple idea:

Can messy, real-world product signals be turned into structured AI decisions?

Not dashboards. Not reports.

Actual decisions.

So I started building small systems around this.

Things like support signal triage, a recall monitoring experiment (which I'm currently calling Recall Radar), and pattern detection across product feedback.

Nothing fancy. Just trying to move from noise → signal → decision.

And very quickly, things started breaking.


1. The input is never clean

In theory, “signals” sound structured.

In reality, they look like:

  • vague complaints
  • partial context
  • emotional reactions
  • duplicated issues
  • completely unrelated noise

Even before AI comes in, the first problem is:

What exactly is a “signal”?

Here’s what incoming signals actually look like:

*(Image: messy product feedback signals)*

Different sources. Different tones. Different intents.

Nothing is structured. Nothing is consistent. And everything overlaps.
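The first thing that helped was forcing every input into one minimal shape before any AI touches it. A sketch of what I mean — the field names here are my own assumptions, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class Signal:
    """Minimal normalized shape for any incoming product signal."""
    source: str    # e.g. "support", "review", "sales note"
    raw_text: str  # the original, untouched message
    tags: list = field(default_factory=list)  # filled later, not at intake

def normalize(source: str, raw_text: str) -> Signal:
    # Trim whitespace but deliberately keep the messy text intact —
    # cleaning too early destroys context a classifier may need.
    return Signal(source=source.strip().lower(), raw_text=raw_text.strip())

s = normalize("Support", "  App crashes on export, thinking of cancelling  ")
```

The point isn't the dataclass. It's that "signal" has to be defined *somewhere*, explicitly, before classification can mean anything.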


2. Classification sounds easy. It isn’t.

You think you can just label things:

  • Bug
  • Feature request
  • Churn risk
  • Feedback

But real signals don’t behave like that.

A single message can be:

  • frustration
  • feature gap
  • churn risk

All at once.

So now the system has to decide:

What matters more?

At some point, you force structure into something like this:

*(Image: signal triage system output)*

On the surface, it looks clean:

  • signals are categorized
  • priorities are assigned
  • actions are recommended

But underneath, ambiguity doesn’t go away.

You’re just making a decision about it.
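The overlap problem pushed me toward multi-label outputs plus an explicit tie-break rule, rather than forcing a single category. A minimal sketch — the categories and the priority ordering are illustrative, not something I'd call final:

```python
from dataclasses import dataclass

CATEGORIES = ["bug", "feature_request", "churn_risk", "feedback"]

# Illustrative ordering: which label wins when one message carries several.
PRIORITY_ORDER = ["churn_risk", "bug", "feature_request", "feedback"]

@dataclass
class Triage:
    labels: list   # every label that applies — not just one
    primary: str   # the label the system decided matters most
    action: str    # recommended next step

def triage(labels: list) -> Triage:
    valid = [l for l in labels if l in CATEGORIES]
    # The ambiguity doesn't go away — we just pick a rule and commit to it.
    primary = next(l for l in PRIORITY_ORDER if l in valid)
    action = {
        "churn_risk": "escalate to account owner",
        "bug": "file and reproduce",
        "feature_request": "add to discovery backlog",
        "feedback": "log for trend analysis",
    }[primary]
    return Triage(labels=valid, primary=primary, action=action)

t = triage(["feature_request", "churn_risk"])  # one message, two labels
```

Notice the tie-break is just a hardcoded ordering. That's the honest version of "priorities are assigned": a human decision, encoded once, applied everywhere.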


3. AI works great in isolation

If you test prompts in isolation, things look promising:

  • clean inputs
  • clear instructions
  • predictable outputs

But once you plug it into a workflow:

  • context is missing
  • inputs are inconsistent
  • outputs become unstable

What looked like “intelligence” starts looking like:

pattern matching with confidence


4. AI doesn’t fix messy systems

This was the biggest shift for me:

AI doesn’t clean up bad structure.

It amplifies it.

If your signal layer is weak:

  • AI won’t create clarity
  • it will create more noise, faster

5. Classification is not a decision

Even if you classify signals correctly, you still don’t have:

  • priority
  • business impact
  • timing
  • trade-offs

Which means:

classification ≠ decision

That gap is where most “AI workflows” break.
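That gap is what a decision layer has to fill. One hedged sketch: combine the label with business context the classifier never sees, so the trade-off is at least explicit. The weights here are made up for illustration:

```python
def decision_score(label: str, accounts_affected: int,
                   days_until_renewal: int) -> float:
    """Turn a classified signal into something rankable.
    Weights are illustrative — the point is that priority needs
    business context (impact, timing) that classification alone lacks."""
    label_weight = {
        "churn_risk": 3.0,
        "bug": 2.0,
        "feature_request": 1.0,
        "feedback": 0.5,
    }.get(label, 0.5)
    urgency = 1.0 / max(days_until_renewal, 1)  # closer renewal → more urgent
    return label_weight * accounts_affected * (1 + urgency)

# Same label, very different decisions once context is attached:
near = decision_score("churn_risk", accounts_affected=5, days_until_renewal=7)
far = decision_score("churn_risk", accounts_affected=5, days_until_renewal=90)
```

Two identical "churn risk" labels can deserve completely different responses. That's the difference between a classification and a decision.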


Where this leaves me

I’m still exploring this space through small systems and experiments.

Right now, the direction looks like:

  • less focus on “AI features”
  • more focus on signal design
  • treating AI as a layer, not the system itself

Context (for those curious)

These experiments are fairly lightweight but grounded in real workflows.

Mostly working with:

  • LLM-based classification (prompt-driven)
  • lightweight orchestration for signal routing
  • structured outputs for prioritization and tracking

Nothing complex. The challenge hasn’t been the tech.

It’s been defining the structure around it.
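For the orchestration piece, "lightweight" really does mean something like a dict of routes and nothing more. A sketch under that assumption — the destinations are placeholders, not real integrations:

```python
# Hypothetical routing table: primary label -> destination queue.
ROUTES = {
    "churn_risk": "crm_queue",
    "bug": "issue_tracker",
    "feature_request": "product_backlog",
    "feedback": "insights_log",
}

def route(primary_label: str) -> str:
    # Unknown labels fall through to a holding queue instead of failing —
    # inconsistent classifier output is the normal case, not the exception.
    return ROUTES.get(primary_label, "review_queue")
```

Even this tiny routing layer encodes a structural decision: what happens to signals the model mislabels. That default branch is where most of the real design work has been.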

Open question

If you’re working on AI workflows or product systems:

How are you defining and structuring “signals” before they ever reach AI?

Because that seems to matter more than the model itself.
