<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Moin Shaikh</title>
    <description>The latest articles on DEV Community by Moin Shaikh (@moingshaikh).</description>
    <link>https://dev.to/moingshaikh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F4651%2Fa6c1f858-f0c0-467d-a5e2-c2582fdba745.jpeg</url>
      <title>DEV Community: Moin Shaikh</title>
      <link>https://dev.to/moingshaikh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/moingshaikh"/>
    <language>en</language>
    <item>
      <title>I tried turning messy product signals into AI decisions. Here’s what broke.</title>
      <dc:creator>Moin Shaikh</dc:creator>
      <pubDate>Tue, 31 Mar 2026 16:49:40 +0000</pubDate>
      <link>https://dev.to/moingshaikh/i-tried-turning-messy-product-signals-into-ai-decisions-heres-what-broke-18g8</link>
      <guid>https://dev.to/moingshaikh/i-tried-turning-messy-product-signals-into-ai-decisions-heres-what-broke-18g8</guid>
      <description>&lt;p&gt;I’ve been exploring a simple idea:&lt;/p&gt;

&lt;p&gt;Can messy, real-world product signals be turned into structured AI decisions?&lt;/p&gt;

&lt;p&gt;Not dashboards. Not reports.&lt;br&gt;&lt;br&gt;
Actual decisions.&lt;/p&gt;

&lt;p&gt;So I started building small systems around this.&lt;/p&gt;

&lt;p&gt;Things like support signal triage, a recall monitoring experiment (currently called Recall Radar), and pattern detection across product feedback.&lt;/p&gt;

&lt;p&gt;Nothing fancy. Just trying to move from noise → signal → decision.&lt;/p&gt;

&lt;p&gt;And very quickly, things started breaking.&lt;/p&gt;




&lt;h2&gt;1. The input is never clean&lt;/h2&gt;

&lt;p&gt;In theory, “signals” sound structured.&lt;/p&gt;

&lt;p&gt;In reality, they look like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;vague complaints
&lt;/li&gt;
&lt;li&gt;partial context
&lt;/li&gt;
&lt;li&gt;emotional reactions
&lt;/li&gt;
&lt;li&gt;duplicated issues
&lt;/li&gt;
&lt;li&gt;completely unrelated noise
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even before AI comes in, the first problem is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What exactly is a “signal”?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here’s what incoming signals actually look like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmh6p87yj5vzr7k7aflt4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmh6p87yj5vzr7k7aflt4.png" alt="Messy product feedback signals" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Different sources. Different tones. Different intents.&lt;/p&gt;

&lt;p&gt;Nothing is structured. Nothing is consistent. And everything overlaps.&lt;/p&gt;
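&lt;p&gt;A minimal sketch of what “defining a signal” ends up meaning in practice: forcing every source into one shape before anything touches AI. The field names and source keys below are illustrative assumptions, not a real schema.&lt;/p&gt;

```python
# Sketch: normalize messy, source-specific payloads into one Signal shape.
# Field names and per-source keys are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Signal:
    source: str   # "support", "review", "social", ...
    text: str     # the raw message, untouched apart from whitespace
    tags: list = field(default_factory=list)  # filled in later by triage

def normalize(source, payload):
    # Each source hides the message under a different key, so the first
    # step is mostly just knowing where to look.
    key_by_source = {"support": "body", "review": "comment", "social": "text"}
    text = payload.get(key_by_source.get(source, "text"), "")
    return Signal(source=source, text=text.strip())

s = normalize("support", {"body": "  App crashes every time I export!  "})
print(s)
```

&lt;p&gt;Even this tiny step is a decision: everything that doesn’t fit the shape is either coerced or dropped.&lt;/p&gt;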




&lt;h2&gt;2. Classification sounds easy. It isn’t.&lt;/h2&gt;

&lt;p&gt;You think you can just label things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bug
&lt;/li&gt;
&lt;li&gt;Feature request
&lt;/li&gt;
&lt;li&gt;Churn risk
&lt;/li&gt;
&lt;li&gt;Feedback
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But real signals don’t behave like that.&lt;/p&gt;

&lt;p&gt;A single message can be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;frustration
&lt;/li&gt;
&lt;li&gt;feature gap
&lt;/li&gt;
&lt;li&gt;churn risk
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All at once.&lt;/p&gt;

&lt;p&gt;So now the system has to decide:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What matters more?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At some point, you force the mess into a structure, and you end up with something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdavkxg48oym178kjpkj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdavkxg48oym178kjpkj.png" alt="Signal triage system" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the surface, it looks clean:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;signals are categorized
&lt;/li&gt;
&lt;li&gt;priorities are assigned
&lt;/li&gt;
&lt;li&gt;actions are recommended
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But underneath, ambiguity doesn’t go away.&lt;/p&gt;

&lt;p&gt;You’re just making a decision about it.&lt;/p&gt;
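&lt;p&gt;The “what matters more?” step can be sketched as a fixed precedence order over labels: a message keeps all its labels, but one of them wins and drives the action. The labels and their ranking here are my assumptions, not a validated taxonomy.&lt;/p&gt;

```python
# Sketch: a message carries several labels at once, so a precedence
# order decides which one drives the action. The ranking is invented.
PRECEDENCE = ["churn_risk", "bug", "feature_gap", "feedback"]

def resolve(labels):
    # Sort by precedence; unknown labels sink to the bottom instead of
    # crashing the triage step.
    ranked = sorted(
        labels,
        key=lambda l: PRECEDENCE.index(l) if l in PRECEDENCE else len(PRECEDENCE),
    )
    # The winner becomes the primary label; the rest stay as context.
    return ranked[0], ranked[1:]

primary, context = resolve(["feature_gap", "churn_risk"])
print(primary, context)  # churn_risk ['feature_gap']
```

&lt;p&gt;Note what this does: it doesn’t remove the ambiguity. It just encodes one opinion about it.&lt;/p&gt;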




&lt;h2&gt;3. AI works great in isolation&lt;/h2&gt;

&lt;p&gt;If you test prompts in isolation, things look promising:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;clean inputs
&lt;/li&gt;
&lt;li&gt;clear instructions
&lt;/li&gt;
&lt;li&gt;predictable outputs
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But once you plug it into a workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;context is missing
&lt;/li&gt;
&lt;li&gt;inputs are inconsistent
&lt;/li&gt;
&lt;li&gt;outputs become unstable
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What looked like “intelligence” starts looking like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;pattern matching with confidence&lt;/p&gt;
&lt;/blockquote&gt;
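&lt;p&gt;Concretely, “unstable outputs” meant I ended up writing defenses against my own model: validate the shape of every response and fall back when it drifts. A sketch of that guard, where the expected fields and categories are my assumptions rather than a real contract:&lt;/p&gt;

```python
# Sketch: never trust the model's output shape inside a workflow.
# The JSON contract and allowed categories are invented for illustration.
import json

ALLOWED = {"bug", "feature_request", "churn_risk", "feedback"}

def parse_classification(raw):
    # The model's reply arrives as text; it may or may not be the JSON
    # we asked for, with a category we actually allow.
    try:
        data = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return {"category": "unclassified", "valid": False}
    category = data.get("category")
    if category in ALLOWED:
        return {"category": category, "valid": True}
    return {"category": "unclassified", "valid": False}

print(parse_classification('{"category": "bug"}'))        # valid JSON, allowed label
print(parse_classification("Sure! The category is: bug"))  # chatty text, rejected
```

&lt;p&gt;In isolated prompt testing, this guard almost never fires. In a real workflow, it fires constantly.&lt;/p&gt;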




&lt;h2&gt;4. AI doesn’t fix messy systems&lt;/h2&gt;

&lt;p&gt;This was the biggest shift for me:&lt;/p&gt;

&lt;p&gt;AI doesn’t clean up bad structure.&lt;br&gt;&lt;br&gt;
It amplifies it.&lt;/p&gt;

&lt;p&gt;If your signal layer is weak:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI won’t create clarity
&lt;/li&gt;
&lt;li&gt;it will create more noise, faster
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;5. Classification is not a decision&lt;/h2&gt;

&lt;p&gt;Even if you classify signals correctly, you still don’t have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;priority
&lt;/li&gt;
&lt;li&gt;business impact
&lt;/li&gt;
&lt;li&gt;timing
&lt;/li&gt;
&lt;li&gt;trade-offs
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Which means:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;classification ≠ decision&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That gap is where most “AI workflows” break.&lt;/p&gt;
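&lt;p&gt;The gap can be made concrete: a correct label is one input to a decision, not the decision itself. A sketch of a decision layer that combines the label with impact and timing; every field, weight, and threshold below is invented to make the point, not a tuned number.&lt;/p&gt;

```python
# Sketch: label + business impact + timing produce a decision.
# All weights, fields, and thresholds are illustrative assumptions.
def decide(signal):
    label_weight = {"churn_risk": 3.0, "bug": 2.0, "feature_gap": 1.0}
    score = label_weight.get(signal["label"], 0.5) * signal["affected_users"]
    if signal["paying_customer"]:
        score = score * 2.0   # business impact
    if signal["days_old"] > 14:
        score = score * 0.5   # stale signals decay; timing matters
    action = "act_now" if score >= 10 else "backlog"
    return score, action

score, action = decide({
    "label": "churn_risk", "affected_users": 4,
    "paying_customer": True, "days_old": 2,
})
print(score, action)  # 24.0 act_now
```

&lt;p&gt;Notice that the classifier’s output is one line out of several. The rest is product judgment, encoded.&lt;/p&gt;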




&lt;h2&gt;Where this leaves me&lt;/h2&gt;

&lt;p&gt;I’m still exploring this space through small systems and experiments.&lt;/p&gt;

&lt;p&gt;Right now, the direction looks like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;less focus on “AI features”
&lt;/li&gt;
&lt;li&gt;more focus on signal design
&lt;/li&gt;
&lt;li&gt;treating AI as a layer, not the system itself
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Context (for those curious)&lt;/h2&gt;

&lt;p&gt;These experiments are fairly lightweight but grounded in real workflows.&lt;/p&gt;

&lt;p&gt;Mostly working with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LLM-based classification (prompt-driven)&lt;/li&gt;
&lt;li&gt;lightweight orchestration for signal routing&lt;/li&gt;
&lt;li&gt;structured outputs for prioritization and tracking
&lt;/li&gt;
&lt;/ul&gt;
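&lt;p&gt;For scale: “lightweight orchestration” here means something closer to a lookup table than a framework. A sketch, where the destinations are placeholders for whatever tracker or queue you actually use:&lt;/p&gt;

```python
# Sketch: after classification, routing is a lookup, not a framework.
# Destination names are placeholders, not real systems.
ROUTES = {
    "bug": "engineering_queue",
    "feature_request": "product_backlog",
    "churn_risk": "customer_success",
}

def route(classified_signal):
    # Unrecognized categories go to a human, not a silent default bucket.
    return ROUTES.get(classified_signal["category"], "manual_review")

print(route({"category": "bug"}))            # engineering_queue
print(route({"category": "something_odd"}))  # manual_review
```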

&lt;p&gt;Nothing complex. The challenge hasn’t been the tech.&lt;/p&gt;

&lt;p&gt;It’s been defining the structure around it.&lt;/p&gt;

&lt;h2&gt;Open question&lt;/h2&gt;

&lt;p&gt;If you’re working on AI workflows or product systems:&lt;/p&gt;

&lt;p&gt;How are you defining and structuring “signals” before they ever reach AI?&lt;/p&gt;

&lt;p&gt;Because that seems to matter more than the model itself.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>product</category>
      <category>systems</category>
    </item>
  </channel>
</rss>
