<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Shreya Raghav</title>
    <description>The latest articles on DEV Community by Shreya Raghav (@shreyaaraghav).</description>
    <link>https://dev.to/shreyaaraghav</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3715940%2F5154005a-caf6-42f6-94ae-dfa219ea2732.png</url>
      <title>DEV Community: Shreya Raghav</title>
      <link>https://dev.to/shreyaaraghav</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/shreyaaraghav"/>
    <language>en</language>
    <item>
      <title>ML Without Blind Faith: Systems, Constraints, and Why “Just Using AI or ML” Often Fails</title>
      <dc:creator>Shreya Raghav</dc:creator>
      <pubDate>Sat, 17 Jan 2026 18:34:09 +0000</pubDate>
      <link>https://dev.to/shreyaaraghav/ml-without-blind-faithsystems-constraints-and-why-just-using-ai-or-ml-often-fails-3ffd</link>
      <guid>https://dev.to/shreyaaraghav/ml-without-blind-faithsystems-constraints-and-why-just-using-ai-or-ml-often-fails-3ffd</guid>
      <description>&lt;p&gt;&lt;strong&gt;Artificial Intelligence&lt;/strong&gt; and &lt;strong&gt;Machine Learning&lt;/strong&gt; are powerful.&lt;br&gt;
But they are not &lt;em&gt;magic&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Somewhere between “just add AI” product pitches, demo-ready chatbots, and accuracy charts, we forgot something fundamental:&lt;/p&gt;

&lt;p&gt;AI and ML only work when the &lt;em&gt;system around&lt;/em&gt; them makes sense.&lt;/p&gt;

&lt;p&gt;This article is not anti-AI.&lt;br&gt;
It is anti-blind-faith.&lt;/p&gt;

&lt;p&gt;Using real project experiences, I’ll explain when AI/ML actually help, when they silently break systems, and why intelligence without structure is dangerous.&lt;/p&gt;




&lt;h2&gt;The Myth: “If We Add AI, the Product Becomes Smart”&lt;/h2&gt;

&lt;p&gt;A pattern I see often:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There is a problem&lt;/li&gt;
&lt;li&gt;Someone suggests AI&lt;/li&gt;
&lt;li&gt;A model is plugged in&lt;/li&gt;
&lt;li&gt;The demo looks impressive&lt;/li&gt;
&lt;li&gt;Real users get confused or misled&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why does this happen?&lt;/p&gt;

&lt;p&gt;Because AI is &lt;strong&gt;not intelligence&lt;/strong&gt; by default.&lt;br&gt;
It is pattern amplification.&lt;/p&gt;

&lt;p&gt;Without constraints, reasoning, and design, AI systems become confident, wrong, and untrustworthy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvijam6uycr35gjvd253c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvijam6uycr35gjvd253c.png" alt="AI ≠ Brain" width="756" height="461"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;Case 1: Telemetry Analysis — Why ML Cannot Come First&lt;/h2&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/ShreyaaRaghav/telemetry-analysis-with-report" rel="noopener noreferrer"&gt;https://github.com/ShreyaaRaghav/telemetry-analysis-with-report&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I worked on a Formula 1 telemetry analysis project, using race data like speed, RPM, braking, throttle, and DRS.&lt;/p&gt;

&lt;p&gt;At first glance, this feels like a pure ML problem:&lt;/p&gt;

&lt;p&gt;“Predict lap time or performance using a model.”&lt;/p&gt;

&lt;p&gt;But telemetry data is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Noisy&lt;/li&gt;
&lt;li&gt;High-frequency&lt;/li&gt;
&lt;li&gt;Context-sensitive&lt;/li&gt;
&lt;li&gt;Governed by vehicle physics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you directly do:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;model.fit(X_telemetry, lap_time)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You get &lt;strong&gt;predictions&lt;/strong&gt; — but no understanding.&lt;/p&gt;

&lt;p&gt;Instead, the system had to be designed before ML:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Acceleration derived from speed&lt;/li&gt;
&lt;li&gt;Braking intensity isolated&lt;/li&gt;
&lt;li&gt;Stints separated to reduce noise from changing race conditions&lt;/li&gt;
&lt;li&gt;Reasoning documented using vehicle dynamics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Only after this structuring did ML or statistical analysis make sense.&lt;/p&gt;

&lt;p&gt;ML was useful because the system was intelligent first.&lt;/p&gt;
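&lt;p&gt;To make that concrete, here is a minimal sketch of part of that structuring step, assuming a pandas DataFrame of telemetry samples. The column names (&lt;code&gt;Time&lt;/code&gt;, &lt;code&gt;Speed&lt;/code&gt;, &lt;code&gt;Brake&lt;/code&gt;) are illustrative assumptions, not the project’s exact schema:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import pandas as pd

def add_physics_features(df: pd.DataFrame) -&gt; pd.DataFrame:
    """Derive physics-aware features before any model sees the data."""
    df = df.sort_values("Time").copy()

    # Acceleration derived from speed: finite difference over time
    # (assumes "Time" is elapsed seconds and "Speed" is km/h)
    df["Accel"] = df["Speed"].diff() / df["Time"].diff()

    # Braking intensity isolated: deceleration only while the brake is applied
    braking = df["Brake"] &gt; 0
    df["BrakeIntensity"] = (-df["Accel"]).clip(lower=0).where(braking, 0.0)

    return df.dropna()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Stint separation follows the same pattern: group the frame by stint before computing statistics, so laps are compared under comparable race conditions.&lt;/p&gt;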

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2frtyj2kdav0io69scs2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2frtyj2kdav0io69scs2.png" alt="ML After Reasoning" width="758" height="547"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;Case 2: Contexto — When AI Semantics Break Human Intuition&lt;/h2&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/ShreyaaRaghav/semantic-word-game" rel="noopener noreferrer"&gt;https://github.com/ShreyaaRaghav/semantic-word-game&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Contexto, a semantic word-guessing game, the system ranks guesses by &lt;strong&gt;meaning&lt;/strong&gt;, not spelling.&lt;/p&gt;

&lt;p&gt;This is clearly an &lt;strong&gt;AI problem&lt;/strong&gt;, not just ML:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Semantic understanding&lt;/li&gt;
&lt;li&gt;Language representation&lt;/li&gt;
&lt;li&gt;Human perception of similarity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The core logic was simple:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;similarity = cosine_similarity(guess_embedding, target_embedding)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;But reality was &lt;em&gt;not&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Problems appeared immediately:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;High similarity scores felt “wrong” to users&lt;/li&gt;
&lt;li&gt;Different embedding models behaved inconsistently&lt;/li&gt;
&lt;li&gt;Rare words broke feedback logic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This wasn’t a model issue.&lt;/p&gt;

&lt;p&gt;It was a human-AI alignment issue.&lt;/p&gt;

&lt;p&gt;Fixing it required:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Comparing static embeddings (GloVe) vs contextual ones (Sentence-BERT)&lt;/li&gt;
&lt;li&gt;Calibrating similarity ranges for human intuition&lt;/li&gt;
&lt;li&gt;Designing AI feedback logic, not just computing scores&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AI did not fail.&lt;br&gt;
The system design around the AI did.&lt;/p&gt;
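&lt;p&gt;As a rough sketch of that fix, assuming the &lt;code&gt;sentence-transformers&lt;/code&gt; library is available; the model name and the rank-based feedback below are illustrative choices, not the game’s exact logic:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative model; the project compared GloVe against Sentence-BERT
model = SentenceTransformer("all-MiniLM-L6-v2")

def cosine(a: np.ndarray, b: np.ndarray) -&gt; float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_guess(guess: str, target: str, vocab: list[str]) -&gt; int:
    """Return a rank against a reference vocabulary instead of a raw
    cosine score, which rarely matches human intuition on its own."""
    target_vec = model.encode(target)
    guess_sim = cosine(model.encode(guess), target_vec)
    vocab_sims = (cosine(model.encode(w), target_vec) for w in vocab)
    # Rank 1 = the closest word the system knows; larger = further away
    return 1 + sum(s &gt; guess_sim for s in vocab_sims)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Reporting a rank rather than the cosine itself is one way to bridge embedding space and human intuition: players compare positions, not floating-point scores.&lt;/p&gt;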

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ggppuf4vn8nn07retx6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ggppuf4vn8nn07retx6.png" alt="Embedding Space vs Human Intuition" width="756" height="454"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;Case 3: Verbatim — Why AI Cannot Be Trusted Blindly&lt;/h2&gt;

&lt;p&gt;If Verbatim were a purely AI-driven document &lt;strong&gt;simplification&lt;/strong&gt; platform for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Medical&lt;/li&gt;
&lt;li&gt;Legal&lt;/li&gt;
&lt;li&gt;Bureaucratic text&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;it would be a &lt;strong&gt;dangerous&lt;/strong&gt; platform.&lt;/p&gt;

&lt;p&gt;A fully AI-driven pipeline risks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hallucinating facts&lt;/li&gt;
&lt;li&gt;Removing legally critical details&lt;/li&gt;
&lt;li&gt;Oversimplifying sensitive information&lt;/li&gt;
&lt;li&gt;Creating false confidence for users&lt;/li&gt;
&lt;li&gt;Retaining and leaking data (model memorization)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the system was designed with &lt;strong&gt;AI boundaries&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rule-based NLP preprocessing&lt;/li&gt;
&lt;li&gt;Controlled transformations&lt;/li&gt;
&lt;li&gt;AI-assisted explanations instead of AI-generated truths&lt;/li&gt;
&lt;li&gt;Focus on accessibility, not replacement&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here, AI is a &lt;strong&gt;support system&lt;/strong&gt;, not an authority. The core of the tech stack was &lt;strong&gt;NLP&lt;/strong&gt;: a safer way to simplify sensitive text by automatically identifying, classifying, and masking it.&lt;/p&gt;

&lt;p&gt;This distinction matters.&lt;/p&gt;
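&lt;p&gt;For illustration, here is a minimal sketch of the rule-based masking idea using only regular expressions. The patterns and labels are placeholder assumptions, far simpler than what a production pipeline would need:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import re

# Placeholder patterns; a real system needs vetted, domain-specific rules
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def mask_sensitive(text: str) -&gt; str:
    """Identify, classify, and mask sensitive spans deterministically,
    so any AI layer only ever explains already-sanitized text."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_sensitive("Contact jane@example.com by 12/01/2026."))
# Contact [EMAIL] by [DATE].
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Because the masking is deterministic, it can be audited and tested; the AI layer then explains the sanitized text instead of deciding what is safe to reveal.&lt;/p&gt;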

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8ghpzw6ezophoqw5xe5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8ghpzw6ezophoqw5xe5.png" alt="AI with Guardrails" width="723" height="540"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;The Core Problem: AI Without Constraints Lies Convincingly&lt;/h2&gt;

&lt;p&gt;AI systems are excellent at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pattern completion&lt;/li&gt;
&lt;li&gt;Confident output&lt;/li&gt;
&lt;li&gt;Fluent responses&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They are &lt;em&gt;terrible&lt;/em&gt; at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Knowing when they are wrong&lt;/li&gt;
&lt;li&gt;Understanding real-world consequences&lt;/li&gt;
&lt;li&gt;Respecting domain boundaries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is why blindly adding AI often makes systems worse, not better.&lt;/p&gt;




&lt;h2&gt;A Better Way to Think About AI and ML&lt;/h2&gt;

&lt;p&gt;Instead of asking:&lt;/p&gt;

&lt;p&gt;“Can we use AI or ML here?”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ask&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What must never be wrong?&lt;/li&gt;
&lt;li&gt;What needs human trust?&lt;/li&gt;
&lt;li&gt;What can be deterministic?&lt;/li&gt;
&lt;li&gt;Where does intelligence actually add value?&lt;/li&gt;
&lt;li&gt;What should AI not decide?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Only then introduce models.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AI Is a Tool, Not a Brain&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;AI and ML are &lt;strong&gt;multipliers&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Good system design -&amp;gt; powerful intelligence&lt;/li&gt;
&lt;li&gt;Bad system design -&amp;gt; scalable misinformation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The best &lt;strong&gt;AI systems&lt;/strong&gt; I’ve worked on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start with reasoning&lt;/li&gt;
&lt;li&gt;Add intelligence carefully&lt;/li&gt;
&lt;li&gt;Respect human judgment&lt;/li&gt;
&lt;li&gt;Treat models as components, not answers&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Final Thought&lt;/h2&gt;

&lt;p&gt;The future does not belong to people who use AI everywhere.&lt;/p&gt;

&lt;p&gt;It belongs to those who know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;when to use AI,&lt;/li&gt;
&lt;li&gt;when to limit it,&lt;/li&gt;
&lt;li&gt;and when not to trust it at all.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AI without structure is &lt;strong&gt;noise&lt;/strong&gt;.&lt;br&gt;
AI with systems is &lt;strong&gt;power&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Build systems first.&lt;br&gt;
Then make them intelligent.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>computerscience</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
