Shreya Raghav

ML Without Blind Faith: Systems, Constraints, and Why “Just Using AI or ML” Often Fails

Artificial Intelligence and Machine Learning are powerful.
But they are not magic.

Somewhere between “just add AI” product pitches, demo-ready chatbots, and accuracy charts, we forgot something fundamental:

AI and ML only work when the system around them makes sense.

This article is not anti-AI.
It is anti-blind-faith.

Using real project experiences, I’ll explain when AI/ML actually help, when they silently break systems, and why intelligence without structure is dangerous.


The Myth: “If We Add AI, the Product Becomes Smart”

A pattern I see often:

  • There is a problem
  • Someone suggests AI
  • A model is plugged in
  • The demo looks impressive
  • Real users get confused or misled

Why does this happen?

Because AI is not intelligence by default.
It is pattern amplification.

Without constraints, reasoning, and design, AI systems become confident, wrong, and untrustworthy.

AI ≠ Brain


Case 1: Telemetry Analysis — Why ML Cannot Come First

GitHub: https://github.com/ShreyaaRaghav/telemetry-analysis-with-report

I worked on a Formula 1 telemetry analysis project, using race data like speed, RPM, braking, throttle, and DRS.

At first glance, this feels like a pure ML problem:

“Predict lap time or performance using a model.”

But telemetry data is:

  • Noisy
  • High-frequency
  • Context-sensitive
  • Governed by vehicle physics

If you directly do:

model.fit(X_telemetry, lap_time)

You get predictions — but no understanding.

Instead, the system had to be designed before ML:

  • Acceleration derived from speed
  • Braking intensity isolated
  • Stints separated to reduce noise from changing race conditions
  • Reasoning documented using vehicle dynamics

Only after this structuring did ML or statistical analysis make sense.
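
As a rough sketch of that ordering (assuming, purely for illustration, a pandas DataFrame with columns like time_s, speed_kmh, brake, stint, lap, and lap_time_s; the real project’s schema and features differ):

import pandas as pd
from sklearn.linear_model import Ridge

# Hypothetical telemetry: one row per sample, already labelled with stint and lap.
df = pd.read_csv("telemetry.csv")

# 1. Derive physics-aware features instead of feeding raw channels to a model.
df["accel"] = df["speed_kmh"].diff() / df["time_s"].diff()   # acceleration from speed
df["braking"] = df["brake"] * df["speed_kmh"]                # crude braking-intensity proxy

# 2. Aggregate per stint and lap so tyre and fuel effects don't blur together.
features = (
    df.groupby(["stint", "lap"])
      .agg(mean_speed=("speed_kmh", "mean"),
           max_braking=("braking", "max"),
           mean_accel=("accel", "mean"),
           lap_time=("lap_time_s", "first"))
      .reset_index()
)

# 3. Only now is a model worth fitting, on features we can already reason about.
X = features[["mean_speed", "max_braking", "mean_accel"]]
y = features["lap_time"]
model = Ridge().fit(X, y)

The point is the ordering: physics-aware features and stint separation come first, and the model only ever sees quantities that can be reasoned about.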

ML was useful because the system was intelligent first.

ML After Reasoning


Case 2: Contexto — When AI Semantics Break Human Intuition

GitHub: https://github.com/ShreyaaRaghav/semantic-word-game

In Contexto, a semantic word-guessing game, the system ranks guesses by meaning, not spelling.

This is clearly an AI problem, not just ML:

  • Semantic understanding
  • Language representation
  • Human perception of similarity

The core logic was simple:

similarity = cosine_similarity(guess_embedding, target_embedding)
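
In code, that step might look roughly like this (a minimal sketch assuming Sentence-BERT-style embeddings from the sentence-transformers library; the model name and example words are assumptions, not the game’s actual configuration):

from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model

target_embedding = model.encode(["orbit"])   # secret word (example)
guess_embedding = model.encode(["planet"])   # player's guess (example)

similarity = cosine_similarity(guess_embedding, target_embedding)[0][0]
print(f"cosine similarity: {similarity:.3f}")  # a raw score, not yet a human-friendly signal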

But reality was not.

Problems appeared immediately:

  • High similarity scores felt “wrong” to users
  • Different embedding models behaved inconsistently
  • Rare words broke feedback logic

This wasn’t a model issue.

It was a human-AI alignment issue.

Fixing it required:

  • Comparing static embeddings (GloVe) vs contextual ones (Sentence-BERT)
  • Calibrating similarity ranges for human intuition (sketched below)
  • Designing AI feedback logic, not just computing scores
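
One simple way to calibrate for intuition, sketched here as an illustration rather than the project’s exact approach, is to report a guess’s rank within a vocabulary instead of the raw cosine value: a rank like “#37 of 10,000” reads far more naturally to players than a score like 0.62.

import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model

vocabulary = ["planet", "moon", "star", "rocket", "banana", "guitar"]  # toy vocabulary
target = "orbit"

# Pre-rank the whole vocabulary by similarity to the target, most similar first.
scores = cosine_similarity(model.encode(vocabulary), model.encode([target])).ravel()
ranking = [vocabulary[i] for i in np.argsort(-scores)]

def feedback(guess: str) -> str:
    """Turn a raw similarity into a rank-based hint, which players read more naturally."""
    if guess not in ranking:
        return "not in vocabulary"
    return f"#{ranking.index(guess) + 1} of {len(ranking)}"

print(feedback("planet"))

The embeddings stay the same; only the feedback layer changes, which is exactly the kind of design work the raw scores cannot do on their own.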

The AI did not fail.
The system design around the AI did.

Embedding Space vs Human Intuition


Case 3: Verbatim — Why AI Cannot Be Trusted Blindly

If Verbatim were a fully AI-driven document simplification platform for:

  • Medical
  • Legal
  • Bureaucratic text

it would be a dangerous platform.

A fully AI-driven pipeline risks:

  • Hallucinating facts
  • Removing legally critical details
  • Oversimplifying sensitive information
  • Creating false confidence for users
  • Retaining and leaking data (model memorization)

So the system was designed with AI boundaries:

  • Rule-based NLP preprocessing
  • Controlled transformations
  • AI-assisted explanations instead of AI-generated truths
  • Focus on accessibility, not replacement

Here, AI is a support system, not an authority. The main tech stack was NLP, a safer way to simplify sensitive data by automatically identifying, classifying, and masking it.
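
As a concrete illustration of that kind of boundary, here is a minimal sketch of deterministic, rule-based masking that runs before any model sees the text (the patterns and labels are hypothetical examples, not Verbatim’s actual rules):

import re

# Hypothetical rules: each pattern gets a label, and matches are replaced with that label.
RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),
]

def mask_sensitive(text: str) -> str:
    """Identify, classify, and mask sensitive spans deterministically, with no model involved."""
    for pattern, label in RULES:
        text = pattern.sub(label, text)
    return text

original = "Contact Dr. Rao at rao@clinic.example or 555-010-1234 before 12/05/2025."
print(mask_sensitive(original))
# -> "Contact Dr. Rao at [EMAIL] or [PHONE] before [DATE]."

Because the masking is deterministic, it cannot hallucinate, and any AI layer that runs afterwards never sees the raw sensitive values.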

This distinction matters.

AI with Guardrails


The Core Problem: AI Without Constraints Lies Convincingly

AI systems are excellent at:

  • Pattern completion
  • Confident output
  • Fluent responses

They are terrible at:

  • Knowing when they are wrong
  • Understanding real-world consequences
  • Respecting domain boundaries

This is why blindly adding AI often makes systems worse, not better.


A Better Way to Think About AI and ML

Instead of asking:

“Can we use AI or ML here?”

Ask:

  • What must never be wrong?
  • What needs human trust?
  • What can be deterministic?
  • Where does intelligence actually add value?
  • What should AI not decide?

Only then introduce models.

AI Is a Tool, Not a Brain

AI and ML are multipliers:

  • Good system design -> powerful intelligence
  • Bad system design -> scalable misinformation

The best AI systems I’ve worked on:

  • Start with reasoning
  • Add intelligence carefully
  • Respect human judgment
  • Treat models as components, not answers

Final Thought

The future does not belong to people who use AI everywhere.

It belongs to those who know:

  • when to use AI,
  • when to limit it,
  • and when not to trust it at all.

AI without structure is noise.
AI with systems is power.

Build systems first.
Then make them intelligent.
