<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Khiari Hamdane</title>
    <description>The latest articles on DEV Community by Khiari Hamdane (@khiari_hamdane).</description>
    <link>https://dev.to/khiari_hamdane</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3798495%2F892f94d9-de51-43e1-bf5d-90d4c4a74f3c.png</url>
      <title>DEV Community: Khiari Hamdane</title>
      <link>https://dev.to/khiari_hamdane</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/khiari_hamdane"/>
    <language>en</language>
    <item>
      <title>Stop saying AI can't code. Start asking what it actually produces.</title>
      <dc:creator>Khiari Hamdane</dc:creator>
      <pubDate>Sat, 28 Mar 2026 20:57:15 +0000</pubDate>
      <link>https://dev.to/khiari_hamdane/stop-saying-ai-cant-code-start-asking-what-it-actually-produces-223m</link>
      <guid>https://dev.to/khiari_hamdane/stop-saying-ai-cant-code-start-asking-what-it-actually-produces-223m</guid>
      <description>&lt;p&gt;Let's set the debate: on one side, those for whom AI writes bad code. On the other, those for whom AI has revolutionized their workflow, they move ten times faster, the question is settled.&lt;/p&gt;

&lt;p&gt;Both sides are potentially asking the wrong question.&lt;/p&gt;

&lt;p&gt;The real question isn't "does AI code well or badly?" It's "what does it actually produce, and under what conditions does that become a problem?"&lt;/p&gt;

&lt;p&gt;What AI is trained on&lt;/p&gt;

&lt;p&gt;To understand what AI produces, you have to understand what it learned from. Code models are trained on massive amounts of public code — millions of repositories, years of contributions, dozens of languages.&lt;/p&gt;

&lt;p&gt;Do we know exactly what that training data contains? Not precisely. But we can reasonably assume it mixes very good code with much more average code: clean, well-architected code written by experienced developers, and code written fast, under pressure, for an MVP that never got refactored.&lt;/p&gt;

&lt;p&gt;If AI was trained on good patterns, it can reproduce them. But it doesn't choose a pattern by "intelligence" — it picks the one that is statistically most present in its data. The problem? The most common pattern isn't always the most performant or the most suited to your specific architecture.&lt;/p&gt;
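
&lt;p&gt;A minimal sketch of that idea, with entirely made-up pattern names and frequencies: a frequency-driven chooser emits whatever dominated its training data, with no notion of which pattern fits your architecture.&lt;/p&gt;

```python
# Toy illustration (hypothetical data): a model trained on public code
# tends to emit the pattern it has seen most often, not the one that
# best fits a given architecture.
from collections import Counter

# Imagined frequencies of three ways to fetch data in training code
training_patterns = Counter({
    "sync_blocking_call": 7000,   # most common: quick scripts, tutorials
    "async_with_retries": 2500,   # rarer: production services
    "circuit_breaker": 500,       # rarest: failure-aware, high-load systems
})

def most_likely_pattern(counts):
    """Return the statistically dominant pattern, regardless of context."""
    pattern, _ = counts.most_common(1)[0]
    return pattern

print(most_likely_pattern(training_patterns))  # sync_blocking_call
```

&lt;p&gt;The dominant pattern wins every time, even when your system is exactly the case where the rarest one was needed.&lt;/p&gt;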

&lt;p&gt;AI isn't incompetent. It can lack context. And that's a fundamental difference.&lt;/p&gt;

&lt;p&gt;What it doesn't see&lt;/p&gt;

&lt;p&gt;AI sees the code — but not its life after deployment. It can spot a classic security flaw because it learned to recognize those patterns. But it doesn't know whether this architecture held up under a thousand users or collapsed. For AI, code that works on screen has the same value as code that survives in production. It reproduces forms, not robustness.&lt;/p&gt;

&lt;p&gt;So what do we do?&lt;/p&gt;

&lt;p&gt;We don't throw AI away. What changes is the developer's role — they become the one who guides what AI produces. The one who brings the context AI can't have: real constraints, security requirements, expected load, acceptable technical debt.&lt;/p&gt;

&lt;p&gt;AI can be excellent, but it requires a rich and relevant context to truly perform. And knowing what to ask, how to frame it, which constraints to specify — that's precisely what AI can't do on its own.&lt;/p&gt;

&lt;p&gt;Conclusion&lt;/p&gt;

&lt;p&gt;The developer who truly understands how AI works uses it better — and combines it with code analysis tools to add an extra layer of control.&lt;/p&gt;

&lt;p&gt;In the next article, we'll come back to our initial promise: exploring how an AI capable of correcting itself and doubting could become a 24/7 researcher, in medicine, trying to discover new drugs; in energy, trying to analyze and find new alternatives.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>discuss</category>
      <category>productivity</category>
      <category>programming</category>
    </item>
    <item>
      <title>We don't need to copy the human brain — we need to learn from it</title>
      <dc:creator>Khiari Hamdane</dc:creator>
      <pubDate>Sun, 22 Mar 2026 16:55:56 +0000</pubDate>
      <link>https://dev.to/khiari_hamdane/we-dont-need-to-copy-the-human-brain-we-need-to-learn-from-it-10od</link>
      <guid>https://dev.to/khiari_hamdane/we-dont-need-to-copy-the-human-brain-we-need-to-learn-from-it-10od</guid>
      <description>&lt;p&gt;In the two previous articles, we identified two problems. The first: LLMs don't always know when they're wrong — they generate, invent, fill the gap, sometimes without the slightest warning signal. The second: in the physical world, this behavior becomes dangerous. A system that improvises in an unknown situation doesn't produce a bad answer — it produces an unexpected movement in an environment where humans may be present.&lt;/p&gt;

&lt;p&gt;The question that follows: where are we headed to move past this? And does the answer lie somewhere in the direction of the human brain?&lt;/p&gt;

&lt;p&gt;Why reproducing the brain is unthinkable&lt;/p&gt;

&lt;p&gt;We don't fully understand how the human brain works, so reproducing it faithfully is out of reach. But saying a full copy is unthinkable doesn't mean it's useless to draw inspiration from it. The question isn't "how do we copy the brain" — it's "what specific mechanisms are we missing today, and does the human brain give us any leads to build them?"&lt;/p&gt;

&lt;p&gt;What we can extract from it&lt;/p&gt;

&lt;p&gt;Three mechanisms seem particularly important to me, and all three are absent from current LLMs in their native form.&lt;/p&gt;

&lt;p&gt;The first is metacognition — the ability to know what you don't know. A human who doesn't know the answer to a question can recognize that and stop. An LLM will produce an answer regardless, even without solid ground to stand on.&lt;/p&gt;
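
&lt;p&gt;Metacognition can be sketched as an abstention rule. This is a toy illustration with made-up confidence numbers, not a real model API: answer only when confidence clears a threshold, otherwise say so and stop, the way a human can.&lt;/p&gt;

```python
# Sketch of "knowing what you don't know" as a simple abstention gate.
# The confidence values are hypothetical; a real system would need a
# calibrated estimate, which is exactly what current LLMs lack natively.
import operator

def answer_or_abstain(answer, confidence, threshold=0.8):
    """Return the answer only if confidence reaches the threshold."""
    if operator.ge(confidence, threshold):
        return answer
    return "I don't know enough to answer this."

print(answer_or_abstain("Paris", 0.97))  # confident: answers
print(answer_or_abstain("Lyon?", 0.35))  # uncertain: abstains
```

&lt;p&gt;The hard part isn't the gate; it's producing a confidence number that actually tracks accuracy.&lt;/p&gt;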

&lt;p&gt;The second is real-time self-correction. The human brain adjusts continuously — while we speak, while we act. Current LLM architectures work in one direction: generate first, verify later if at all.&lt;/p&gt;

&lt;p&gt;The third is active doubt. Faced with an unknown situation, a human slows down, questions, asks for clarification. They don't keep acting with the same momentum. This is precisely the property that current autonomous systems lack.&lt;/p&gt;

&lt;p&gt;What we're looking for isn't machines that think like humans. It's machines that know how to doubt like humans — without the emotions, without the fatigue, without the biases.&lt;/p&gt;

&lt;p&gt;Where research stands&lt;/p&gt;

&lt;p&gt;These questions aren't new. Researchers have been working for years on what are called neuro-inspired architectures — systems that attempt to go beyond pure statistical generation and integrate mechanisms closer to reasoning. The idea isn't new, but it remains largely open.&lt;/p&gt;

&lt;p&gt;Some approaches try to ground models in verifiable sources to limit invention. Others explore systems where planning and verification are separated — the model proposes, another mechanism checks. None of them fully solve the problem. But all point in the same direction: making models bigger isn't enough. We need to change what they do with uncertainty.&lt;/p&gt;
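
&lt;p&gt;The "model proposes, another mechanism checks" split can be sketched in a few lines. Both functions here are hypothetical stand-ins, not any real library: the point is only that the verifier is independent of the generator and can refuse.&lt;/p&gt;

```python
# Minimal sketch of separated proposal and verification (toy stand-ins).
def propose(question):
    """Stand-in for a model's draft answer plus the source it claims."""
    return {"answer": "42", "source": "doc-7"}

def verify(proposal, trusted_sources):
    """Independent check: accept only answers grounded in a known source."""
    return proposal["source"] in trusted_sources

def answer(question, trusted_sources):
    proposal = propose(question)
    if verify(proposal, trusted_sources):
        return proposal["answer"]
    return "unverified: refusing to answer"

print(answer("q", {"doc-7", "doc-9"}))  # verified path
print(answer("q", {"doc-1"}))           # rejected path
```

&lt;p&gt;The design choice matters: because the checker doesn't share the generator's statistics, a fluent but ungrounded proposal gets rejected instead of shipped.&lt;/p&gt;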

&lt;p&gt;What this would change in practice&lt;/p&gt;

&lt;p&gt;In robotics first — an autonomous agent capable of detecting that it's outside its domain and stopping rather than improvising would fundamentally change the reliability of physical systems.&lt;/p&gt;

&lt;p&gt;In medicine, a diagnostic support system that signals its level of certainty — and refuses to conclude when data is insufficient — is infinitely more useful than a system that always produces a confident answer.&lt;/p&gt;

&lt;p&gt;In critical infrastructure — power grids, water management, transport — agents capable of flagging an anomaly they can't interpret, rather than continuing to operate normally, could prevent cascading failures.&lt;/p&gt;

&lt;p&gt;In education, a pedagogical agent that adapts its explanations based on the learner's progress — and recognizes when it has reached its own limits — is much closer to what a good teacher actually does.&lt;/p&gt;

&lt;p&gt;In the energy sector, systems capable of distinguishing a known situation from an unfamiliar one — and treating them differently — could transform grid management at a time when networks are becoming increasingly complex with the integration of renewable energy.&lt;/p&gt;

&lt;p&gt;We're not trying to build an artificial brain. What we're looking for is more precise and more humble than that: systems capable of doubting, self-correcting, and recognizing the limits of what they know.&lt;/p&gt;

&lt;p&gt;The human brain isn't a model to copy. It's a source of inspiration to build something different — more reliable, more honest about its own limits, and for that reason, genuinely useful in the real world.&lt;/p&gt;

&lt;p&gt;That leap isn't just a matter of computing power. It's a matter of design.&lt;/p&gt;

&lt;p&gt;In the next article, we'll explore concretely how an AI capable of correcting itself and doubting could become a 24/7 researcher: in medicine, trying to discover new drugs; in energy, trying to analyze and find new alternatives.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>machinelearning</category>
      <category>science</category>
    </item>
    <item>
      <title>AI in machines: why the problem runs deeper than we think</title>
      <dc:creator>Khiari Hamdane</dc:creator>
      <pubDate>Sun, 15 Mar 2026 21:13:07 +0000</pubDate>
      <link>https://dev.to/khiari_hamdane/ai-in-machines-why-the-problem-runs-deeper-than-we-think-3896</link>
      <guid>https://dev.to/khiari_hamdane/ai-in-machines-why-the-problem-runs-deeper-than-we-think-3896</guid>
      <description>&lt;p&gt;There's something strange about the way we talk about artificial intelligence right now. We keep hearing that models will soon "take control" of physical machines — robots, self-driving cars, industrial drones. As if AI only needed a little more computing power to get there.&lt;/p&gt;

&lt;p&gt;The reality seems more interesting, and harder.&lt;/p&gt;

&lt;p&gt;What LLMs actually do&lt;/p&gt;

&lt;p&gt;A language model is not an intelligence that reasons about the world. It's an extraordinarily efficient system for reproducing patterns found in text. When it "answers" a question, it's not looking for the truth — it's calculating which sequence of words is most plausible in that context.&lt;/p&gt;
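
&lt;p&gt;That "most plausible sequence" calculation can be shown in miniature. The scores below are invented for illustration; a real model computes them over tens of thousands of tokens, but the mechanism is the same: a softmax turns scores into probabilities, and the highest one wins.&lt;/p&gt;

```python
# Toy next-word choice (made-up scores): the model doesn't look for truth,
# it picks the continuation with the highest probability in context.
import math

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = {w: math.exp(s) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

# Hypothetical scores for the word after "The capital of France is"
logits = {"Paris": 9.1, "Lyon": 4.2, "purple": 0.3}
probs = softmax(logits)
best = max(probs, key=probs.get)
print(best)  # the most plausible word, which is not the same as a verified fact
```

&lt;p&gt;Here the plausible answer happens to be true; nothing in the mechanism guarantees that it will be.&lt;/p&gt;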

&lt;p&gt;This works remarkably well on text-based tasks, precisely because human language is full of regularities. But this property becomes a problem the moment you step outside the domain of anticipation: a model can anticipate the next word in a sentence, but it cannot anticipate the behavior of an object slipping through a gripper at 40-millisecond intervals.&lt;/p&gt;

&lt;p&gt;An LLM looks for the most plausible answer. A physical control system must reach a precise objective within a precise timeframe — even in a situation it has never encountered before.&lt;/p&gt;

&lt;p&gt;The real world doesn't always look like training data&lt;/p&gt;

&lt;p&gt;A robotic arm picking up an object has to process data in real time, correct micro-errors every millisecond, and adapt to signals the human eye cannot perceive — the slight deformation of a soft material under pressure, the micro-slip of a smooth surface, the change in resistance of a mechanical assembly depending on ambient temperature. These are continuous signals, not words.&lt;/p&gt;

&lt;p&gt;This is why robotics and embedded systems have developed radically different approaches: control loops that continuously measure the gap between the actual state and the target, and correct in real time. Not because researchers lacked ambition, but because the problem demands it.&lt;/p&gt;
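
&lt;p&gt;A control loop of this kind is easy to sketch. This is simple proportional control with an arbitrary gain, far simpler than what a real robot runs, but it shows the principle: measure the gap between actual state and target, correct by a fraction of it, repeat.&lt;/p&gt;

```python
# Sketch of a closed feedback loop (proportional control, toy gain).
def control_step(actual, target, gain=0.5):
    """One iteration: apply a correction proportional to the measured error."""
    error = target - actual
    return actual + gain * error

state, target = 0.0, 10.0
for _ in range(20):                 # e.g. one iteration every few milliseconds
    state = control_step(state, target)
print(round(state, 3))              # the state has converged onto the target
```

&lt;p&gt;No prediction of "the most plausible next state" is involved: the loop measures reality at every step and corrects against it.&lt;/p&gt;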

&lt;p&gt;The natural follow-up question is: what if we trained a model on robotic scenarios instead of text? That's exactly what reinforcement learning in robotics does — and it works, in bounded environments. The problem is combinatorics: the number of possible situations in the real world is enormous. You can't cover the whole domain.&lt;/p&gt;

&lt;p&gt;Why the illusion persists&lt;/p&gt;

&lt;p&gt;The spectacular robot demonstrations that circulate on social media are often filmed in controlled environments, with scenarios carefully chosen to showcase what works. They don't show the thousands of failures that came before, or the conditions under which the same system breaks down completely.&lt;/p&gt;

&lt;p&gt;The deeper problem is that we're very bad at telling the difference between an impressive performance and a general capability. A model that brilliantly explains the mechanics of a liquidity crisis and then cites a nonexistent author in the very next sentence illustrates exactly that gap. The fluency of the response hides the absence of any internal verification.&lt;/p&gt;

&lt;p&gt;What this means in practice&lt;/p&gt;

&lt;p&gt;This doesn't mean AI has no role in physical systems. Hybrid approaches — where a high-level model plans and a classical system executes — can produce serious results in bounded domains.&lt;/p&gt;
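
&lt;p&gt;A hybrid architecture like that can be sketched with hypothetical interfaces (none of these names come from a real framework): a high-level planner proposes steps, and a classical executor runs each one only if it falls inside the bounded set it was engineered and tested for.&lt;/p&gt;

```python
# Sketch of a hybrid design: model-level planning, classical execution.
def plan(goal):
    """Stand-in for a model-produced plan: a list of named steps."""
    return ["approach", "grip", "lift"]

# The executor's bounded domain: the only actions it will ever perform.
KNOWN_STEPS = {"approach", "grip", "lift", "release"}

def execute(step):
    """Classical executor: refuses anything outside its tested domain."""
    if step in KNOWN_STEPS:
        return f"done: {step}"
    return f"refused: {step} is outside the executor's domain"

for step in plan("pick up the part"):
    print(execute(step))
```

&lt;p&gt;The safety property lives in the executor, not the planner: even a hallucinated plan step cannot produce an untested physical action.&lt;/p&gt;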

&lt;p&gt;The real question isn't "when will AI be powerful enough to drive machines?" It's "how do we improve the behavior of a system when it encounters conditions it wasn't trained for?"&lt;/p&gt;

&lt;p&gt;Conclusion&lt;/p&gt;

&lt;p&gt;The more models seem to "understand", the easier it is to believe that the leap to the physical world is only a matter of time.&lt;/p&gt;

&lt;p&gt;An LLM that encounters an unknown situation will still produce a response — it will complete, invent, fill the gap.&lt;/p&gt;

&lt;p&gt;In the physical world, that behavior becomes dangerous. A robotic system that "improvises" in an unknown situation doesn't produce a bad answer — it produces an unexpected movement, potentially violent, in an environment where humans may be present. This isn't a display bug. It's a real physical risk.&lt;/p&gt;

&lt;p&gt;What today's language-model-based systems lack is precisely this: the ability to detect that they're out of their domain, and stop. Not keep acting.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>discuss</category>
      <category>llm</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>What if humility were AI's next frontier?</title>
      <dc:creator>Khiari Hamdane</dc:creator>
      <pubDate>Sat, 28 Feb 2026 15:45:55 +0000</pubDate>
      <link>https://dev.to/khiari_hamdane/what-if-humility-were-ais-next-frontier-1328</link>
      <guid>https://dev.to/khiari_hamdane/what-if-humility-were-ais-next-frontier-1328</guid>
      <description>&lt;p&gt;Today's AI doesn't "think" — it operates as a statistical estimation engine. It projects concepts into a complex vector space, where each word is a coordinate. The model learns to estimate the continuation of a sequence by calculating probabilities from billions of parameters. In short, it is a calculation machine that learns to model plausible paths in response to a given input. This architecture, while highly performant, confronts us with the inherent limitations of large language models (LLMs).&lt;br&gt;
What we call "hallucinations" is not a mysterious bug, but a symptom of the difficulty of model calibration. When an AI generates a factual error, the problem doesn't lie solely in a lack of information — it stems from a confidence bias: a model can be mathematically "certain" while being factually wrong. These hallucinations are complex emergent properties, tied as much to the nature of training (RLHF, data bias) as to the Transformer architecture itself. The model is not "forced" to bet blindly, but it operates within a framework where probability is not always correlated with truth. To date, LLM architectures deployed at scale do not natively include mechanisms to distinguish what they know from what they don't.&lt;br&gt;
Rather than waiting for a solution from ever-larger models, a more grounded approach deserves exploration: verification engineering. Architectures like RAG (Retrieval Augmented Generation) attempt to bridge probabilistic generation and factual verification. However, RAG is far from a silver bullet — it shifts the hallucination problem toward complex challenges of data retrieval, chunking, and source reliability. The real challenge is to imagine systems capable of doubting, of verifying their own estimates against grounded sources, and above all, of moderating their responses when they cannot guarantee the accuracy of their output. What if the next updates to AI were not only about making models “smarter,” but about teaching them to recognize their own limits?&lt;/p&gt;
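
&lt;p&gt;The RAG idea can be reduced to a toy sketch. The corpus and the keyword matcher below are placeholders (a real pipeline uses vector search, chunking, and reranking), but the shape is the point: retrieve supporting text first, and abstain when nothing relevant is found rather than generate anyway.&lt;/p&gt;

```python
# Minimal RAG-flavoured sketch: retrieval first, abstention when it fails.
# Both the corpus and the retriever are toy stand-ins, not a real system.
CORPUS = {
    "doc1": "RAG retrieves documents before generating an answer.",
    "doc2": "Calibration relates a model's confidence to its accuracy.",
}

def retrieve(query):
    """Naive keyword overlap, standing in for a real vector search."""
    words = set(query.lower().split())
    return [text for text in CORPUS.values()
            if words.intersection(text.lower().split())]

def grounded_answer(query):
    hits = retrieve(query)
    if hits:
        return f"Based on sources: {hits[0]}"
    return "No supporting source found; I would rather not guess."

print(grounded_answer("what does rag do"))   # grounded answer
print(grounded_answer("quantum gravity"))    # abstains: nothing retrieved
```

&lt;p&gt;Notice where the difficulty moved: the system is only as honest as its retriever, which is exactly the shift described above.&lt;/p&gt;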

</description>
      <category>ai</category>
      <category>deeplearning</category>
      <category>llm</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
