<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Roger Gale</title>
    <description>The latest articles on DEV Community by Roger Gale (@notenoughtime).</description>
    <link>https://dev.to/notenoughtime</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3731923%2Ff5ae9a86-0ee6-4244-be54-ad95bc9a5694.jpg</url>
      <title>DEV Community: Roger Gale</title>
      <link>https://dev.to/notenoughtime</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/notenoughtime"/>
    <language>en</language>
    <item>
      <title>Bias You Can Notice vs Bias You Can’t</title>
      <dc:creator>Roger Gale</dc:creator>
      <pubDate>Fri, 06 Feb 2026 13:47:01 +0000</pubDate>
      <link>https://dev.to/notenoughtime/bias-you-can-notice-vs-bias-you-cant-3jkd</link>
      <guid>https://dev.to/notenoughtime/bias-you-can-notice-vs-bias-you-cant-3jkd</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvghg1rq9u2hqpfj60q3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvghg1rq9u2hqpfj60q3.jpg" alt="Most bias is hidden - Iceberg with a dark mass underneath" width="502" height="502"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While generating exam questions with generative AI, I noticed a subtle pattern: the correct answer almost never appeared in position (a). The content was fine. The bias was procedural — and invisible until I knew where to look.&lt;/p&gt;
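
&lt;p&gt;The check that exposed it is almost trivial: tally where the correct answer lands across a batch of generated questions and compare the counts against a uniform baseline. A minimal sketch in Python, assuming each question records its answer key as one of "a" through "d" (the answer_keys list below is invented stand-in data):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from collections import Counter

# Hypothetical answer keys harvested from 200 generated questions.
answer_keys = ["b", "c", "d", "b", "c"] * 40  # stand-in data

counts = Counter(answer_keys)
positions = ["a", "b", "c", "d"]
expected = len(answer_keys) / len(positions)  # uniform baseline: 50 each

# Pearson chi-square statistic against the uniform distribution.
chi_sq = sum((counts[p] - expected) ** 2 / expected for p in positions)

for p in positions:
    print(f"{p}: {counts[p]}")
print(f"chi-square = {chi_sq:.1f} (critical value 7.81 at p = 0.05, df = 3)")
# With counts like these the statistic dwarfs the critical value:
# the content looks fine, but the positions give the bias away.
&lt;/code&gt;&lt;/pre&gt;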

&lt;p&gt;&lt;a href="https://medium.com/@timeforachange/bias-you-can-notice-vs-bias-you-cant-0b939de146d0" rel="noopener noreferrer"&gt;This essay&lt;/a&gt; explores the difference between bias we can notice and bias we can’t, and why the most dangerous biases aren’t ideological or malicious. They’re structural, normalized, and easy to miss — especially in systems that move quickly, confidently, and without looking back.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>learning</category>
      <category>ethics</category>
      <category>culture</category>
    </item>
    <item>
      <title>When Fluency Detaches from Understanding</title>
      <dc:creator>Roger Gale</dc:creator>
      <pubDate>Wed, 04 Feb 2026 13:26:54 +0000</pubDate>
      <link>https://dev.to/notenoughtime/when-fluency-detaches-from-understanding-2djb</link>
      <guid>https://dev.to/notenoughtime/when-fluency-detaches-from-understanding-2djb</guid>
      <description>&lt;p&gt;Large language models are getting better at sounding like they understand.&lt;br&gt;
This essay looks at why that fluency is convincing—and why it can be misleading.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/@timeforachange/when-fluency-detaches-abstraction-without-consequence-affcca079189" rel="noopener noreferrer"&gt;When Fluency Detaches&lt;/a&gt; explores what changes when language improves without being forced to answer to consequence. Using examples from programming, learning, and everyday AI use, it argues that fluency normally signals prior contact with reality—but in LLMs, that cost is often never paid.&lt;/p&gt;

&lt;p&gt;The result isn’t deception or hallucination, but something subtler: abstraction that no longer has to return to constraint. The essay asks how we tell the difference between understanding and performance—and what it means when nothing pushes back if an answer is wrong.&lt;/p&gt;
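
&lt;p&gt;In the programming case there is a cheap way to reintroduce that pushback: refuse a fluent answer until something real can reject it. A minimal sketch, where generate() stands in for any model call and the tests encode cases already known to be true (both are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def accept_if_reality_agrees(generate, tests):
    """Run the candidate and its tests; reading well is not enough to pass."""
    candidate = generate()        # hypothetical stand-in for a model call
    scope = {}
    try:
        exec(candidate, scope)    # the text has to survive execution...
        for check in tests:
            assert check(scope)   # ...and agree with cases we already know
    except Exception as err:
        return None, f"rejected: {err!r}"  # something pushed back
    return candidate, "accepted"

# Example: a generated function must actually double its input.
source = "def double(x):\n    return x * 2"
code, verdict = accept_if_reality_agrees(
    lambda: source, [lambda s: s["double"](3) == 6]
)
print(verdict)  # accepted
&lt;/code&gt;&lt;/pre&gt;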

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>systems</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Grokking</title>
      <dc:creator>Roger Gale</dc:creator>
      <pubDate>Fri, 30 Jan 2026 06:37:57 +0000</pubDate>
      <link>https://dev.to/notenoughtime/grokking-3epo</link>
      <guid>https://dev.to/notenoughtime/grokking-3epo</guid>
      <description>&lt;p&gt;We often treat correctness as evidence of learning.&lt;br&gt;
But correctness can arrive long before anything inside us actually changes.&lt;/p&gt;

&lt;p&gt;This essay explores grokking—the point where understanding stops being something you can repeat and becomes something that reorganizes how you see problems. It’s about why speed, fluency, and early success can be misleading—for humans and for the systems we keep calling intelligent.&lt;/p&gt;
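
&lt;p&gt;In machine learning the term has a measurable signature: training accuracy saturates long before held-out accuracy jumps. A minimal sketch of spotting that delayed jump in a training log (the accuracy values below are invented for illustration):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Illustrative (train_acc, val_acc) pairs per epoch: the model is "correct"
# on its training data almost immediately, but held-out accuracy only jumps
# much later. That delayed generalization is the grokking signature.
log = [(1.0, 0.12)] * 50 + [(1.0, 0.35), (1.0, 0.71), (1.0, 0.97)]

grok_epoch = None
for epoch, (train_acc, val_acc) in enumerate(log):
    if train_acc == 1.0 and val_acc &gt;= 0.95 and grok_epoch is None:
        grok_epoch = epoch

print(f"training accuracy saturated from epoch 0; "
      f"validation crossed 0.95 at epoch {grok_epoch}")
&lt;/code&gt;&lt;/pre&gt;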

&lt;p&gt;&lt;a href="https://medium.com/@timeforachange/grokking-83b9de9a9c24" rel="noopener noreferrer"&gt;Grokking&lt;/a&gt;&lt;/p&gt;

</description>
      <category>learning</category>
      <category>ai</category>
      <category>technology</category>
      <category>education</category>
    </item>
    <item>
      <title>On Memory, Learning, and Reset: The Memory Trilogy</title>
      <dc:creator>Roger Gale</dc:creator>
      <pubDate>Tue, 27 Jan 2026 17:59:13 +0000</pubDate>
      <link>https://dev.to/notenoughtime/on-memory-learning-and-reset-the-memory-trilogy-3og7</link>
      <guid>https://dev.to/notenoughtime/on-memory-learning-and-reset-the-memory-trilogy-3og7</guid>
      <description>&lt;p&gt;Large language models feel continuous.&lt;br&gt;
Each answer flows naturally from the last.&lt;/p&gt;

&lt;p&gt;But under the surface, something different is happening.&lt;/p&gt;

&lt;p&gt;This three-essay sequence explores what it means to interact with systems that reset after every response — and what that design quietly shifts onto users, institutions, and trust itself.&lt;/p&gt;

&lt;p&gt;• Every Answer Begins Again starts with the reset. Each response appears complete and confident, yet nothing carries forward. The system doesn’t accumulate experience, revise beliefs, or bear the cost of prior mistakes. The essay asks what changes when every answer is treated as a first answer.&lt;/p&gt;

&lt;p&gt;• Learning Without Memory follows the consequences. Humans learn because mistakes leave residue — they hurt, surprise, or cost us something. Stateless systems don’t carry that weight. When models cannot change internally, learning doesn’t disappear — it relocates. Users end up re-teaching, re-checking, and re-remembering what the system cannot hold. A short sketch after this list makes that relocation concrete.&lt;/p&gt;

&lt;p&gt;• Forgetting as Relief turns the lens toward forgetting itself. Forgetting isn’t only loss; often it’s relief. It lowers friction and restores freedom. But forgetting is not neutral. It quietly decides what no longer constrains choice, which commitments fade, and who continues to carry the cost when systems move on.&lt;/p&gt;
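
&lt;p&gt;The relocation the second essay describes is visible at the API level: chat endpoints are typically stateless, and continuity exists only because the caller re-sends the transcript every turn. A minimal sketch of that client-side burden, where send() is a hypothetical stand-in for any chat-completion call:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# The endpoint holds nothing between calls; whatever "memory" exists lives here.
history = []

def ask(send, user_msg):
    """send() is a hypothetical stand-in for a stateless chat-completion call."""
    history.append({"role": "user", "content": user_msg})
    # Every turn re-transmits the entire transcript. Drop this list and the
    # next answer truly begins again.
    reply = send(messages=list(history))
    history.append({"role": "assistant", "content": reply})
    return reply
&lt;/code&gt;&lt;/pre&gt;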

&lt;p&gt;Taken together, the essays argue that memory in AI systems is not just a technical feature.&lt;/p&gt;

&lt;p&gt;It is a design and governance decision — one that shapes responsibility, trust, and where consequences land over time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/@timeforachange/every-answer-begins-again-6b8b5803cf9c" rel="noopener noreferrer"&gt;Every Answer Begins Again&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/@timeforachange/learning-without-memory-fe5ce4e7ff93" rel="noopener noreferrer"&gt;Learning Without Memory&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/@timeforachange/forgetting-is-not-neutral-e9c61422028c" rel="noopener noreferrer"&gt;Forgetting Is Not Neutral&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>ethics</category>
      <category>systems</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>AI Devs: Why Your Citations Might Be Lying to You</title>
      <dc:creator>Roger Gale</dc:creator>
      <pubDate>Mon, 26 Jan 2026 04:12:37 +0000</pubDate>
      <link>https://dev.to/notenoughtime/authority-without-witness-4bhm</link>
      <guid>https://dev.to/notenoughtime/authority-without-witness-4bhm</guid>
      <description>&lt;p&gt;Modern AI systems increasingly justify their answers by citing other generated text: summaries that reference summaries, explanations validated by similar explanations. The result often looks rigorous—dense with citations, consistent across sources, and confident in tone.&lt;/p&gt;

&lt;p&gt;This essay argues that something more subtle and dangerous is happening.&lt;/p&gt;

&lt;p&gt;When systems validate outputs by consulting other versions of themselves, authority becomes recursive. Agreement replaces verification. Claims appear grounded not because they connect to evidence, but because they align with what similar systems already say. Over time, this produces synthetic consensus: legitimacy generated internally, without witnesses.&lt;/p&gt;

&lt;p&gt;This is not the same as hallucination. Individual answers may be accurate, useful, and well-aligned with established knowledge. The failure is structural. Once citation loops close, correction becomes fragile. Evidence that does not exist inside the loop no longer registers as false—it is simply absent. Silence replaces refutation.&lt;/p&gt;
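
&lt;p&gt;One way to make the structural failure concrete is to model the citation network as a graph and ask whether a claim can reach a witness at all, meaning a node that is a primary document rather than more generated text. A minimal sketch with an invented graph (every node name and edge below is hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Citation graph: each node cites the nodes listed. "Witness" nodes are
# primary documents; everything else is generated text citing other text.
cites = {
    "summary_a": ["summary_b"],
    "summary_b": ["summary_c"],
    "summary_c": ["summary_a"],     # closed loop: agreement without evidence
    "summary_d": ["lab_report_1"],  # this chain does reach a witness
}
witnesses = {"lab_report_1"}

def reaches_witness(node, seen=None):
    """Depth-first search: does any citation path end at a primary source?"""
    seen = seen or set()
    if node in witnesses:
        return True
    if node in seen:                # the loop has closed on itself
        return False
    seen.add(node)
    return any(reaches_witness(n, seen) for n in cites.get(node, []))

for claim in cites:
    status = "grounded" if reaches_witness(claim) else "synthetic consensus"
    print(claim, status)
&lt;/code&gt;&lt;/pre&gt;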

&lt;p&gt;The problem is not that AI systems lie. It is that they can behave correctly while losing the ability to ground themselves. Retrieval, linked evidence, and audit trails can help—but if a system can satisfy its objectives without them, those mechanisms remain optional and fragile.&lt;/p&gt;

&lt;p&gt;Authority Without Witness examines how knowledge systems fail when validation no longer points outward, and why preserving witnesses—documents, observations, experiments—matters more than ever in an ecosystem optimized for agreement.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://timeforachange.medium.com/authority-without-witness-aa828169788a" rel="noopener noreferrer"&gt;Authority Without Witness&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>ethics</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
