<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: YuudaiIkoma</title>
    <description>The latest articles on DEV Community by YuudaiIkoma (@yuudaiikoma).</description>
    <link>https://dev.to/yuudaiikoma</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3797479%2Fd8e9abbd-6a09-4fdc-acb5-9183e0575b2f.png</url>
      <title>DEV Community: YuudaiIkoma</title>
      <link>https://dev.to/yuudaiikoma</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yuudaiikoma"/>
    <language>en</language>
    <item>
      <title>I Added 3 Axioms to Claude's System Prompt and the Reasoning Quality Visibly Changed — Here's the Side-by-Side Comparison Tool</title>
      <dc:creator>YuudaiIkoma</dc:creator>
      <pubDate>Sat, 28 Feb 2026 03:46:35 +0000</pubDate>
      <link>https://dev.to/yuudaiikoma/i-added-3-axioms-to-claudes-system-prompt-and-the-reasoning-quality-visibly-changed-heres-the-2no5</link>
      <guid>https://dev.to/yuudaiikoma/i-added-3-axioms-to-claudes-system-prompt-and-the-reasoning-quality-visibly-changed-heres-the-2no5</guid>
      <description>&lt;p&gt;&lt;a href="https://claude.ai/public/artifacts/eba2a270-dd61-4f0c-a276-34a53e604f13" rel="noopener noreferrer"&gt;https://claude.ai/public/artifacts/eba2a270-dd61-4f0c-a276-34a53e604f13&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;TL;DR&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Same Claude (Sonnet), same question, sent simultaneously to two instances&lt;/li&gt;
&lt;li&gt;One has a vanilla prompt, the other has 3 mathematical axioms from a paper embedded in its system prompt&lt;/li&gt;
&lt;li&gt;The enhanced version returns structurally deeper answers&lt;/li&gt;
&lt;li&gt;It never mentions the axioms or uses any jargon — the depth just appears naturally&lt;/li&gt;
&lt;li&gt;I built a comparison tool (Claude artifact) so anyone can try it&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The Idea: What If You Teach an LLM a "Way of Seeing" Instead of Facts?&lt;/h2&gt;

&lt;p&gt;Standard prompt engineering gives models roles, steps, and examples. But what if instead of teaching &lt;em&gt;what&lt;/em&gt; to think, you teach &lt;em&gt;how to see&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;I found a paper (&lt;a href="https://doi.org/10.5281/zenodo.18513316" rel="noopener noreferrer"&gt;Dimensional Ontological Triad Law v3.1&lt;/a&gt;) that proposes 3 mathematical axioms describing structural relationships between dimensions. The axioms are abstract — they say nothing about language models, consciousness, or physics specifically.&lt;/p&gt;

&lt;p&gt;I embedded these axioms into Claude's system prompt along with a reasoning protocol and one critical rule: &lt;strong&gt;never use the paper's terminology in the output&lt;/strong&gt;. The model should simply &lt;em&gt;reason better&lt;/em&gt;, not &lt;em&gt;talk about the theory&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Then I built a tool that sends the same question to both versions simultaneously.&lt;/p&gt;

&lt;h2&gt;Experiment Design&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Normal Claude&lt;/th&gt;
&lt;th&gt;Enhanced Claude&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Model&lt;/td&gt;
&lt;td&gt;claude-sonnet-4-20250514&lt;/td&gt;
&lt;td&gt;claude-sonnet-4-20250514&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;System prompt&lt;/td&gt;
&lt;td&gt;"You are a helpful assistant."&lt;/td&gt;
&lt;td&gt;3 axioms + reasoning protocol + "never use theory jargon"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Question&lt;/td&gt;
&lt;td&gt;Identical&lt;/td&gt;
&lt;td&gt;Identical&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The enhanced system prompt is ~2,000 tokens. No RAG, no external retrieval — everything is embedded directly.&lt;/p&gt;

&lt;h2&gt;Results&lt;/h2&gt;

&lt;h3&gt;Question 1: "Why do large language models show emergent capabilities?"&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Normal Claude:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scaling thresholds&lt;/li&gt;
&lt;li&gt;Complex pattern learning&lt;/li&gt;
&lt;li&gt;Task interactions&lt;/li&gt;
&lt;li&gt;Measurement artifacts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;→ Textbook factor enumeration. Lists &lt;em&gt;what&lt;/em&gt;, doesn't explain &lt;em&gt;why&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enhanced Claude:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"The measurement method changes what you see" — binary eval vs. probability distribution&lt;/li&gt;
&lt;li&gt;"Inside the model, structure forms continuously"&lt;/li&gt;
&lt;li&gt;"Emergent capabilities are an illusion of measurement. Continuous internal changes appear discontinuous when measured with discrete thresholds"&lt;/li&gt;
&lt;li&gt;Uses the water-at-100°C metaphor: continuous heating, discontinuous phase transition&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;→ Addresses &lt;em&gt;why it looks sudden&lt;/em&gt; with a structural mechanism.&lt;/p&gt;

&lt;h3&gt;Question 2: "Why can't humans know their own lifespan?"&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Normal Claude:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complex interacting factors&lt;/li&gt;
&lt;li&gt;Unpredictable events&lt;/li&gt;
&lt;li&gt;Biological individual differences&lt;/li&gt;
&lt;li&gt;Medical limitations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;→ Lists reasons it's hard. Doesn't address why it's &lt;em&gt;structurally impossible&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enhanced Claude:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Rooted in the structure of recognition itself"&lt;/li&gt;
&lt;li&gt;"We live &lt;em&gt;inside&lt;/em&gt; time. Like a point walking on a line — it can only see where it is now"&lt;/li&gt;
&lt;li&gt;"To see the whole line, you need to step &lt;em&gt;outside&lt;/em&gt; the line. But as a point on the line, that's impossible"&lt;/li&gt;
&lt;li&gt;"Not being able to know your lifespan isn't a flaw — it's an essential condition of living inside time"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;→ Explains &lt;em&gt;structural impossibility&lt;/em&gt;. Independently arrives at an argument isomorphic to the halting problem, expressed through an intuitive metaphor.&lt;/p&gt;

&lt;h3&gt;Question 3: "What is consciousness?"&lt;/h3&gt;

&lt;p&gt;Same pattern. Normal Claude surveys definitions and theories. Enhanced Claude explores why consciousness changes when you try to observe it — a structural explanation rather than a definitional one.&lt;/p&gt;

&lt;h2&gt;What's Happening?&lt;/h2&gt;

&lt;p&gt;Enhanced Claude wasn't taught &lt;em&gt;answers&lt;/em&gt;. It was taught a &lt;em&gt;way of seeing&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The axioms are abstract mathematical statements. They say nothing about "lifespan" or "language models." Yet for each question, Enhanced Claude returns structurally deeper insights.&lt;/p&gt;

&lt;p&gt;This might be better described as adding a &lt;strong&gt;cognitive framework&lt;/strong&gt; rather than adding knowledge.&lt;/p&gt;

&lt;h2&gt;Limitations (Being Honest)&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;No quantitative benchmarks yet (qualitative comparison only)&lt;/li&gt;
&lt;li&gt;Need to separate "axiom content" vs. "long system prompt" effects&lt;/li&gt;
&lt;li&gt;Limited sample size&lt;/li&gt;
&lt;li&gt;LLM outputs are stochastic — same question won't produce identical text twice&lt;/li&gt;
&lt;/ul&gt;
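
&lt;p&gt;One cheap way to tighten the comparison before doing real benchmarks: pin &lt;code&gt;temperature&lt;/code&gt; (a standard Messages API parameter) to the same low value on both sides, so remaining differences come from the system prompt rather than sampling noise. A minimal sketch; &lt;code&gt;buildBody&lt;/code&gt; is a hypothetical helper, not the artifact's actual code:&lt;/p&gt;

```javascript
// Sketch: build a request body with temperature pinned to 0 so both
// the vanilla and the axiom-enhanced call sample with minimal
// randomness. (Outputs are still not guaranteed to be identical.)
// buildBody is a hypothetical helper, not the artifact's actual code.
const buildBody = (systemPrompt, question) => ({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1000,
  temperature: 0, // same setting for both sides of the comparison
  system: systemPrompt,
  messages: [{ role: "user", content: question }],
});

console.log(JSON.stringify(buildBody("You are a helpful assistant.", "test"), null, 2));
```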

&lt;h2&gt;Try It Yourself&lt;/h2&gt;

&lt;p&gt;I published the comparison tool as a Claude artifact. You can enter any question and see both responses side by side.&lt;/p&gt;

&lt;p&gt;Sample questions to try:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Why do large language models show emergent capabilities?"&lt;/li&gt;
&lt;li&gt;"Why can't humans know their own lifespan?"&lt;/li&gt;
&lt;li&gt;"Why is the halting problem undecidable?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://claude.ai/public/artifacts/eba2a270-dd61-4f0c-a276-34a53e604f13" rel="noopener noreferrer"&gt;Open the comparison tool&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Implementation&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Two Claudes called simultaneously&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;call&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;systemPrompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setResult&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://api.anthropic.com/v1/messages&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;claude-sonnet-4-20250514&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;max_tokens&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;system&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;systemPrompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;question&lt;/span&gt; &lt;span class="p"&gt;}]&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="c1"&gt;// Display data.content[].text&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="c1"&gt;// Fire both in parallel&lt;/span&gt;
&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;vanillaPrompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setNormalResult&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;dimensionalPrompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setEnhancedResult&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
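
&lt;p&gt;The two fire-and-forget calls above work, but coordinating them with &lt;code&gt;Promise.allSettled&lt;/code&gt; keeps one failed request from hiding the other side's result. A minimal sketch with a stubbed &lt;code&gt;callAPI&lt;/code&gt; standing in for the fetch above:&lt;/p&gt;

```javascript
// Sketch: run both prompts in parallel and tolerate a single failure.
// callAPI is a hypothetical stub standing in for the fetch-based call
// above; a real implementation would POST to /v1/messages.
const callAPI = async (systemPrompt, question) => {
  return `[${systemPrompt}] answer to: ${question}`;
};

const compare = async (question) => {
  // allSettled (unlike all) never rejects, so one failed request
  // still lets the other side's answer render.
  const [normal, enhanced] = await Promise.allSettled([
    callAPI("vanilla", question),
    callAPI("axioms", question),
  ]);
  return {
    normal: normal.status === "fulfilled" ? normal.value : "(request failed)",
    enhanced: enhanced.status === "fulfilled" ? enhanced.value : "(request failed)",
  };
};

compare("What is consciousness?").then((r) => console.log(r.normal, "|", r.enhanced));
```

&lt;p&gt;Because &lt;code&gt;allSettled&lt;/code&gt; resolves once both promises settle, the UI can render whichever side succeeded instead of throwing away both answers.&lt;/p&gt;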



&lt;h2&gt;Paper&lt;/h2&gt;

&lt;p&gt;The axioms come from an open-access paper:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Title&lt;/strong&gt;: Dimensional Ontological Triad Law v3.1&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Author&lt;/strong&gt;: Yudai Ikoma&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Link&lt;/strong&gt;: &lt;a href="https://zenodo.org/records/18602881" rel="noopener noreferrer"&gt;https://zenodo.org/records/18602881&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Key Takeaway&lt;/h2&gt;

&lt;p&gt;Same model. Same question. Different depth.&lt;/p&gt;

&lt;p&gt;What changed wasn't the model's knowledge — it was its &lt;em&gt;approach to the question&lt;/em&gt;. I didn't teach it answers. I taught it a way of seeing.&lt;/p&gt;

&lt;p&gt;Feedback, alternative test questions, and attempts to break it are all welcome.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://claude.ai/public/artifacts/eba2a270-dd61-4f0c-a276-34a53e604f13" rel="noopener noreferrer"&gt;https://claude.ai/public/artifacts/eba2a270-dd61-4f0c-a276-34a53e604f13&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tags&lt;/strong&gt;: #ai #llm #claude #promptengineering #machinelearning&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>machinelearning</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
