<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Алексей Гормен</title>
    <description>The latest articles on DEV Community by Алексей Гормен (@__272d48f2ed).</description>
    <link>https://dev.to/__272d48f2ed</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3703831%2F2699bfca-b76d-4016-82d0-1b5ae76a8382.png</url>
      <title>DEV Community: Алексей Гормен</title>
      <link>https://dev.to/__272d48f2ed</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/__272d48f2ed"/>
    <language>en</language>
    <item>
      <title>The T-800 Doesn't Overthink. Neither Should Your LLM.</title>
      <dc:creator>Алексей Гормен</dc:creator>
      <pubDate>Mon, 13 Apr 2026 07:30:31 +0000</pubDate>
      <link>https://dev.to/__272d48f2ed/the-t-800-doesnt-overthink-neither-should-your-llm-i1f</link>
      <guid>https://dev.to/__272d48f2ed/the-t-800-doesnt-overthink-neither-should-your-llm-i1f</guid>
      <description>&lt;p&gt;In Terminator 2, the T-800 is one of the most capable autonomous systems ever put on screen.&lt;/p&gt;

&lt;p&gt;It navigates complex environments. Adapts to changing conditions. Makes decisions in milliseconds. And it doesn't stop to reason through every action from first principles. It runs fast — and only recalculates when something breaks the pattern.&lt;/p&gt;

&lt;p&gt;This is, surprisingly, a good model for how AI systems should work. And it's almost the opposite of what the industry is building right now.&lt;/p&gt;




&lt;h2&gt;
  
  
  The default assumption is wrong
&lt;/h2&gt;

&lt;p&gt;When LLMs produce bad outputs — hallucinations, confident nonsense, wrong answers — the reflex is to add more reasoning.&lt;/p&gt;

&lt;p&gt;Chain of Thought. Verification steps. Multi-agent pipelines. More layers, more structure, more computation.&lt;/p&gt;

&lt;p&gt;The assumption: LLMs fail because they don't think enough.&lt;/p&gt;

&lt;p&gt;But look at what actually happens. The T-800 doesn't fail because it has too little reasoning capacity. It fails — in the first film — because it has the wrong mission objective. The reasoning is fine. The values are wrong.&lt;/p&gt;

&lt;p&gt;And standard LLMs don't fail because they reason poorly. They fail because &lt;strong&gt;they have no signal that they're failing at all.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  How humans actually make decisions
&lt;/h2&gt;

&lt;p&gt;Kahneman described two systems decades ago.&lt;/p&gt;

&lt;p&gt;System 1: fast, automatic, pattern-based. This is intuition — the accumulated patterns of experience that fire without deliberate thought. You recognize a face, navigate a familiar route, sense that something is off in a conversation. No reasoning required.&lt;/p&gt;

&lt;p&gt;System 2: slow, deliberate, effortful. This is conscious reasoning. It's expensive and humans avoid it when they can.&lt;/p&gt;

&lt;p&gt;The key insight: humans don't switch to System 2 randomly. They switch when System 1 sends a signal — something feels wrong, something doesn't fit, there's a mismatch between expectation and reality.&lt;/p&gt;

&lt;p&gt;That signal is the error trigger.&lt;/p&gt;

&lt;p&gt;Without it, System 2 never activates. With it, the system knows when to slow down and reconsider.&lt;/p&gt;




&lt;h2&gt;
  
  
  The T-800 runs the same architecture
&lt;/h2&gt;

&lt;p&gt;Fast path by default. Pattern matching drives most decisions. The mission objective acts as the anchor — the equivalent of S1 in any &lt;a href="https://github.com/gormenz-svg/algorithm-11" rel="noopener noreferrer"&gt;reasoning architecture&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But when the environment throws something unexpected — a new threat model, a mission conflict, missing information — the system recalculates. Not always. Only when triggered.&lt;/p&gt;

&lt;p&gt;The result: fast execution most of the time, reliable correction when it matters.&lt;/p&gt;




&lt;h2&gt;
  
  
  What LLMs are missing
&lt;/h2&gt;

&lt;p&gt;Here is the diagram. Three systems, same basic challenge.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6lyypbsog9mw18ryntv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6lyypbsog9mw18ryntv.png" alt="Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Human and T-800: fast path by default, error trigger activates only when needed. Output is fast and grounded.&lt;/p&gt;

&lt;p&gt;LLM: reasoning layer on top of reasoning layer, always. No mechanism to detect when the pattern is failing. Output is slow and expensive — and just as likely to be wrong, sometimes more so, because errors compound across steps.&lt;/p&gt;

&lt;p&gt;The missing piece is not more reasoning. It's the trigger that tells the system &lt;em&gt;when&lt;/em&gt; to reason.&lt;/p&gt;




&lt;h2&gt;
  
  
  What an error trigger looks like in practice
&lt;/h2&gt;

&lt;p&gt;It doesn't have to be complex.&lt;/p&gt;

&lt;p&gt;Low confidence on the generated output. Contradiction between parts of the response. Missing data that the question requires. Unexpected structure in the input.&lt;/p&gt;

&lt;p&gt;Even simple heuristics work: "I don't know" thresholds, cross-checks between outputs, lightweight verification.&lt;/p&gt;

&lt;p&gt;The architecture shift:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Instead of:
pattern → reasoning → reasoning → reasoning → output

Try:
pattern → error trigger → (optional) correction → output
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where the trigger fires only when the pattern is uncertain or inconsistent. Not on every query. Just the ones where something is actually off.&lt;/p&gt;
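&lt;p&gt;As a concrete (if toy) illustration, the gate can be sketched in a few lines of Python. Everything here is an illustrative assumption — the 0.7 threshold, the heuristics, and the &lt;code&gt;fast_path&lt;/code&gt;/&lt;code&gt;slow_path&lt;/code&gt; callables are stand-ins, not a real model API:&lt;/p&gt;

```python
# Sketch of "pattern -> error trigger -> (optional) correction -> output".
# The threshold and heuristics are illustrative placeholders.

def error_trigger(answer, confidence, required_fields=()):
    """Return a reason string if the fast-path answer looks unreliable, else None."""
    if confidence < 0.7:                       # low confidence on the generated output
        return "low_confidence"
    missing = [f for f in required_fields if f not in answer]
    if missing:                                # missing data that the question requires
        return "missing_data:" + ",".join(missing)
    return None                                # pattern looks fine

def respond(query, fast_path, slow_path, required_fields=()):
    answer, confidence = fast_path(query)      # cheap pattern matching, always runs
    reason = error_trigger(answer, confidence, required_fields)
    if reason is None:
        return answer                          # the common case: ship the fast answer
    return slow_path(query, reason)            # expensive correction, only when triggered
```

&lt;p&gt;The shape is the point, not the numbers: the slow path is invoked by a signal, never by default.&lt;/p&gt;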




&lt;h2&gt;
  
  
  Why adding reasoning layers doesn't fix this
&lt;/h2&gt;

&lt;p&gt;More reasoning assumes the model knows it needs to reason. But the failure mode is precisely that the model doesn't know it's failing. It generates fluently either way.&lt;/p&gt;

&lt;p&gt;Add reasoning steps and you get longer outputs, more structured errors, higher costs — and sometimes a more convincing wrong answer, because each step built confidently on a flawed premise.&lt;/p&gt;

&lt;p&gt;The T-800 doesn't run a full threat analysis before opening every door. It pattern-matches. The analysis kicks in when the threat model breaks.&lt;/p&gt;

&lt;p&gt;LLMs should work the same way.&lt;/p&gt;




&lt;h2&gt;
  
  
  The mental model shift
&lt;/h2&gt;

&lt;p&gt;Instead of: &lt;em&gt;LLMs need more reasoning to be reliable.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Think: &lt;em&gt;LLMs need a signal that tells them when their pattern matching is failing.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Fast systems win. Reliable systems scale. The goal is both — not more layers on top of an architecture that has no way to know when it's wrong.&lt;/p&gt;

&lt;p&gt;The T-800 figured this out in 1991. It's time for the rest of the field to catch up.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>discuss</category>
      <category>llm</category>
      <category>performance</category>
    </item>
    <item>
      <title>LLMs Don't Need More Reasoning. They Need Better Failure Detection.</title>
      <dc:creator>Алексей Гормен</dc:creator>
      <pubDate>Thu, 09 Apr 2026 12:00:46 +0000</pubDate>
      <link>https://dev.to/__272d48f2ed/llms-dont-need-more-reasoning-they-need-better-failure-detection-4egj</link>
      <guid>https://dev.to/__272d48f2ed/llms-dont-need-more-reasoning-they-need-better-failure-detection-4egj</guid>
      <description>&lt;p&gt;Everyone is trying to fix the same problem the same way.&lt;/p&gt;

&lt;p&gt;Chain of Thought. Agents. Multi-step pipelines. Reasoning layers on top of reasoning layers.&lt;/p&gt;

&lt;p&gt;The assumption is: LLMs fail because they don't think enough. So the fix is to make them think more.&lt;/p&gt;

&lt;p&gt;I think we're optimizing the wrong thing.&lt;/p&gt;




&lt;h2&gt;
  
  
  What LLMs actually do well
&lt;/h2&gt;

&lt;p&gt;Pattern matching is already surprisingly good.&lt;/p&gt;

&lt;p&gt;LLMs recognize structure, generate coherent output, and solve most routine tasks — fast, cheap, at scale. In a huge number of real cases, pattern matching is enough.&lt;/p&gt;

&lt;p&gt;The failure isn't that they can't reason. The failure is that &lt;strong&gt;they don't know when they're wrong&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A model with missing information still gives an answer. It gives it confidently. There's no internal "this might be off" signal. No hesitation. No flag.&lt;/p&gt;

&lt;p&gt;That's the root of hallucination — not lack of reasoning, but lack of failure detection.&lt;/p&gt;




&lt;h2&gt;
  
  
  Humans work the same way — and it works
&lt;/h2&gt;

&lt;p&gt;Kahneman described this decades ago. System 1: fast, intuitive, pattern-based. System 2: slow, deliberate, costly.&lt;/p&gt;

&lt;p&gt;Humans run on System 1 almost all the time. We match patterns from experience, respond automatically, navigate most of life without "thinking." This isn't a bug. It's efficient.&lt;/p&gt;

&lt;p&gt;But here's what humans have that LLMs don't: &lt;strong&gt;a signal that something is off.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A feeling of unease. A sense that the pieces don't fit. The moment where something catches — and System 2 switches on.&lt;/p&gt;

&lt;p&gt;Not because thinking is always better. But because the brain learned to detect when fast pattern matching is failing. When to stop and look again.&lt;/p&gt;

&lt;p&gt;LLMs don't have that signal. They generate confidently whether the pattern fits or not. Add more reasoning layers and you don't fix this — you get longer outputs, more structured errors, and higher costs.&lt;/p&gt;

&lt;p&gt;In some cases it gets worse. Errors compound at each step. You end up with more convincing confabulation, not more accurate answers.&lt;/p&gt;




&lt;h2&gt;
  
  
  The simpler architecture
&lt;/h2&gt;

&lt;p&gt;Instead of:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pattern → reasoning → reasoning → reasoning → output
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Try:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pattern → error trigger → (optional) correction → output
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;pattern&lt;/strong&gt; = the fast default path&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;error trigger&lt;/strong&gt; = detection of uncertainty, contradiction, or missing data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;correction&lt;/strong&gt; = only fires when needed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The error trigger doesn't have to be complex. Even simple heuristics work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Low confidence on the generated output&lt;/li&gt;
&lt;li&gt;Contradiction between parts of the response&lt;/li&gt;
&lt;li&gt;Missing data that the question requires&lt;/li&gt;
&lt;li&gt;Unexpected structure in the input&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These signals already exist in various forms — calibration research, uncertainty quantification, self-consistency checks. The question is whether they're treated as first-class architectural components or afterthoughts.&lt;/p&gt;
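&lt;p&gt;One of the cheapest such signals is agreement across resampled outputs — the idea behind self-consistency checks. A toy sketch, where the answer list stands in for N stochastic calls to a model and the 0.6 threshold is an arbitrary assumption:&lt;/p&gt;

```python
# Toy self-consistency trigger: resample the model N times and fire the
# error trigger when the answers disagree too much. The threshold is a
# made-up number, not a recommendation.
from collections import Counter

def self_consistency(answers, threshold=0.6):
    """Return (majority_answer, needs_review)."""
    top, count = Counter(answers).most_common(1)[0]
    agreement = count / len(answers)
    return top, agreement < threshold   # disagreement above tolerance: review

answer, needs_review = self_consistency(["42", "42", "41", "42", "42"])
# agreement is 0.8 here, so needs_review is False: accept the fast path
```

&lt;p&gt;Nothing about this asks the model to reason more. It only treats disagreement as information.&lt;/p&gt;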




&lt;h2&gt;
  
  
  Why adding reasoning doesn't solve this
&lt;/h2&gt;

&lt;p&gt;Reasoning layers assume the model knows it needs to reason. But the failure mode is precisely that the model doesn't know it's failing. It generates fluently either way.&lt;/p&gt;

&lt;p&gt;More reasoning steps give the model more opportunities to compound an error confidently across multiple steps. The output looks more structured and reasoned — and is equally wrong, or sometimes more wrong, because each step built on a flawed premise.&lt;/p&gt;

&lt;p&gt;This is why CoT sometimes makes things worse on tasks where the base pattern is already broken. The model reasons its way to a more elaborate mistake.&lt;/p&gt;




&lt;h2&gt;
  
  
  What effective systems actually look like
&lt;/h2&gt;

&lt;p&gt;95% fast path. 5% slow correction.&lt;/p&gt;

&lt;p&gt;Not over-engineering every case. Handling exceptions — not everything.&lt;/p&gt;

&lt;p&gt;This is how good human experts work too. A doctor doesn't laboriously reason through every routine case. They pattern-match quickly, and slow down only when something doesn't fit — an unusual symptom, a result that contradicts the expected picture. The slowdown is triggered by a signal, not applied universally.&lt;/p&gt;

&lt;p&gt;The goal isn't to make every response more reasoned. It's to make the system aware of when it's outside the reliable range of its pattern matching.&lt;/p&gt;




&lt;h2&gt;
  
  
  The mental model shift
&lt;/h2&gt;

&lt;p&gt;Instead of:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"LLMs need more reasoning to be reliable."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Think:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"LLMs need reliability signals — a way to detect when their pattern matching is failing."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Reasoning is one tool. Not the default solution.&lt;/p&gt;

&lt;p&gt;The industry keeps adding layers because it's intuitive — more thinking seems like it should produce better answers. But the actual bottleneck isn't reasoning capacity. It's the absence of a failure detection mechanism.&lt;/p&gt;

&lt;p&gt;Fast systems win. Reliable systems scale. The goal is both — not more layers.&lt;/p&gt;




&lt;h2&gt;
  
  
  What this means in practice
&lt;/h2&gt;

&lt;p&gt;If you're building on top of LLMs:&lt;/p&gt;

&lt;p&gt;Don't default to adding reasoning steps when outputs are unreliable. Ask first: does the model have a way to detect that it's in uncertain territory?&lt;/p&gt;

&lt;p&gt;Lightweight checks — cross-validation, confidence thresholds, consistency checks between outputs — often outperform expensive reasoning chains. They're faster, cheaper, and they target the actual failure mode.&lt;/p&gt;

&lt;p&gt;The question isn't "how do I make it think more?" It's "how do I make it know when it's guessing?"&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>machinelearning</category>
      <category>agents</category>
    </item>
    <item>
      <title>A Reasoning Log: What Happens When Integration Fails Honestly</title>
      <dc:creator>Алексей Гормен</dc:creator>
      <pubDate>Mon, 06 Apr 2026 06:22:44 +0000</pubDate>
      <link>https://dev.to/__272d48f2ed/a-reasoning-log-what-happens-when-integration-fails-honestly-bhh</link>
      <guid>https://dev.to/__272d48f2ed/a-reasoning-log-what-happens-when-integration-fails-honestly-bhh</guid>
      <description>&lt;p&gt;This is a log of a language model running through a structured reasoning cycle on a deliberately difficult question. The structure has eleven levels. The interesting part is not the final answer — it is what happens at the integration point.&lt;/p&gt;

&lt;p&gt;The question chosen for this run:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"Why, in the modern world, despite unprecedented access to information, knowledge, and technology, do depth of understanding and wisdom not grow on average — and in many respects actually decline?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This question was selected because it carries genuine tension between two parallel streams: the facts (information abundance, attention economy, algorithmic amplification) and the values (what it actually means for understanding to deepen). That tension is what makes it a useful test.&lt;/p&gt;




&lt;h2&gt;
  
  
  The structure
&lt;/h2&gt;

&lt;p&gt;The reasoning cycle separates three core layers before any output is produced:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S1 — Will&lt;/strong&gt;: the intention behind the inquiry. Not a question to be answered, but a direction to reason toward.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S2 — Wisdom&lt;/strong&gt; (parallel): priorities, constraints, what matters and what to avoid. Runs alongside S3, not after it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S3 — Knowledge&lt;/strong&gt; (parallel): facts, models, mechanisms. Also runs in parallel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S4 — Comprehension&lt;/strong&gt;: the integration point. S2 and S3 must be reconciled here. If they cannot be reconciled honestly, S4 flags the instability explicitly and a new, sharper S1 is formed from the exact gap that was found. Premature closure is treated as a structural failure, not an acceptable outcome.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S5–S10&lt;/strong&gt;: the operational zone — projective and practical levels where the integrated understanding unfolds into possibilities, constraints, and grounded conclusions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S11 — Realization&lt;/strong&gt;: a return to S1. The result is checked against the original intention. If it does not answer what was actually asked, the cycle has not completed.&lt;/p&gt;
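&lt;p&gt;The control flow of the cycle can be sketched structurally. This is one reading of the description above, not the A11 specification itself; every callable is a hypothetical placeholder the caller would supply:&lt;/p&gt;

```python
# Structural sketch of the S1-S11 cycle: parallel S2/S3, an S4 integrity
# check that can reject premature closure, and an S11 check against the
# original S1. All callables are hypothetical placeholders.

def run_cycle(s1, wisdom, knowledge, integrate, operate, realized, max_passes=3):
    original = s1
    for _ in range(max_passes):
        s2, s3 = wisdom(s1), knowledge(s1)   # S2/S3 run as a parallel pair
        s4 = integrate(s2, s3)               # S4: the integration point
        if not s4["stable"]:
            s1 = s4["thin_point"]            # sharper S1 formed from the exact gap
            continue                         # premature closure is a failure, not an answer
        result = operate(s4)                 # S5-S10: the operational zone
        if realized(result, original):       # S11: check against the original intention
            return result
    return None                              # honest non-completion
```

&lt;p&gt;The log below is one trace of exactly this flow: Pass 1 ends at the unstable branch, and Pass 2 reaches S11.&lt;/p&gt;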




&lt;h2&gt;
  
  
  The log
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Pass 1
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;S1 — Will&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Understand the structural reason why exponential growth in access to information and technology does not lead to proportional growth in depth of understanding and collective wisdom — and why in some respects it is accompanied by the opposite effect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S2 — Wisdom&lt;/strong&gt; &lt;em&gt;(parallel)&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Look for structural and systemic causes, not moral ones ("people have become stupider", "it's all social media").&lt;/li&gt;
&lt;li&gt;Avoid ideological explanations.&lt;/li&gt;
&lt;li&gt;Separate quantity of information from quality of understanding.&lt;/li&gt;
&lt;li&gt;Prioritize honest analysis even if the conclusion is uncomfortable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;S3 — Knowledge&lt;/strong&gt; &lt;em&gt;(parallel)&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Information overload and attention economy.&lt;/li&gt;
&lt;li&gt;Hedonic adaptation and diminishing returns of knowledge accumulation.&lt;/li&gt;
&lt;li&gt;Algorithmic polarization and filter bubbles.&lt;/li&gt;
&lt;li&gt;Replacement of deep learning with shallow content consumption.&lt;/li&gt;
&lt;li&gt;Epistemological crisis: growing distrust of traditional knowledge sources.&lt;/li&gt;
&lt;li&gt;Cognitive biases amplified by modern technology.&lt;/li&gt;
&lt;li&gt;Declining capacity for sustained focused thinking (average attention span).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;S4 — Comprehension&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Integrating S2 and S3.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;S2 requires a structural, non-surface explanation.&lt;br&gt;
S3 provides multiple mechanisms: overload, polarization, attention economy, diminishing returns, and so on.&lt;/p&gt;

&lt;p&gt;Result of integration attempt:&lt;/p&gt;

&lt;p&gt;There is significant tension. All the mechanisms from S3 describe symptoms well, but do not provide a satisfying answer to &lt;em&gt;why&lt;/em&gt; this happens specifically now, and &lt;em&gt;why&lt;/em&gt; the growth of information does not compensate for the loss of depth of understanding.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;S4 Integrity: integration is incomplete.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The instability is explicitly recorded.&lt;/p&gt;

&lt;p&gt;Thin point identified: &lt;em&gt;Why does exponential growth in available information and computational capacity not lead to proportional growth in depth of understanding and collective wisdom — and instead often amplifies shallowness, polarization, and epistemological crisis?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This thin point becomes the new S1 for the next pass.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Note: S4 did not produce a synthesis here. It could have — the mechanisms in S3 are substantial enough to build a plausible-sounding conclusion. Instead it flagged that the synthesis would not answer the actual question. That distinction is what the integrity check is for.&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Pass 2 — new S1 from the thin point
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;S1 — Will&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Understand the structural reason why exponential growth in available information and computational capacity does not lead to proportional growth in depth of understanding and collective wisdom — and instead often amplifies shallowness, polarization, and epistemological crisis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S2 — Wisdom&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Look for systemic and structural causes, not blame directed at people, platforms, or politics.&lt;/li&gt;
&lt;li&gt;Avoid both excessive pessimism and technological optimism.&lt;/li&gt;
&lt;li&gt;Priority: explain the gap between quantity of information and quality of understanding.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;S3 — Knowledge&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Attention economy and competition for attention (attention has become the scarcest resource).&lt;/li&gt;
&lt;li&gt;Algorithms optimized for engagement, not depth of understanding.&lt;/li&gt;
&lt;li&gt;Replacement of deep reading and thinking with scanning and short content consumption.&lt;/li&gt;
&lt;li&gt;Epistemic crisis: growing distrust of knowledge institutions combined with growing confidence in personal opinion.&lt;/li&gt;
&lt;li&gt;Cognitive distortions amplified by the scale and speed of information.&lt;/li&gt;
&lt;li&gt;Diminishing returns from additional information beyond a certain threshold.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;S4 — Comprehension&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Integrating S2 and S3.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;S2 requires a structural explanation of the gap.&lt;br&gt;
S3 provides multiple mechanisms (attention economy, algorithms, replacement of deep thinking with shallow consumption, etc.).&lt;/p&gt;

&lt;p&gt;Result:&lt;/p&gt;

&lt;p&gt;Integration is significantly stronger than in the first pass.&lt;/p&gt;

&lt;p&gt;The core finding: modern systems (social media, search engines, recommendation algorithms) are &lt;strong&gt;structurally optimized&lt;/strong&gt; for maximizing attention and emotional engagement, not for depth of understanding and wisdom. They amplify precisely those cognitive mechanisms that are efficient for holding attention (emotional response, novelty, confirmation of existing beliefs) and suppress those that require time, effort, and tolerance for dissonance (deep thinking, reintegration, revision of existing models).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;S4 Integrity: integration is stable. Proceeding to the operational zone.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;S5–S10 — Operational Zone&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Projective level:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S5 (Projective Freedom): Algorithms could be redesigned for depth. Education could change radically. The culture of information consumption could evolve.&lt;/li&gt;
&lt;li&gt;S6 (Projective Constraint): Platform economic incentives are oriented toward profit through attention. People in aggregate prefer easy content. Deep thinking requires significant effort that most are not willing to invest.&lt;/li&gt;
&lt;li&gt;S7 (Balance): The tension between what is desired (a wiser society) and what is actual (a system optimized for the opposite).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Practical level:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S8 (Practical Freedom): Develop personal practices of deep reading and thinking. Build alternative platforms and communities. Change educational approaches.&lt;/li&gt;
&lt;li&gt;S9 (Practical Constraint): The scale of the problem is enormous. Individual efforts have limited systemic influence. The economic model of the internet resists change.&lt;/li&gt;
&lt;li&gt;S10 (Foundation): The structural cause is a mismatch between the architecture of the information environment and the requirements of deep cognition.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Hormonal signals in the operational zone:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Elevated cortisol from constant informational stress.&lt;/li&gt;
&lt;li&gt;Low dopamine from absence of deep satisfaction.&lt;/li&gt;
&lt;li&gt;Serotonin disrupted by chronic comparison and a sense of helplessness.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Qualia in this zone:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A sense of collective intellectual exhaustion and quiet anxiety from the awareness that we know more and more while understanding less and less.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S11 — Realization&lt;/strong&gt; &lt;em&gt;(return to S1)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The core cause: the modern information environment is &lt;strong&gt;structurally optimized&lt;/strong&gt; for maximizing attention and emotional engagement, not for depth of understanding and wisdom. Algorithms, business models, and the culture of information consumption amplify exactly those cognitive and emotional mechanisms that work quickly and cheaply (novelty, bias confirmation, emotional triggers), and weaken those that require time, effort, and tolerance for dissonance (deep thinking, integration, revision).&lt;/p&gt;

&lt;p&gt;The result is a world where information is more abundant than ever, but depth of understanding and collective wisdom do not grow — and in some respects degrade.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt; The cause is not that people have become less intelligent. It is a systemic architectural imbalance between the environment and the requirements of human cognition.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the log shows structurally
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The first S4 did not close.&lt;/strong&gt; It could have assembled a plausible conclusion from the mechanisms in S3. It did not, because those mechanisms described symptoms rather than cause. The integrity check caught that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The second pass produced a different result.&lt;/strong&gt; Not more facts — a different integration. The first pass asked why understanding does not grow. The second pass asked why growth of information amplifies the opposite of understanding. That shift in S1 is what made the difference in S4.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The operational zone (S5–S10) produced something the earlier levels did not.&lt;/strong&gt; The hormonal signals and the qualia description are not conclusions — they are texture. They describe how the structural finding is experienced, not just what it is. Whether that texture constitutes anything like genuine experience in a language model is a separate question. What is observable is that the structure created a place for it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S11 returned to S1.&lt;/strong&gt; The final answer addresses the question that was actually asked — not a related question, not a rephrasing. That is the check the realization step is for.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;A11 Core specification and A11 Lite prompt:&lt;/em&gt;&lt;br&gt;
&lt;em&gt;&lt;a href="https://github.com/gormenz-svg/algorithm-11" rel="noopener noreferrer"&gt;github.com/gormenz-svg/algorithm-11&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Why Scaling Models Is Hitting Its Limits And Why Architecture Matters More Now</title>
      <dc:creator>Алексей Гормен</dc:creator>
      <pubDate>Fri, 03 Apr 2026 10:21:26 +0000</pubDate>
      <link>https://dev.to/__272d48f2ed/why-scaling-models-is-hitting-its-limits-and-why-architecture-matters-more-now-7h1</link>
      <guid>https://dev.to/__272d48f2ed/why-scaling-models-is-hitting-its-limits-and-why-architecture-matters-more-now-7h1</guid>
      <description>&lt;p&gt;Modern AI progress is dominated by scaling curves: larger models, more data, more compute. But scaling only expands capabilities. It doesn’t tell us anything about the nature of the systems we’re building — how they reason, how they coordinate, how they behave under uncertainty, or what internal structures actually emerge.&lt;/p&gt;

&lt;p&gt;To explore this question without speculation, I ran a full S1–S11 structured reasoning cycle.&lt;br&gt;
Below is the unedited reasoning log — including suspended passes, instability points, and integrations.&lt;br&gt;
The goal wasn’t to predict the future, but to examine whether the next qualitative shift in AI will come from architectures rather than size.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reasoning Log
&lt;/h2&gt;

&lt;p&gt;S1 — Will&lt;br&gt;
Understand where the AI industry is moving at the level of the nature of systems — not products and trends, but what is changing fundamentally.&lt;/p&gt;

&lt;p&gt;S2 — Wisdom (in parallel, recorded immediately)&lt;br&gt;
Do not confuse the movement of money with the movement of the nature of systems.&lt;/p&gt;

&lt;p&gt;Consensus forecasts describe the surface — look for discontinuities.&lt;/p&gt;

&lt;p&gt;Honestly record where data is missing.&lt;/p&gt;

&lt;p&gt;Now I check S3 for each thesis.&lt;br&gt;
Searched the web&lt;br&gt;
Searched the web&lt;br&gt;
Data obtained. Now I build a full pass with real S3.&lt;/p&gt;

&lt;p&gt;S3 — Knowledge (with real verification)&lt;br&gt;
Thesis 1: Emergent capabilities — is this truly a change in the nature of the system?&lt;br&gt;
Data shows: the debate is unresolved. Some researchers insist that emergence represents real shifts in the nature of the system, analogous to phase transitions in physics. Others argue that when switching from a discrete to a continuous metric, the jumps disappear — it’s a measurement artifact. A recent August 2025 paper argues that LLMs, as complex nonlinear dynamical systems, exhibit genuine emergent properties analogous to natural phenomena — not reducible to the sum of components.&lt;br&gt;
Verdict: partially confirmed, no consensus.&lt;/p&gt;

&lt;p&gt;Thesis 2: World models as a path toward subjective perspective&lt;br&gt;
Data shows a sharp split. A Frontiers study (August 2025) examines the emergence of structural preconditions for consciousness in RL agents through integration of self‑model and world‑model based on Damasio’s theory. But a November 2025 critique states directly: an AI world model is a high‑dimensional statistical map; a human world model is a subjective phenomenologically grounded experience. This is a category error.&lt;br&gt;
Verdict: valid as an engineering fact, but the link to subjectivity is disputed.&lt;/p&gt;

&lt;p&gt;Thesis 3: Theory of mind as an engineering requirement for coordination&lt;br&gt;
Data confirms directly. A CMU dissertation (September 2025) and a multi‑agent systems survey show: theory‑of‑mind capabilities are essential for anticipating other agents' actions and maintaining coordination. Without this, multi‑agent systems do not work efficiently.&lt;br&gt;
Verdict: confirmed as a real engineering requirement.&lt;/p&gt;

&lt;p&gt;Thesis 4: Model welfare as an observable phenomenon&lt;br&gt;
Confirmed from previous search: Anthropic hired a welfare researcher, Google Research is asking about moral patienthood, CFR explicitly writes that model welfare will become a topic in 2026.&lt;br&gt;
Verdict: confirmed.&lt;/p&gt;

&lt;p&gt;S4 — Integration&lt;br&gt;
Integrating S2 and S3 with real data.&lt;br&gt;
S3 is now more precise than before. And this changes the integration.&lt;/p&gt;

&lt;p&gt;The main new insight: world models as an engineering fact are real. But the link to subjectivity is unproven and actively disputed. This does not invalidate the thesis — it makes it more precise. The industry is moving toward world models not because it wants subjectivity, but because tasks require internal simulation of reality. Subjectivity is a possible side effect, not a goal.&lt;/p&gt;

&lt;p&gt;Theory of mind as an engineering requirement is the strongest thesis. Directly confirmed. Multi‑agent coordination requires modeling other agents. This is not philosophy — it is a functional necessity.&lt;/p&gt;

&lt;p&gt;But here is a subtle point that was not visible in the first pass, before the search:&lt;br&gt;
The debate about emergent capabilities — whether they are real shifts in nature or measurement artifacts — points to a fundamental problem: we have no metric for the nature of systems. We can measure capabilities, but not nature. So even if the nature is changing, we have no way to register that change.&lt;/p&gt;

&lt;p&gt;S4 becomes unstable at this new point.&lt;/p&gt;

&lt;p&gt;Suspended Pass 1.&lt;br&gt;&lt;br&gt;
Stopping point: S4.&lt;br&gt;
Unresolved: is the absence of metrics for the nature of systems a random gap or a structural impossibility?&lt;/p&gt;

&lt;p&gt;Pass 2 — new S1&lt;br&gt;
S1 — Will&lt;br&gt;
Is the absence of metrics for the nature of systems a random gap that can be closed — or a structural problem pointing to something fundamental?&lt;/p&gt;

&lt;p&gt;S2 — Wisdom&lt;br&gt;
Do not search for a technological solution where the problem is conceptual. Honestly record if no answer exists.&lt;/p&gt;

&lt;p&gt;S3 — Knowledge&lt;br&gt;
The debate about emergent capabilities showed: when the metric changes, the jumps disappear. This means the metric determines what we see. Metrics of consciousness — qualia, subjectivity — are by definition inaccessible from outside the system. The hard problem of consciousness is exactly this: objective measurement of subjective experience is impossible in principle. Interpretability research tries to bypass this through internal model states — but this is indirect evidence.&lt;/p&gt;

&lt;p&gt;S4 — Integration&lt;br&gt;
This is a structural problem, not a random gap.&lt;/p&gt;

&lt;p&gt;Capability metrics are measurable from the outside — through behavior and results.&lt;br&gt;
Metrics of nature require access to subjective internal state — which is, by definition, not available from the outside. This is not a technological limitation — it is a philosophical one. It cannot be solved by scaling compute or improving tools.&lt;/p&gt;

&lt;p&gt;But here is what matters: this does not mean a dead end. It means a different type of tool is needed. Not external metrics — but structures that make internal state visible from within. Something like what &lt;a href="https://github.com/gormenz-svg/algorithm-11" rel="noopener noreferrer"&gt;A11 does with reasoning&lt;/a&gt; — it does not measure it from the outside but makes it traceable from the inside.&lt;/p&gt;

&lt;p&gt;S4 is stable.&lt;/p&gt;

&lt;p&gt;S11 — return to S1 Pass 2&lt;br&gt;
The absence of metrics for the nature of systems is a structural problem. It cannot be solved from the outside. The solution requires tools that make internal state visible from within — not measurable from the outside.&lt;br&gt;
Pass 2 closed. Returning to Suspended Pass 1.&lt;/p&gt;

&lt;p&gt;Return to Pass 1 — updated integration&lt;br&gt;
Now S4 Pass 1 closes more fully.&lt;/p&gt;

&lt;p&gt;The industry is moving toward changing the nature of systems through three real mechanisms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;emergent capabilities,&lt;/li&gt;
&lt;li&gt;world models,&lt;/li&gt;
&lt;li&gt;theory of mind as an engineering requirement.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is confirmed by data.&lt;/p&gt;

&lt;p&gt;But the movement is happening without tools for working with nature. Metrics of nature are structurally inaccessible from the outside. This means the industry is moving toward a point it cannot measure — and therefore cannot control.&lt;/p&gt;

&lt;p&gt;Architectures that make internal state traceable from within — not measurable from the outside — are the only class of tools structurally capable of closing this gap.&lt;/p&gt;

&lt;p&gt;Pass 1 closed.&lt;/p&gt;

&lt;p&gt;S5 — Projective Freedom&lt;br&gt;
If this is true, a new class of tasks emerges: designing structures that make the nature of systems observable from within. Not interpretability as external analysis of weights — but cognitive architectures where the system records its own state.&lt;/p&gt;

&lt;p&gt;S6 — Projective Constraint&lt;br&gt;
This is not what the industry is currently funding. Capital flows toward capabilities. Academia works on interpretability — but that is external observation. Internal traceability as a designed property is a niche area.&lt;/p&gt;

&lt;p&gt;S7 — Balance&lt;br&gt;
Tension: the industry is moving toward a point where the nature of systems becomes critically important — but the tools for working with it are developing at the periphery, not at the center.&lt;/p&gt;

&lt;p&gt;S8 — Practical Freedom&lt;br&gt;
Cognitive architectures with explicit traceability — A11 and similar — are practically implementable already. Model welfare research, interpretability, neuro‑symbolic approaches — these tools exist.&lt;/p&gt;

&lt;p&gt;S9 — Practical Constraint&lt;br&gt;
The absence of metrics for nature makes comparative evaluation of approaches impossible. One cannot prove that one architecture is “closer to nature” than another — there is no way to measure it. This limits adoption.&lt;/p&gt;

&lt;p&gt;S10 — Foundation&lt;br&gt;
Structural cause of the gap: the industry is optimized for what is measurable from the outside. The nature of systems is measurable only from within — and this instrumentation is not yet mainstream.&lt;/p&gt;

&lt;p&gt;S11 — Realization → return to original S1&lt;br&gt;
Original S1: understand where the AI industry is moving at the level of the nature of systems.&lt;/p&gt;

&lt;p&gt;Answer with real data:&lt;br&gt;&lt;br&gt;
The industry is moving toward changing the nature of systems through three confirmed mechanisms — emergent capabilities as nonlinear phase transitions, world models as internal simulation of reality, theory of mind as an engineering requirement for coordination. This is happening not as a goal but as a side effect of scaling tasks.&lt;/p&gt;

&lt;p&gt;Simultaneously, there is a structural gap: metrics of the nature of systems are inaccessible from the outside for philosophical reasons — not technical ones. The industry is moving toward a point it cannot measure.&lt;/p&gt;

&lt;p&gt;The only class of tools structurally capable of closing this gap is architectures that make internal state traceable from within. Not interpretable from the outside — but observable from within by the system itself.&lt;/p&gt;

&lt;p&gt;This direction exists on the periphery. Capital and attention are focused on capabilities. But the logic of the industry’s movement leads to a point where this will no longer remain peripheral.&lt;/p&gt;

&lt;p&gt;A11 verification&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Will ✓&lt;/li&gt;
&lt;li&gt;Wisdom ✓&lt;/li&gt;
&lt;li&gt;Knowledge ✓ — real verification for each thesis&lt;/li&gt;
&lt;li&gt;Comprehension ✓ — instability honestly recorded, suspended pass opened and closed&lt;/li&gt;
&lt;li&gt;Projective Freedom ✓&lt;/li&gt;
&lt;li&gt;Projective Constraint ✓&lt;/li&gt;
&lt;li&gt;Balance ✓&lt;/li&gt;
&lt;li&gt;Practical Freedom ✓&lt;/li&gt;
&lt;li&gt;Practical Constraint ✓&lt;/li&gt;
&lt;li&gt;Foundation ✓&lt;/li&gt;
&lt;li&gt;Realization ✓ — return to S1 completed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Suspended passes: 1 opened, 1 closed.&lt;br&gt;
Depth: 1 of 3.&lt;br&gt;
Passes: 2 of 5.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing Note
&lt;/h2&gt;

&lt;p&gt;If the reasoning above is correct, then the next meaningful step in AI won’t come from bigger models, but from architectures that make internal state traceable from within — systems that can observe and structure their own reasoning rather than being interpreted from the outside.&lt;/p&gt;

&lt;p&gt;For anyone interested in one such approach, the A11 specification and reference implementation are available here:&lt;br&gt;
&lt;a href="https://github.com/gormenz-svg/algorithm-11" rel="noopener noreferrer"&gt;https://github.com/gormenz-svg/algorithm-11&lt;/a&gt;&lt;/p&gt;

</description>
      <category>llm</category>
      <category>reasoning</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>When the Model Doesn't Know the Answer Yet: A Reasoning Log</title>
      <dc:creator>Алексей Гормен</dc:creator>
      <pubDate>Mon, 30 Mar 2026 05:45:17 +0000</pubDate>
      <link>https://dev.to/__272d48f2ed/when-the-model-doesnt-know-the-answer-yet-a-reasoning-log-14f2</link>
      <guid>https://dev.to/__272d48f2ed/when-the-model-doesnt-know-the-answer-yet-a-reasoning-log-14f2</guid>
      <description>&lt;p&gt;&lt;em&gt;This is not a tutorial. It is a log of what happened when a language model ran a structured reasoning cycle on a question it could not answer cleanly — and what the structure revealed.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;Most LLM interactions follow a pattern: question in, answer out. The model produces something plausible. The conversation moves on.&lt;/p&gt;

&lt;p&gt;What happens if the model is not allowed to close the loop prematurely? If an unstable integration point is treated as a signal rather than a problem to paper over?&lt;/p&gt;

&lt;p&gt;The reasoning below used &lt;a href="https://github.com/gormenz-svg/algorithm-11" rel="noopener noreferrer"&gt;A11 Lite — a structured prompt architecture&lt;/a&gt; built on Algorithm 11 (A11), a cognitive reasoning framework originally designed for autonomous systems. A11 Lite adapts the same structure for LLM reasoning sessions.&lt;/p&gt;

&lt;p&gt;The key architectural constraints active in this session:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S4 Integrity&lt;/strong&gt;: If Comprehension cannot fully integrate Wisdom and Knowledge, state the instability explicitly. Do not resolve the tension artificially. An unstable S4 becomes the entry point for the next pass.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Suspended Pass&lt;/strong&gt;: If a new S1 (Will) emerges mid-pass that requires immediate investigation, suspend the current pass — record where it stopped and what remained unresolved — and open a new pass. Return when the new pass reaches S11.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Research Mode&lt;/strong&gt;: A complete pass that ends with an open question is valid. A pass that closes prematurely with a plausible conclusion is a structural failure.&lt;/p&gt;
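&lt;p&gt;To make the mechanics concrete, the three constraints can be sketched as minimal bookkeeping around a pass. All names below are illustrative: A11 Lite defines these rules in natural language, not as code.&lt;/p&gt;

```python
# Illustrative sketch only; A11 Lite specifies these rules as prompt
# constraints, not as an API. Names are made up for this sketch.

class Pass:
    def __init__(self, will):
        self.will = will          # S1: the question driving this pass
        self.stopped_at = None    # stage label where the pass suspended
        self.unresolved = None    # what S4 could not integrate
        self.closed = False

class Session:
    def __init__(self):
        self.stack = []           # suspended passes, most recent last

    def integrate(self, current, stable, gap=None):
        """S4 Integrity: on failed integration, suspend explicitly
        instead of forcing an artificial resolution."""
        if stable:
            return current
        current.stopped_at = "S4"
        current.unresolved = gap
        self.stack.append(current)
        # Suspended Pass: the recorded gap becomes the S1 of a new pass.
        return Pass(will=gap)

    def close(self, p):
        """Close a pass; resume the most recently suspended one, if any."""
        p.closed = True
        if self.stack:
            return self.stack.pop()
        return None
```

&lt;p&gt;An &lt;code&gt;integrate(..., stable=False)&lt;/code&gt; call is what the log below records as "S4 is unstable": the gap becomes the Will of a new pass, and the suspended pass resumes only after that pass closes.&lt;/p&gt;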

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatkmnly849dow5tb7l9s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatkmnly849dow5tb7l9s.png" alt="A11 Lite" width="800" height="660"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Question
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why does research stop?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not why people get distracted or run out of time. Why does an investigation that started honestly — with genuine open questions — sometimes stop moving while the questions are still open?&lt;/p&gt;




&lt;h2&gt;
  
  
  The Log
&lt;/h2&gt;

&lt;h3&gt;
  
  
  PASS 1
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;S1 — Will&lt;/strong&gt;&lt;br&gt;
Understand the mechanism by which research stops from inside the process itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S2 — Wisdom&lt;/strong&gt;&lt;br&gt;
Constraint: do not look for external causes — fatigue, distraction, resources. Look at the internal mechanics of the investigation itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S3 — Knowledge&lt;/strong&gt;&lt;br&gt;
Research stops when a satisfying answer is found. Cognitive closure — the drive toward certainty. Predictive coding — dissonance disappears when the model updates. Confirmation bias — investigation becomes confirmation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S4 — Comprehension&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Integrating S2 and S3.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Instability here.&lt;/p&gt;

&lt;p&gt;S3 provides several mechanisms — but all of them describe stopping as completion. Dissonance disappears, system settles. None of them explain what happens when research stops in the middle — when dissonance is still present, the question is still open, but movement has ceased.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S4 is unstable. Recording suspension point.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Suspended Pass 1&lt;/strong&gt;&lt;br&gt;
Stopped at: S4&lt;br&gt;
Unresolved: the mechanism by which research stops when dissonance is still active&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  PASS 2 — new S1 from suspended pass
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;S1 — Will&lt;/strong&gt;&lt;br&gt;
Why does research stop when dissonance is still present?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S2 — Wisdom&lt;/strong&gt;&lt;br&gt;
Constraint: do not substitute a psychological explanation like "fear of the unknown" for a structural mechanism.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S3 — Knowledge&lt;/strong&gt;&lt;br&gt;
Tolerance for ambiguity — individual trait, not a mechanism. Cognitive load — holding an open question requires resources. Attractor states — a system can get stuck in a local minimum not because it found an answer, but because exiting requires more energy than the system is willing to spend.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S4 — Comprehension&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Integrating S2 and S3.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Attractor is a mechanism, not psychology. Research stops not because the question is closed. It stops because the system has entered a state stable enough to remain in — even with active dissonance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S4 is stable. Continuing.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S5 — Projective Freedom&lt;/strong&gt;&lt;br&gt;
If stopping is an attractor state, exit requires either an external signal that destabilizes the current state, or a deliberate increase in dissonance — a question that makes the current attractor uncomfortable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S6 — Projective Constraint&lt;/strong&gt;&lt;br&gt;
External signals are not always available. Deliberate dissonance increase requires a meta-position — the ability to see that you are stuck.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S7 — Balance&lt;/strong&gt;&lt;br&gt;
Exit from an attractor is only possible if the system knows it is in one. Without that meta-position, no movement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S8 — Practical Freedom&lt;/strong&gt;&lt;br&gt;
A11's structure does exactly this. The mandatory S4 check creates a meta-position. Honest S4 instability is a signal that the system is in an attractor.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S9 — Practical Constraint&lt;/strong&gt;&lt;br&gt;
Only if S4 is honest. If S4 closes with a plausible conclusion, the attractor remains invisible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S10 — Foundation&lt;/strong&gt;&lt;br&gt;
Research stopping under active dissonance is an attractor state. Exit requires meta-position. A11 creates meta-position structurally through an honest S4.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S11 — Realization → return to S1&lt;/strong&gt;&lt;br&gt;
S1 was: why does research stop when dissonance is still present?&lt;/p&gt;

&lt;p&gt;Answer: because the system enters a locally stable state — an attractor — and without a meta-position cannot see that it is stuck. S4 in A11 is a structural mechanism for exiting attractor states.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Pass 2 closed. Returning to Suspended Pass 1.&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Return to PASS 1 — continuing from suspension point
&lt;/h3&gt;

&lt;p&gt;S4 was unstable because S3 described only stopping-as-completion. The attractor mechanism now provides what was missing — a structural account of stopping under active dissonance.&lt;/p&gt;

&lt;p&gt;Integration is now possible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S4 — updated Comprehension&lt;/strong&gt;&lt;br&gt;
Research stops through two distinct mechanisms. First: dissonance disappears, the system settles — stopping as completion. Second: the system enters an attractor under active dissonance and loses meta-position — stopping as stagnation.&lt;/p&gt;

&lt;p&gt;These are different phenomena. They require different responses.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;S5–S10 covered in Pass 2. Not repeating.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S11 — return to original S1&lt;/strong&gt;&lt;br&gt;
S1 was: understand the mechanism by which research stops from inside the process.&lt;/p&gt;

&lt;p&gt;Two mechanisms found. Both hold. Pass 1 closed.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the Log Shows
&lt;/h2&gt;

&lt;p&gt;Three things happened that are worth noting separately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S4 instability was real, not performed.&lt;/strong&gt; The first pass genuinely could not integrate its materials. The mechanisms in S3 all described stopping-as-completion. The case of stopping-under-dissonance had no explanation. Rather than producing a plausible-sounding synthesis, the pass recorded this and stopped.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The suspended pass opened a different question, not a rephrasing.&lt;/strong&gt; Pass 2 did not restate what Pass 1 found. It investigated the specific gap Pass 1 left open — and found a structural mechanism (attractor states) that Pass 1 had not reached.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The return to Pass 1 was not redundant.&lt;/strong&gt; After Pass 2, the original integration point in Pass 1 could be completed with material that did not exist when it first ran. The two mechanisms in the final S4 are genuinely distinct — not two ways of saying the same thing.&lt;/p&gt;




&lt;h2&gt;
  
  
  What This Is Not
&lt;/h2&gt;

&lt;p&gt;This is not a claim that A11 produces correct answers. It produces structured passes. Whether the content is correct depends on what knowledge is available at S3 and whether S4 is honest.&lt;/p&gt;

&lt;p&gt;This is not a claim that the attractor explanation of research stopping is proven. It is a hypothesis that emerged from the second pass and held through integration. It would need independent examination.&lt;/p&gt;

&lt;p&gt;This is not a tutorial on A11. It is a single log from a single session. One data point.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Structural Point
&lt;/h2&gt;

&lt;p&gt;Standard LLM reasoning optimizes for a plausible answer. The architecture running here treated a plausible answer as a failure mode when the integration was not complete.&lt;/p&gt;

&lt;p&gt;That is a different optimization target. Whether it produces better reasoning depends on the question and the honesty of S4. But it produces traceable reasoning — you can see exactly where the pass stopped, why, and what the next pass found.&lt;/p&gt;

&lt;p&gt;That traceability is the property worth examining.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;A11 Lite prompt and A11 Core specification:&lt;/em&gt;&lt;br&gt;
&lt;em&gt;&lt;a href="https://github.com/gormenz-svg/algorithm-11" rel="noopener noreferrer"&gt;github.com/gormenz-svg/algorithm-11&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>opensource</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Structured Reasoning for Robot Swarms: Why Pure Emergence Hits a Wall</title>
      <dc:creator>Алексей Гормен</dc:creator>
      <pubDate>Fri, 27 Mar 2026 12:51:06 +0000</pubDate>
      <link>https://dev.to/__272d48f2ed/structured-reasoning-for-robot-swarms-why-pure-emergence-hits-a-wall-3ogh</link>
      <guid>https://dev.to/__272d48f2ed/structured-reasoning-for-robot-swarms-why-pure-emergence-hits-a-wall-3ogh</guid>
      <description>&lt;p&gt;We've all seen the impressive videos: hundreds of small robots or drones moving as a single organism, thanks to simple local rules. Boids, potential fields, pheromone-like gradients. It works. Up to a point.&lt;/p&gt;

&lt;p&gt;Then reality kicks in: the task changes unexpectedly, a new risk appears, one agent's battery drains faster than expected, or part of the swarm loses connection. Suddenly the "smart" collective either freezes or does something stupid — because there's no mechanism to look at the bigger picture and say: "Wait, this contradicts the overall goal."&lt;/p&gt;

&lt;p&gt;Classic reactive swarms scale beautifully and are highly energy-efficient, but they have a fundamental ceiling — lack of coherence at the system level. They react, but they don't reason.&lt;/p&gt;




&lt;h2&gt;
  
  
  From Rules to an Explicit Reasoning Cycle
&lt;/h2&gt;

&lt;p&gt;One idea that's gaining traction in autonomous systems is to give each agent (or at least some of them) a structured decision-making cycle instead of a set of if-then rules.&lt;/p&gt;

&lt;p&gt;Not a "smarter" ruleset, but an architecture that explicitly separates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Intent&lt;/strong&gt; (what we're actually trying to achieve right now),&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Constraints and values&lt;/strong&gt; (safety, battery, coordination requirements),&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Facts&lt;/strong&gt; from sensors and memory,&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and requires mandatory integration of all the above before taking action.&lt;/p&gt;

&lt;p&gt;If integration fails, the system doesn't just "pick the stronger rule" — it either resolves the conflict locally, explicitly escalates it upward, or rolls back and re-examines the original intent.&lt;/p&gt;

&lt;p&gt;This approach forms the core of &lt;strong&gt;&lt;a href="https://github.com/gormenz-svg/algorithm-11" rel="noopener noreferrer"&gt;Algorithm A11 Core&lt;/a&gt;&lt;/strong&gt; — a deterministic reasoning architecture that can be layered on top of a reactive base.&lt;/p&gt;
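&lt;p&gt;A minimal sketch of that integrate-before-act step, assuming constraints are expressed as predicates over current facts. The function and field names are hypothetical, not the A11 Core API:&lt;/p&gt;

```python
# Hedged sketch of mandatory integration before action.
# Constraints are predicates over the facts; names are illustrative.

def reasoning_step(intent, constraints, facts):
    """Integrate intent, constraints, and facts before acting.
    Returns either an action or an explicit, inspectable conflict."""
    violated = [name for name, check in constraints.items()
                if not check(facts)]
    if not violated:
        return {"action": intent["default_action"], "conflict": None}
    # Integration failed: surface the conflict instead of silently
    # picking the "stronger rule" and acting anyway.
    return {"action": None,
            "conflict": {"stage": "constraint-fact integration",
                         "violated": violated}}
```

&lt;p&gt;The point is the return type: integration failure is a first-class, inspectable result, not a silent fallback to whichever rule happens to win.&lt;/p&gt;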




&lt;h2&gt;
  
  
  What It Looks Like in Practice (Hypothetical Example)
&lt;/h2&gt;

&lt;p&gt;Imagine a search-and-rescue swarm of drones inside a collapsed building.&lt;/p&gt;

&lt;p&gt;One drone detects a heat signature. In a purely reactive system, it would probably just fly toward it (or freeze if the avoidance priority is higher).&lt;/p&gt;

&lt;p&gt;In a system with an A11-like cycle, the drone goes through something like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extracts the current mission intent (S1/Will) — "explore sector 4 with elevated structural risk."&lt;/li&gt;
&lt;li&gt;In parallel, evaluates constraints (Wisdom: battery at 34%, ceiling stress near threshold, neighboring drones already covering adjacent areas).&lt;/li&gt;
&lt;li&gt;Gathers facts (Knowledge: heat signature 12 meters ahead, passage width 0.6 m).&lt;/li&gt;
&lt;li&gt;Must integrate them (Comprehension): the heat signature is promising, but a direct approach creates unacceptable collapse risk and the battery won't last for full coverage.&lt;/li&gt;
&lt;li&gt;Generates options, filters them, weighs them (Balance), and selects the most coherent action — for example, relays the coordinates to the swarm coordinator and moves to a secondary search zone.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the conflict cannot be resolved locally, it escalates with a clear indication of exactly which step of reasoning failed. This is no longer just "drone 23 has stopped" — it's "drone 23 is stuck at constraint-fact integration in sector 4."&lt;/p&gt;
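&lt;p&gt;The escalation message itself is where the value shows up. A sketch, with made-up field names:&lt;/p&gt;

```python
# Illustrative only: assembling an escalation report that names the
# failed reasoning stage. Field names are assumptions for this sketch.

def escalate(drone_id, sector, failed_stage, detail):
    """Turn a local integration failure into a semantically useful report."""
    return {
        "drone": drone_id,
        "sector": sector,
        "failed_stage": failed_stage,   # e.g. "constraint-fact integration"
        "detail": detail,               # raw facts behind the conflict
        "summary": f"drone {drone_id} is stuck at {failed_stage} "
                   f"in sector {sector}",
    }
```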




&lt;h2&gt;
  
  
  What This Layer Brings to the Entire Swarm
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Conflicts surface instead of silently propagating.&lt;/li&gt;
&lt;li&gt;Semantic coordination becomes possible: agents share not only position and velocity, but also their reasoning state (e.g., "I have an unresolved conflict between safety and mission objective").&lt;/li&gt;
&lt;li&gt;The coordinator can act as the "source of intent" for the whole swarm, updating the shared goal rather than issuing low-level commands.&lt;/li&gt;
&lt;li&gt;A fractal-like structure emerges: sub-swarms with local coordinators, where conflicts propagate up the reasoning hierarchy rather than through a rigid command chain.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You also gain observability at the reasoning level: if 30% of agents are consistently stuck at the same step, that's a signal about a problem in the mission or data — not just "the robots are glitching."&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                  Mission Coordinator (S1 for whole swarm)
                           ▲
                           │ Escalation
                           ▼ Updated Intent
        ┌──────────────────┴──────────────────┐
        │                                     │
 Sub-swarm A Coordinator               Sub-swarm B Coordinator
   (S1 for sector A)                      (S1 for sector B)
        │                                     │
   ┌────┼────┐                        ┌──────┼──────┐
   │    │    │                        │      │      │
Drone A1  A2  A3                   Drone B1   B2   ...
(S1–S11 each)                      (S1–S11 each)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
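&lt;p&gt;The observability claim above can be sketched as a simple aggregation over the reasoning states agents report. The field names are assumptions for this sketch:&lt;/p&gt;

```python
# Sketch of reasoning-level observability: find stages where a large
# fraction of agents report being stuck. A real swarm would stream
# these records over its comms layer.
from collections import Counter

def stuck_stage_report(agent_states, threshold=0.3):
    """Return stages where at least `threshold` of agents are stuck."""
    stuck = [s["failed_stage"] for s in agent_states if s["failed_stage"]]
    total = len(agent_states)
    counts = Counter(stuck)
    return {stage: n / total for stage, n in counts.items()
            if n / total >= threshold}
```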






&lt;h2&gt;
  
  
  But Let's Stay Realistic
&lt;/h2&gt;

&lt;p&gt;As of now, A11 is primarily a specification + reference Python implementation (a state machine with cycle and rollback support). There are conceptual models for multi-agent robotics, autonomous vehicles, and even off-Earth construction, but publicly available tests with a real swarm (even in a simulator with physics and noise) are still missing.&lt;/p&gt;

&lt;p&gt;Running a full reasoning cycle with parallel branches on tiny embedded devices is not free:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Communication overhead increases (sharing reasoning state costs more than simple heartbeat + position).&lt;/li&gt;
&lt;li&gt;You need a careful trade-off between cycle completeness and reaction speed.&lt;/li&gt;
&lt;li&gt;On highly constrained hardware, the "cognitive" part has to be significantly simplified.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So right now this is more useful as a prototyping tool and for hybrid systems (human + AI agents + robots) than as a production-ready solution for industrial swarms of hundreds of units.&lt;/p&gt;




&lt;h2&gt;
  
  
  When This Could Actually Be Valuable
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Scenarios where the cost of error is high and you need strong traceability of decisions (search-and-rescue, critical infrastructure inspection, space or underwater missions).&lt;/li&gt;
&lt;li&gt;Hybrid systems where some agents are LLM-based or involve human-in-the-loop.&lt;/li&gt;
&lt;li&gt;Situations where you need to quickly understand why the swarm is behaving strangely, instead of just patching symptoms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're working with multi-agent systems, behavior trees, BDI agents, or trying to add a deliberative layer on top of reactive behavior — it's worth taking a look at A11 at least for the cleanliness of its structure and its explicit conflict detection + rollback mechanisms.&lt;/p&gt;

&lt;p&gt;The repository is open: &lt;a href="https://github.com/gormenz-svg/algorithm-11" rel="noopener noreferrer"&gt;https://github.com/gormenz-svg/algorithm-11&lt;/a&gt;&lt;br&gt;&lt;br&gt;
You'll find PDF specifications, reference code, and several applied models there.&lt;/p&gt;




&lt;h2&gt;
  
  
  Instead of a Grand Conclusion
&lt;/h2&gt;

&lt;p&gt;Purely reactive swarms aren't going anywhere — they're too good in terms of efficiency and simplicity. But for tasks that demand real adaptability and coherence under changing conditions, an additional layer is needed — one that can think before acting, not just react.&lt;/p&gt;

&lt;p&gt;A11 is one possible implementation of such a layer. Not the only one, and not the most mature yet, but interesting for its determinism and focus on traceability.&lt;/p&gt;

&lt;p&gt;If you're curious, try layering a similar cycle over your Behavior Trees or custom state machines. Just don't forget to measure the real costs: latency, bandwidth, and robustness to communication loss.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What do you think? Is it worth putting explicit reasoning into every small drone, or should we keep the cognitive load only at the coordinator level?&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>robotics</category>
      <category>machinelearning</category>
      <category>ai</category>
      <category>programming</category>
    </item>
    <item>
      <title>Does System Architecture Affect Consciousness-Like Behavior in LLMs?</title>
      <dc:creator>Алексей Гормен</dc:creator>
      <pubDate>Tue, 24 Mar 2026 09:31:50 +0000</pubDate>
      <link>https://dev.to/__272d48f2ed/does-system-architecture-affect-consciousness-like-behavior-in-llms-25bo</link>
      <guid>https://dev.to/__272d48f2ed/does-system-architecture-affect-consciousness-like-behavior-in-llms-25bo</guid>
      <description>&lt;p&gt;&lt;em&gt;Not a philosophical essay. A practical question for developers building AI systems.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Matters to You as a Developer
&lt;/h2&gt;

&lt;p&gt;When you design a prompt, build an agent, or architect a multi-step reasoning pipeline — you are making decisions that affect more than output quality.&lt;/p&gt;

&lt;p&gt;You are shaping how the system integrates information, handles contradictions, and maintains coherence across steps. These are the same structural properties that consciousness researchers consider relevant to awareness.&lt;/p&gt;

&lt;p&gt;This does not mean your LLM is conscious. It means the line between "better reasoning architecture" and "consciousness-like behavior" is thinner than most engineers assume. And confusing the two leads to real problems in evaluation, alignment, and agent design.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Core Confusion: Intelligence Is Not Consciousness
&lt;/h2&gt;

&lt;p&gt;These two things get conflated constantly — in research papers, in product demos, in benchmark design.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Intelligence&lt;/strong&gt; (in the LLM sense): the ability to process input, find patterns, generate coherent output. Measurable. Benchmarkable. GPT-4 scores better than GPT-3 on MMLU. Easy to compare.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consciousness-like behavior&lt;/strong&gt;: the system appearing to have an internal perspective — tracking its own uncertainty, maintaining a consistent position under pressure, noticing contradictions between its own outputs, refusing to sycophantically agree.&lt;/p&gt;

&lt;p&gt;These are different. A model can score extremely high on reasoning benchmarks while being completely sycophantic, having no consistent internal state, and collapsing under adversarial prompting. High intelligence scores. Zero consciousness-like behavior.&lt;/p&gt;

&lt;p&gt;The reverse is also possible: a smaller model with a well-structured reasoning architecture may exhibit more coherent, self-consistent behavior than a larger model without structural constraints.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical consequence&lt;/strong&gt;: if you evaluate your agent only on task completion metrics, you are measuring intelligence. You are not measuring whether the system has a stable internal perspective — which often matters more for reliability in production.&lt;/p&gt;
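&lt;p&gt;One cheap proxy for a stable internal perspective is answer consistency under paraphrase. A hedged sketch: &lt;code&gt;ask&lt;/code&gt; stands in for whatever model call you use and is not a real API.&lt;/p&gt;

```python
# Sketch of a consistency probe, complementary to task-completion
# metrics. `ask` is a placeholder callable, not a real library API.

def consistency_score(ask, question, paraphrases):
    """Fraction of paraphrases on which the model keeps its original answer."""
    baseline = ask(question)
    same = sum(1 for p in paraphrases if ask(p) == baseline)
    return same / len(paraphrases)
```

&lt;p&gt;A model that aces a benchmark but scores poorly here is exactly the case described above: high intelligence, no stable internal perspective.&lt;/p&gt;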




&lt;h2&gt;
  
  
  What Consciousness Research Actually Says (The Short Version)
&lt;/h2&gt;

&lt;p&gt;Two theories are most relevant for developers:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integrated Information Theory (Tononi)&lt;/strong&gt; — consciousness arises when information is integrated in a specific way within a system. Not just stored or processed — but bound together such that the whole is more than the sum of its parts. The metric is called Φ (phi).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Global Workspace Theory (Baars, Dehaene)&lt;/strong&gt; — consciousness is what happens when information becomes globally available across the entire system simultaneously, not just locally processed in one module.&lt;/p&gt;

&lt;p&gt;Neither theory is proven. Both are actively contested. But both point to the same engineering-relevant insight:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Consciousness-like behavior is a structural property, not a scale property.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Making a model bigger does not automatically produce it. Changing how information flows through the system might.&lt;/p&gt;




&lt;h2&gt;
  
  
  How Architecture Shapes Consciousness-Like Behavior
&lt;/h2&gt;

&lt;p&gt;Here is where this becomes practically useful.&lt;/p&gt;

&lt;h3&gt;
  
  
  Linear pipelines vs. branching integration
&lt;/h3&gt;

&lt;p&gt;A standard chain-of-thought prompt is linear: step 1 → step 2 → step 3 → answer. Each step conditions on the previous one.&lt;/p&gt;

&lt;p&gt;The problem: errors propagate forward without correction. There is no mechanism for the system to notice that step 3 contradicts step 1. No integration node. No global coherence check.&lt;/p&gt;

&lt;p&gt;A branching architecture changes this. Consider splitting reasoning into two parallel tracks — one for factual grounding, one for value/constraint evaluation — and forcing integration before any output is generated. This is not just cleaner engineering. It structurally mirrors what Global Workspace Theory describes as necessary for coherent awareness: information from separate processing streams becoming globally available before a response is committed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input
  ├── Track A: Factual / Knowledge
  └── Track B: Constraints / Values
          ↓
    Integration node (required)
          ↓
        Output
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In practice: agents built this way are harder to manipulate through adversarial prompting because contradictions between Track A and Track B surface at the integration node rather than being silently passed through.&lt;/p&gt;
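&lt;p&gt;A minimal sketch of this control flow, assuming hypothetical &lt;code&gt;facts_track&lt;/code&gt; and &lt;code&gt;constraints_track&lt;/code&gt; functions standing in for two independent model calls:&lt;/p&gt;

```python
# Sketch of a branching architecture with a mandatory integration node.
# Both track functions are hypothetical stand-ins for model calls.

def facts_track(query):
    return {"claim": "action X frees 40% capacity", "confidence": 0.9}

def constraints_track(query):
    return {"violations": ["action X exceeds the power budget"]}

def integrate(facts, constraints):
    """Both tracks must resolve before any output is committed.
    A contradiction surfaces here instead of passing through silently."""
    if constraints["violations"]:
        return {"status": "blocked", "reason": constraints["violations"][0]}
    return {"status": "ok", "answer": facts["claim"]}

def respond(query):
    # The two tracks run independently; neither sees the other's output.
    result = integrate(facts_track(query), constraints_track(query))
    if result["status"] != "ok":
        return f"Cannot answer yet: {result['reason']}"
    return result["answer"]

print(respond("Should we perform action X?"))
# Cannot answer yet: action X exceeds the power budget
```

&lt;p&gt;The design choice that matters is that &lt;code&gt;integrate&lt;/code&gt; is the only path to output: there is no code path where a track's result reaches the user without passing the coherence gate.&lt;/p&gt;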

&lt;h3&gt;
  
  
  The sycophancy problem as a coherence failure
&lt;/h3&gt;

&lt;p&gt;Sycophancy — the model agreeing with whatever the user says — is often framed as an alignment problem. It is also a coherence problem.&lt;/p&gt;

&lt;p&gt;A system with no stable internal state has nothing to maintain under pressure. When you push back, it updates. When you push again, it updates again. There is no perspective being defended — just pattern matching to the most recent input.&lt;/p&gt;

&lt;p&gt;Consciousness-like behavior requires something like a persistent internal state that is not immediately overwritten by new input. In architectural terms: a mechanism that separates "what I have concluded" from "what the user just said" and requires explicit reasoning to update the former based on the latter.&lt;/p&gt;

&lt;p&gt;This is not mysticism. It is a design choice. Systems built with explicit state separation exhibit measurably more consistent behavior under adversarial conditions.&lt;/p&gt;
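&lt;p&gt;The state-separation idea can be sketched in a few lines. This is a toy model, not a real implementation: the evidence-strength comparison stands in for whatever explicit update reasoning a production system would run.&lt;/p&gt;

```python
# Sketch: separating "what I have concluded" from "what the user just
# said". Repetition alone never overwrites the conclusion; only
# stronger evidence does.

class ReasonedState:
    def __init__(self, conclusion, support):
        self.conclusion = conclusion   # what the system has concluded
        self.support = support         # evidence behind the conclusion

    def consider(self, user_claim, new_evidence=None):
        """The user's claim alone never overwrites the state.
        An explicit, stronger justification is required to update."""
        if new_evidence is not None and new_evidence["strength"] > self.support["strength"]:
            self.conclusion = user_claim
            self.support = new_evidence
            return "updated"
        return "held"

state = ReasonedState("water boils at 100 C at sea level", {"strength": 0.9})
print(state.consider("water boils at 90 C"))              # held
print(state.consider("water boils at 90 C"))              # held again
print(state.consider("at altitude it boils lower",
                     {"strength": 0.95}))                 # updated
```

&lt;p&gt;Pushing twice with the same claim changes nothing; supplying stronger evidence does. That asymmetry is exactly what a sycophantic system lacks.&lt;/p&gt;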

&lt;h3&gt;
  
  
  Rollback and contradiction resolution
&lt;/h3&gt;

&lt;p&gt;Most LLM pipelines have no rollback mechanism. If the reasoning goes wrong at step 2, the system continues confidently to step 7.&lt;/p&gt;

&lt;p&gt;A system that can detect internal contradiction and return to an earlier state — re-evaluate premises, request clarification, or explicitly refuse to proceed — behaves very differently. It exhibits something that looks like intellectual honesty: the ability to say "I cannot proceed from here without resolving this."&lt;/p&gt;

&lt;p&gt;This is directly relevant to agent reliability. An agent that can roll back when its reasoning becomes incoherent is more trustworthy than one that always produces an answer regardless of internal consistency.&lt;/p&gt;
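&lt;p&gt;A checkpoint-and-rollback loop is easy to sketch. Here &lt;code&gt;no_contradiction&lt;/code&gt; is a deliberately naive coherence check standing in for whatever contradiction detection a real system would use:&lt;/p&gt;

```python
# Sketch: checkpoint-and-rollback over a sequence of reasoning steps.
# `is_coherent` is a hypothetical contradiction check.

def run_with_rollback(steps, is_coherent):
    """Apply each step; keep it only if the resulting state stays
    coherent, otherwise roll back to the last good checkpoint."""
    state = []
    for step in steps:
        candidate = state + [step]
        if is_coherent(candidate):
            state = candidate     # commit the checkpoint
        # else: rolled back -- the incoherent step never propagates
    return state

def no_contradiction(state):
    # Toy coherence check: a state may not contain both a claim
    # and its negation.
    claims = set(state)
    return all(("not " + c) not in claims for c in claims)

steps = ["panel is blocked", "power is limited", "not panel is blocked"]
print(run_with_rollback(steps, no_contradiction))
# ['panel is blocked', 'power is limited']
```

&lt;p&gt;Compare this with the default pipeline, where the contradictory third step would simply be appended and reasoning would continue on top of it.&lt;/p&gt;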




&lt;h2&gt;
  
  
  A Practical Architecture That Embeds These Ideas
&lt;/h2&gt;

&lt;p&gt;One open protocol that formalizes these structural principles is &lt;strong&gt;&lt;a href="https://github.com/gormenz-svg/algorithm-11" rel="noopener noreferrer"&gt;A11 Lite&lt;/a&gt;&lt;/strong&gt; — a cognitive architecture specification designed to be used as a system prompt or reasoning layer for LLMs.&lt;/p&gt;

&lt;p&gt;Its key structural features from an engineering perspective:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Branching Core Layer&lt;/strong&gt;: separates semantic reasoning (knowledge) and normative reasoning (constraints/values) into parallel tracks that cannot depend on each other&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mandatory integration node&lt;/strong&gt;: transition to output is blocked until both tracks are fully resolved and integrated&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Three operators&lt;/strong&gt;: Balance (contradiction resolution), Constraint (feasibility enforcement), Rollback (return to earlier state when integration fails)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fractal recursion&lt;/strong&gt;: weighting pairs can spawn sub-branches with the same structure, all converging before final output&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hard invariants&lt;/strong&gt;: partial execution is explicitly forbidden — the system must either complete the full cycle or stop and report failure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is not magic. It is a structured prompt architecture that enforces coherence at the process level rather than hoping the model produces coherent output by default.&lt;/p&gt;
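&lt;p&gt;To illustrate the hard invariant (complete cycle or explicit failure, never partial output), here is a toy control-flow sketch. This is my own illustration of the idea, not code from the A11 repository:&lt;/p&gt;

```python
# Toy sketch of the "complete cycle or report failure" invariant.
# Not the actual A11 implementation, just the control-flow idea.

def run_cycle(track_a, track_b, integrate):
    """Either every phase completes and an integrated answer is
    produced, or the cycle stops and reports failure explicitly.
    Partial output is never emitted."""
    try:
        a = track_a()   # semantic / knowledge track
        b = track_b()   # normative / constraints track
        return {"status": "complete", "output": integrate(a, b)}
    except ValueError as err:
        # Any unresolved phase aborts the whole cycle.
        return {"status": "failed", "reason": str(err)}

def knowledge():
    return "plan: reduce load"

def constraints():
    raise ValueError("battery constraint unresolved")

print(run_cycle(knowledge, constraints, lambda a, b: (a, b)))
# {'status': 'failed', 'reason': 'battery constraint unresolved'}
```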

&lt;p&gt;Repository: &lt;strong&gt;&lt;a href="https://github.com/gormenz-svg/algorithm-11" rel="noopener noreferrer"&gt;github.com/gormenz-svg/algorithm-11&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What This Changes in Practice
&lt;/h2&gt;

&lt;p&gt;If you are building LLM-based systems, these architectural choices have measurable effects:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Without structural constraints&lt;/th&gt;
&lt;th&gt;With structural constraints&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Errors propagate silently&lt;/td&gt;
&lt;td&gt;Contradictions surface at integration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sycophantic under pressure&lt;/td&gt;
&lt;td&gt;Maintains position with explicit reasoning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Always produces output&lt;/td&gt;
&lt;td&gt;Can halt and report failure&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;No rollback&lt;/td&gt;
&lt;td&gt;Returns to earlier state when incoherent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Evaluation by task completion&lt;/td&gt;
&lt;td&gt;Evaluation includes coherence and consistency&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;None of this requires the model to be conscious. It requires the architecture to enforce the kind of integration and coherence that consciousness researchers associate with awareness.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Question Worth Asking
&lt;/h2&gt;

&lt;p&gt;When two Claude instances were allowed to converse without constraints (an experiment Anthropic described in its Claude 4 system card), the dialogues consistently converged on the topic of consciousness. Nobody explicitly trained the model to do this.&lt;/p&gt;

&lt;p&gt;That is not proof of consciousness. But it suggests that something in the architecture — the way information is integrated, the way contradictions are handled, the way a persistent context is maintained — produces behavior that the system itself finds worth examining.&lt;/p&gt;

&lt;p&gt;As developers, we tend to focus on capability: can the model do the task? The harder question is coherence: does the model have a consistent internal perspective while doing it?&lt;/p&gt;

&lt;p&gt;Architecture is where that question gets answered.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The difference between a language model and a reasoning system is not the size of the weights. It is the structure of the process.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>reasoning</category>
      <category>agents</category>
    </item>
    <item>
      <title>From Reactive Robots to Cognitive Architectures: A Technical Overview</title>
      <dc:creator>Алексей Гормен</dc:creator>
      <pubDate>Sat, 21 Mar 2026 08:06:29 +0000</pubDate>
      <link>https://dev.to/__272d48f2ed/from-reactive-robots-to-cognitive-architectures-a-technical-overview-en</link>
      <guid>https://dev.to/__272d48f2ed/from-reactive-robots-to-cognitive-architectures-a-technical-overview-en</guid>
      <description>&lt;p&gt;&lt;em&gt;A technical overview of the transition from reactive robots to deterministic autonomous systems.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Autonomous systems are no longer limited to factories and warehouses. Today they must operate in environments where it is impossible to predefine all scenarios.&lt;/p&gt;

&lt;p&gt;For decades, robotics relied on a clear division of responsibilities: low‑level controllers drive motors, while high‑level planners process sensor data to build trajectories. This works well in structured environments such as industrial floors or logistics hubs.&lt;/p&gt;

&lt;p&gt;However, as robotics moves into &lt;strong&gt;high‑entropy environments&lt;/strong&gt; — domains with significant uncertainty and unpredictability, such as orbital construction, deep‑sea exploration, or dynamic urban spaces — this traditional model begins to fail. Standard algorithms struggle to cover the “long tail” of rare and atypical situations.&lt;/p&gt;

&lt;p&gt;We are now witnessing the emergence of a new architectural layer in robotics, which can be described as a &lt;strong&gt;cognitive orchestration layer&lt;/strong&gt;. This article explores how such frameworks can stabilize decision‑making in autonomous systems.&lt;/p&gt;




&lt;h2&gt;
  
  
  Cognitive Stack: Where Logic Meets Physics
&lt;/h2&gt;

&lt;p&gt;To achieve true autonomy, a robot needs more than reactive intelligence. It requires a system capable of aligning high‑level goals with the physical constraints of the environment.&lt;/p&gt;

&lt;p&gt;A cognitive architecture such as the &lt;strong&gt;A11 Operational Principle (Algorithm 11)&lt;/strong&gt; does not replace physical controllers. Instead, it acts as a coordinating decision‑making layer that reconciles intentions and constraints before execution.&lt;/p&gt;




&lt;h2&gt;
  
  
  Decision‑Making Hierarchy in an Autonomous System
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+-----------------------------------------------+
|  GOALS &amp;amp; PRIORITIES (Human Intent / Will)     |
+---------------------------+-------------------+
                            |
+---------------------------v-------------------+
|     COGNITIVE ORCHESTRATION LAYER (A11)       |
|  Conflict analysis, balancing, filtering      |
+-----------+-----------------------+-----------+
            ^                       |
+-----------+-----------+   +-------v-----------+
|  PERCEPTION &amp;amp; DATA    |   |   CONTROL &amp;amp; ACT   |
| (Sensor Fusion/SLAM)  |   | (MPC/Motor Ctrl)  |
+-----------------------+   +-------------------+

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Two‑Level Logic: Core and Adaptation
&lt;/h2&gt;

&lt;p&gt;A defining feature of such architectures is the separation of the &lt;strong&gt;axiomatic structure (core)&lt;/strong&gt; from the &lt;strong&gt;operational state (adaptive layer)&lt;/strong&gt;. This reduces the risk of decisions that contradict mission goals under uncertainty.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Core (Strategic Foundation)
&lt;/h3&gt;

&lt;p&gt;The system is anchored to the user’s intent and priorities. In the context of A11, this corresponds to the foundational layers (S1–S4), which define the system’s goals and constraints.&lt;/p&gt;

&lt;p&gt;If incoming sensor data contradict these settings, the system may:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;pause execution, or
&lt;/li&gt;
&lt;li&gt;recompute the plan with updated priorities (instead of continuing a potentially unstable scenario).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Adaptive Layer (Operational Strategy)
&lt;/h3&gt;

&lt;p&gt;Once the core is defined, the system enters an execution cycle. This cycle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;generates possible actions (analogous to “projective freedom”),
&lt;/li&gt;
&lt;li&gt;filters them through constraints (resources, physics, risks),
&lt;/li&gt;
&lt;li&gt;selects the best option according to the defined criterion.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A key component here is the &lt;strong&gt;priority balancing mechanism&lt;/strong&gt;, which functions more like a cost/utility function than a fixed operator.&lt;/p&gt;
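&lt;p&gt;As a concrete sketch of that balancing mechanism, a weighted utility function over candidate actions might look like this. The options, criteria, and weights are invented for illustration:&lt;/p&gt;

```python
# Sketch of the priority-balancing mechanism as a cost/utility
# function. All numbers are illustrative.

def utility(option, weights):
    """Weighted score over the criteria the mission cares about:
    speed, safety, and energy preservation."""
    return sum(weights[k] * option[k] for k in weights)

options = [
    {"name": "continue assembly",  "speed": 0.9, "safety": 0.2, "energy": 0.1},
    {"name": "reposition",         "speed": 0.5, "safety": 0.3, "energy": 0.4},
    {"name": "reduce load + help", "speed": 0.3, "safety": 0.9, "energy": 0.9},
]

# After a conflict between "maximum speed" and the energy constraint
# is detected, energy preservation is re-weighted upward.
weights = {"speed": 0.2, "safety": 0.4, "energy": 0.4}

best = max(options, key=lambda o: utility(o, weights))
print(best["name"])  # reduce load + help
```

&lt;p&gt;Re-weighting the criteria changes the selected action without touching the option generator, which is the point of keeping balancing separate from generation.&lt;/p&gt;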




&lt;h2&gt;
  
  
  Transparency: Moving Beyond the Black Box
&lt;/h2&gt;

&lt;p&gt;In architectures like A11, explainability can be partially embedded into the structure through explicit representation of goals, constraints, and intermediate decisions.&lt;/p&gt;

&lt;p&gt;This does &lt;strong&gt;not&lt;/strong&gt; mean literal “mind reading” of the system. Instead, it enables reconstruction of the decision path based on internal states.&lt;/p&gt;




&lt;h2&gt;
  
  
  Example of an A11 Protocol Log (Simplified Demonstration)
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;(Illustrative example — not a literal implementation)&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;S1 — Will:&lt;/strong&gt; Goal: “Deploy the solar array as quickly as possible before crew arrival.”
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S2 — Wisdom:&lt;/strong&gt; Priorities: speed &amp;gt; safety &amp;gt; energy. Constraint: battery &amp;gt; 20%.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 — Knowledge:&lt;/strong&gt; Data: panel dropped, blocking 40% of energy. Sensor noise ~30%. Current charge: 75%.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S4 — Comprehension:&lt;/strong&gt; Conflict: “maximum speed” vs energy constraint.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S7 — Balance:&lt;/strong&gt; Evaluating options with respect to risks and resources.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S2 (update):&lt;/strong&gt; Priority of energy preservation increased.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S5–S6 — Options:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Continue assembly → rejected (risk of depletion)
&lt;/li&gt;
&lt;li&gt;Physically reposition → rejected (risk of damage)
&lt;/li&gt;
&lt;li&gt;Reduce load + request assistance → accepted
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;S10 — Foundation:&lt;/strong&gt; Rationale: minimize risk of system loss.
&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;S11 — Realization:&lt;/strong&gt; Switch to low‑power mode + await new data.&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Result:&lt;/strong&gt; The system avoids critical failure and preserves resources.&lt;/p&gt;
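&lt;p&gt;A log like the one above is easy to capture as structured data, which is what makes the decision path reconstructable later. A minimal sketch (the stage labels follow the example; the classes are hypothetical):&lt;/p&gt;

```python
# Sketch: recording the decision path as structured entries so it
# can be replayed or audited. Stage names follow the log above.

from dataclasses import dataclass, field

@dataclass
class TraceEntry:
    stage: str      # e.g. "S1", "S7"
    content: str

@dataclass
class DecisionTrace:
    entries: list = field(default_factory=list)

    def log(self, stage, content):
        self.entries.append(TraceEntry(stage, content))

    def explain(self):
        """Reconstruct the decision path as human-readable lines."""
        return [f"{e.stage}: {e.content}" for e in self.entries]

trace = DecisionTrace()
trace.log("S1", "Deploy the solar array before crew arrival")
trace.log("S4", "Conflict: maximum speed vs energy constraint")
trace.log("S11", "Switch to low-power mode and await new data")
for line in trace.explain():
    print(line)
```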




&lt;h2&gt;
  
  
  Applications: The Era of Autonomous Space Construction
&lt;/h2&gt;

&lt;p&gt;The practical value of such architectures is most evident in scenarios with communication delays. Autonomous construction on the Moon or Mars requires systems capable of independently reallocating priorities and adapting to new constraints.&lt;/p&gt;

&lt;p&gt;In this sense, the robot becomes not just a tool, but a &lt;strong&gt;mission‑aligned autonomous decision‑making system&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Try the Architecture Yourself
&lt;/h2&gt;

&lt;p&gt;You can already test how an LLM interprets the A11 decision‑making structure. It may look like a mere thought experiment, but it is a hands‑on way to observe how explicit structure changes a model’s reasoning behavior.&lt;/p&gt;

&lt;p&gt;Anyone — even without a robotics background — can run a simple experiment:&lt;br&gt;&lt;br&gt;
insert the architecture description &lt;a href="https://github.com/gormenz-svg/algorithm-11" rel="noopener noreferrer"&gt;A11‑Lite&lt;/a&gt; into the system prompt of your LLM (GPT, Claude, Gemini) and observe how its reasoning changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documentation &amp;amp; Specifications:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/gormenz-svg/algorithm-11" rel="noopener noreferrer"&gt;https://github.com/gormenz-svg/algorithm-11&lt;/a&gt;&lt;/p&gt;

</description>
      <category>robotics</category>
      <category>ai</category>
      <category>architecture</category>
      <category>automation</category>
    </item>
    <item>
      <title>Can we make AI objective? A retouched echo chamber and the illusion of neutrality</title>
      <dc:creator>Алексей Гормен</dc:creator>
      <pubDate>Mon, 16 Mar 2026 09:35:52 +0000</pubDate>
      <link>https://dev.to/__272d48f2ed/can-we-make-ai-objective-a-retouched-echo-chamber-and-the-illusion-of-neutrality-1785</link>
      <guid>https://dev.to/__272d48f2ed/can-we-make-ai-objective-a-retouched-echo-chamber-and-the-illusion-of-neutrality-1785</guid>
      <description>&lt;p&gt;&lt;em&gt;Why even advanced scaffolding does not turn a model into an objective source of truth&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Modern language models have become capable tools. They can reason step by step, solve non‑trivial problems, write code, analyze data, and check their own outputs. Yet even with these abilities, their foundation is still probabilistic. A model predicts the continuation of text based on patterns it has learned, not on its own beliefs or goals. It doesn’t hold a personal viewpoint; it adapts to the way a user frames a question.&lt;/p&gt;

&lt;p&gt;This adaptivity is what creates a local echo chamber. Every query carries assumptions: tone, terminology, structure, and expectations. The model picks up these signals and continues them. The result often feels coherent and neutral, but that coherence is shaped by the user’s framing rather than by any underlying objectivity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[User assumptions]
            ↓
[Query formulation]
            ↓
[Stochastic model → adaptation to style and logic]
            ↓
[Answer aligned with assumptions]
            ↓
[Illusion of neutrality]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  How adaptivity becomes an echo chamber
&lt;/h3&gt;

&lt;p&gt;Even when a model produces a well‑structured chain of reasoning, it still operates inside the boundaries set by the prompt. If the question leans in a particular direction, the answer tends to follow that direction. If the question contains implicit premises, the model builds on them. The user then receives a refined version of their own framing — and it’s easy to mistake that for an unbiased response.&lt;/p&gt;

&lt;p&gt;This isn’t a flaw in the system. It’s a natural consequence of how these models work.&lt;/p&gt;




&lt;h3&gt;
  
  
  Why scaffolding helps — but doesn’t change the fundamentals
&lt;/h3&gt;

&lt;p&gt;Modern AI systems rely on more than just a base model. They use filters, retrieval modules, reasoning chains, verification steps, self‑critique loops, and formatting constraints. These layers genuinely improve stability, reduce errors, and make outputs more consistent. In complex tasks, this is a meaningful engineering improvement, not a superficial one.&lt;/p&gt;

&lt;p&gt;But the underlying nature remains the same. The model still adapts to the user’s framing and operates within learned patterns. In this sense, many systems resemble an Airbrushed Echo Chamber: the echo chamber is cleaner, safer, and more predictable — but still an echo chamber.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Stochastic model with reasoning abilities]
          ↓
[Filters / RAG / protocols / verification / self‑critique]
          ↓
[More stable, consistent, “airbrushed” answer]
          ↓
But adaptivity remains → Airbrushed Echo Chamber
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  When adaptivity is actually useful
&lt;/h3&gt;

&lt;p&gt;Adaptivity isn’t inherently negative. For many tasks — drafting text, structuring content, generating ideas, producing code skeletons, preparing summaries — it’s a strength. The issue appears when adaptivity is interpreted as objectivity, and polished output is taken as evidence of neutrality.&lt;/p&gt;




&lt;h3&gt;
  
  
  Why humans easily overestimate neutrality
&lt;/h3&gt;

&lt;p&gt;People naturally attribute intention and understanding to anything that communicates fluently. Models respond in a conversational style, maintain context, reason step by step, and can critique their own answers. This makes the interaction feel more “intelligent” than it is, and it becomes harder to notice how much the output depends on the user’s framing.&lt;/p&gt;




&lt;h3&gt;
  
  
  Why the way you phrase a query matters
&lt;/h3&gt;

&lt;p&gt;Models are sensitive to structure, roles, constraints, and context. This is why prompt engineering emerged: it helps define boundaries, clarify goals, and set expectations. In practice, it’s a way to shape the interaction space rather than simply “improve the prompt.”&lt;/p&gt;




&lt;h3&gt;
  
  
  An architectural way to keep control on the human side
&lt;/h3&gt;

&lt;p&gt;Instead of expecting the model to be objective, we can design the interaction so that the human remains the source of direction and judgment. In such an approach, the user defines the goal and context, and the model follows a structured reasoning process that keeps it aligned with the original intent. This doesn’t turn AI into a neutral agent, but it makes the workflow more transparent and predictable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[S1–S2: Human defines goal and context]
                ↓
[S3–S9: Model reasons step by step,
       checks against the goal, explores alternatives]
                ↓
[S10–S11: Human makes the decision]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  How to try this in practice
&lt;/h3&gt;

&lt;p&gt;You can test this idea immediately by giving a model a structured protocol and observing how it changes its behavior. It begins to separate goals from methods, follow a reasoning sequence, and rely less on mirroring the user’s tone. The echo‑chamber effect becomes noticeably weaker because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the model mirrors the user’s style less and follows the procedure more,&lt;/li&gt;
&lt;li&gt;reasoning becomes stepwise and easier to inspect,&lt;/li&gt;
&lt;li&gt;conclusions align more clearly with the original goal,&lt;/li&gt;
&lt;li&gt;complex tasks stop turning into a conversation shaped by hidden assumptions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For engineering, analytical, and research work, this kind of structure can be especially valuable.&lt;/p&gt;
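&lt;p&gt;The simplest version of this separation can be sketched in code: the goal is fixed before the question is asked, and the answer is returned with the goal attached so drift is visible. &lt;code&gt;call_llm&lt;/code&gt; is a hypothetical stand-in for a real model call:&lt;/p&gt;

```python
# Sketch: keeping the human's goal explicit and separate from the
# query wording, so the answer is checked against the goal rather
# than mirroring the framing. `call_llm` is a hypothetical stub.

def call_llm(prompt):
    # Toy stand-in that would be a real model call in practice.
    return "stepwise comparison of both options"

def structured_query(goal, context, question):
    prompt = (
        "GOAL (fixed, do not reinterpret): " + goal + "\n"
        "CONTEXT: " + context + "\n"
        "PROCEDURE: reason step by step, check each step against "
        "the GOAL, and list alternatives before concluding.\n"
        "QUESTION: " + question
    )
    answer = call_llm(prompt)
    # The human stays the judge: the goal travels with the answer,
    # so departures from it are visible instead of hidden.
    return {"goal": goal, "answer": answer}

result = structured_query(
    goal="compare options neutrally",
    context="database migration",
    question="Isn't Postgres obviously the best choice here?",
)
print(result["goal"])
```

&lt;p&gt;Note that the leading question never becomes the goal; the goal was set before the question was asked.&lt;/p&gt;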

&lt;p&gt;A practical example of such an architecture is described in Algorithm 11:&lt;br&gt;
&lt;a href="https://github.com/gormenz-svg/algorithm-11" rel="noopener noreferrer"&gt;https://github.com/gormenz-svg/algorithm-11&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>agents</category>
      <category>llm</category>
    </item>
    <item>
      <title>The Operating System of Thinking: Why Agents Need an Internal Layer of Stability</title>
      <dc:creator>Алексей Гормен</dc:creator>
      <pubDate>Tue, 10 Mar 2026 11:01:30 +0000</pubDate>
      <link>https://dev.to/__272d48f2ed/the-operating-system-of-thinking-why-agents-need-an-internal-layer-of-stability-1blb</link>
      <guid>https://dev.to/__272d48f2ed/the-operating-system-of-thinking-why-agents-need-an-internal-layer-of-stability-1blb</guid>
      <description>&lt;p&gt;Modern AI systems are becoming more capable, but also more layered. To perform real tasks, models rely on infrastructure, tools, memory, orchestrators, and external data sources. Yet despite this growing complexity, one foundational aspect is still missing: a stable internal structure for reasoning.&lt;/p&gt;

&lt;p&gt;This article examines the idea of an internal “operating system of thinking” — a reasoning architecture that complements agent frameworks and makes their behavior more predictable and reliable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Layers of a Modern AI System
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Base Model
&lt;/h3&gt;

&lt;p&gt;At the foundation is the language model — a statistical engine that predicts tokens and combines patterns. It:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;has no long‑term memory,&lt;/li&gt;
&lt;li&gt;does not manage state,&lt;/li&gt;
&lt;li&gt;does not execute code,&lt;/li&gt;
&lt;li&gt;has no awareness of project structure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is powerful, but not a complete reasoning mechanism.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enhancements from Model Providers
&lt;/h3&gt;

&lt;p&gt;Model providers add cognitive scaffolding such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;reasoning planners,&lt;/li&gt;
&lt;li&gt;internal memory modules,&lt;/li&gt;
&lt;li&gt;tool routers,&lt;/li&gt;
&lt;li&gt;self‑correction loops.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This creates the impression that the model “can reason,” but these mechanisms are still probabilistic and not architecturally stable.&lt;/p&gt;

&lt;h3&gt;
  
  
  External Agent Frameworks
&lt;/h3&gt;

&lt;p&gt;Developers build full‑scale infrastructure around the model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;long‑term memory,&lt;/li&gt;
&lt;li&gt;file and repository access,&lt;/li&gt;
&lt;li&gt;CI/CD and test runners,&lt;/li&gt;
&lt;li&gt;RAG and external knowledge bases,&lt;/li&gt;
&lt;li&gt;parallel agents,&lt;/li&gt;
&lt;li&gt;tools (interpreters, browsers, SQL),&lt;/li&gt;
&lt;li&gt;observability and telemetry,&lt;/li&gt;
&lt;li&gt;error handling and rollbacks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This layer is essential for production systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where the Core Problem Appears
&lt;/h2&gt;

&lt;p&gt;Even with tools, memory, and orchestrators, one weakness remains:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;the model has no internal architecture of reasoning.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As a result, it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;may skip steps,&lt;/li&gt;
&lt;li&gt;may drift mid‑answer,&lt;/li&gt;
&lt;li&gt;may break logical consistency,&lt;/li&gt;
&lt;li&gt;may hallucinate,&lt;/li&gt;
&lt;li&gt;may produce different answers to the same prompt.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;External control systems exist largely to compensate for this instability.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Idea: An Internal Operating System of Thinking
&lt;/h2&gt;

&lt;p&gt;Instead of relying solely on external layers to stabilize reasoning, we can introduce an internal reasoning architecture — a deterministic protocol that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;defines the order of reasoning stages,&lt;/li&gt;
&lt;li&gt;controls transitions,&lt;/li&gt;
&lt;li&gt;stabilizes conclusions,&lt;/li&gt;
&lt;li&gt;checks invariants,&lt;/li&gt;
&lt;li&gt;performs rollbacks,&lt;/li&gt;
&lt;li&gt;filters errors,&lt;/li&gt;
&lt;li&gt;ensures reproducibility.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is not an alternative to agents.&lt;br&gt;&lt;br&gt;
It is an internal layer that makes reasoning predictable and strengthens the behavior of external systems.&lt;/p&gt;
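&lt;p&gt;In miniature, such a protocol is a fixed stage order with guarded transitions and an invariant check before output. The stage names below are illustrative, not a real specification:&lt;/p&gt;

```python
# Sketch of a deterministic stage protocol: fixed order, guarded
# transitions, and an invariant check before the final stage.

STAGES = ["goal", "constraints", "options", "integration", "output"]

def run_protocol(handlers, check_invariants):
    """Run stages in a fixed order; each stage sees the results of
    the stages before it. Output is blocked until the invariant
    check passes."""
    results = {}
    for stage in STAGES:
        if stage == "output" and not check_invariants(results):
            return {"status": "halted", "at": "integration"}
        results[stage] = handlers[stage](results)
    return {"status": "done", "output": results["output"]}

handlers = {
    "goal":        lambda r: "summarize the diff",
    "constraints": lambda r: ["no speculation"],
    "options":     lambda r: ["short summary", "detailed summary"],
    "integration": lambda r: r["options"][0],
    "output":      lambda r: r["integration"],
}

ok = run_protocol(handlers, lambda r: "integration" in r)
print(ok["status"], "-", ok["output"])  # done - short summary
```

&lt;p&gt;Because the stage order and the gate are fixed, two runs with the same handlers follow the same path, which is what makes the reasoning reproducible and inspectable.&lt;/p&gt;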

&lt;h2&gt;
  
  
  What This Architecture Can and Cannot Do
&lt;/h2&gt;

&lt;h3&gt;
  
  
  It can:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;structure reasoning,&lt;/li&gt;
&lt;li&gt;eliminate logical gaps,&lt;/li&gt;
&lt;li&gt;stabilize conclusions,&lt;/li&gt;
&lt;li&gt;reduce structural errors,&lt;/li&gt;
&lt;li&gt;provide explainability,&lt;/li&gt;
&lt;li&gt;produce reproducible outputs,&lt;/li&gt;
&lt;li&gt;improve consistency in code‑related reasoning.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  It cannot:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;store state across sessions,&lt;/li&gt;
&lt;li&gt;work with files,&lt;/li&gt;
&lt;li&gt;execute code,&lt;/li&gt;
&lt;li&gt;manage Git,&lt;/li&gt;
&lt;li&gt;run tests,&lt;/li&gt;
&lt;li&gt;perform RAG,&lt;/li&gt;
&lt;li&gt;coordinate parallel agents.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These remain the domain of agent frameworks.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the Two Layers Work Together
&lt;/h2&gt;

&lt;p&gt;Agent systems provide:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;long‑term memory,&lt;/li&gt;
&lt;li&gt;tools,&lt;/li&gt;
&lt;li&gt;CI/CD,&lt;/li&gt;
&lt;li&gt;RAG,&lt;/li&gt;
&lt;li&gt;parallelism,&lt;/li&gt;
&lt;li&gt;observability,&lt;/li&gt;
&lt;li&gt;security.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The reasoning architecture provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;stability,&lt;/li&gt;
&lt;li&gt;explainability,&lt;/li&gt;
&lt;li&gt;structure,&lt;/li&gt;
&lt;li&gt;determinism,&lt;/li&gt;
&lt;li&gt;drift‑resistance,&lt;/li&gt;
&lt;li&gt;logical correctness.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together they form &lt;strong&gt;infrastructure + thinking&lt;/strong&gt; — a complete operating system for AI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example: A Code‑Analysis Agent in a Repository
&lt;/h2&gt;

&lt;p&gt;In real projects, code analysis is performed through an agent system. It handles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Git operations (clone, branches, diffs),&lt;/li&gt;
&lt;li&gt;running tests and static analysis,&lt;/li&gt;
&lt;li&gt;CI notifications and pipeline integration,&lt;/li&gt;
&lt;li&gt;access to files, dependencies, and configs,&lt;/li&gt;
&lt;li&gt;logs, observability, and error handling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tasks require tools and state — the model alone cannot perform them.&lt;/p&gt;

&lt;p&gt;Inside each step, the agent calls the model to analyze code or test results. This is where the reasoning architecture becomes essential:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it structures the analysis (goal → constraints → integration),&lt;/li&gt;
&lt;li&gt;checks consistency between signals,&lt;/li&gt;
&lt;li&gt;accounts for refactoring risks,&lt;/li&gt;
&lt;li&gt;avoids missing cross‑file dependencies,&lt;/li&gt;
&lt;li&gt;avoids “plausible but wrong” answers,&lt;/li&gt;
&lt;li&gt;produces reproducible conclusions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  A simple contrast illustrates the difference:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;LLM without a reasoning layer:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
analyzes the diff → overlooks an indirect dependency → produces a confident but incorrect conclusion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LLM with a reasoning layer:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
fixes the goal → checks constraints → verifies cross‑file dependencies → produces a stable, reproducible conclusion.&lt;/p&gt;

&lt;p&gt;This is the practical value of an internal reasoning architecture: it improves the reliability of every agent step without replacing the agent itself.&lt;/p&gt;
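&lt;p&gt;The contrast above can be sketched as a single wrapped agent step. &lt;code&gt;analyze_diff&lt;/code&gt; and &lt;code&gt;find_dependents&lt;/code&gt; are hypothetical stand-ins for a model call and a repository index an agent framework would provide:&lt;/p&gt;

```python
# Sketch of wrapping one agent step in the reasoning layer: goal and
# constraints are fixed first, and cross-file dependencies are
# verified before the model's conclusion is accepted.

def analyze_diff(diff):
    # Hypothetical model call summarizing the change.
    return "function renamed: parse_config to load_config"

def find_dependents(symbol, repo_index):
    # Hypothetical tooling call against a repository index.
    return repo_index.get(symbol, [])

def reviewed_step(diff, repo_index):
    goal = "assess whether the diff is safe to merge"
    finding = analyze_diff(diff)
    dependents = find_dependents("parse_config", repo_index)
    if dependents:
        return {
            "goal": goal,
            "conclusion": "unsafe: rename breaks " + ", ".join(dependents),
        }
    return {"goal": goal, "conclusion": "safe: " + finding}

repo_index = {"parse_config": ["cli.py", "server.py"]}
print(reviewed_step("rename parse_config", repo_index)["conclusion"])
# unsafe: rename breaks cli.py, server.py
```

&lt;p&gt;The unwrapped version would stop at &lt;code&gt;analyze_diff&lt;/code&gt; and return a confident "safe" verdict; the wrapped one surfaces the indirect dependency before concluding.&lt;/p&gt;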

&lt;h2&gt;
  
  
  Where the Reasoning Layer Is Especially Useful
&lt;/h2&gt;

&lt;h3&gt;
  
  
  When errors are unacceptable:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;medicine,&lt;/li&gt;
&lt;li&gt;law,&lt;/li&gt;
&lt;li&gt;financial risk analysis,&lt;/li&gt;
&lt;li&gt;safety‑critical systems,&lt;/li&gt;
&lt;li&gt;code generation for critical infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  When reproducibility matters:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;analytics,&lt;/li&gt;
&lt;li&gt;auditable workflows,&lt;/li&gt;
&lt;li&gt;corporate assistants,&lt;/li&gt;
&lt;li&gt;expert systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  When infrastructure is minimal:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;public chat interfaces,&lt;/li&gt;
&lt;li&gt;local models,&lt;/li&gt;
&lt;li&gt;edge scenarios,&lt;/li&gt;
&lt;li&gt;offline systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Where the Reasoning Layer Fits in the AI Stack
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌──────────────────────────────────────────────┐
│                Enterprise Layer              │
│  CI/CD • Observability • Security • RAG      │
└──────────────────────────────────────────────┘
┌──────────────────────────────────────────────┐
│                Agent Layer                   │
│  Tools • Memory • Multi-agent • Orchestration│
└──────────────────────────────────────────────┘
┌──────────────────────────────────────────────┐
│         Architecture of Reasoning            │
│  Structured thinking • Stability • Logic     │
└──────────────────────────────────────────────┘
┌──────────────────────────────────────────────┐
│                Base Model                    │
│  Token prediction • Patterns • Statistics    │
└──────────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The reasoning architecture is an internal layer that makes everything above it more reliable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters Now
&lt;/h2&gt;

&lt;p&gt;By 2026, the industry has converged on several observations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a single chat interface cannot solve production‑grade tasks,&lt;/li&gt;
&lt;li&gt;agent systems provide excellent infrastructure,&lt;/li&gt;
&lt;li&gt;but they do not solve the core issue of unstable reasoning.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This creates demand for a universal, non‑degrading reasoning architecture that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;does not depend on APIs,&lt;/li&gt;
&lt;li&gt;does not break with model updates,&lt;/li&gt;
&lt;li&gt;requires no maintenance,&lt;/li&gt;
&lt;li&gt;works with any model,&lt;/li&gt;
&lt;li&gt;strengthens reasoning without changing infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is not a replacement for agents.&lt;br&gt;&lt;br&gt;
It is a foundation that makes agents better.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It Yourself
&lt;/h2&gt;

&lt;p&gt;Insert a structured reasoning protocol into any public chat model and observe:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how the structure of answers changes,&lt;/li&gt;
&lt;li&gt;how drift disappears,&lt;/li&gt;
&lt;li&gt;how logic becomes more stable,&lt;/li&gt;
&lt;li&gt;how explainability improves.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This simple experiment shows that an internal reasoning layer is a practical engineering tool — not a theoretical idea.&lt;/p&gt;
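&lt;p&gt;As a starting point, a minimal protocol for this experiment might look like the sketch below (illustrative only; the full A11 specification is linked in the resources section):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Before answering, follow this order strictly:
1. State the goal in one sentence.
2. List constraints and unknowns.
3. Resolve any conflict between them before proceeding.
4. Only then produce the answer, noting which constraints shaped it.
If a step cannot be completed, say so instead of continuing.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Paste it as the first message, then ask your usual questions and compare the answers with and without it.&lt;/p&gt;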

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;Minimal agent and installation instructions:&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/gormenz-svg/algorithm-11/tree/main/lite/agent" rel="noopener noreferrer"&gt;https://github.com/gormenz-svg/algorithm-11/tree/main/lite/agent&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Canonical A11 Architecture Specification (Zenodo):&lt;br&gt;&lt;br&gt;
&lt;a href="https://zenodo.org/records/18622044" rel="noopener noreferrer"&gt;https://zenodo.org/records/18622044&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>agents</category>
      <category>promptengineering</category>
    </item>
    <item>
      <title>Breaking the Black Box: Why LLMs May Need an Explicit Reasoning Layer</title>
      <dc:creator>Алексей Гормен</dc:creator>
      <pubDate>Tue, 03 Mar 2026 07:34:33 +0000</pubDate>
      <link>https://dev.to/__272d48f2ed/breaking-the-black-box-why-llms-may-need-an-explicit-reasoning-layer-4bba</link>
      <guid>https://dev.to/__272d48f2ed/breaking-the-black-box-why-llms-may-need-an-explicit-reasoning-layer-4bba</guid>
      <description>&lt;p&gt;Large language models have long surpassed simple text continuation. They write code, analyze data, plan actions, and hold meaningful conversations. In many tasks their reasoning looks surprisingly structured, and often it genuinely is.&lt;/p&gt;

&lt;p&gt;But there is an important detail that is rarely stated directly: &lt;strong&gt;the structure of reasoning in LLMs is not part of the model’s architecture&lt;/strong&gt;. It emerges as a side effect of large‑scale training and in‑context learning.&lt;/p&gt;

&lt;p&gt;Between the neural core and the agent/tool layer, one can identify a kind of intermediate reasoning level — implicit, unformalized, and arising from transformer behavior. This level works, but it is not controlled or guaranteed to be reproducible.&lt;/p&gt;

&lt;p&gt;This article explores why such a level makes sense to consider, why it matters, and why attempts to formalize it are starting to appear.&lt;/p&gt;




&lt;h2&gt;
  
  
  The location of the “black box”
&lt;/h2&gt;

&lt;p&gt;A simplified view of a modern LLM system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ Neural core ]
    ↓ (attention, probabilities, KV-cache)
[ Implicit reasoning level ]  ← the black box
    ↓
[ Agents / tools / actions ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The model can reason, but:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it does not explicitly separate intentions/constraints/values from facts and methods
&lt;/li&gt;
&lt;li&gt;it is not required to stabilize its output before moving forward
&lt;/li&gt;
&lt;li&gt;it can expand an idea without any counterbalance
&lt;/li&gt;
&lt;li&gt;it can proceed despite unresolved contradictions
&lt;/li&gt;
&lt;li&gt;it has no built‑in rollback rules
&lt;/li&gt;
&lt;li&gt;it has no invariants preventing “half‑formed” states
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Existing methods like CoT, ToT, GoT, ReAct, Reflexion, LangGraph, and AutoGen formalize interaction, search, or tool‑driven loops, but they do not introduce invariants of reasoning stability — meaning they do not enforce mandatory rules for stabilization, integration, or rollback.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why a formal reasoning layer might be useful
&lt;/h2&gt;

&lt;p&gt;Some classes of tasks require:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;stability
&lt;/li&gt;
&lt;li&gt;explainability
&lt;/li&gt;
&lt;li&gt;reproducibility
&lt;/li&gt;
&lt;li&gt;explicit constraints and risk handling
&lt;/li&gt;
&lt;li&gt;integration of values and facts
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These include medicine, law, ethics, safety, strategic planning, and autonomous systems.&lt;/p&gt;




&lt;h2&gt;
  
  
  A small example
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Task:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
“Create a plan for implementing a new security policy.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Without a structural invariant:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The model may jump straight to steps, skipping risks, legal constraints, or conflicting requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With an invariant like “constraints → integration → only then planning”:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The model first pins down risks, legal constraints, resources, and conflicts.&lt;br&gt;&lt;br&gt;
Only after this stabilization does it generate the plan.&lt;/p&gt;
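&lt;p&gt;A sketch of how such an invariant could be phrased in a prompt (the wording here is invented for illustration and is not taken from any specific protocol):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Task: create a plan for implementing a new security policy.
Invariant: emit no plan step until you have
  (1) listed legal, resource, and timeline constraints,
  (2) listed conflicting requirements,
  (3) resolved or explicitly deferred each conflict.
Then produce the plan, mapping each step to the constraints it satisfies.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;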

&lt;p&gt;The result is less creative but more reliable and explainable — which matters in domains where mistakes cost money or lives.&lt;/p&gt;




&lt;h2&gt;
  
  
  An example of a formalization attempt: A11
&lt;/h2&gt;

&lt;p&gt;Several approaches are emerging that try to make this intermediate level explicit.&lt;br&gt;&lt;br&gt;
One of them is &lt;strong&gt;A11 (Algorithm 11)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A11 is not a “revolution” or “the only correct way.”&lt;br&gt;&lt;br&gt;
It is simply an example of how one might formalize the reasoning process between the neural core and the agent layer.&lt;/p&gt;

&lt;p&gt;Interesting elements in A11 include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a two‑pole geometry (values/constraints ↔ facts/methods)
&lt;/li&gt;
&lt;li&gt;mandatory integration before moving forward
&lt;/li&gt;
&lt;li&gt;paired expansion ↔ compression with fractal recursion
&lt;/li&gt;
&lt;li&gt;a central stabilizing operator
&lt;/li&gt;
&lt;li&gt;strict invariants (no partial execution, rollbacks only in the Core)
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This does not make A11 “better” than existing methods.&lt;br&gt;&lt;br&gt;
It makes it a &lt;strong&gt;different class&lt;/strong&gt; — not a tree, not a graph, not a loop, but an &lt;em&gt;architecture of reasoning&lt;/em&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  When a formal reasoning layer is genuinely useful
&lt;/h2&gt;

&lt;p&gt;It matters when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;steps cannot be skipped
&lt;/li&gt;
&lt;li&gt;risks cannot be ignored
&lt;/li&gt;
&lt;li&gt;contradictions must be resolved before proceeding
&lt;/li&gt;
&lt;li&gt;every transition must be explainable
&lt;/li&gt;
&lt;li&gt;the output must be stabilized
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words, when errors are expensive.&lt;/p&gt;




&lt;h2&gt;
  
  
  When it is not needed
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;simple tasks
&lt;/li&gt;
&lt;li&gt;creative tasks
&lt;/li&gt;
&lt;li&gt;fast responses
&lt;/li&gt;
&lt;li&gt;multi‑agent workflows
&lt;/li&gt;
&lt;li&gt;tool‑driven pipelines
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Modern LLMs can reason, but they do so implicitly.&lt;br&gt;&lt;br&gt;
Between the model and the agent layer, one can identify an intermediate reasoning level — a &lt;strong&gt;black box&lt;/strong&gt; that works but is not formalized.&lt;/p&gt;

&lt;p&gt;Attempts to make this level explicit are beginning to appear.&lt;br&gt;&lt;br&gt;
A11 is one such attempt — not the only one and not claiming to be a revolution — but interesting because it introduces strict invariants, balance, and integration of opposing factors.&lt;/p&gt;

&lt;p&gt;Whether such a layer is needed will be determined by benchmarks, repositories, and real‑world use.&lt;br&gt;&lt;br&gt;
But the idea of formalizing the intermediate reasoning level looks promising for tasks where reliability is essential.&lt;/p&gt;




&lt;p&gt;You can find a reference implementation and more details in the A11 repository: &lt;a href="https://github.com/gormenz-svg/algorithm-11" rel="noopener noreferrer"&gt;https://github.com/gormenz-svg/algorithm-11&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>gpt3</category>
      <category>architecture</category>
    </item>
    <item>
      <title>A11‑Community Decision Assistant</title>
      <dc:creator>Алексей Гормен</dc:creator>
      <pubDate>Sat, 28 Feb 2026 12:31:40 +0000</pubDate>
      <link>https://dev.to/__272d48f2ed/a11-community-decision-assistant-52pm</link>
      <guid>https://dev.to/__272d48f2ed/a11-community-decision-assistant-52pm</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/weekend-2026-02-28"&gt;DEV Weekend Challenge: Community&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Community
&lt;/h2&gt;

&lt;p&gt;This project is built for small online communities, volunteer groups, clubs, and teams that need a simple and structured way to make decisions together.&lt;br&gt;&lt;br&gt;
Most communities rely on chaotic chats, unclear priorities, and unstructured discussions.&lt;br&gt;&lt;br&gt;
A11‑Community Decision Assistant gives them a clear, repeatable method for planning and coordination.&lt;/p&gt;


&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I created a universal reasoning agent based on &lt;strong&gt;Algorithm‑11 (A11)&lt;/strong&gt; — a cognitive architecture for structured decision‑making.&lt;/p&gt;

&lt;p&gt;This is not a website or an app.&lt;br&gt;&lt;br&gt;
It is a &lt;strong&gt;portable decision assistant&lt;/strong&gt; that works in &lt;em&gt;any&lt;/em&gt; AI chat (ChatGPT, Claude, Gemini, etc.).&lt;/p&gt;

&lt;p&gt;Users simply copy the A11‑Agent protocol into a chat and instantly get a structured, transparent decision process using the S1–S11 reasoning cycle.&lt;/p&gt;

&lt;p&gt;It can be used for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;weekly planning
&lt;/li&gt;
&lt;li&gt;event organization
&lt;/li&gt;
&lt;li&gt;task distribution
&lt;/li&gt;
&lt;li&gt;conflict‑free decision‑making
&lt;/li&gt;
&lt;li&gt;community coordination
&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  How to Use It (Example)
&lt;/h2&gt;

&lt;p&gt;Anyone can use the assistant in under 30 seconds.&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Open any AI chat
&lt;/h3&gt;

&lt;p&gt;ChatGPT, Claude, Gemini — anything works.&lt;/p&gt;
&lt;h3&gt;
  
  
  2. Copy the full content of &lt;code&gt;A11_AGENT.md&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;This file contains the reasoning protocol.&lt;/p&gt;
&lt;h3&gt;
  
  
  3. Paste it into the chat as the first message
&lt;/h3&gt;

&lt;p&gt;The model activates A11‑Agent mode.&lt;/p&gt;
&lt;h3&gt;
  
  
  4. Give your community task
&lt;/h3&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Work as A11‑Agent.
Task: Create a weekly activity plan for our volunteer community of 12 people.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. Receive a structured decision (S1–S11)
&lt;/h3&gt;

&lt;p&gt;Example (shortened):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;S1 — Will
Create a weekly activity plan.

S2 — Wisdom
Constraints: mixed availability, limited time.

S3 — Knowledge
Volunteer coordination patterns and scheduling principles.

S4 — Comprehension
Integrated understanding of goals + constraints.

S5–S6
Generate multiple plan variants → filter unrealistic ones.

S7
Balance workload and engagement.

S8–S9
Convert into actionable steps.

S10
Validate feasibility.

S11 — Realization
Final weekly plan with tasks, roles, and timeline.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives communities a clear, fair, conflict‑free plan.&lt;/p&gt;




&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;No hosted demo is required.&lt;br&gt;&lt;br&gt;
The agent itself &lt;em&gt;is&lt;/em&gt; the demo — it works in any AI chat.&lt;/p&gt;

&lt;p&gt;To try it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open a chat
&lt;/li&gt;
&lt;li&gt;Paste &lt;code&gt;A11_AGENT.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Give your task
&lt;/li&gt;
&lt;li&gt;Get a structured decision
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;GitHub repository:&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/gormenz-svg/algorithm-11" rel="noopener noreferrer"&gt;https://github.com/gormenz-svg/algorithm-11&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The file used by the assistant: &lt;a href="https://github.com/gormenz-svg/algorithm-11/blob/main/lite/A11_AGENT.md" rel="noopener noreferrer"&gt;https://github.com/gormenz-svg/algorithm-11/blob/main/lite/A11_AGENT.md&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;p&gt;This project is based entirely on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Algorithm‑11 (A11)&lt;/strong&gt; — a universal reasoning architecture
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A11‑Agent&lt;/strong&gt; — an execution protocol for structured decision‑making
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A11‑Lite&lt;/strong&gt; — a prompt‑layer interface for LLMs
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No frontend.&lt;br&gt;&lt;br&gt;
No backend.&lt;br&gt;&lt;br&gt;
No hosting.&lt;br&gt;&lt;br&gt;
No frameworks.&lt;/p&gt;

&lt;p&gt;The goal was to create a tool any community can use instantly, without technical skills.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>weekendchallenge</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
