<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Naoki Higuchi (CSCT-NAIL)</title>
    <description>The latest articles on DEV Community by Naoki Higuchi (CSCT-NAIL) (@csctnail).</description>
    <link>https://dev.to/csctnail</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3749420%2F47a1858f-c9e4-4fe4-bae1-78662afff7e0.png</url>
      <title>DEV Community: Naoki Higuchi (CSCT-NAIL)</title>
      <link>https://dev.to/csctnail</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/csctnail"/>
    <language>en</language>
    <item>
      <title>A New AI Architecture Without Prior Distributions: Stream-Based AI and Compositional Inference</title>
      <dc:creator>Naoki Higuchi (CSCT-NAIL)</dc:creator>
      <pubDate>Tue, 03 Feb 2026 02:24:41 +0000</pubDate>
      <link>https://dev.to/csctnail/-a-new-ai-architecture-without-prior-distributions-stream-based-ai-and-compositional-inference-1ohc</link>
      <guid>https://dev.to/csctnail/-a-new-ai-architecture-without-prior-distributions-stream-based-ai-and-compositional-inference-1ohc</guid>
      <description>&lt;h2&gt;
  
  
  The Problem with Current AI
&lt;/h2&gt;

&lt;p&gt;The foundation of current AI is the Transformer/Attention model—undeniably successful. But &lt;em&gt;why&lt;/em&gt; it succeeds and &lt;em&gt;where&lt;/em&gt; its limits lie remain subjects of debate.&lt;/p&gt;

&lt;p&gt;Hallucination—generating outputs that deviate from training data—has no fundamental solution despite numerous proposed fixes. Causal reasoning and compositional inference over unseen combinations remain weak points with clear limitations.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Different Foundation
&lt;/h2&gt;

&lt;p&gt;I designed a new AI architecture based on different principles. It's grounded in theory but implemented as working code.&lt;/p&gt;

&lt;p&gt;The core ideas are simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We live in a continuously moving world → Input must be a &lt;strong&gt;stream&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Modeled on human cognition → Having &lt;strong&gt;no prior distribution&lt;/strong&gt; is preferable&lt;/li&gt;
&lt;li&gt;To approximate neurons → Various &lt;strong&gt;gate mechanisms&lt;/strong&gt; are needed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These principles are designed to be describable as physical dynamics, and the implementation actually works.&lt;/p&gt;

&lt;p&gt;However, the resulting architecture is far removed from existing AI. Understanding it requires knowledge spanning multiple domains—semiotics, neuroscience, geometry, information theory—rather than statistics alone. Bayesian inference is not required.&lt;/p&gt;

&lt;h2&gt;
  
  
  The CSCT Engine
&lt;/h2&gt;

&lt;p&gt;This architecture is governed by axioms called &lt;strong&gt;CSCT (Clock-Selected Compression Theory)&lt;/strong&gt;. The key rules:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Input is a stream (continuous). Basic coefficients have non-negative constraints. There is an external input called an 'anchor', and gate opening/closing is controlled in synchronization with the anchor's change (velocity). This gate mechanism achieves discretization. Input information becomes discrete codes, reusable for inference even after learning stops."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Design Philosophy: Cognition as a Projected Dynamical System
&lt;/h3&gt;

&lt;p&gt;The CSCT Engine is a neurodynamical simulator implementing the five CSCT axioms in computable form. It mathematically models cognition as a &lt;strong&gt;Projected Dynamical System (PDS) evolving on a Simplex&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Unlike standard deep learning approaches that operate in unconstrained vector spaces allowing negative weights (subtractive interference), the CSCT Engine enforces a &lt;strong&gt;non-negative constraint&lt;/strong&gt; (C ≥ 0) with &lt;strong&gt;unit sum&lt;/strong&gt; (Σp_k = 1).&lt;/p&gt;

&lt;p&gt;This ensures the system cannot "rewind" entropy via mathematical cancellation, but must &lt;strong&gt;"construct" meanings within the Simplex&lt;/strong&gt; defined by codebook vectors, thereby grounding internal symbols in physical causality.&lt;/p&gt;
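&lt;p&gt;As a minimal sketch of what this constraint looks like in code (illustrative only—the function names and the choice of Euclidean projection are mine, not the engine's actual implementation), each update can take a free-space step and then project back onto the probability simplex, so C ≥ 0 and Σp_k = 1 hold at every instant:&lt;/p&gt;

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {p : p_k >= 0, sum(p) = 1} (the standard sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def pds_step(p, grad, lr=0.1):
    """One projected-dynamics step: unconstrained update, then projection
    back onto the simplex, so negative (subtractive) weights never arise."""
    return project_to_simplex(p - lr * grad)

p = np.array([0.5, 0.3, 0.2])
p = pds_step(p, grad=np.array([1.0, -2.0, 0.5]))
assert np.all(p >= 0) and abs(p.sum() - 1.0) < 1e-9
```

&lt;p&gt;The projection, rather than a penalty term, is what makes cancellation impossible: mass can only be redistributed within the Simplex, never subtracted away.&lt;/p&gt;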

&lt;h3&gt;
  
  
  Architecture Diagram
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzc6lqdo1so4qfk1n4zqw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzc6lqdo1so4qfk1n4zqw.png" alt="CSCT Engine Architecture" width="800" height="531"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure: Overview of the CSCT Engine Architecture.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Two Architectures: SingleGate and MultiGate
&lt;/h3&gt;

&lt;p&gt;The CSCT Engine has two main architectures:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;SingleGate&lt;/th&gt;
&lt;th&gt;MultiGate&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Biological Correspondence&lt;/td&gt;
&lt;td&gt;Peripheral Processing&lt;/td&gt;
&lt;td&gt;Central Processing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Computational Cost&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Application&lt;/td&gt;
&lt;td&gt;Single source waveform&lt;/td&gt;
&lt;td&gt;Inter-channel relationships&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Phase Processing&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;None (velocity-based)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;θ phase temporal modulation&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;SingleGate&lt;/strong&gt; performs clock selection directly from input features (position, velocity, acceleration). The gate opens/closes according to the anchor's rate of change (velocity), but no explicit phase calculation is performed.&lt;/p&gt;
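&lt;p&gt;A toy sketch of that velocity-driven gating (hypothetical names and threshold; not code from the repository)—the gate is simply a mask over samples where the anchor changes fast enough, with no phase ever computed:&lt;/p&gt;

```python
import numpy as np

def singlegate_clock(anchor, threshold=0.05):
    """Sketch of SingleGate clock selection: the gate opens at samples
    where the anchor's rate of change (velocity) exceeds a threshold."""
    velocity = np.abs(np.diff(anchor, prepend=anchor[0]))
    return velocity > threshold  # boolean mask: gate open/closed per step

t = np.linspace(0.0, 1.0, 200)
anchor = np.where(t < 0.5, 0.0, 1.0)  # a single step change at t = 0.5
gate = singlegate_clock(anchor)
# The gate opens only at the step, where the anchor's velocity is non-zero.
```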

&lt;p&gt;&lt;strong&gt;MultiGate&lt;/strong&gt; has three neurological gate mechanisms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Na⁺ Gate&lt;/strong&gt;: Corresponds to sodium channels—high-speed, sparse clock selection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;θ Phase&lt;/strong&gt;: Corresponds to hippocampal θ rhythm—time-dependent rhythm modulation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NMDA Gate&lt;/strong&gt;: Corresponds to NMDA receptors—&lt;strong&gt;integration window opens at θ phase peak&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;combined_gate = Na⁺_activation × NMDA_activation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The NMDA gate opens and closes depending on θ phase. This matches the timing when LTP (Long-Term Potentiation) occurs in the hippocampus, making it a &lt;strong&gt;computational implementation of θ-γ coupling known in neuroscience&lt;/strong&gt;.&lt;/p&gt;
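&lt;p&gt;The product structure above can be sketched as follows (a toy illustration under my own simplifying assumptions: thresholded velocity as Na⁺ activation, a cosine θ rhythm, and a phase-peak window as the NMDA gate):&lt;/p&gt;

```python
import numpy as np

def multigate(velocity, t, theta_freq=8.0, na_thresh=0.1, phase_window=0.8):
    """Sketch of the MultiGate composition: Na+ gate fires sparsely on
    fast input change; the NMDA integration window opens only near the
    theta-phase peak; the combined gate is their product."""
    na_gate = (np.abs(velocity) > na_thresh).astype(float)   # fast, sparse
    theta_phase = np.cos(2.0 * np.pi * theta_freq * t)       # ~8 Hz rhythm
    nmda_gate = (theta_phase > phase_window).astype(float)   # peak-only window
    return na_gate * nmda_gate                               # combined_gate

t = np.linspace(0.0, 1.0, 1000)
velocity = np.random.default_rng(0).normal(0.0, 0.2, t.size)
gate = multigate(velocity, t)
# Input passes only when it is fast AND theta phase is near its peak.
```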

&lt;h2&gt;
  
  
  Structural Differences from Attention Models
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Time Structure
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Attention Model&lt;/th&gt;
&lt;th&gt;CSCT Engine&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Processing&lt;/td&gt;
&lt;td&gt;Batch (static frames, text, video/audio via position encoding)&lt;/td&gt;
&lt;td&gt;Stream (continuous time-series)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Self-reference&lt;/td&gt;
&lt;td&gt;S created from S(t-1)&lt;/td&gt;
&lt;td&gt;Forward flow only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Parallelism&lt;/td&gt;
&lt;td&gt;Processes all information at once&lt;/td&gt;
&lt;td&gt;Progresses through stream&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In current Attention models, the self is created from S(t-1). This backward-referencing structure (negative time indices) allows static images, text, and video/audio (converted to positional information) to be handled uniformly.&lt;/p&gt;

&lt;p&gt;In contrast, the CSCT Engine is stream-based, so it currently cannot handle static images as elegantly.&lt;/p&gt;

&lt;p&gt;This may seem more constrained and slower. However, it's an architecture with high potential for &lt;strong&gt;processing while moving&lt;/strong&gt;—a fundamentally different capability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Distance Structure
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Attention Model&lt;/th&gt;
&lt;th&gt;CSCT Engine&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Norm&lt;/td&gt;
&lt;td&gt;L2 (codes mix, hard to extract)&lt;/td&gt;
&lt;td&gt;L1-like (codes separable)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Negative values&lt;/td&gt;
&lt;td&gt;Allowed&lt;/td&gt;
&lt;td&gt;Not allowed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vector addition&lt;/td&gt;
&lt;td&gt;Unrestricted&lt;/td&gt;
&lt;td&gt;Constrained to simplex&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Convex hull&lt;/td&gt;
&lt;td&gt;Local pseudo-hull only&lt;/td&gt;
&lt;td&gt;Globally closed hull&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Why Transformer "Attention" Is Insufficient
&lt;/h3&gt;

&lt;p&gt;The Transformer attention mechanism is defined as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Attention(Q, K, V) = softmax(QK^T / √d_k) V
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since softmax outputs sum to unity, the result is a &lt;strong&gt;convex combination&lt;/strong&gt; of value vectors V. Superficially, this appears to satisfy the convex hull constraint.&lt;/p&gt;

&lt;p&gt;However, this constitutes only a &lt;strong&gt;local pseudo-hull&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each layer has its own local hull but doesn't form a globally closed simplex&lt;/li&gt;
&lt;li&gt;No boundary means ever-increasing resources are needed to counteract entropy production&lt;/li&gt;
&lt;/ul&gt;
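&lt;p&gt;The "local hull" property is easy to verify numerically: each attention output row is a convex combination of V's rows (non-negative weights summing to one), but only within a single layer and head:&lt;/p&gt;

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
W = softmax(Q @ K.T / np.sqrt(K.shape[1]))   # attention weights
out = W @ V                                  # convex combination of V's rows

# Each output row lies in the convex hull of V's rows: weights are
# non-negative and sum to 1 -- but only per layer (a local pseudo-hull).
assert np.all(W >= 0) and np.allclose(W.sum(axis=1), 1.0)
```

&lt;p&gt;Once this output is passed through residual additions and unconstrained linear maps in the next layer, nothing keeps it inside any fixed hull—which is exactly the lack of global closure described above.&lt;/p&gt;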

&lt;p&gt;The CSCT Engine requires geometric simplex structure, disallows negative computation, and forbids simple addition.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This constraint creates a geometric convex hull in information space, enabling compositional inference.&lt;/strong&gt; Conversely, I argue that no amount of incremental improvement to current AI can eliminate hallucination or achieve true compositional inference.&lt;/p&gt;

&lt;h2&gt;
  
  
  Experiment 8: Meaning Extraction
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Purpose
&lt;/h3&gt;

&lt;p&gt;EX8 tests whether a &lt;strong&gt;frozen system can infer unseen inputs&lt;/strong&gt; after discrete codes acquire meaning through anchoring.&lt;/p&gt;

&lt;h3&gt;
  
  
  Experimental Design
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Training Data:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Singles: A, B&lt;/li&gt;
&lt;li&gt;Composites: A+B, A+C, B+C&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Withheld:&lt;/strong&gt; C alone (never shown as single)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test:&lt;/strong&gt; After stopping learning (no gradient), can the model infer C (never directly taught) given only anchor 'c'?&lt;/p&gt;

&lt;h3&gt;
  
  
  Three Geometric Conditions
&lt;/h3&gt;

&lt;p&gt;We vary C's position to examine convex hull effects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;IN_HULL&lt;/strong&gt;: C is a convex combination of A and B (inside hull)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OUT_HULL&lt;/strong&gt;: C is orthogonal to both A and B (outside hull)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RANDOM&lt;/strong&gt;: C is randomly initialized (baseline)&lt;/li&gt;
&lt;/ul&gt;
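&lt;p&gt;A sketch of how the three placements of C can be constructed (illustrative vectors and dimensions; the actual experiment code is in the repository linked below):&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(42)
dim = 16
A = rng.normal(size=dim); A /= np.linalg.norm(A)
B = rng.normal(size=dim); B /= np.linalg.norm(B)

# IN_HULL: C is a convex combination of A and B (inside their hull).
C_in = 0.5 * A + 0.5 * B

# OUT_HULL: C is orthogonal to both A and B (Gram-Schmidt against them).
v = rng.normal(size=dim)
v -= (v @ A) * A
B_perp = B - (B @ A) * A; B_perp /= np.linalg.norm(B_perp)
v -= (v @ B_perp) * B_perp
C_out = v / np.linalg.norm(v)

# RANDOM baseline: C drawn with no geometric relation to A and B.
C_rand = rng.normal(size=dim)

assert abs(C_out @ A) < 1e-9 and abs(C_out @ B) < 1e-9
```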

&lt;h3&gt;
  
  
  Results (30 seeds per condition, 90 total runs)
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Condition&lt;/th&gt;
&lt;th&gt;Success Rate&lt;/th&gt;
&lt;th&gt;Withheld Similarity&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;IN_HULL&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;96.7%&lt;/strong&gt; (29/30)&lt;/td&gt;
&lt;td&gt;0.979 ± 0.025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RANDOM&lt;/td&gt;
&lt;td&gt;53.3% (16/30)&lt;/td&gt;
&lt;td&gt;0.682 ± 0.370&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;OUT_HULL&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;16.7%&lt;/strong&gt; (5/30)&lt;/td&gt;
&lt;td&gt;0.701 ± 0.242&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Statistical test: Kruskal-Wallis H = 42.52, p = 5.85 × 10⁻¹⁰&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Discovery: Ungrounded Symbol Acquisition
&lt;/h3&gt;

&lt;p&gt;Interestingly, in OUT_HULL, C acquired a unique code in 80% of seeds. Yet reconstruction accuracy remained low.&lt;/p&gt;

&lt;p&gt;We term this &lt;strong&gt;"Ungrounded Symbol Acquisition"&lt;/strong&gt;: a discrete code is assigned and manipulated, but it lacks representational content within the codebook's constructive capacity.&lt;/p&gt;

&lt;p&gt;This provides a mathematical instantiation of Searle's "Chinese Room" argument—the system can "handle" the symbol without the symbol bearing reconstructable meaning.&lt;/p&gt;

&lt;h2&gt;
  
  
  Experiment 9: Syntax Emergence
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Purpose
&lt;/h3&gt;

&lt;p&gt;EX9 tests whether &lt;strong&gt;syntactic inference can arise&lt;/strong&gt; once meaning has been discretized. It is the mirror of EX8:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;EX8 (Meaning)&lt;/strong&gt;: Withheld &lt;strong&gt;primitive&lt;/strong&gt; must be inferred from observed composites&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EX9 (Syntax)&lt;/strong&gt;: Withheld &lt;strong&gt;composite&lt;/strong&gt; must be inferred from observed primitives and composites&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Experimental Design
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Training Data:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Singles: A, B, C&lt;/li&gt;
&lt;li&gt;Composites: A+B, A+C&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Withheld:&lt;/strong&gt; B+C (never trained)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test:&lt;/strong&gt; Given anchor 'b+c', can the model infer B+C (never directly taught)?&lt;/p&gt;

&lt;h3&gt;
  
  
  Results (30 seeds per condition, 90 total runs)
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Condition&lt;/th&gt;
&lt;th&gt;Success Rate&lt;/th&gt;
&lt;th&gt;Withheld Similarity&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;IN_HULL&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;66.7%&lt;/strong&gt; (20/30)&lt;/td&gt;
&lt;td&gt;0.890 ± 0.138&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RANDOM&lt;/td&gt;
&lt;td&gt;33.3% (10/30)&lt;/td&gt;
&lt;td&gt;0.767 ± 0.253&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;OUT_HULL&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;13.3%&lt;/strong&gt; (4/30)&lt;/td&gt;
&lt;td&gt;0.742 ± 0.168&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Statistical test: Kruskal-Wallis H = 19.13, p = 7.00 × 10⁻⁵&lt;/p&gt;

&lt;h3&gt;
  
  
  Discussion: Syntax as Interpolation, Not Algebra
&lt;/h3&gt;

&lt;p&gt;EX9 results show that syntax in CSCT emerges not as &lt;strong&gt;algebraic rule manipulation&lt;/strong&gt; (combining symbols independent of content) but as &lt;strong&gt;discovery of barycentric coordinates&lt;/strong&gt; on the codebook simplex.&lt;/p&gt;

&lt;p&gt;Rather than learning an algebraic subtraction rule, the system discovers a consistent barycentric mapping:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;f: y_{A+B} → ½v_A + ½v_B
f: y_{A+C} → ½v_A + ½v_C
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because this mapping rule is geometrically consistent, the system generalizes to the withheld input &lt;code&gt;y_{B+C} → ½v_B + ½v_C&lt;/code&gt;.&lt;/p&gt;
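&lt;p&gt;The generalization step can be made concrete with a toy example (the codebook vectors and helper below are illustrative, not the engine's data structures): once composition means "equal barycentric weights over the named codebook vectors," the withheld composite comes for free from the same rule.&lt;/p&gt;

```python
import numpy as np

# Illustrative codebook: one vector per primitive symbol.
codebook = {"A": np.array([1.0, 0.0, 0.0]),
            "B": np.array([0.0, 1.0, 0.0]),
            "C": np.array([0.0, 0.0, 1.0])}

def compose(symbols):
    """Barycentric composition: equal weights 1/n over the named codebook
    vectors -- a point on the simplex, not an unconstrained vector sum."""
    vs = [codebook[s] for s in symbols]
    return sum(vs) / len(vs)

y_AB = compose("AB")   # trained:  1/2 v_A + 1/2 v_B
y_BC = compose("BC")   # withheld: 1/2 v_B + 1/2 v_C, same geometric rule
assert np.allclose(y_BC, [0.0, 0.5, 0.5])
```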

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;From these results, I conclude that human cognition—especially intellectual activity—is built on &lt;strong&gt;constraints&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Nine experiments demonstrated that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Discretization&lt;/strong&gt; emerges reliably from continuous dynamics via clock selection (EX1-3)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Irreversible anchors&lt;/strong&gt; provide stability against noise and drift, outperforming reversible (self-referential) systems long-term (EX4)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Binding&lt;/strong&gt; is achieved as implicit synchronization through locking to a shared anchor (EX5)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internal time&lt;/strong&gt; progresses proportionally to the signal's rate of change (flux), halting when no signal is present (EX7)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Semantic grounding&lt;/strong&gt; requires convex-hull membership; symbols assigned outside the hull remain ungrounded (EX6, EX8)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Syntax&lt;/strong&gt; emerges as barycentric interpolation, not algebraic abstraction; compositional inference degrades outside the trained hull (EX9)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Modern AI pursues "infinite expansion" through scaling. I propose the inverse: &lt;strong&gt;closed constraint may be a necessary condition for stable, efficient intelligence&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The CSCT Engine may be the first program that can describe (and design) our cognitive activity as an "OS."&lt;/p&gt;

&lt;p&gt;If you're interested, please run the code. There are no licensing restrictions, and this is just the beginning. This is not speculation—it's a working program.&lt;/p&gt;




&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Code (GitHub):&lt;/strong&gt; &lt;a href="https://github.com/CSCT-NAIL/CSCT" rel="noopener noreferrer"&gt;https://github.com/CSCT-NAIL/CSCT&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Paper (Zenodo DOI):&lt;/strong&gt; &lt;a href="https://zenodo.org/records/18408862" rel="noopener noreferrer"&gt;https://doi.org/10.5281/zenodo.18382368&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Project site:&lt;/strong&gt; &lt;a href="https://csct-nail.com" rel="noopener noreferrer"&gt;https://csct-nail.com&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>ai</category>
      <category>deeplearning</category>
      <category>hallucination</category>
    </item>
  </channel>
</rss>
