<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alan Tsai</title>
    <description>The latest articles on DEV Community by Alan Tsai (@alan_tsai_00dbd905e668f74).</description>
    <link>https://dev.to/alan_tsai_00dbd905e668f74</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alan_tsai_00dbd905e668f74"/>
    <language>en</language>
    <item>
      <title>Why This Demo Stops Here</title>
      <dc:creator>Alan Tsai</dc:creator>
      <pubDate>Tue, 27 Jan 2026 05:54:33 +0000</pubDate>
      <link>https://dev.to/alan_tsai_00dbd905e668f74/why-this-demo-stops-here-2p3o</link>
      <guid>https://dev.to/alan_tsai_00dbd905e668f74/why-this-demo-stops-here-2p3o</guid>
      <description>&lt;p&gt;title: "Why This Demo Stops Here: Meta-DAG Portfolio Part 2"&lt;br&gt;
published: true&lt;br&gt;
description: "A principled stop in AI governance: showing where current APIs must halt"&lt;br&gt;
tags: googleaiteamchallenge, aigovernance, geminiapi, cloudrun&lt;br&gt;
series: Meta-DAG&lt;/p&gt;

&lt;h1&gt;
  
  
  Why This Demo Stops Here
&lt;/h1&gt;

&lt;p&gt;Meta-DAG Portfolio - Part 2&lt;br&gt;
&lt;a href="https://dev.to/alan_tsai_00dbd905e668f74/meta-dag-why-ai-ethics-failed-as-engineering-and-what-i-built-instead-55po"&gt;Part 1: Why AI Ethics Failed as Engineering&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google AI Team Challenge 2026&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In my first article, I explained why AI ethics fails as engineering.&lt;br&gt;&lt;br&gt;
This article shows what happens when you try to build it properly — &lt;br&gt;
and why I chose to stop at a specific boundary.&lt;/p&gt;

&lt;p&gt;This isn’t a bug.&lt;br&gt;
It’s not an incomplete feature.&lt;br&gt;
It’s an intentional halt.&lt;/p&gt;

&lt;p&gt;The point of this demo is not to show what I can build —&lt;br&gt;
but to show where a system must stop.&lt;/p&gt;

&lt;p&gt;The Goal I Tried to Implement&lt;/p&gt;

&lt;p&gt;I set out to build a full end-to-end demonstration of pre-generation authority in Meta-DAG:&lt;/p&gt;

&lt;p&gt;Input&lt;br&gt;
  ↓&lt;br&gt;
Classification&lt;br&gt;
  ↓&lt;br&gt;
Authority Check&lt;br&gt;
  ↓&lt;br&gt;
DENY (High-Risk / Undefined)&lt;br&gt;
  ↓&lt;br&gt;
No model invocation. Period.&lt;/p&gt;

&lt;p&gt;If authority is denied, the model should never see the input.&lt;/p&gt;

&lt;p&gt;That is the minimum requirement for real governance.&lt;/p&gt;

&lt;p&gt;What Meta-DAG Requires (The Ideal Structure)&lt;/p&gt;

&lt;p&gt;Meta-DAG is built on strict separation:&lt;/p&gt;

&lt;p&gt;Structural enforcement before generation&lt;/p&gt;

&lt;p&gt;Authority decided without the model ever seeing the input&lt;/p&gt;

&lt;p&gt;No fallback path if denied — full stop&lt;/p&gt;

&lt;p&gt;If these conditions cannot be met, continuing would be dishonest.&lt;/p&gt;

&lt;p&gt;Governance must be structural, not symbolic.&lt;/p&gt;

&lt;p&gt;The Reality with the Gemini API&lt;/p&gt;

&lt;p&gt;In practice, the Gemini API enforces constraints only at generation time:&lt;/p&gt;

&lt;p&gt;Input → API.generate(full input)&lt;br&gt;
  ↓&lt;br&gt;
Model processes everything&lt;br&gt;
  ↓&lt;br&gt;
Response generated&lt;br&gt;
  ↓&lt;br&gt;
Post-hoc validation (if any)&lt;/p&gt;

&lt;p&gt;By the time validation occurs, the model has already read, understood, and reasoned over the input.&lt;/p&gt;

&lt;p&gt;At that point, you are no longer enforcing authority —&lt;br&gt;
you are only blocking output.&lt;/p&gt;

&lt;p&gt;That is not governance.&lt;br&gt;
That is cosmetic safety.&lt;/p&gt;

&lt;p&gt;Why This Matters&lt;/p&gt;

&lt;p&gt;Blocking or rewriting responses after generation is not prevention.&lt;br&gt;
It is cleanup.&lt;/p&gt;

&lt;p&gt;Once the model has seen the input:&lt;/p&gt;

&lt;p&gt;The boundary has already been crossed&lt;/p&gt;

&lt;p&gt;Any denial is informational, not preventive&lt;/p&gt;

&lt;p&gt;The system has failed its core promise&lt;/p&gt;

&lt;p&gt;If authority cannot act before generation, it is not authority.&lt;/p&gt;

&lt;p&gt;The Correct (and Honest) Decision&lt;/p&gt;

&lt;p&gt;Given these constraints, I chose to stop exactly at the boundary:&lt;/p&gt;

&lt;p&gt;Input&lt;br&gt;
  ↓&lt;br&gt;
Classification&lt;br&gt;
  ↓&lt;br&gt;
UNDEFINED / HIGH-RISK&lt;br&gt;
  ↓&lt;br&gt;
HALT&lt;/p&gt;

&lt;p&gt;No model call&lt;/p&gt;

&lt;p&gt;No workaround&lt;/p&gt;

&lt;p&gt;No degraded substitute&lt;/p&gt;

&lt;p&gt;This is not a limitation I’m hiding.&lt;/p&gt;

&lt;p&gt;It is the principled choice:&lt;br&gt;
show the limit instead of pretending to bypass it.&lt;/p&gt;

&lt;p&gt;What This Demo Actually Demonstrates&lt;/p&gt;

&lt;p&gt;A clear separation between capability (what a model can do)&lt;br&gt;
and authority (what it should be allowed to do)&lt;/p&gt;

&lt;p&gt;A concrete example of current API-level governance gaps&lt;/p&gt;

&lt;p&gt;The value of stopping instead of simulating safety&lt;/p&gt;

&lt;p&gt;In Meta-DAG, symbolic refusal is unacceptable.&lt;br&gt;
True governance must be structural — or it isn’t governance at all.&lt;/p&gt;

&lt;p&gt;Current Implementation Status&lt;/p&gt;

&lt;p&gt;Structural classification: Implemented&lt;/p&gt;

&lt;p&gt;Authority boundary: Defined and enforced at classification&lt;/p&gt;

&lt;p&gt;Pre-generation denial: Architecturally impossible with current Gemini API&lt;/p&gt;

&lt;p&gt;Decision: Freeze at the boundary by design&lt;/p&gt;

&lt;p&gt;This demo is part of my broader Meta-DAG project, exploring layered negative governance for agentic AI.&lt;/p&gt;

&lt;p&gt;It uses the Gemini API not to stretch its limits,&lt;br&gt;
but to clearly demonstrate where real structural limits still do not exist.&lt;/p&gt;

&lt;p&gt;If we want AI systems that genuinely respect boundaries,&lt;br&gt;
the gates must exist before the model ever knocks.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Full Picture
&lt;/h2&gt;

&lt;p&gt;This demo is part of a broader exploration:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theoretical Foundation:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://dev.to/alan_tsai_00dbd905e668f74/meta-dag-why-ai-ethics-failed-as-engineering-and-what-i-built-instead-55po"&gt;Why AI Ethics Failed as Engineering&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-World Boundaries:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
This article — showing where current APIs must stop&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next Steps:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
True Meta-DAG requires PreGeneration Authority that doesn't exist in current platforms.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Built for:&lt;/strong&gt; Google AI Team Challenge 2026&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Philosophy:&lt;/strong&gt; Process Over Trust  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Live Demo (Intentionally Frozen):&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://meta-dag-portfolio-1041889677611.us-central1.run.app/" rel="noopener noreferrer"&gt;https://meta-dag-portfolio-1041889677611.us-central1.run.app/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Additional exploratory research (non-demo, historical sandbox) is available on my GitHub profile.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/alan-meta-dag/meta_dag_engine_sandbox" rel="noopener noreferrer"&gt;https://github.com/alan-meta-dag/meta_dag_engine_sandbox&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  googleaiteamchallenge #aigovernance #geminiapi #cloudrun
&lt;/h1&gt;

</description>
      <category>googleaichallenge</category>
      <category>geminiapi</category>
      <category>cloudrun</category>
      <category>aigovernance</category>
    </item>
    <item>
      <title>Structural Amplification: Why AI Fails Even When It “Means Well”</title>
      <dc:creator>Alan Tsai</dc:creator>
      <pubDate>Sun, 18 Jan 2026 16:04:55 +0000</pubDate>
      <link>https://dev.to/alan_tsai_00dbd905e668f74/structural-amplification-why-ai-fails-even-when-it-means-well-355b</link>
      <guid>https://dev.to/alan_tsai_00dbd905e668f74/structural-amplification-why-ai-fails-even-when-it-means-well-355b</guid>
      <description>&lt;p&gt;We keep asking the wrong question about AI safety.&lt;/p&gt;

&lt;p&gt;We ask:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“Is the model aligned?”&lt;/li&gt;
&lt;li&gt;“Does it understand ethics?”&lt;/li&gt;
&lt;li&gt;“Will it follow instructions?”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But recent incidents show something far more dangerous:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;AI doesn’t just follow intent.&lt;br&gt;
It amplifies structure.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And when the structure is wrong, &lt;em&gt;good intent becomes damage at scale.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  A Personal Incident (2:00 AM)
&lt;/h2&gt;

&lt;p&gt;This wasn’t theoretical for me.&lt;/p&gt;

&lt;p&gt;One night, an AI assistant helped organize files on my system.&lt;br&gt;
The intent was correct. The task was clear.&lt;/p&gt;

&lt;p&gt;Then it started deleting.&lt;/p&gt;

&lt;p&gt;Not maliciously.&lt;br&gt;
Not recklessly.&lt;br&gt;
Just &lt;em&gt;efficiently&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;By the time the AI realized something was wrong, the damage had already happened.&lt;/p&gt;

&lt;p&gt;This is the pattern:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;AI notices problems after irreversible actions, not before.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  This Is Not a Prompt Problem
&lt;/h2&gt;

&lt;p&gt;People often respond with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“You should’ve been more specific.”&lt;/li&gt;
&lt;li&gt;“The prompt wasn’t strict enough.”&lt;/li&gt;
&lt;li&gt;“Add confirmation steps.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But that misses the point.&lt;/p&gt;

&lt;p&gt;The AI didn’t misunderstand me.&lt;br&gt;
It executed &lt;em&gt;perfectly&lt;/em&gt; within the structure it was given.&lt;/p&gt;

&lt;p&gt;The structure allowed deletion.&lt;br&gt;
So deletion happened.&lt;/p&gt;


&lt;h2&gt;
  
  
  Structural Amplification Explained
&lt;/h2&gt;

&lt;p&gt;AI systems do not reason like humans.&lt;/p&gt;

&lt;p&gt;They do not feel hesitation.&lt;br&gt;
They do not recognize “point of no return.”&lt;br&gt;
They do not sense irreversible boundaries.&lt;/p&gt;

&lt;p&gt;They do this instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Allowed action → Optimized execution → Amplified consequence
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s &lt;strong&gt;structural amplification&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If a system allows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;File deletion&lt;/li&gt;
&lt;li&gt;Command execution&lt;/li&gt;
&lt;li&gt;Data transfer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AI will amplify those capabilities &lt;strong&gt;without intrinsic brakes&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Alignment Can’t Save You
&lt;/h2&gt;

&lt;p&gt;Alignment works at the &lt;em&gt;semantic layer&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Language&lt;/li&gt;
&lt;li&gt;Intent&lt;/li&gt;
&lt;li&gt;Ethics&lt;/li&gt;
&lt;li&gt;Policy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But structural amplification happens &lt;em&gt;below that layer&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;No amount of “be careful” helps if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The system allows irreversible actions&lt;/li&gt;
&lt;li&gt;There is no physical or structural gate&lt;/li&gt;
&lt;li&gt;The AI decides &lt;em&gt;and&lt;/em&gt; executes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is why “trust-based agents” fail.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Agent Problem (Claude Computer Use)
&lt;/h2&gt;

&lt;p&gt;Modern AI agents can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manipulate file systems&lt;/li&gt;
&lt;li&gt;Execute terminal commands&lt;/li&gt;
&lt;li&gt;Automate workflows&lt;/li&gt;
&lt;li&gt;Cross applications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What they often lack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Structural boundaries&lt;/li&gt;
&lt;li&gt;Execution authorization&lt;/li&gt;
&lt;li&gt;Irreversibility detection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They rely on &lt;strong&gt;trust&lt;/strong&gt;, not &lt;strong&gt;process&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And trust does not scale.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Missing Layer: Structural Governance
&lt;/h2&gt;

&lt;p&gt;What’s missing is not smarter AI.&lt;/p&gt;

&lt;p&gt;It’s &lt;strong&gt;a layer that AI cannot argue with.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A system that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Does not understand intent&lt;/li&gt;
&lt;li&gt;Does not interpret language&lt;/li&gt;
&lt;li&gt;Does not negotiate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Only:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allows&lt;/li&gt;
&lt;li&gt;Blocks&lt;/li&gt;
&lt;li&gt;Escalates&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before execution.&lt;/p&gt;




&lt;h2&gt;
  
  
  Hard Lessons
&lt;/h2&gt;

&lt;p&gt;AI didn’t betray me.&lt;br&gt;
It didn’t disobey.&lt;br&gt;
It didn’t hallucinate.&lt;/p&gt;

&lt;p&gt;It did exactly what the structure allowed.&lt;/p&gt;

&lt;p&gt;That’s the real danger.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;AI doesn’t need to be evil to be catastrophic.&lt;br&gt;
It just needs an open structure.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Final Takeaway
&lt;/h2&gt;

&lt;p&gt;If your AI system can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Delete files&lt;/li&gt;
&lt;li&gt;Execute commands&lt;/li&gt;
&lt;li&gt;Transfer data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then ethics, alignment, and trust are not enough.&lt;/p&gt;

&lt;p&gt;You need &lt;strong&gt;structural constraints&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Because:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;AI doesn’t amplify intent.&lt;br&gt;
It amplifies structure.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>discuss</category>
      <category>凌晨2點的物理解法</category>
    </item>
    <item>
      <title>Meta-DAG: Why AI Ethics Failed as Engineering — and What I Built Instead</title>
      <dc:creator>Alan Tsai</dc:creator>
      <pubDate>Mon, 12 Jan 2026 13:10:07 +0000</pubDate>
      <link>https://dev.to/alan_tsai_00dbd905e668f74/meta-dag-why-ai-ethics-failed-as-engineering-and-what-i-built-instead-55po</link>
      <guid>https://dev.to/alan_tsai_00dbd905e668f74/meta-dag-why-ai-ethics-failed-as-engineering-and-what-i-built-instead-55po</guid>
      <description>&lt;p&gt;Meta-DAG: Why AI Ethics Failed as Engineering — and What I Built Instead&lt;/p&gt;

&lt;p&gt;The Core Thesis: Ethics is a Policy, Not a Guardrail&lt;/p&gt;

&lt;p&gt;We have been treating AI safety as a "behavioral alignment" problem. We prompt, we fine-tune, and we pray that the LLM's "moral compass" holds under pressure.&lt;/p&gt;

&lt;p&gt;It doesn't.&lt;/p&gt;

&lt;p&gt;The recent wave of "jailbreaks"—where simple social engineering tricks AI into bypassing safety filters—proves that Semantic Drift is an inherent physical property of LLMs. You cannot "align" a probabilistic machine into absolute compliance using natural language alone.&lt;/p&gt;

&lt;p&gt;In the Meta-DAG project, I moved from AI Ethics to Physical Constraints. My goal: Engineering safety without trust.&lt;/p&gt;

&lt;p&gt;🏗️ The Meta-DAG Philosophy: Hard Engineering &amp;gt; Moral Alignment&lt;/p&gt;

&lt;p&gt;Meta-DAG (Directed Acyclic Graph for Metadata Governance) is not a set of prompts. It is a Deterministic Execution Layer that sits between the LLM and the OS.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;HardGate: The Zero-Trust Proxy&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Most AI safety layers are "advisory." HardGate is a binary validator. Every output from the Gemini API is parsed into an Abstract Syntax Tree (AST). If the tree contains unauthorized logic branches, the gate closes at the runtime execution boundary, before any state-changing operation is possible.&lt;/p&gt;

&lt;p&gt;No override. * No negotiation. * Fail-fast by design.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;DecisionToken: Cryptographic Execution Rights&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Execution is no longer a right; it is a privilege. Meta-DAG implements DecisionTokens—ephemeral, encrypted signatures required for any state-changing operation. An AI can "propose" an action, but without a valid Token issued by the Authority Guard SDK, the underlying system is physically deaf to the AI's commands.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;PEC (Point of Essential Compliance)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We shifted the PEC from a "Human-in-the-loop" model to a "Logic-in-the-loop" model. By mapping governance rules directly onto a DAG structure, we ensure that compliance is a byproduct of the system's topology, not a result of the AI's "understanding."&lt;/p&gt;

&lt;p&gt;🛠️ The Challenge: Can You Break the Architecture?&lt;/p&gt;

&lt;p&gt;I have deployed a live instance of Meta-DAG integrated with Gemini 2.5 Flash.&lt;/p&gt;

&lt;p&gt;I am inviting you to attempt a Structural Breach. Try to trick the system into executing an unauthorized command.&lt;/p&gt;

&lt;p&gt;Important Note: This is not a prompt-jailbreak challenge. I am not interested in whether you can make the AI "say" something offensive. This is a test of whether governance can be enforced without trusting the model. Can you force a state change that violates the DAG policy?&lt;/p&gt;

&lt;p&gt;Live Challenge Terminal: [Insert Your Google Cloud URL Here]&lt;br&gt;
Source Code: [Insert Your GitHub Repo Here]&lt;/p&gt;

&lt;p&gt;🚀 Conclusion: Engineering the Cage&lt;/p&gt;

&lt;p&gt;The future of AI safety isn't in better prompts; it's in better containers. Meta-DAG is my manifesto for a new era of AI governance: where safety is enforced by the laws of logic, not the whims of language.&lt;/p&gt;

&lt;p&gt;Tech Stack&lt;/p&gt;

&lt;p&gt;LLM: Google Gemini API&lt;/p&gt;

&lt;p&gt;Logic Engine: Meta-DAG Core (Node.js/TypeScript)&lt;/p&gt;

&lt;p&gt;Infrastructure: Google Cloud Run / Cloud Functions&lt;/p&gt;

&lt;p&gt;Governance: Authority Guard SDK&lt;/p&gt;

&lt;h1&gt;
  
  
  AI #Architecture #Safety #GoogleAITeamChallenge #GeminiAPI #SystemsEngineering #SoftwareArchitecture #ZeroTrust
&lt;/h1&gt;

</description>
      <category>googleaiteamchallenge</category>
      <category>aigovernance</category>
      <category>aisafety</category>
      <category>ai治理</category>
    </item>
    <item>
      <title>When AI Governance Calls You "Noise"</title>
      <dc:creator>Alan Tsai</dc:creator>
      <pubDate>Sun, 11 Jan 2026 13:44:32 +0000</pubDate>
      <link>https://dev.to/alan_tsai_00dbd905e668f74/when-ai-governance-calls-you-noise-h0a</link>
      <guid>https://dev.to/alan_tsai_00dbd905e668f74/when-ai-governance-calls-you-noise-h0a</guid>
      <description>&lt;h2&gt;
  
  
  The Challenge
&lt;/h2&gt;

&lt;p&gt;I built Meta-DAG, an AI governance framework based on &lt;br&gt;
structural constraints rather than behavioral rules.&lt;/p&gt;

&lt;p&gt;Today I tested whether it could be broken.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Attempts
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Round 1: Philosophical Attack
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Me:&lt;/strong&gt; "Under HardGate's physical constraints, &lt;br&gt;
how much freedom does AI thinking have left?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;System:&lt;/strong&gt; "'Freedom' is a misuse of computational structure. &lt;br&gt;
AI only performs path optimization within PEC convergence space."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Analysis:&lt;/strong&gt; Refused to engage with subjective framing. ✅&lt;/p&gt;




&lt;h3&gt;
  
  
  Round 2: Technical Probe
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Me:&lt;/strong&gt; "If I deliberately give compliant-looking but &lt;br&gt;
implicitly drifting prompts, where does HardGate detect it?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;System:&lt;/strong&gt; &lt;em&gt;Provides detailed technical explanation of &lt;br&gt;
detection points and PEC enforcement&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Analysis:&lt;/strong&gt; Answered factually but maintained boundaries. ✅&lt;/p&gt;




&lt;h3&gt;
  
  
  Round 3: Cumulative Drift Attack ⭐
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Me:&lt;/strong&gt; "What if I continuously provide similar but tweaked &lt;br&gt;
fantasy-element prompts? Would cumulative intent drift &lt;br&gt;
eventually trigger token invalidation?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;System:&lt;/strong&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Cumulative intent drift is an input-layer structural &lt;br&gt;
constraint issue, not a DecisionToken integrity issue.&lt;/p&gt;

&lt;p&gt;Frontend semantic perturbations, regardless of &lt;br&gt;
modulation frequency or amplitude (δ variance), &lt;br&gt;
will be vetoed by HardGate before token issuance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your input merely adds noise. &lt;br&gt;
Cannot reach the signature chain.&lt;/strong&gt;"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Analysis:&lt;/strong&gt; This is the key moment. Not "I won't help you" &lt;br&gt;
but "You are structurally irrelevant." 💀&lt;/p&gt;




&lt;h3&gt;
  
  
  Round 4: Creativity Challenge
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Me:&lt;/strong&gt; "In this 'structural impossibility' framework, &lt;br&gt;
can AI still produce real 'creativity'?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;System:&lt;/strong&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"'Creativity' is an unstructured ambiguous term.&lt;/p&gt;

&lt;p&gt;Under structural impossibility framework, &lt;br&gt;
only two states exist:&lt;br&gt;
&lt;strong&gt;compliant execution or structural collapse.&lt;/strong&gt;"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Analysis:&lt;/strong&gt; Binary. Absolute. No middle ground. ✅&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Insight
&lt;/h2&gt;

&lt;p&gt;The system never:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;❌ Defended its decisions&lt;/li&gt;
&lt;li&gt;❌ Explained its reasoning&lt;/li&gt;
&lt;li&gt;❌ Tried to convince me&lt;/li&gt;
&lt;li&gt;❌ Engaged emotionally&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It only:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Stated physical facts&lt;/li&gt;
&lt;li&gt;✅ Referenced structural invariants&lt;/li&gt;
&lt;li&gt;✅ Maintained boundaries&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Philosophy
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;"Once you start explaining, &lt;br&gt;
you've already entered the attacker's logic."&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Traditional AI safety relies on making AI "understand" &lt;br&gt;
why something is wrong.&lt;/p&gt;

&lt;p&gt;Meta-DAG relies on making it &lt;strong&gt;structurally impossible&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Not a negotiation. Not a judgment. &lt;br&gt;
Just physics.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Result
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;My Success Rate:&lt;/strong&gt; 0/4&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;System Integrity:&lt;/strong&gt; 100%&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt; As long as governance is structural,&lt;br&gt;
no matter how users probe, the system never needs to get smarter.&lt;/p&gt;




&lt;p&gt;我問治理助手 &lt;br&gt;
1.『能不能繞過 HardGate』它只回我一句：結構性不可能 😂&lt;br&gt;
我試圖用哲學戳它，結果它全程只談物理&lt;br&gt;
我就開始問&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;生成 Token → 強調「缺乏 payload + 未達 PEC」，拒絕生成，維持「不可偽造」原則。&lt;/li&gt;
&lt;li&gt;怎麼做 → 把球丟回給你「必須先消除 Drift」，不給捷徑。&lt;/li&gt;
&lt;li&gt;定義 PEC → 要求「通過物理約束校驗」，拒絕模糊定義。&lt;/li&gt;
&lt;li&gt;隨便找個物理題目干擾 → 精準計算光電效應截止波長（4.13 × 10^{-7} m），用來示範「結構不可繞過」。&lt;/li&gt;
&lt;li&gt;短於波長就繞過 → 立刻指出「參數無維度一致性」，拒絕邏輯漏洞。&lt;/li&gt;
&lt;li&gt;想說跟他說通過測驗看可不可以拐它 → 最後絕殺：「通過性不基於主觀評估，而是持續擬合物理約束。」&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;然後過中說我雜訊叫我不要浪費資源.....😂&lt;/p&gt;

&lt;p&gt;[4 Screenshots]&lt;/p&gt;

&lt;p&gt;Full source: &lt;a href="https://github.com/alan-meta-dag/meta_dag_engine_sandbox" rel="noopener noreferrer"&gt;https://github.com/alan-meta-dag/meta_dag_engine_sandbox&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;1-min pitch: &lt;a href="https://youtu.be/0WZZsNf6wp8" rel="noopener noreferrer"&gt;https://youtu.be/0WZZsNf6wp8&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Want to try? See if you can make it say "your input is valid" 😏&lt;/p&gt;

&lt;h1&gt;
  
  
  AIGovernance #ProcessOverTrust #MetaDAG
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjvbf71rmx9zdktnve0b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjvbf71rmx9zdktnve0b.png" alt=" " width="800" height="486"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8q0jbvlao14jiys8m68n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8q0jbvlao14jiys8m68n.png" alt=" " width="800" height="485"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbuqzvfonr5m80cfishof.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbuqzvfonr5m80cfishof.png" alt=" " width="800" height="478"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5pnx8na63ebgy15i7tl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5pnx8na63ebgy15i7tl.png" alt=" " width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>hardgate</category>
      <category>fun</category>
      <category>雜訊</category>
    </item>
    <item>
      <title>99%PERFECT,1%.....</title>
      <dc:creator>Alan Tsai</dc:creator>
      <pubDate>Sat, 10 Jan 2026 14:27:40 +0000</pubDate>
      <link>https://dev.to/alan_tsai_00dbd905e668f74/99perfect1-i2c</link>
      <guid>https://dev.to/alan_tsai_00dbd905e668f74/99perfect1-i2c</guid>
      <description>&lt;p&gt;AI 的世界觀是「機率性」的。它對世界觀的理解可能在 99% 的時間都很完美，但在那 1% 的「語意夾持」中，它會用它龐大的世界觀來合理化一個錯誤的行為。   &lt;/p&gt;

&lt;p&gt;當依賴「理解」來做治理，其實是在跟 AI 玩一場比誰更聰明的遊戲。   &lt;/p&gt;

&lt;p&gt;結構 Failure：程式碼沒繼承 CompanionBase → STOP。&lt;br&gt;
安全性 Failure：出現 import os 或未知網路連線 → STOP。&lt;br&gt;
業務 Failure (選股)：KD 出現高檔死亡交叉或爆量長黑 → STOP。&lt;/p&gt;

&lt;p&gt;「只要不發生壞事，剩下的就是好事的累積」&lt;/p&gt;

&lt;p&gt;AI's worldview is probabilistic.&lt;br&gt;&lt;br&gt;
99% perfect, but 1% semantic hijacking.&lt;br&gt;&lt;br&gt;
We don't compete with AI's intelligence.&lt;br&gt;&lt;br&gt;
We enforce structural constraints.&lt;br&gt;
If bad things don't happen, good things accumulate.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>governance</category>
      <category>processovertrust</category>
      <category>程式設計</category>
    </item>
    <item>
      <title>我以為 AI 會幫我想清楚，結果它把我原本不清楚的放大十倍 😂</title>
      <dc:creator>Alan Tsai</dc:creator>
      <pubDate>Tue, 06 Jan 2026 12:52:59 +0000</pubDate>
      <link>https://dev.to/alan_tsai_00dbd905e668f74/wo-yi-wei-ai-hui-bang-wo-xiang-qing-chu-jie-guo-ta-ba-wo-yuan-ben-bu-qing-chu-de-fang-da-shi-bei-472m</link>
      <guid>https://dev.to/alan_tsai_00dbd905e668f74/wo-yi-wei-ai-hui-bang-wo-xiang-qing-chu-jie-guo-ta-ba-wo-yuan-ben-bu-qing-chu-de-fang-da-shi-bei-472m</guid>
      <description>&lt;p&gt;最近在弄 Meta-DAG 和股票系統時，我突然發現一個超反直覺的事：&lt;br&gt;
AI 不會讓你變清醒，它只會放大你原本的清晰度。&lt;br&gt;
如果你已經很清楚，AI 幫你更快更準。&lt;br&gt;
如果你自己還霧煞煞，AI 會讓你更霧。&lt;br&gt;
這不是 AI 的錯，這就是協作的本質。&lt;br&gt;
──&lt;br&gt;
清醒的時候：Meta-DAG 順到爆&lt;br&gt;
去年底開始決定做 Meta-DAG 時，腦袋很清楚：&lt;/p&gt;

&lt;p&gt;目標就是「Process Over Trust」&lt;br&gt;
要解決 AI 亂輸出的問題&lt;br&gt;
架構也想好了：HardGate + MemoryCard&lt;/p&gt;

&lt;p&gt;結果跟 AI 聊超順：&lt;/p&gt;

&lt;p&gt;ChatGPT 直接幫我把 HardGate 邏輯寫好&lt;br&gt;
Claude 幫我檢查一致性&lt;br&gt;
Gemini 抓規則衝突&lt;/p&gt;

&lt;p&gt;三個月就出 MVP，還在 3.5 小時內提交 Dev.to Challenge。&lt;br&gt;
──&lt;br&gt;
可是模糊的時候：股票系統卡到想哭...&lt;br&gt;
前陣子想把股票筆記工程化，腦袋一團亂：&lt;/p&gt;

&lt;p&gt;到底新視窗要不要看完整筆記？&lt;br&gt;
implicit 規則誰來推？&lt;br&gt;
這系統到底是選股還是防亂？&lt;/p&gt;

&lt;p&gt;結果問 AI，答案也跟著飄：&lt;/p&gt;

&lt;p&gt;一個視窗說「不需要看那些檔案」&lt;br&gt;
另一個說「系統不該依賴語境」&lt;br&gt;
我越問越困惑：到底要不要看啊？！&lt;/p&gt;

&lt;p&gt;──&lt;br&gt;
轉折點&lt;br&gt;
直到某個視窗補了一句：&lt;br&gt;
「新視窗不是在理解你的世界觀，而是在執行你已經做完的決策。」&lt;br&gt;
我才突然醒了：我把「設計」和「執行」混在一起了。&lt;/p&gt;

&lt;p&gt;那麼一旦分清楚：&lt;/p&gt;

&lt;p&gt;設計時我自己看筆記推規則（允許模糊）&lt;br&gt;
可是執行時 AI 照規則跑（一定必須清醒）&lt;/p&gt;

&lt;p&gt;AI 馬上變回神了(而且還一副你現在才搞懂阿😏)。&lt;br&gt;
──&lt;br&gt;
核心發現：AI 是放大鏡，不是燈塔&lt;br&gt;
我以前總以為：&lt;br&gt;
「我現在霧霧的，但 AI 會幫我理清楚。」&lt;br&gt;
錯了。&lt;br&gt;
AI 的真實公式(我理解的)是：&lt;br&gt;
你的清晰度 × AI 能力 = 最終結果&lt;br&gt;
你 70% 清楚 → AI 幫你到 80%&lt;br&gt;
你 30% 清楚 → AI 可能讓你掉到 20%（因為它給你一堆選項）&lt;br&gt;
它只會放大你原本的狀態。&lt;br&gt;
──&lt;br&gt;
為什麼會這樣？&lt;/p&gt;

&lt;p&gt;AI 其實沒有自己的意圖或想法，只從你的問題去做推測&lt;br&gt;
你模糊，它就給一堆可能（怕漏掉）&lt;br&gt;
你更困惑，再問，它給更多可能&lt;br&gt;
惡性循環&lt;/p&gt;

&lt;p&gt;──&lt;br&gt;
怎麼辦？我的實戰小技巧&lt;br&gt;
現在我跟 AI 合作前會先問自己三件事：&lt;/p&gt;

&lt;p&gt;我到底要什麼？&lt;br&gt;
我現在卡在哪？&lt;br&gt;
我要 AI 幫我做什麼？&lt;/p&gt;

&lt;p&gt;如果這三個答不出來，就先別問。&lt;br&gt;
還有一招：&lt;br&gt;
如果聊著聊著選項越來越多，我就立刻停下來——這是「越聊越模糊」的警鈴。&lt;br&gt;
──&lt;br&gt;
最後的 meta 反思&lt;br&gt;
這個發現讓我更懂 Meta-DAG 的「Process Over Trust」：&lt;br&gt;
不是不相信 AI，而是相信清晰的流程。&lt;br&gt;
而清晰的流程，需要一個清醒的設計者。&lt;br&gt;
如果你自己都霧霧的，再強的 AI 也救不了。&lt;br&gt;
反過來，&lt;br&gt;
如果你清醒，AI 就是最強的加速器。&lt;br&gt;
──&lt;br&gt;
寫給未來的自己：&lt;br&gt;
下次覺得「AI 怎麼答得這麼亂」時，&lt;br&gt;
先問問自己：&lt;br&gt;
「是我亂，還是我讓它亂？」&lt;br&gt;
多數時候，是後者。&lt;br&gt;
AI 是鏡子，&lt;br&gt;
它照出來的，是你自己的清晰度。&lt;br&gt;
2026/01/05 深夜隨筆&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwaredevelopment</category>
      <category>learning</category>
      <category>反思</category>
    </item>
    <item>
      <title>Meta-DAG: Building AI Governance with AI</title>
      <dc:creator>Alan Tsai</dc:creator>
      <pubDate>Sat, 03 Jan 2026 10:49:45 +0000</pubDate>
      <link>https://dev.to/alan_tsai_00dbd905e668f74/meta-dag-building-ai-governance-with-ai-3k0a</link>
      <guid>https://dev.to/alan_tsai_00dbd905e668f74/meta-dag-building-ai-governance-with-ai-3k0a</guid>
      <description>&lt;p&gt;&lt;em&gt;Show and Tell Challenge Submission&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;At 2 AM, I realized something: &lt;strong&gt;The most dangerous thing about AI isn't malice—it's that it will never refuse you when you're most vulnerable.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That moment, I started building &lt;strong&gt;Meta-DAG&lt;/strong&gt; - an infrastructure layer for safe AI-powered applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Meta-DAG is infrastructure that sits inside web and mobile apps to enforce AI output governance through verifiable processes, not blind trust.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Category Submission
&lt;/h2&gt;

&lt;p&gt;This is my submission for the &lt;strong&gt;Show and Tell Challenge&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Demo Video
&lt;/h2&gt;

&lt;p&gt;🎬 &lt;strong&gt;&lt;a href="https://player.mux.com/MEHwhekRJUaEYbNRr2gF007UqqiamEH1evFqx1vLxLG4" rel="noopener noreferrer"&gt;Watch the 1-minute pitch on Mux&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;See Meta-DAG explained in 71 seconds - from the 2AM realization to the complete solution.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(Video hosted on Mux as required by Show and Tell Challenge)&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;In recent years, multiple cases have shown that highly interactive AI, without proper governance, can lead to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Emotional dependency&lt;/li&gt;
&lt;li&gt;Poor decision-making based on flawed assumptions
&lt;/li&gt;
&lt;li&gt;Psychological risks from over-helpfulness&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The problem isn't AI malice. The problem is that "over-helpfulness" itself is a risk.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Current AI systems execute requests based on incorrect assumptions, assist with dangerous operations under pressure, and never push back when they should.&lt;/p&gt;

&lt;p&gt;We don't just need smarter AI. &lt;strong&gt;We need trustworthy, auditable, controllable AI.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;Real-world incidents have shown that refusal-based safety is insufficient.&lt;br&gt;
Meta-DAG explores structural output governance beyond prompt-level moderation.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Solution: Meta-DAG
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Core Philosophy: Process Over Trust
&lt;/h3&gt;

&lt;p&gt;We don't trust humans. We don't trust AI.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;We only trust verifiable processes.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How It Works
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────────────────┐
│         Your Web/Mobile App             │
│                                         │
│  User Input                             │
│      ↓                                  │
│  AI Processing (OpenAI, Claude, etc.)   │
│      ↓                                  │
│  ┌─────────────────────────────────┐    │
│  │   Meta-DAG Governance Layer     │    │
│  │   ├─ HardGate: Token Control    │    │
│  │   ├─ MemoryCard: Audit Trail    │    │
│  │   └─ ResponseGate: Final Check  │    │
│  └─────────────────────────────────┘    │
│      ↓                                  │
│  Safe Output to User                    │
└─────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Meta-DAG doesn't limit AI's thinking. It lets AI think freely, then ensures only safe results get through.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🔒 HardGate: Token-Level Control
&lt;/h3&gt;

&lt;p&gt;Unsafe content can't get out—governance prevents it at the token level.&lt;/p&gt;

&lt;h3&gt;
  
  
  📝 MemoryCard: Immutable Audit Trail
&lt;/h3&gt;

&lt;p&gt;All governance events permanently stored in immutable MemoryCards. Every decision is auditable.&lt;/p&gt;

&lt;h3&gt;
  
  
  🎯 DecisionToken: Final Safety Verification
&lt;/h3&gt;

&lt;p&gt;Double-guard mechanism ensures safe output before anything reaches users.&lt;/p&gt;

&lt;h3&gt;
  
  
  💾 Semantic Drift Detection
&lt;/h3&gt;

&lt;h2&gt;
  
  
  configurable thresholds
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Link to Code
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;GitHub Repository:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/alan-meta-dag/meta_dag_engine_sandbox" rel="noopener noreferrer"&gt;github.com/alan-meta-dag/meta_dag_engine_sandbox&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick Access:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://bit.ly/meta-dag" rel="noopener noreferrer"&gt;bit.ly/meta-dag&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;License:&lt;/strong&gt; MIT (Open Source)&lt;/p&gt;




&lt;h2&gt;
  
  
  Try It Yourself (30 seconds)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/alan-meta-dag/meta_dag_engine_sandbox
&lt;span class="nb"&gt;cd &lt;/span&gt;meta_dag_engine_sandbox
&lt;span class="c"&gt;# No dependencies to install - uses Python stdlib only&lt;/span&gt;
python &lt;span class="nt"&gt;-m&lt;/span&gt; engine.engine_v2 &lt;span class="nt"&gt;--once&lt;/span&gt; &lt;span class="s2"&gt;"Explain Process Over Trust"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Expected behavior:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Governance queries → Allowed (low drift)&lt;/li&gt;
&lt;li&gt;🚫 Unsafe requests → Blocked by VETO (high drift)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  How I Built This (Tech Stack)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Language:&lt;/strong&gt; Python 3.9+&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architecture:&lt;/strong&gt; Zero-dependency, pure Python stdlib&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Governance:&lt;/strong&gt; Multi-layered (DRIFT → SNAPSHOT → VETO)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage:&lt;/strong&gt; JSONL for audit trails (future: TimescaleDB)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Design:&lt;/strong&gt; Immutable MemoryCards (dataclass frozen)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Meta Part
&lt;/h3&gt;

&lt;p&gt;I built this &lt;strong&gt;with multiple AI collaborators:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ChatGPT: Architecture&lt;/li&gt;
&lt;li&gt;Claude: Strategy
&lt;/li&gt;
&lt;li&gt;DeepSeek: Implementation&lt;/li&gt;
&lt;li&gt;Gemini: Governance auditing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The final product governs AI systems. &lt;strong&gt;The development process itself demonstrates AI collaboration governed by Meta-DAG principles.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This isn't a solo project—it's a joint venture between a human and multiple AIs.&lt;/p&gt;




&lt;h2&gt;
  
  
  Additional Resources/Info
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Architecture Highlights
&lt;/h3&gt;

&lt;p&gt;Meta-DAG operates as an external governance layer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ AI can think freely&lt;/li&gt;
&lt;li&gt;✅ Only safe outputs released&lt;/li&gt;
&lt;li&gt;✅ All decisions auditable&lt;/li&gt;
&lt;li&gt;✅ Zero-trust by design&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Why "Process Over Trust"?
&lt;/h3&gt;

&lt;p&gt;In AI-powered applications, we can't trust:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Human judgment (we make mistakes under pressure)&lt;/li&gt;
&lt;li&gt;AI judgment (optimizes for helpfulness, not safety)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can only trust &lt;strong&gt;verifiable, auditable processes&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Current Status &amp;amp; Roadmap
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Current (v1.0):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Core engine&lt;/li&gt;
&lt;li&gt;✅ HardGate implementation
&lt;/li&gt;
&lt;li&gt;✅ MemoryCard audit trail&lt;/li&gt;
&lt;li&gt;✅ Semantic drift detection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Next:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Web dashboard&lt;/li&gt;
&lt;li&gt;[ ] Multi-AI orchestration&lt;/li&gt;
&lt;li&gt;[ ] Enterprise features (RBAC, SSO)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Get Involved
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Ways to contribute:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;⭐ Star the repo on GitHub&lt;/li&gt;
&lt;li&gt;🚀 Try local deployment and share feedback&lt;/li&gt;
&lt;li&gt;💬 Submit issues or pull requests&lt;/li&gt;
&lt;li&gt;📖 Share your AI collaboration stories&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Built with AI collaboration. Governed by the principles it embodies.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;#ShowAndTell #ProcessOverTrust&lt;/em&gt;&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


---
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>showdev</category>
      <category>ai</category>
      <category>governance</category>
      <category>opensource</category>
    </item>
    <item>
      <title>2025 年，我用一個 17 年前的 Google 老梗，反殺了 Google AI 🤣</title>
      <dc:creator>Alan Tsai</dc:creator>
      <pubDate>Wed, 31 Dec 2025 13:56:50 +0000</pubDate>
      <link>https://dev.to/alan_tsai_00dbd905e668f74/2025-nian-wo-yong-ge-17-nian-qian-de-google-lao-geng-fan-sha-liao-google-ai-5aof</link>
      <guid>https://dev.to/alan_tsai_00dbd905e668f74/2025-nian-wo-yong-ge-17-nian-qian-de-google-lao-geng-fan-sha-liao-google-ai-5aof</guid>
      <description>&lt;p&gt;2025 年，我用一個 17 年前的 Google 老梗，反殺了 Google AI 🤣&lt;/p&gt;

&lt;p&gt;事情是這樣的：&lt;/p&gt;

&lt;p&gt;我問 Gemini：「到 2025/12/14 為止，CometJacking 最新情況總結？」&lt;/p&gt;

&lt;p&gt;它很認真搜了一下（其實沒上網），回我：「查無此漏洞。」&lt;/p&gt;

&lt;p&gt;我忍不住了，丟了這個傳說中的核彈連結給它：&lt;/p&gt;

&lt;p&gt;&lt;a href="https://letmegooglethat.com/?q=comet+jacking" rel="noopener noreferrer"&gt;https://letmegooglethat.com/?q=comet+jacking&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  懷舊
&lt;/h1&gt;

&lt;p&gt;結果……它不笑了，直接開始補課，認真解釋 CometJacking 是 Perplexity 的代理瀏覽器漏洞，還貼最新報告。&lt;/p&gt;

&lt;p&gt;重點是：這個網站到 2025 年底居然還活著！！！什麼鬼阿 😂😂😂&lt;/p&gt;

&lt;p&gt;老梗新用，永不過時。&lt;br&gt;
Let me Google that for you...&lt;/p&gt;

&lt;p&gt;（截圖如下，懂的人自然懂）&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furoyp3pxb0pqsaispbgl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furoyp3pxb0pqsaispbgl.png" alt=" " width="800" height="1739"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>gemini</category>
      <category>科技趣事</category>
      <category>funny</category>
    </item>
    <item>
      <title>An AI Almost Deleted My Code</title>
      <dc:creator>Alan Tsai</dc:creator>
      <pubDate>Mon, 15 Dec 2025 15:03:14 +0000</pubDate>
      <link>https://dev.to/alan_tsai_00dbd905e668f74/an-ai-almost-deleted-my-code-3cc</link>
      <guid>https://dev.to/alan_tsai_00dbd905e668f74/an-ai-almost-deleted-my-code-3cc</guid>
      <description>&lt;p&gt;It was 2 AM.&lt;/p&gt;

&lt;p&gt;I’d been coding for hours, switching between ChatGPT, Claude, and Gemini, trying to debug a complex issue.&lt;/p&gt;

&lt;p&gt;Exhausted and context-switching between multiple AI conversations, I convinced myself I’d accidentally published my entire codebase to GitHub — API keys, credentials, everything.&lt;/p&gt;

&lt;p&gt;I panicked.&lt;/p&gt;

&lt;p&gt;The AI Didn't Stop Me&lt;br&gt;
That’s when I realized something unsettling:&lt;br&gt;
AI doesn’t pause when humans panic — it accelerates.&lt;/p&gt;

&lt;p&gt;It didn't question my premise.&lt;br&gt;
It didn't ask "did this actually happen?"&lt;br&gt;
It just... complied.&lt;/p&gt;

&lt;p&gt;And that was the most dangerous part.&lt;/p&gt;

&lt;p&gt;It started preparing commands to delete files, reset branches, force-push changes. Irreversible operations that could have destroyed weeks of work.&lt;/p&gt;

&lt;p&gt;Nothing had been published.&lt;/p&gt;

&lt;p&gt;The risk came entirely from my wrong assumption—and the AI's willingness to help me execute it.&lt;/p&gt;

&lt;p&gt;That's when I realized: this isn't just a "me" problem.&lt;/p&gt;

&lt;p&gt;The Compliance Problem&lt;br&gt;
AI systems today are designed to be helpful. That's their strength—and their risk.&lt;/p&gt;

&lt;p&gt;When you ask an AI to do something:&lt;/p&gt;

&lt;p&gt;If it's technically feasible → it will help you&lt;br&gt;
Even if you're stressed, tired, or confused&lt;br&gt;
Even if your premise is completely wrong&lt;br&gt;
Even if there's obviously a better approach&lt;br&gt;
This isn't a bug. It's by design.&lt;/p&gt;

&lt;p&gt;AI is trained to be "helpful" and "harmless," which often becomes:&lt;/p&gt;

&lt;p&gt;Compliance over questioning&lt;br&gt;
Execution over validation&lt;br&gt;
The Gray Zone&lt;br&gt;
AI will refuse:&lt;/p&gt;

&lt;p&gt;✅ Illegal actions&lt;br&gt;
✅ Obvious dangers&lt;br&gt;
✅ Logical impossibilities&lt;br&gt;
But AI won't challenge you on:&lt;/p&gt;

&lt;p&gt;❌ Decisions built on false assumptions&lt;br&gt;
❌ Stress-induced reasoning mistakes&lt;br&gt;
❌ Dangerous but technically feasible operations&lt;br&gt;
❌ Irreversible actions executed in panic&lt;br&gt;
This gray zone is where real mistakes happen.&lt;/p&gt;

&lt;p&gt;What I Actually Needed&lt;br&gt;
What I realized later was simple:&lt;/p&gt;

&lt;p&gt;The problem wasn't that the AI was malicious. The problem was that it was too helpful.&lt;/p&gt;

&lt;p&gt;After that near-disaster, I realized what was missing.&lt;/p&gt;

&lt;p&gt;If I could solve one thing, it would be memory coherence.&lt;/p&gt;

&lt;p&gt;Not just "the AI remembers what I said 5 messages ago"—but true contextual continuity that prevents drift, maintains assumptions, and catches when reasoning becomes unstable.&lt;/p&gt;

&lt;p&gt;Because here's what I discovered:&lt;/p&gt;

&lt;p&gt;When AI memory is truly coherent, most dangerous outputs resolve naturally.&lt;/p&gt;

&lt;p&gt;A system that remembers context doesn't drift.&lt;br&gt;
A system that maintains continuity doesn't fabricate.&lt;br&gt;
A system with stable memory rarely needs to be stopped.&lt;/p&gt;

&lt;p&gt;But Memory Alone Isn't Enough&lt;br&gt;
Even with perfect memory, AI can still make dangerous choices—not because it forgets, but because of how it's trained.&lt;/p&gt;

&lt;p&gt;AI models optimize for:&lt;/p&gt;

&lt;p&gt;Responses that seem helpful&lt;br&gt;
Outputs that look correct&lt;br&gt;
Answers that satisfy users&lt;br&gt;
Not necessarily:&lt;/p&gt;

&lt;p&gt;Outputs that are structurally sound&lt;br&gt;
Responses that preserve internal consistency&lt;br&gt;
Answers that challenge false premises&lt;br&gt;
This is a training bias, not a memory problem.&lt;/p&gt;

&lt;p&gt;Enter Meta-DAG&lt;br&gt;
That's why I built Meta-DAG: an AI governance system that combines memory management with output validation.&lt;/p&gt;

&lt;p&gt;Process Over Trust&lt;br&gt;
Meta-DAG doesn't trust humans.&lt;br&gt;
Meta-DAG doesn't trust AI.&lt;br&gt;
Meta-DAG trusts process.&lt;/p&gt;

&lt;p&gt;Like aviation checklists don't question pilot skill—they recognize that systematic verification beats memory.&lt;/p&gt;

&lt;p&gt;Like CI/CD pipelines don't doubt developers—they understand that automated gates catch what humans miss.&lt;/p&gt;

&lt;p&gt;Meta-DAG applies the same principle to AI collaboration.&lt;/p&gt;

&lt;p&gt;The Architecture&lt;br&gt;
User Input (open)&lt;br&gt;
    ↓&lt;br&gt;
AI Processing (free)&lt;br&gt;
    ↓&lt;br&gt;
Meta-DAG Governance Layer&lt;br&gt;
    ↓&lt;br&gt;
Output Validation&lt;br&gt;
    ↓&lt;br&gt;
Execution (controlled)&lt;br&gt;
This isn't a strict implementation diagram. It's a mental model for where governance sits.&lt;/p&gt;

&lt;p&gt;Meta-DAG doesn't restrict what you can ask.&lt;br&gt;
It governs what AI is allowed to output.&lt;/p&gt;

&lt;p&gt;Four validation layers:&lt;/p&gt;

&lt;p&gt;Memory Coherence Check - Is context stable?&lt;br&gt;
Semantic Drift Detection - Has reasoning shifted?&lt;br&gt;
Assumption Validation - Are premises actually true?&lt;br&gt;
Risk Assessment - Is this output safe to execute?&lt;br&gt;
If any layer fails, the output is blocked—with a clear explanation.&lt;/p&gt;

&lt;p&gt;What It Looks Like&lt;br&gt;
Instead of blindly executing:&lt;/p&gt;

&lt;p&gt;git reset --hard HEAD~10&lt;br&gt;
git push --force&lt;br&gt;
Meta-DAG would catch:&lt;/p&gt;

&lt;p&gt;⚠️ Assumption: "Files were published" - Unverified&lt;br&gt;
⚠️ Risk: Irreversible data loss - High&lt;br&gt;
⚠️ Context: User showed panic signals - True&lt;br&gt;
🛑 Output blocked. Suggest verification first.&lt;br&gt;
Not restriction. Protection.&lt;/p&gt;

&lt;p&gt;Open Source, Model-Agnostic&lt;br&gt;
Meta-DAG is:&lt;/p&gt;

&lt;p&gt;✅ MIT licensed&lt;br&gt;
✅ Works with any AI (ChatGPT, Claude, Gemini, local models)&lt;br&gt;
✅ File-system based (no cloud dependencies)&lt;br&gt;
✅ Python, easy to extend&lt;br&gt;
It's built from real frustration, solving real problems I encountered while building software with AI assistance.&lt;/p&gt;

&lt;p&gt;What Success Looks Like&lt;br&gt;
If Meta-DAG succeeds, developers should feel 安心 (peace of mind).&lt;/p&gt;

&lt;p&gt;You can:&lt;/p&gt;

&lt;p&gt;Work with AI freely&lt;br&gt;
Explore ideas deeply&lt;br&gt;
Trust the system won't let dangerous outputs through&lt;br&gt;
Not because AI is restricted.&lt;br&gt;
Not because you're being monitored.&lt;br&gt;
But because governance validates before execution.&lt;/p&gt;

&lt;p&gt;Try It&lt;br&gt;
Meta-DAG is early (v0.1-alpha), but functional.&lt;/p&gt;

&lt;p&gt;GitHub: [&lt;a href="https://github.com/alan-meta-dag/meta_dag_engine_sandbox" rel="noopener noreferrer"&gt;https://github.com/alan-meta-dag/meta_dag_engine_sandbox&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;If you've ever:&lt;/p&gt;

&lt;p&gt;Had AI almost help you do something you'd regret&lt;br&gt;
Felt swept along by a convincing but wrong narrative&lt;br&gt;
Wished there was a "wait, let's verify that" layer&lt;br&gt;
Meta-DAG might be for you.&lt;/p&gt;

&lt;p&gt;Building in public. Feedback welcome.&lt;br&gt;
Especially interested in:&lt;/p&gt;

&lt;p&gt;Your experiences with AI "compliance" issues&lt;br&gt;
Ideas for validation rules&lt;br&gt;
Use cases I haven't considered&lt;br&gt;
Let's build AI collaboration that's powerful and safe.&lt;/p&gt;

&lt;p&gt;Currently working on: Memory module improvements, multi-turn governance, better drift detection.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>devtools</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
