Mapping AI Cognition with the Dao-Fa-Shu-Qi Hierarchy
Note: This article’s core ideas and experimental framework are original to my independent exploration of AI cognitive evolution. AI was only used for minor English language polishing — all the research hypotheses, unsolved questions, and experimental design thinking are entirely my own.
Introduction: The Core Dilemma of AI Memory Management - Storage Without Abstraction
I’ve spent weeks experimenting with AI cognitive systems and working with large language models (LLMs) on real-world task execution, and I’ve noticed a critical, fundamental flaw: modern LLMs are essentially statistical probability models whose output logic is entirely tied to input consistency. Keep the input stable, and the output is predictably consistent too. This makes them unbeatable for fixed, scripted tasks — but completely useless when faced with even slightly novel scenarios that require flexible reasoning and experiential learning.
The root cause of this flaw is simple: an LLM’s parameters and weights are set in stone once training ends. These models can’t update their cognition from new interactive experiences, nor can they look back at past execution failures or successes and distill meaningful lessons from them. In short, current LLMs only have static training memory, not dynamic experiential memory — they store everything, but learn nothing of value from it.
That’s the motivation behind this small experiment: I want to build a memory management system for AI that enables abstraction from experience. My goal is to break through the cognitive rigidity caused by fixed parameters, and let intelligent agents do what humans do naturally: extract general rules and form cognitive abstractions from their own memories and practical experiences, then apply those distilled rules to new tasks and new practice scenarios. Ultimately, this system is meant to lift the task completion rate of AI agents in unscripted, real-world work.
This first part of the series is all about framing the problem clearly. I’ll use the traditional Chinese four-tier Dao-Fa-Shu-Qi cognitive hierarchy — a framework I stumbled on during my research into human cognitive distillation, and one that shocked me with how well it maps to AI’s current cognitive limitations — to unpack the underlying logic of AI memory management and experience abstraction. My aim here is to pinpoint the exact reason current AI agents fail to abstract from experience and use learned rules to guide new practice.
In the second part, I’ll move from problem-framing to practical implementation: I’ll walk through the three-stage operational process of this memory management system, built directly on the Dao-Fa-Shu-Qi hierarchy, and share the full design logic, step-by-step execution plans, and my early-stage experimental attempts with this framework.
The Dao-Fa-Shu-Qi Hierarchy: Underlying Logic for AI Memory Management and Experience Abstraction
Core Positioning of the Framework
This four-tier cognitive hierarchy isn’t just a philosophical concept — for AI, it maps directly to four distinct abstraction dimensions of memory management, moving from concrete instruction storage all the way to abstract rule distillation. It’s the cognitive anchor I’ve settled on for building an evolvable experiential memory system, because it lays out a clear, progressive path for AI agents to move from mindless concrete operation to intentional abstract reasoning with their memories.
It’s not a perfect fit, of course. I’m still working through how to adapt the hierarchy’s more abstract layers (especially Dao) to concrete, buildable AI systems — but as a framework to diagnose AI’s current memory flaws, it’s unparalleled.
Detailed Explanation of the Four Tiers
For each tier, I’ll link the core concept directly to AI memory management and experience abstraction, pairing it with human memory parallels, the current state of AI capability at that tier, the core gaps we’re facing, and practical real-world examples. This hierarchy is a spectrum, not isolated silos — and the failure of current AI to move up this spectrum is exactly why it can’t learn from experience.
Level 4: Qi (Tool)
The most concrete, operational layer — this is AI’s memory of external tool-calling instructions, API interfaces, database query syntax, and basic execution code. For humans, this is the muscle memory of using a tool: knowing how to run docker-compose up in the CLI, or how to edit code in an IDE, without understanding why the command works or the underlying logic of the edit.
Current AI excels here — this tier is mature and well-developed. LLMs and AI agents can store and execute tool-related instructions stably, every single time. The core gap, though, is total lack of judgment: AI will run a tool instruction exactly as stored, even if the task scenario makes that instruction irrelevant or counterproductive. It has no ability to screen or prioritize its tool-layer memory for the task at hand.
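The Qi-layer gap can be shown in a few lines. This is a minimal sketch under my own assumptions (the `ToolMemory` class and sample commands are illustrative, not any real agent API): the stored instruction is replayed verbatim, and the scenario context is accepted but never consulted.

```python
class ToolMemory:
    """A literal Qi-layer store: tool instructions in, tool instructions out."""

    def __init__(self):
        self._instructions: dict[str, str] = {}

    def store(self, task: str, command: str) -> None:
        self._instructions[task] = command

    def recall(self, task: str, context: str = "") -> str:
        # The gap described above: `context` is accepted but ignored --
        # the stored command comes back even when the scenario has changed.
        return self._instructions[task]


mem = ToolMemory()
mem.store("start-services", "docker-compose up -d")

# Same answer whether or not the context makes the command appropriate:
print(mem.recall("start-services"))
print(mem.recall("start-services", context="compose file was deleted"))
```

Both calls return the identical stored command, which is exactly the "total lack of judgment" the text describes: flawless replay, zero screening.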
Level 3: Shu (Technique)
The skill layer — this is AI’s memory of step-by-step execution processes for specific tasks, common pitfalls in single scenarios, and targeted fixes for those pitfalls. For humans, this is mastering a specific skill like debugging a memory leak or configuring a load balancer, but treating each execution as an isolated event with no connection to other similar tasks.
AI is relatively strong here too. We have rich skill libraries and fine-tuned models that let AI handle fixed-scenario tasks with impressive proficiency. But the core gap is isolation: AI stores these experiential skills as disconnected fragments, with no ability to induce commonalities between similar scenarios or reuse those lessons to form universal, adaptable skills. A Docker deployment skill stays a Docker deployment skill — it never becomes a general deployment skill.
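To make the isolation concrete, here is a toy sketch (the skill names and step lists are mine, not from any real skill library) of two deployment skills plus the simplest possible commonality induction, a step intersection, as a crude first approximation of the general skill that current agents never form:

```python
# Two Shu-layer skills stored as isolated step lists -- hypothetical examples.
docker_deploy = ["build artifact", "run health check", "switch traffic", "monitor errors"]
k8s_deploy    = ["build artifact", "apply manifests", "run health check", "monitor errors"]


def induce_common_skill(*skills: list[str]) -> list[str]:
    """Keep only the steps shared by every skill, in the first skill's order."""
    shared = set(skills[0]).intersection(*skills[1:])
    return [step for step in skills[0] if step in shared]


# The shared core is a naive stand-in for a "general deployment skill".
print(induce_common_skill(docker_deploy, k8s_deploy))
```

A real system would need semantic matching rather than exact string equality, but even this toy version shows the missing move: comparing fragments instead of filing them away separately.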
Level 2: Fa (Method)
The framework layer — this is AI’s memory of decision-making frameworks, standard operating procedures (SOPs), and systematic approaches to entire categories of problems. For humans, this is the process we follow for problem-solving: test in staging before production release, isolate variables one by one when debugging, or build a rollback plan before any major change.
This tier is only partially realized in current AI. Agents can call pre-set decision trees and static SOP documents, but the content of those frameworks is fixed — set at the time of development, with no built-in way to evolve. The core gap is passivity: AI can only follow the fixed Fa it’s given, with no ability to iterate or optimize those methods based on new experiential feedback. If a new failure mode emerges that the SOP doesn’t account for, AI can’t update the framework to prevent that failure next time.
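A minimal sketch of the contrast, with structures I made up purely for illustration: a frozen SOP next to one that can fold a newly observed failure mode back into its own steps.

```python
# The Fa current agents have: fixed at development time, never updated.
STATIC_SOP = ("test in staging", "take backup", "deploy to production")


class EvolvableSOP:
    """A hypothetical SOP that grows a guard step when a new failure is reported."""

    def __init__(self, steps):
        self.steps = list(steps)

    def record_failure(self, cause: str) -> None:
        guard = f"verify: {cause}"
        if guard not in self.steps:
            # Insert the new check just before the final, riskiest step.
            self.steps.insert(len(self.steps) - 1, guard)


sop = EvolvableSOP(STATIC_SOP)
sop.record_failure("config drift between staging and production")
print(sop.steps)
```

The hard part, of course, is deciding *which* failures deserve promotion into the SOP and where in the sequence the guard belongs; the sketch only shows the feedback loop that static frameworks lack.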
Level 1: Dao (Principle)
The most abstract, foundational layer — this is memory of cross-domain fundamental truths, first principles, and meta-cognitive rules that guide decision-making across all scenarios, regardless of the task or tool. For humans, this is the deep intuition that transcends specific skills: understanding that "all abstractions leak", that "complexity is the enemy of reliability", or that "isolating changes limits blast radius" — and applying that intuition to everything from software deployment to database migration to organizational change.
This tier is almost entirely blank in current AI. LLMs and agents can be told these principles, but they can’t independently discover or extract cross-domain core rules from scattered, diverse experiences. The core gap here is the biggest one: AI has no ability to perform high-level abstraction from multiple, seemingly unrelated experiences, and no way to form meta-cognitive memory that can be transferred across fields. This is the "gut feeling" senior human engineers have — and it’s the piece AI is missing entirely.
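Purely as a thought sketch (the data, tags, and threshold are all invented), here is what the most naive version of cross-domain principle extraction might look like: a lesson tag that recurs across enough unrelated domains gets promoted to a candidate principle.

```python
from collections import defaultdict

# Hypothetical experience records from three unrelated domains.
experiences = [
    {"domain": "deployment",   "lesson_tags": {"small steps", "rollback plan"}},
    {"domain": "db-migration", "lesson_tags": {"small steps", "dry run"}},
    {"domain": "org-change",   "lesson_tags": {"small steps", "stakeholder buy-in"}},
]


def candidate_principles(experiences, min_domains=3):
    """Promote tags that recur across at least `min_domains` distinct domains."""
    domains_per_tag = defaultdict(set)
    for exp in experiences:
        for tag in exp["lesson_tags"]:
            domains_per_tag[tag].add(exp["domain"])
    return [tag for tag, doms in domains_per_tag.items() if len(doms) >= min_domains]


print(candidate_principles(experiences))
```

The real difficulty is hidden in the tags themselves: a genuine Dao layer would have to *generate* those abstract lessons from raw experience, which is exactly the capability the text identifies as blank.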
Core Conclusion of the Framework
High-quality memory management, for humans and for AI, is essentially the progressive distillation and dynamic update of memory from low-tier concrete forms to high-tier abstract forms. It’s not about storing more data — it’s about turning that data into rules, and those rules into principles. This is the core cognitive ability that current fixed-parameter AI lacks, and it’s the single most important key to enabling AI agents to abstract from experience and realize genuine cognitive evolution.
The Current State of AI Memory Management: "Perfect Storage" Without Abstraction Capability
If we map current AI’s memory capabilities onto the Dao-Fa-Shu-Qi hierarchy, the picture is stark and clear:
- Qi/Shu layer: ✅ Mature/Strong — AI can store and execute tool instructions and scenario-specific skills with near-perfect accuracy.
- Fa layer: ⚠️ Static and non-iterative — AI can call pre-built frameworks and SOPs, but can’t update or optimize them based on new experience.
- Dao layer: ❌ Almost blank — No cross-domain experience abstraction, no meta-cognitive memory, no intuitive principle-based reasoning.
The core contradiction of AI memory management that strikes me most deeply is this: AI’s "perfect storage" is actually its greatest weakness. Because of fixed parameters and weights, and a total lack of a selective screening and distillation mechanism, AI stores every single detail of every interaction in full volume — but it’s just mindless data accumulation. The undifferentiated retention of trivial details leaves AI with no way to separate the signal from the noise, and in turn, robs it of the core motivation for experience abstraction.
Humans have it backwards, in the best way: our memory is imperfect, limited, and fallible. We follow a "consolidation-forgetting-distillation" mechanism — we fade the trivial details of past experiences, and strengthen the core rules and principles behind them. This forgetting isn’t a flaw; it’s the mechanism for learning. I’ve yet to find a good answer for how much we should mimic this human forgetting in AI, or if that’s even the right path at all — but it’s impossible to ignore that human memory’s imperfection is what makes our abstraction ability possible.
This leads to my key inference for this experiment: building a memory management system for AI to abstract from experience is not about making AI’s storage better or bigger. It’s about designing a structured memory distillation and iteration path for it. The ultimate goal isn’t more storage — it’s to transform static experience storage into dynamic cognitive evolution, and let AI move up the Dao-Fa-Shu-Qi hierarchy on its own, from Qi to Dao, with every new experience.
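As one hedged illustration of such a distillation path, here is a toy retention score based on exponential forgetting; the formula, half-life, and parameters are my own assumptions, not a validated design. Trivial, rarely recalled details decay toward zero, while important, repeatedly recalled rules persist and become candidates for promotion up the hierarchy.

```python
import math


def retention_score(importance: float, age_days: float, recalls: int) -> float:
    """Exponential forgetting, slowed by importance and by each successful recall."""
    half_life = 7.0 * (1 + recalls)  # assumption: each recall doubles the half-life
    decay = math.exp(-age_days * math.log(2) / half_life)
    return importance * decay


# A trivial detail fades; a repeatedly recalled core rule persists.
trivial = retention_score(importance=0.2, age_days=30, recalls=0)
core    = retention_score(importance=0.9, age_days=30, recalls=3)
print(f"{trivial:.3f} vs {core:.3f}")
```

A distillation pass could drop records below some threshold and flag high-scoring ones for abstraction into the Shu or Fa layers; whether this mimicry of human forgetting is the right mechanism at all is, as noted above, still an open question.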
Part 1 Summary
To boil down everything I’ve laid out here: the real reason current intelligent agents can’t learn from experience and can’t adapt to new, unscripted tasks is not a lack of memory capacity — it’s the lack of a dedicated experiential memory management system. These agents only have fixed parameter memory from their training phase; there’s no built-in logic for "storage-distillation-application" of new, real-world experiences. They store what they do, but they don’t learn from it.
The Dao-Fa-Shu-Qi four-tier cognitive hierarchy has become my North Star for fixing this gap. It provides a clear, tiered guide for building an abstractable, evolvable memory management system — one that clarifies how to turn low-tier scenario-specific experience into high-tier cross-domain universal rules, through progressive distillation. It’s not a perfect framework, but it’s the best one I’ve found to turn the vague goal of "AI learning from experience" into a concrete, buildable system.
In the second part of this series, I’ll move from theory to practice. I’ll walk through the three-stage practical process of this memory management system, built directly on this Dao-Fa-Shu-Qi hierarchy — including exactly how I’ve designed daily, weekly, and monthly memory distillation actions for AI agents. I’ll also share the full experimental design, my early-stage attempts to build and test this system, and the core unsolved problems I’m still grappling with right now — the real, messy challenges of turning this framework into something that actually works for AI.
Experimental Statement
This memory management system for AI to abstract from experience is only an exploratory attempt at AI cognitive evolution — nothing more, nothing less. At present, I have not completed a full end-to-end implementation and validation of this system, and its actual real-world effect, its engineering feasibility, and its compute cost-benefit ratio are all still unknown. I am not certain that this system can effectively solve the core problems of AI’s experience abstraction and memory management.
The core intention of sharing this half-formed framework is simple: to throw out a brick to attract jade (a Chinese idiom: offer something rough in the hope of drawing out something better). I hope this unvalidated idea can trigger genuine thinking and discussion in the AI research community about how fixed-parameter AI can break through cognitive rigidity and realize independent, experience-based learning — especially for those of us building and experimenting with AI agents on the ground. I also look forward to exchanging ideas with more researchers, developers, and AI enthusiasts in this field, to jointly explore more feasible, concrete directions for genuine AI cognitive evolution.