A few years ago I created a framework called Assess-Decide-Do (ADD) to map how humans actually move through work: evaluating options, committing to decisions, executing tasks. Just three realms, with the goal of keeping a constant flow between them.
Last week I integrated it into Claude as an “operating system” layer.
What Changed
The LLM now tracks which cognitive phase I’m in. I can ask “Where are we in ADD?” and get: “Currently executing in Do.” For Claude Code users, there’s a persistent status bar:
[ADD Flow: 🔴 Assess | Exploring implementation options]
It updates automatically as the model detects behavioral shifts. At session end, you get a recap: realm transitions, time distribution, overall flow assessment.
Does the AI Actually Understand Me?
No. And this distinction matters more than it might seem.
LLMs predict plausible tokens. They don’t model your goals, don’t track your intentions, don’t “get” you. They have no orientation, no internal map of what you’re trying to accomplish. When Claude responds helpfully, it’s just navigating its probability space, not understanding.
The ADD mega-prompt doesn’t change this fundamental limitation. What it does is give the model a structure to pattern-match against. When I write exploratory, open-ended language, the model recognizes Assess patterns. When I shift to definitive, commitment-oriented language, it detects Decide. Action-focused, sequential language signals Do.
The model isn’t understanding my cognitive state. It’s mirroring it back through language recognition—which, it turns out, is exactly what LLMs are good at.
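To make the mirroring idea concrete, here is a toy keyword heuristic in the same spirit. The cue lists are invented for illustration; in the real integration the LLM itself does this recognition against the mega-prompt, with far more nuance than word counting:

```python
# Hypothetical cue phrases per realm, purely illustrative.
CUES = {
    "Assess": {"maybe", "explore", "options", "wonder", "compare", "what if"},
    "Decide": {"commit", "choose", "final", "decided", "going with"},
    "Do":     {"implement", "run", "build", "ship", "next step"},
}

def detect_realm(message: str) -> str:
    """Return the realm whose cue phrases appear most often in the message."""
    text = message.lower()
    scores = {realm: sum(text.count(cue) for cue in cues)
              for realm, cues in CUES.items()}
    best = max(scores, key=scores.get)
    # Default to Assess when nothing matches: exploration is the safe state.
    return best if scores[best] > 0 else "Assess"
```

So `detect_realm("compare the options and explore a plugin")` lands in Assess, while `detect_realm("implement the parser, then run the tests")` lands in Do. The point of the sketch is the shape of the mechanism: surface language, not intent, drives the state.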
But Mirroring Might Be Enough
Here’s the surprise: the support we get from accurate mirroring is remarkably close to what we’d want from actual understanding.
When I’m in Assess mode—evaluating, brainstorming, daydreaming—the model doesn’t push me toward premature decisions. It recognizes the linguistic patterns of exploration and stays in that space with me. No friction. No “so what’s your conclusion?” pressure.
That permission alone changes the interaction quality dramatically.
After a week of 6-7 hour daily sessions, I create more. I stay in flow longer. I feel more relaxed. Not because Claude “gets” me, but because the tool responds appropriately to my current mental state based purely on how I’m expressing myself.
We don’t need AI consciousness. We need AI that fits our cognition. Language-based mirroring delivers that fit without requiring the impossible.
Human Amplification versus Human Replacement
We can frame AI as human replacement, something that matches our creativity, autonomy, maybe even consciousness, or as amplification: leveraging its knowledge while remaining, fundamentally, a tool.
I’m betting on amplification. Not because it’s philosophically satisfying, but because it’s working right now. The ADD integration proves you can get meaningful cognitive support from pure pattern-matching. No understanding required.
World models might change everything. Prominent voices are declaring the end of the LLM era. Maybe. But "right around the corner" has a way of staying around the corner, and I'd rather build with what works today.
Miscellaneous (but maybe useful) stuff
- I’ve been building things on the internet since the late 90s: I’ve coded, launched companies, and written over a million words on productivity. The ADD framework came from decades of noticing how I actually work versus how productivity systems told me to work.
- The integration works with Claude, Gemini, Grok, and Kimi (Claude’s implementation is most refined). Mega-prompt and setup instructions: 👉 GitHub repo
- If you want to go deeper than the mega-prompt, I built addTaskManager, an iOS app implementing ADD as a full task management system.
- For the philosophical foundations behind ADD—why three realms, how they map to cognitive states, the theory underneath—I write about this on my blog: 👉 dragosroua.com
When you remove the friction between your thinking and your tools, what changes? I’m curious what this community’s experience is—whether you’ve tried cognitive frameworks with LLMs, or built your own approaches. Let’s discuss in the comments.