This post introduces the Augmanitai Lexicon, a structured terminology
repository for human-directed AI cognition.
The lexicon defines approximately 300 concepts, including newly coined terms,
developed to describe cognitive, epistemic, and safety-relevant phenomena
emerging from sustained human–AI collaboration.
Some core definitions:
- Exocortex (External Cortical Architecture)
Definition:
A dynamic digital system that functions as a direct extension of human working memory.
The difference:
In contrast to a “Second Brain” (which is usually only a passive archive such as Notion or Evernote), the Exocortex is an active processor. Information is not merely stored, but recombined, synthesized, and critically examined through AI.
The message:
“I do not store things. I let them be processed externally.”
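To make the “active processor” idea concrete, here is a minimal Python sketch (an illustration, not part of the lexicon). The `Exocortex` class and the `ask_model` callable are hypothetical stand-ins for whatever storage and model one actually uses; the point is only that recall always passes through a synthesis step instead of returning raw notes.

```python
from typing import Callable

class Exocortex:
    """Illustrative sketch: storage that never returns raw notes,
    but recombines them through an external model on every recall."""

    def __init__(self, ask_model: Callable[[str], str]):
        self.notes: list[str] = []
        self.ask_model = ask_model  # placeholder for any LLM call

    def store(self, note: str) -> None:
        self.notes.append(note)

    def recall(self, question: str) -> str:
        # Unlike a passive "Second Brain", recall is an act of synthesis:
        # stored material is recombined and critically examined, not replayed.
        context = "\n".join(self.notes)
        prompt = (
            f"Given these notes:\n{context}\n\n"
            f"Synthesize a critical answer to: {question}"
        )
        return self.ask_model(prompt)

cortex = Exocortex(ask_model=lambda p: f"[synthesis of: {p[:40]}...]")
cortex.store("Meeting note: the Q3 budget assumes 4% growth.")
print(cortex.recall("What does the Q3 budget assume?"))
```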
- Ontological Anchors
Definition:
Fixed, non-negotiable concepts and values that are embedded in the system as “pillars of truth.”
Function:
In a world in which AI can generate any opinion, these anchors serve as reference points. They prevent “truth drift”: when the AI hallucinates, its output is corrected against these anchors.
The message:
“My system has fixed principles that the algorithm cannot override.”
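A rough way to picture the anchoring mechanism, under the assumption that anchors can be written down as explicit claims: every draft answer is checked against them before it is admitted. The keyword comparison below is a deliberately naive stand-in for a real verification step.

```python
# Illustrative anchors: non-negotiable reference points, stored here as
# claim -> fixed value. A real anchor set would hold principles and sources.
ANCHORS = {
    "project start year": "2021",
    "primary data source": "the audited ledger",
}

def truth_drift(answer: str) -> list[str]:
    """Return the anchored claims that the answer mentions but contradicts.
    Naive keyword check; a real system might run a second verification pass."""
    lowered = answer.lower()
    return [
        claim for claim, fixed in ANCHORS.items()
        if claim in lowered and fixed.lower() not in lowered
    ]

# A hallucinated date is caught and corrected against the anchor.
print(truth_drift("I recall the project start year being 2019."))
# -> ['project start year']
```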
- Intentional Layer (The Layer of Will)
Definition:
The highest layer of the architecture, which remains purely biological (human). This is where the “why” and the “what” originate.
Function:
No AI action may begin unless this layer has defined a clear goal. This prevents meaningless prompt tinkering. First the will, then the tool.
The message:
“The AI delivers solutions, but I deliver the intention.”
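As a sketch of this gating idea, assuming a thin wrapper around an arbitrary model call (`ask_model` is a placeholder, not a real API): nothing is delegated until a goal has been set.

```python
from typing import Callable, Optional

class IntentionalLayer:
    """Sketch: no AI call is dispatched until the human has defined a goal."""

    def __init__(self, ask_model: Callable[[str], str]):
        self.ask_model = ask_model       # placeholder for any model call
        self.goal: Optional[str] = None  # the purely human "why"

    def set_goal(self, goal: str) -> None:
        # The "why" and the "what" are set by the human, before any tooling.
        self.goal = goal.strip()

    def delegate(self, task: str) -> str:
        # First the will, then the tool: without a goal, nothing runs.
        if not self.goal:
            raise RuntimeError("No goal defined: first the will, then the tool.")
        return self.ask_model(f"Goal: {self.goal}\nTask: {task}")
```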
- Interface Sovereignty
Definition:
Absolute control over when, how, and where the connection to AI is established.
Function:
It is the conscious decision against push notifications and in favor of pull-based communication. The user opens the channel, the user closes it. The system must never intrude into human focus on its own.
The message:
“I am the bouncer of my own mind.”
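A toy illustration of pull-based communication: the system may queue messages, but nothing is delivered until the human opens the channel and explicitly pulls. All names here are invented for the sketch.

```python
class PullChannel:
    """Sketch: the human opens and closes the channel; the system may queue
    messages but can never push them into human focus on its own."""

    def __init__(self):
        self.is_open = False
        self._pending: list[str] = []

    def open_channel(self) -> None:
        self.is_open = True

    def close_channel(self) -> None:
        self.is_open = False

    def system_notify(self, message: str) -> None:
        # Notifications are only queued, never delivered unprompted.
        self._pending.append(message)

    def pull(self) -> list[str]:
        # Delivery happens only when the human explicitly asks, with the
        # channel open; otherwise nothing crosses the boundary.
        if not self.is_open:
            return []
        messages, self._pending = self._pending, []
        return messages
```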
- Semantic Firewall
Definition:
A cognitive or technical filtering mechanism that checks incoming AI responses for plausibility, relevance, and hallucinations.
Function:
Before information is transferred from the Exocortex into the biological brain, it must pass this filter. This protects against the ingestion of “junk data.”
The message:
“I do not let unchecked code into my consciousness.”
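The checks below are placeholders rather than a real hallucination detector; they only illustrate the shape of a firewall that inspects an answer for relevance and plausibility before letting it through.

```python
from dataclasses import dataclass

@dataclass
class FirewallVerdict:
    passed: bool
    reasons: list[str]

def semantic_firewall(answer: str, question: str) -> FirewallVerdict:
    """Toy checks standing in for real plausibility, relevance, and
    hallucination tests (which might involve retrieval or a second model)."""
    reasons = []

    # Relevance: the answer should share at least some vocabulary
    # with the question it claims to address.
    q_terms = {w.lower() for w in question.split() if len(w) > 3}
    a_terms = {w.lower() for w in answer.split()}
    if q_terms and not q_terms & a_terms:
        reasons.append("answer does not address the question")

    # Plausibility: flag sweeping claims that cite no source.
    if "always" in answer.lower() and "source" not in answer.lower():
        reasons.append("sweeping claim without a cited source")

    # Hallucination marker: appeal to authority instead of evidence.
    if "as everyone knows" in answer.lower():
        reasons.append("unverifiable appeal instead of evidence")

    return FirewallVerdict(passed=not reasons, reasons=reasons)

verdict = semantic_firewall(
    answer="As everyone knows, this approach always works.",
    question="Does the caching approach scale across regions?",
)
print(verdict.passed, verdict.reasons)
```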
- Biological Core
Definition:
The part of the architecture that is deliberately not digitized.
Content:
Intuition, ethical sensibility, bodily resonance (gut feeling), and genuine social connection.
Function:
The system is designed to protect this core. AI may act in an advisory capacity here, but must never intervene operationally.
The message:
“There is a zone that remains analog. This is my insurance.”
- Context Injection
Definition:
The precise, surgical injection of background information into an AI session.
Method:
Instead of simply throwing a question at the AI, one “injects” the specific frame beforehand (e.g., “You are now a skeptic from the year 2020”). This steers the model’s thinking in a specific direction.
The message:
“The quality of the answer depends 100% on the quality of my context.”
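One possible shape for this, assuming the role/content message format that many chat APIs use (that format is an assumption of the sketch, not a requirement of the lexicon): the frame and background are placed before the question.

```python
def with_context(frame: str, background: list[str], question: str) -> list[dict]:
    """Build a prompt where the frame and background are injected *before*
    the question, instead of throwing the bare question at the model."""
    messages = [{"role": "system", "content": frame}]
    for fact in background:
        messages.append({"role": "user", "content": f"Background: {fact}"})
    messages.append({"role": "user", "content": question})
    return messages

messages = with_context(
    frame="You are now a skeptic from the year 2020.",
    background=["The plan assumes remote work stays the norm."],
    question="Where does this plan break?",
)
```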
- Synthetic Expert Clusters
Definition:
A group of predefined AI personas that are permanently available in the Exocortex (e.g., “The Critic,” “The Strategist,” “The Historian”).
Function:
One does not ask “the AI,” but convenes a virtual conference in which these clusters argue with one another.
The message:
“I do not have a chatbot. I have a staff.”
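A compact sketch of the “staff” idea: each persona is a stored charge, and the conference is a loop in which every persona sees the positions voiced before it. The personas and the `ask_model` placeholder are illustrative, not a fixed roster.

```python
from typing import Callable

PERSONAS: dict[str, str] = {
    "The Critic": "Attack the weakest assumption in the proposal.",
    "The Strategist": "Judge the proposal purely on long-term positioning.",
    "The Historian": "Compare the proposal with past attempts and their outcomes.",
}

def convene(question: str, ask_model: Callable[[str], str]) -> dict[str, str]:
    """Sketch of a virtual conference: personas answer in turn and each one
    sees the earlier positions, so the clusters argue rather than answer
    in isolation. `ask_model` is a placeholder for any LLM call."""
    transcript: dict[str, str] = {}
    for name, charge in PERSONAS.items():
        prior = "\n".join(f"{n}: {a}" for n, a in transcript.items())
        prompt = (
            f"You are {name}. {charge}\n"
            f"Question: {question}\n"
            f"Positions so far:\n{prior or '(none yet)'}"
        )
        transcript[name] = ask_model(prompt)
    return transcript
```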
- Structured Resonance
Definition:
The planned back-and-forth oscillation of thoughts between human and machine.
Process:
Human (idea) → AI (critique) → Human (refinement) → AI (structuring).
It is not a simple retrieval, but a ping-pong game that increases quality with every exchange.
The message:
“We play the ball back and forth until the result is world-class.”
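Expressed as code, one round of the oscillation might look like the sketch below; the human refinement step is modeled as a callback, since it is precisely the part that is not automated.

```python
from typing import Callable

def resonance_round(idea: str,
                    ask_model: Callable[[str], str],
                    refine: Callable[[str, str], str]) -> str:
    """One round of the ping-pong: idea -> critique -> refinement -> structure.
    `ask_model` stands in for any LLM call; `refine` is the human step."""
    critique = ask_model(f"Critique this idea as sharply as you can:\n{idea}")
    refined = refine(idea, critique)   # human judgment, not automated
    return ask_model(f"Structure this refined idea into an outline:\n{refined}")

# Toy run with stand-ins; each further round feeds the result back in as
# the new idea, so quality can rise with every exchange.
outline = resonance_round(
    idea="Publish the lexicon as a versioned glossary.",
    ask_model=lambda p: f"[model response to: {p[:50]}...]",
    refine=lambda idea, critique: idea + " (revised after critique)",
)
```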
- Algorithmic Distance
Definition:
The architecturally intended “safety distance” from AI recommendations.
Function:
The system is built so that an AI answer is never immediately accepted as “truth.” There is always a built-in moment of hesitation (latency) to activate one’s own judgment.
The message:
“I trust the system, but I verify every line.”
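A minimal way to build that hesitation in, assuming a wrapper object around each answer: the answer cannot be accepted until a hold period has passed and an explicit human verdict has been recorded. The 30-second default is arbitrary.

```python
import time
from typing import Optional

class HeldAnswer:
    """Sketch: an AI answer is kept at a distance and only released after a
    deliberate pause plus an explicit human verdict, never by default."""

    def __init__(self, answer: str, hold_seconds: float = 30.0):
        self._answer = answer
        self._released_at = time.monotonic() + hold_seconds
        self.verdict: Optional[str] = None

    def accept(self, verdict: str) -> str:
        # The built-in latency: judgment has to be activated before trust.
        if time.monotonic() < self._released_at:
            raise RuntimeError("Hold period not over: verify before you accept.")
        self.verdict = verdict  # e.g. "cross-checked against the primary source"
        return self._answer
```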
The lexicon is not a product, a tutorial, or a framework for optimization.
It is a reference vocabulary.