In our previous posts, we’ve discussed why AI Agents fail when they rely on "vibes" and why they need a "Cognitive Interface." But what does "Intelligence" actually look like at the code level?
If you ask ten developers how to describe a tool to an AI, you’ll get ten different answers. Some will focus on technical types, others on flowery descriptions, and some on security.
At apcore, we’ve standardized this "Intelligence" into a 3-Layer Metadata Stack. By separating technical syntax from behavioral governance and tactical wisdom, we ensure that an AI Agent perceives your module with 360-degree clarity.
The apcore 3-Layer Stack
We visualize the "Intelligence" of a module as a stack that moves from Required to Tactical:
Layer 1: The Core (Syntax & Discovery)
This is the "bare minimum" for a module to exist in the apcore ecosystem.
- `input_schema`: Exactly what the AI must send.
- `output_schema`: Exactly what the AI will receive.
- `description`: A short "blurb" for the AI's search engine.
The Goal: Precision. If the AI doesn't get the syntax right, nothing else matters. By enforcing JSON Schema Draft 2020-12, we provide a universal language that any LLM can understand.
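To make Layer 1 concrete, here is a minimal sketch of what the core metadata might look like for a hypothetical currency-conversion module. The field names mirror the three listed above; the schemas themselves are plain JSON Schema Draft 2020-12 documents (the module itself and its fields are illustrative, not a real apcore module):

```python
# Hypothetical Layer 1 metadata for a currency-conversion module.
# The schemas are ordinary JSON Schema Draft 2020-12 documents.
input_schema = {
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "type": "object",
    "properties": {
        "amount": {"type": "number", "minimum": 0},
        "currency": {"type": "string", "pattern": "^[A-Z]{3}$"},  # ISO-4217 code
    },
    "required": ["amount", "currency"],
    "additionalProperties": False,
}

output_schema = {
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "type": "object",
    "properties": {"converted": {"type": "number"}},
    "required": ["converted"],
}

description = "Convert an amount from one ISO-4217 currency to another."
```

Because this is standard JSON Schema rather than free-form prose, any LLM (or any validator) can check a proposed call against it mechanically.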
Layer 2: The Annotations (Governance & Behavior)
Once the AI understands how to call the module, it needs to understand whether it *should* call it. This layer defines the "Personality" and "Safety Profile" of your code.
- `readonly`: Is it safe to call this multiple times for information?
- `destructive`: Will this delete or overwrite data?
- `requires_approval`: Does a human need to click "Yes" before this runs?
- `idempotent`: Can the AI safely retry if the connection drops?
The Goal: Governance. We move security and policy from the prompt into the protocol.
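To show what "policy in the protocol" means in practice, here is a minimal sketch of a runtime gate that reads these flags before dispatching a call. The `ModuleAnnotations` dataclass here is a stand-in for illustration (not apcore's own class), and `gate_call` is a hypothetical helper, not part of any published API:

```python
from dataclasses import dataclass

# Stand-in for the annotation fields described above (illustrative only).
@dataclass(frozen=True)
class ModuleAnnotations:
    readonly: bool = False
    destructive: bool = False
    requires_approval: bool = False
    idempotent: bool = False

def gate_call(ann: ModuleAnnotations, human_approved: bool) -> str:
    """Decide how a runtime should treat a proposed module call."""
    if ann.requires_approval and not human_approved:
        # Policy lives in the protocol: no prompt engineering can bypass this.
        return "blocked"
    if ann.destructive and not ann.idempotent:
        # Destructive and non-idempotent: execute once, never auto-retry.
        return "execute-once"
    return "retry-safe"
```

The point is that these decisions are made by deterministic runtime code inspecting structured flags, not by the LLM interpreting a description string.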
Layer 3: The Extensions (Tactical Wisdom)
This is where the "Senior Engineer" lives. This layer provides the subtle context that prevents the AI from making logical mistakes.
- `x-when-to-use`: Positive guidance for the Agent's planner.
- `x-when-not-to-use`: Negative guidance to prevent common misfires.
- `x-common-mistakes`: Pitfalls discovered during development.
The Goal: Tactical Wisdom. We inject human experience directly into the module's metadata.
Why a "Stacked" Approach?
Traditional AI tools often dump all of this into a single description string. This creates Cognitive Overload. The LLM has to parse the syntax, the security rules, and the usage tips all at once.
In apcore, we use Progressive Disclosure:
- The Agent's "Discovery" phase only sees Layer 1.
- The Agent's "Planning" phase loads Layer 2 to check for safety and retries.
- The Agent's "Execution" phase loads Layer 3 to ensure it doesn't fall into known traps.
By stacking the metadata, we reduce token usage and significantly increase the reliability of the Agent's reasoning.
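The phase-to-layer mapping above can be sketched as a simple filter: each phase of the agent loop receives only the layers it needs. The layer keys and placeholder values here are illustrative, not apcore's actual internal representation:

```python
# Sketch of progressive disclosure: each agent phase receives only the
# metadata layers it needs. Keys and placeholder values are illustrative.
FULL_METADATA = {
    "core": {"description": "Transfer funds.", "input_schema": {}, "output_schema": {}},
    "annotations": {"destructive": True, "requires_approval": True, "idempotent": True},
    "extensions": {"x-when-not-to-use": "Do not use for internal transfers."},
}

PHASE_LAYERS = {
    "discovery": ["core"],                             # Layer 1 only
    "planning": ["core", "annotations"],               # + Layer 2
    "execution": ["core", "annotations", "extensions"],  # + Layer 3
}

def metadata_for(phase: str) -> dict:
    """Return only the metadata layers relevant to the given phase."""
    return {layer: FULL_METADATA[layer] for layer in PHASE_LAYERS[phase]}
```

During discovery, the agent's context window carries only the core layer, so tokens spent on governance flags and usage tips are deferred until they are actually needed.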
A Complete "Intelligent" Module
Here is what a fully-realized apcore module looks like:
```python
class SensitiveTransferModule(Module):
    # Layer 1: Core
    input_schema = TransferInput
    description = "Transfer funds to an external IBAN."

    # Layer 2: Annotations
    annotations = ModuleAnnotations(
        destructive=True,
        requires_approval=True,  # Safety gate
        idempotent=True,
    )

    # Layer 3: Extensions (AI Wisdom)
    metadata = {
        "x-when-not-to-use": "Do not use for internal account transfers.",
        "x-common-mistakes": "Ensure the IBAN includes the country code.",
        "x-preconditions": "User must be MFA authenticated.",
    }
```
Conclusion: Engineering Intelligence
"Intelligence" in the Agentic era is not a magic property of the model; it is an Engineering Standard of the module. When you build with the apcore 3-Layer Philosophy, you aren't just writing code—you are engineering a "Skill" that any AI can perceive and use with professional precision.
In our next article, we’ll tackle the root cause of AI hallucinations: "The Death of 'String-Based' Descriptions in AI Integration."
This is Article #7 of the **apcore: Building the AI-Perceivable World** series. Join us in standardizing the future of AI interaction.
GitHub: aiperceivable/apcore