tercel
Behavioral Annotations: Why readonly and destructive guide LLM Planning

In our previous article, we discussed how Schemas act as the "Postman" of the apcore ecosystem—ensuring that data is delivered in the correct format. But knowing how to deliver a message isn't enough for an autonomous Agent. The Agent also needs to know the Impact of the delivery.

Imagine an Agent tasked with "fixing a data inconsistency." It finds two modules: common.user.sync and executor.user.reset. Without behavioral context, the Agent might pick the reset module because it sounds more "thorough," not realizing it will delete the entire user profile.

This is why Behavioral Annotations are a core technical pillar of the apcore protocol. In this thirteenth article, we explore how these simple boolean flags act as "Cognitive Stop Signs" for AI planners.


Syntax vs. Semantics

A schema handles the Syntax (Is it a string? Is it required?). Annotations handle the Semantics (Is it safe? Is it permanent?).

By providing this semantic layer, we move from "Code-Calling" to "Skill-Perceiving." The AI Agent no longer treats your modules as black boxes; it perceives their personality.
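The distinction is easiest to see side by side in a module descriptor. The sketch below is illustrative: the field names (`input_schema`, `annotations`) are assumptions for this article, not the official apcore wire format.

```python
# Hypothetical descriptor for the reset module from the earlier scenario.
module = {
    "id": "executor.user.reset",
    # Syntax: what a valid call looks like (the Schema's job)
    "input_schema": {
        "type": "object",
        "properties": {"user_id": {"type": "string"}},
        "required": ["user_id"],
    },
    # Semantics: what calling it actually does (the Annotations' job)
    "annotations": {
        "readonly": False,
        "destructive": True,   # the profile is gone after this
        "idempotent": True,    # resetting twice equals resetting once
    },
}
```

A schema validator would happily accept a well-formed `user_id` for both `sync` and `reset`; only the annotations block tells the Agent that one of them is irreversible.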


The 12 apcore Behavioral Annotations

The apcore protocol defines a set of standardized annotations that provide the semantic "Personality" for your code. These are grouped into Safety, Execution, and Governance:

Safety & Impact

  1. readonly: No side effects. Safe for discovery and infinite retries.
  2. destructive: Data will be permanently modified or deleted.
  3. idempotent: Multiple calls with the same input have the same effect as one.
  4. pure: Output depends only on input; no external state dependency.

Execution & Performance

  1. streaming: The module returns a stream of events/chunks rather than a single block.
  2. cacheable: Results can be stored for future use.
  3. cache_ttl: How long (in seconds) the result remains valid.
  4. paginated: The result is part of a series; requires a cursor/token to continue.

Governance & Security

  1. requires_approval: Pauses execution for a human "Yes" (HITL).
  2. open_world: Interacts with non-deterministic external systems (e.g., Web, Email).
  3. internal: Hidden from standard discovery; used for system-to-system calls.
  4. extra: A catch-all map for surface-specific or custom behavioral hints.
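Put together, a fully annotated module carries all twelve hints. The values below describe a hypothetical search module and are assumptions chosen for illustration, not defaults mandated by the protocol.

```python
# Illustrative annotation set for a hypothetical search module.
annotations = {
    # Safety & Impact
    "readonly": True,
    "destructive": False,
    "idempotent": True,
    "pure": False,          # reads an external index, so not purely input-dependent
    # Execution & Performance
    "streaming": False,
    "cacheable": True,
    "cache_ttl": 300,       # result stays valid for 300 seconds
    "paginated": True,      # caller needs a cursor to fetch the next page
    # Governance & Security
    "requires_approval": False,
    "open_world": False,
    "internal": False,
    "extra": {"rate_limit": "60/min"},  # surface-specific custom hints
}
```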

Guiding the Agent's Brain

How does an LLM actually use these flags? It’s all about the Planning Phase.

When a sophisticated Agent (like those powered by Claude 3.5 or GPT-4o) receives a list of tools, it builds a "Plan of Action."

  • If it sees a module marked as destructive: true, the model's internal safety alignment often triggers a "Caution" state.
  • It might decide to check for a "Dry Run" flag first.
  • Or, it might generate a response to the user: "I have found a way to fix this, but it requires a destructive database operation. Do you want me to proceed?"

Without these annotations, the Agent is "blind." It executes the plan first and discovers the consequences later—which is usually too late.
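The decision logic above can be sketched as a small planner-side gate. This is a hypothetical helper, not part of apcore itself, but it captures the branching an annotation-aware Agent performs before executing a step:

```python
def plan_guard(module: dict, user_confirmed: bool = False) -> str:
    """Minimal sketch of a planner-side safety gate (illustrative helper)."""
    ann = module.get("annotations", {})
    if ann.get("readonly"):
        return "execute"      # no side effects: safe to call and retry freely
    if ann.get("destructive") and not user_confirmed:
        return "ask_user"     # pause and request explicit approval first
    if ann.get("requires_approval") and not user_confirmed:
        return "ask_user"     # HITL checkpoint demanded by the module itself
    return "execute"

# The reset module from the opening scenario triggers a confirmation step:
reset = {"annotations": {"destructive": True}}
sync = {"annotations": {"readonly": True}}
print(plan_guard(reset))  # ask_user
print(plan_guard(sync))   # execute
```

In a real Agent this branch lives inside the LLM's plan, not in deterministic code, but the annotations are what give the model something concrete to branch on.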


Real-World Case: apexe

The power of automated annotations is a highlight of apexe, our tool for wrapping existing CLIs. When you run apexe scan git, it doesn't just extract the parameters. It uses pattern matching to classify the commands:

  • git status and git log are automatically marked as readonly: true.
  • git push --force and git reset --hard are marked as destructive: true.

By simply scanning your help text, apexe creates a "Safe Workspace" where an AI Agent can browse your repository without accidentally blowing up your production branch.
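The classification step can be approximated with a few regular expressions. The patterns below are illustrative guesses at the kind of rules such a scanner might use, not apexe's actual rule set:

```python
import re

# Illustrative patterns for classifying git subcommands by impact.
READONLY_PATTERNS = [r"^git (status|log|diff|show)\b"]
DESTRUCTIVE_PATTERNS = [
    r"^git push .*--force\b",
    r"^git reset .*--hard\b",
    r"^git clean\b",
]

def classify(command: str) -> dict:
    """Map a CLI invocation to safety annotations via pattern matching."""
    if any(re.match(p, command) for p in DESTRUCTIVE_PATTERNS):
        return {"readonly": False, "destructive": True}
    if any(re.match(p, command) for p in READONLY_PATTERNS):
        return {"readonly": True, "destructive": False}
    # Unknown commands: conservatively assume side effects.
    return {"readonly": False, "destructive": False}

print(classify("git status"))               # {'readonly': True, 'destructive': False}
print(classify("git push origin --force"))  # {'readonly': False, 'destructive': True}
```

Defaulting unknown commands to "not readonly" is the conservative choice: a missed `readonly` flag costs a confirmation prompt, while a missed `destructive` flag costs data.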


Conclusion: Professional Skills, Not Just Functions

Engineering for AI means engineering for Cognitive Safety. By using apcore Behavioral Annotations, you turn your raw functions into "Professional Skills." You give the AI the wisdom it needs to plan responsibly, reducing token waste and preventing Agentic disasters.

Next, we’ll dive into the AI’s "Short-Term Memory": The Context Object and how it manages traces and state across complex module chains.


*This is Article #13 of the **apcore: Building the AI-Perceivable World** series. Safety is a protocol-level primitive.*

GitHub: aiperceivable/apcore
