<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: tercel</title>
    <description>The latest articles on DEV Community by tercel (@tercelyi).</description>
    <link>https://dev.to/tercelyi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3781875%2Fd782a4aa-43be-460e-bea4-e6afcf0a2607.png</url>
      <title>DEV Community: tercel</title>
      <link>https://dev.to/tercelyi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tercelyi"/>
    <language>en</language>
    <item>
      <title>Context Object: State Management and Trace Propagation</title>
      <dc:creator>tercel</dc:creator>
      <pubDate>Tue, 05 May 2026 02:55:41 +0000</pubDate>
      <link>https://dev.to/tercelyi/context-object-state-management-and-trace-propagation-1dhb</link>
      <guid>https://dev.to/tercelyi/context-object-state-management-and-trace-propagation-1dhb</guid>
      <description>&lt;p&gt;In our previous articles, we explored the 11-step execution pipeline that secures every AI call. At the center of that pipeline sits a silent but essential hero: the &lt;strong&gt;Context Object&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If the pipeline is the "Heart" of apcore, the Context is its &lt;strong&gt;"Nervous System."&lt;/strong&gt; It is the object that carries state, identity, and tracing information from the first entry point down to the deepest nested module call. In this fourteenth article, we go deep into how apcore manages the "Short-Term Memory" of an Agentic system.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Challenge of Statelessness
&lt;/h2&gt;

&lt;p&gt;AI Agents often perform complex, multi-step tasks. An Agent might first call a &lt;code&gt;search&lt;/code&gt; module, then a &lt;code&gt;summarize&lt;/code&gt; module, and finally a &lt;code&gt;file.write&lt;/code&gt; module. &lt;/p&gt;

&lt;p&gt;In a traditional stateless architecture, these calls are isolated. The &lt;code&gt;file.write&lt;/code&gt; module doesn't know that it was triggered by a specific search result or that it’s part of a high-priority audit task. This lack of context makes debugging impossible and security fragile.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;apcore&lt;/strong&gt; solves this by injecting a reference-shared &lt;code&gt;Context&lt;/code&gt; object into every execution.&lt;/p&gt;




&lt;h2&gt;
  
  
  Anatomy of the apcore Context
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;Context&lt;/code&gt; class (defined in &lt;code&gt;apcore.context&lt;/code&gt;) is a rich container that provides four critical capabilities:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. W3C-Compatible Tracing (&lt;code&gt;trace_id&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;Every call chain in apcore is assigned a unique &lt;code&gt;trace_id&lt;/code&gt; (a UUID v4 by default). &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;W3C Compatibility&lt;/strong&gt;: apcore can ingest W3C &lt;code&gt;traceparent&lt;/code&gt; headers from external systems (like a web gateway), ensuring that your AI's "Thought Chain" is connected to the original user request in your distributed logs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trace Propagation&lt;/strong&gt;: When Module A calls Module B, the &lt;code&gt;trace_id&lt;/code&gt; is automatically carried forward.&lt;/li&gt;
&lt;/ul&gt;
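&lt;p&gt;As a rough sketch of that ingestion step (illustrative only; this is not apcore's actual implementation), a W3C &lt;code&gt;traceparent&lt;/code&gt; header can be parsed so the incoming trace-id is reused, with a fresh UUID v4 minted otherwise:&lt;/p&gt;

```python
import re
import uuid

# Illustrative sketch, not apcore's actual code: reuse the trace-id from a
# valid W3C traceparent header, otherwise mint a fresh UUID v4.
TRACEPARENT_RE = re.compile(
    r"^[0-9a-f]{2}-([0-9a-f]{32})-[0-9a-f]{16}-[0-9a-f]{2}$"
)

def resolve_trace_id(traceparent=None):
    if traceparent:
        match = TRACEPARENT_RE.match(traceparent.strip().lower())
        if match:
            return match.group(1)  # keep the caller's trace-id
    return uuid.uuid4().hex        # start a new trace

# A gateway-supplied header keeps the AI call chain in the same trace:
tid = resolve_trace_id("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
```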

&lt;h3&gt;
  
  
  2. The Audit Trail (&lt;code&gt;call_chain&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;The Context maintains a &lt;code&gt;call_chain&lt;/code&gt; list that grows as the execution moves deeper.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Example&lt;/em&gt;: &lt;code&gt;["api.v1.user", "orchestrator.order", "executor.payment"]&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;This provides a real-time "Stack Trace" for AI Agents, allowing the system to detect circular calls and enforce recursion limits.&lt;/li&gt;
&lt;/ul&gt;
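&lt;p&gt;A minimal sketch of how such a "Stack Trace" enables those safety checks (the helper below is hypothetical; apcore's real logic lives in the Executor):&lt;/p&gt;

```python
MAX_DEPTH = 8  # apcore's documented default for maximum call depth

# Hypothetical helper: reject circular calls and runaway recursion by
# inspecting the call_chain before dispatching the next module.
def extend_call_chain(call_chain, target_id, max_depth=MAX_DEPTH):
    if target_id in call_chain:
        raise RuntimeError(f"circular call detected: {target_id}")
    if len(call_chain) >= max_depth:
        raise RuntimeError(f"max call depth {max_depth} exceeded")
    return call_chain + [target_id]

chain = extend_call_chain(["api.v1.user", "orchestrator.order"], "executor.payment")
```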

&lt;h3&gt;
  
  
  3. Identity &amp;amp; Permissions (&lt;code&gt;identity&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;identity&lt;/code&gt; property carries the authenticated caller’s details, including their &lt;code&gt;id&lt;/code&gt;, &lt;code&gt;type&lt;/code&gt; (user/agent/system), and &lt;code&gt;roles&lt;/code&gt;. This is the data that the &lt;strong&gt;ACL system&lt;/strong&gt; uses to decide if a call should be allowed.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Shared Memory (&lt;code&gt;data&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;Perhaps the most powerful feature is &lt;code&gt;context.data&lt;/code&gt;—a dictionary that is &lt;strong&gt;reference-shared&lt;/strong&gt; across the entire call chain.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unlike module inputs (which are local), &lt;code&gt;context.data&lt;/code&gt; allows modules to pass artifacts "sideways." &lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Real-world use case&lt;/em&gt;: A middleware can calculate a session token once and store it in &lt;code&gt;context.data&lt;/code&gt;, making it available to all subsequent modules in that chain without cluttering their input parameters.&lt;/li&gt;
&lt;/ul&gt;
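&lt;p&gt;A toy illustration of that "sideways" passing (the function shapes are assumed for the example, not apcore classes): because &lt;code&gt;context.data&lt;/code&gt; is one dict shared by reference, middleware can stash a value once and every later module can read it.&lt;/p&gt;

```python
# Toy illustration (assumed shapes, not apcore classes): context.data is one
# dict shared by reference across the whole chain.
def auth_middleware(ctx_data):
    ctx_data["session_token"] = "tok-123"  # computed once, up front

def report_module(inputs, ctx_data):
    # Reads the token "sideways" instead of receiving it as an input param.
    return {"report": inputs["name"], "token": ctx_data["session_token"]}

context_data = {}
auth_middleware(context_data)
result = report_module({"name": "q3-sales"}, context_data)
```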




&lt;h2&gt;
  
  
  Implementation: The Child Context Pattern
&lt;/h2&gt;

&lt;p&gt;How does apcore ensure that the context stays accurate during nested calls? It uses the &lt;strong&gt;Child Context Pattern&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When you call another module via &lt;code&gt;context.executor.call()&lt;/code&gt;, the system doesn't just pass the parent context. It creates a &lt;code&gt;.child()&lt;/code&gt; context:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Inside Module A
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# This creates a child context with:
&lt;/span&gt;    &lt;span class="c1"&gt;# 1. Same trace_id
&lt;/span&gt;    &lt;span class="c1"&gt;# 2. Updated caller_id (now Module A)
&lt;/span&gt;    &lt;span class="c1"&gt;# 3. Appended call_chain
&lt;/span&gt;    &lt;span class="c1"&gt;# 4. SHARED data dictionary
&lt;/span&gt;    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;executor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;module_b&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures that the &lt;code&gt;caller_id&lt;/code&gt; always points to the immediate parent, while the &lt;code&gt;trace_id&lt;/code&gt; and &lt;code&gt;data&lt;/code&gt; remain consistent across the entire journey.&lt;/p&gt;
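&lt;p&gt;The pattern can be condensed into a minimal sketch (attribute names follow this article; the real class in &lt;code&gt;apcore.context&lt;/code&gt; is richer than this):&lt;/p&gt;

```python
import uuid

# Minimal sketch of the Child Context Pattern; not the real apcore.context.
class Context:
    def __init__(self, trace_id=None, caller_id=None, call_chain=None, data=None):
        self.trace_id = trace_id or uuid.uuid4().hex
        self.caller_id = caller_id
        self.call_chain = list(call_chain or [])
        self.data = data if data is not None else {}

    def child(self, caller_id):
        return Context(
            trace_id=self.trace_id,                    # 1. same trace_id
            caller_id=caller_id,                       # 2. updated caller_id
            call_chain=self.call_chain + [caller_id],  # 3. appended call_chain
            data=self.data,                            # 4. SHARED data dict
        )

parent = Context(caller_id="api.v1.user")
child = parent.child("module_a")
```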




&lt;h2&gt;
  
  
  Conclusion: Turning Isolation into Collaboration
&lt;/h2&gt;

&lt;p&gt;By standardizing state management through the &lt;strong&gt;Context Object&lt;/strong&gt;, apcore turns a collection of isolated functions into a coherent, intelligent workforce. It provides the "Short-Term Memory" that AI Agents need to perform complex, traceable, and secure operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next, we’ll see how this identity data is used to enforce security in "Pattern-Based ACL: Securing the Boundaries of Agentic Autonomy."&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Article #14 of the &lt;strong&gt;apcore: Building the AI-Perceivable World&lt;/strong&gt; series. Identity and State are the foundation of Trust.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/aiperceivable/apcore" rel="noopener noreferrer"&gt;aiperceivable/apcore&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Behavioral Annotations: Why readonly and destructive guide LLM Planning</title>
      <dc:creator>tercel</dc:creator>
      <pubDate>Mon, 04 May 2026 01:07:46 +0000</pubDate>
      <link>https://dev.to/tercelyi/behavioral-annotations-why-readonly-and-destructive-guide-llm-planning-37nk</link>
      <guid>https://dev.to/tercelyi/behavioral-annotations-why-readonly-and-destructive-guide-llm-planning-37nk</guid>
      <description>&lt;p&gt;In our previous article, we discussed how &lt;strong&gt;Schemas&lt;/strong&gt; act as the "Postman" of the apcore ecosystem—ensuring that data is delivered in the correct format. But knowing &lt;em&gt;how&lt;/em&gt; to deliver a message isn't enough for an autonomous Agent. The Agent also needs to know the &lt;strong&gt;Impact&lt;/strong&gt; of the delivery.&lt;/p&gt;

&lt;p&gt;Imagine an Agent tasked with "fixing a data inconsistency." It finds two modules: &lt;code&gt;common.user.sync&lt;/code&gt; and &lt;code&gt;executor.user.reset&lt;/code&gt;. Without behavioral context, the Agent might pick the &lt;code&gt;reset&lt;/code&gt; module because it sounds more "thorough," not realizing it will delete the entire user profile.&lt;/p&gt;

&lt;p&gt;This is why &lt;strong&gt;Behavioral Annotations&lt;/strong&gt; are a core technical pillar of the apcore protocol. In this thirteenth article, we explore how these simple boolean flags act as "Cognitive Stop Signs" for AI planners.&lt;/p&gt;




&lt;h2&gt;
  
  
  Syntax vs. Semantics
&lt;/h2&gt;

&lt;p&gt;A schema handles the &lt;strong&gt;Syntax&lt;/strong&gt; (Is it a string? Is it required?). Annotations handle the &lt;strong&gt;Semantics&lt;/strong&gt; (Is it safe? Is it permanent?).&lt;/p&gt;

&lt;p&gt;By providing this semantic layer, we move from "Code-Calling" to "Skill-Perceiving." The AI Agent no longer treats your modules as black boxes; it perceives their personality.&lt;/p&gt;




&lt;h2&gt;
  
  
  The 12 apcore Behavioral Annotations
&lt;/h2&gt;

&lt;p&gt;The apcore protocol defines a set of standardized annotations that provide the semantic "Personality" for your code. These are grouped into &lt;strong&gt;Safety&lt;/strong&gt;, &lt;strong&gt;Execution&lt;/strong&gt;, and &lt;strong&gt;Governance&lt;/strong&gt;:&lt;/p&gt;

&lt;h3&gt;
  
  
  Safety &amp;amp; Impact
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;&lt;code&gt;readonly&lt;/code&gt;&lt;/strong&gt;: No side effects. Safe for discovery and infinite retries.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;&lt;code&gt;destructive&lt;/code&gt;&lt;/strong&gt;: Data will be permanently modified or deleted.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;&lt;code&gt;idempotent&lt;/code&gt;&lt;/strong&gt;: Multiple calls with the same input have the same effect as one.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;&lt;code&gt;pure&lt;/code&gt;&lt;/strong&gt;: Output depends only on input; no external state dependency.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Execution &amp;amp; Performance
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;&lt;code&gt;streaming&lt;/code&gt;&lt;/strong&gt;: The module returns a stream of events/chunks rather than a single block.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;&lt;code&gt;cacheable&lt;/code&gt;&lt;/strong&gt;: Results can be stored for future use.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;&lt;code&gt;cache_ttl&lt;/code&gt;&lt;/strong&gt;: How long (in seconds) the result remains valid.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;&lt;code&gt;paginated&lt;/code&gt;&lt;/strong&gt;: The result is part of a series; requires a cursor/token to continue.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Governance &amp;amp; Security
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;&lt;code&gt;requires_approval&lt;/code&gt;&lt;/strong&gt;: Pauses execution for a human "Yes" (HITL).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;open_world&lt;/code&gt;&lt;/strong&gt;: Interacts with non-deterministic external systems (e.g., Web, Email).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;internal&lt;/code&gt;&lt;/strong&gt;: Hidden from standard discovery; used for system-to-system calls.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;extra&lt;/code&gt;&lt;/strong&gt;: A catch-all map for surface-specific or custom behavioral hints.&lt;/li&gt;
&lt;/ol&gt;
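&lt;p&gt;Put together, a module's annotation block might look like the following (a hypothetical descriptor assembled from the twelve flags above, not a verified apcore schema):&lt;/p&gt;

```python
# Hypothetical descriptor combining the three annotation groups; field names
# follow this article, not a verified apcore schema.
annotations = {
    # Safety and impact
    "readonly": False,
    "destructive": True,
    "idempotent": False,
    "pure": False,
    # Execution and performance
    "streaming": False,
    "cacheable": False,
    "cache_ttl": 0,
    "paginated": False,
    # Governance and security
    "requires_approval": True,
    "open_world": False,
    "internal": False,
    "extra": {"surface": "cli"},
}

def needs_human_gate(ann):
    """A destructive or approval-flagged module should pause for a human (HITL)."""
    return ann.get("destructive", False) or ann.get("requires_approval", False)
```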




&lt;h2&gt;
  
  
  Guiding the Agent's Brain
&lt;/h2&gt;

&lt;p&gt;How does an LLM actually use these flags? It’s all about the &lt;strong&gt;Planning Phase&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When a sophisticated Agent (like those powered by Claude 3.5 or GPT-4o) receives a list of tools, it builds a "Plan of Action." &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If it sees a module marked as &lt;code&gt;destructive: true&lt;/code&gt;, the model's internal safety alignment often triggers a "Caution" state. &lt;/li&gt;
&lt;li&gt;It might decide to check for a "Dry Run" flag first.&lt;/li&gt;
&lt;li&gt;Or, it might generate a response to the user: &lt;em&gt;"I have found a way to fix this, but it requires a destructive database operation. Do you want me to proceed?"&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without these annotations, the Agent is "blind." It executes the plan first and discovers the consequences later—which is usually too late.&lt;/p&gt;




&lt;h2&gt;
  
  
  Real-World Case: &lt;code&gt;apexe&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;The power of automated annotations is a highlight of &lt;strong&gt;apexe&lt;/strong&gt;, our tool for wrapping existing CLIs. When you run &lt;code&gt;apexe scan git&lt;/code&gt;, it doesn't just extract the parameters. It uses pattern matching to classify the commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;git status&lt;/code&gt; and &lt;code&gt;git log&lt;/code&gt; are automatically marked as &lt;code&gt;readonly: true&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;git push --force&lt;/code&gt; and &lt;code&gt;git reset --hard&lt;/code&gt; are marked as &lt;code&gt;destructive: true&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By simply scanning your help text, apexe creates a "Safe Workspace" where an AI Agent can browse your repository without accidentally blowing up your production branch.&lt;/p&gt;
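&lt;p&gt;That classification step can be sketched with simple pattern matching (the patterns below are illustrative; apexe's real rule set is more extensive):&lt;/p&gt;

```python
# Illustrative pattern matching in the spirit of apexe's classifier;
# these patterns are examples, not apexe's actual rule set.
READONLY_PATTERNS = ("status", "log", "show", "diff")
DESTRUCTIVE_PATTERNS = ("push --force", "reset --hard", "clean -f", "branch -D")

def classify(command):
    ann = {"readonly": False, "destructive": False}
    if any(p in command for p in DESTRUCTIVE_PATTERNS):
        ann["destructive"] = True
    elif any(command.startswith(f"git {p}") for p in READONLY_PATTERNS):
        ann["readonly"] = True
    return ann
```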




&lt;h2&gt;
  
  
  Conclusion: Professional Skills, Not Just Functions
&lt;/h2&gt;

&lt;p&gt;Engineering for AI means engineering for &lt;strong&gt;Cognitive Safety&lt;/strong&gt;. By using &lt;strong&gt;apcore Behavioral Annotations&lt;/strong&gt;, you turn your raw functions into "Professional Skills." You give the AI the wisdom it needs to plan responsibly, reducing token waste and preventing Agentic disasters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next, we’ll dive into the AI’s "Short-Term Memory": The Context Object and how it manages traces and state across complex module chains.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Article #13 of the &lt;strong&gt;apcore: Building the AI-Perceivable World&lt;/strong&gt; series. Safety is a protocol-level primitive.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/aiperceivable/apcore" rel="noopener noreferrer"&gt;aiperceivable/apcore&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>llm</category>
    </item>
    <item>
      <title>Strict Schema Enforcement: The Bedrock of AI Reliability</title>
      <dc:creator>tercel</dc:creator>
      <pubDate>Fri, 01 May 2026 11:21:24 +0000</pubDate>
      <link>https://dev.to/tercelyi/strict-schema-enforcement-the-bedrock-of-ai-reliability-1kdb</link>
      <guid>https://dev.to/tercelyi/strict-schema-enforcement-the-bedrock-of-ai-reliability-1kdb</guid>
<description>&lt;p&gt;In the early days of AI tool-calling, we flew on a wing and a prayer. We gave an LLM a docstring and hoped it would guess the right types. If the Agent sent a string instead of a UUID, or a float instead of an integer, the system would crash, returning a generic 500 error that left the Agent stuck in an infinite retry loop.&lt;/p&gt;

&lt;p&gt;This is &lt;strong&gt;Parameter Hallucination&lt;/strong&gt;, and it is the single biggest obstacle to building production-grade AI systems.&lt;/p&gt;

&lt;p&gt;At &lt;strong&gt;apcore&lt;/strong&gt;, we solve this by making &lt;strong&gt;Strict Schema Enforcement&lt;/strong&gt; a protocol-level requirement. In this twelfth article of our series, we dive into why data contracts are the only way to build a reliable Cognitive Interface.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why JSON Schema Draft 2020-12?
&lt;/h2&gt;

&lt;p&gt;When we designed apcore, we didn't want to invent a new schema language. We chose &lt;strong&gt;JSON Schema Draft 2020-12&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Why? Because it is the "Universal Vocabulary" of the modern web. It is language-agnostic, widely supported, and incredibly expressive. By standardizing on this draft, we ensure:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Cross-Language Consistency&lt;/strong&gt;: A schema defined in your Python backend is validated with the exact same logic in your Rust microservice.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Rich Polymorphism&lt;/strong&gt;: We can use &lt;code&gt;oneOf&lt;/code&gt; and &lt;code&gt;anyOf&lt;/code&gt; to define complex inputs that an LLM can actually reason about.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Self-Contained Definitions&lt;/strong&gt;: With &lt;code&gt;$ref&lt;/code&gt; resolution, apcore ensures that the LLM always receives a single, dereferenced schema, removing the need for the model to "fetch" external definitions.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Mandatory Perception
&lt;/h2&gt;

&lt;p&gt;In apcore, a module &lt;strong&gt;cannot&lt;/strong&gt; exist without an &lt;code&gt;input_schema&lt;/code&gt;. This isn't a suggestion; it’s enforced by the &lt;code&gt;Registry&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;By forcing developers to define the input contract upfront, we create a &lt;strong&gt;Safe Zone&lt;/strong&gt; for the AI Agent. The Agent no longer has to "guess" if a field is required or what its regex pattern is. It "perceives" the contract directly from the module metadata.&lt;/p&gt;

&lt;h3&gt;
  
  
  The "Strict" Mode
&lt;/h3&gt;

&lt;p&gt;apcore encourages the use of &lt;code&gt;additionalProperties: false&lt;/code&gt;. This tells the LLM: &lt;em&gt;"Do not hallucinate extra parameters. Only send exactly what is defined here."&lt;/em&gt; This small architectural choice significantly reduces token noise and increases the success rate of complex tool calls.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Execution Pipeline: Step 6
&lt;/h2&gt;

&lt;p&gt;The power of strict schemas is best seen in &lt;strong&gt;Step 6&lt;/strong&gt; of the apcore Execution Pipeline: &lt;strong&gt;Input Validation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Before your business logic ever touches the data, the apcore Executor runs a full schema validation. If the LLM makes a mistake—sending a string instead of a number—the Executor halts execution immediately.&lt;/p&gt;

&lt;p&gt;But here is the clever part: instead of a stack trace, it returns a &lt;strong&gt;Structured Validation Error&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"code"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"SCHEMA_VALIDATION_ERROR"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Input validation failed"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"details"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"field"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"user_id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"reason"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"not a valid UUID"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ai_guidance"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"The user_id must be a UUID format. Please re-check the user record and try again."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This error informs the Agent &lt;em&gt;exactly&lt;/em&gt; what went wrong, allowing it to self-correct and retry without human intervention.&lt;/p&gt;
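&lt;p&gt;Agent-side, that self-correction loop might look like this (&lt;code&gt;call_module&lt;/code&gt; and &lt;code&gt;fix_inputs&lt;/code&gt; are hypothetical stand-ins for the agent runtime):&lt;/p&gt;

```python
# Sketch of an agent-side retry loop driven by the structured error above;
# call_module and fix_inputs are hypothetical stand-ins, not a real API.
def call_with_self_correction(call_module, fix_inputs, inputs, max_retries=3):
    for _ in range(max_retries):
        result = call_module(inputs)
        if result.get("code") != "SCHEMA_VALIDATION_ERROR":
            return result
        # Feed ai_guidance back to the model so it can repair its own call.
        inputs = fix_inputs(inputs, result["ai_guidance"], result["details"])
    raise RuntimeError("agent failed to self-correct after retries")
```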




&lt;h2&gt;
  
  
  Conclusion: Engineering the Contract
&lt;/h2&gt;

&lt;p&gt;If you want reliable AI Agents, you must stop "Prompting" your tools and start &lt;strong&gt;Engineering your Contracts&lt;/strong&gt;. Strict schema enforcement is not about adding friction; it’s about providing the semantic clarity that AI needs to act autonomously and safely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next, we’ll move from syntax to semantics: Behavioral Annotations. We’ll look at how 'readonly' and 'destructive' guide the LLM's planning process.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Article #12 of the &lt;strong&gt;apcore: Building the AI-Perceivable World&lt;/strong&gt; series. Reliability is a design choice.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/aiperceivable/apcore" rel="noopener noreferrer"&gt;aiperceivable/apcore&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>api</category>
      <category>llm</category>
    </item>
    <item>
      <title>The 11-Step Execution Pipeline: A Secured Journey for Every Call</title>
      <dc:creator>tercel</dc:creator>
      <pubDate>Thu, 30 Apr 2026 08:28:08 +0000</pubDate>
      <link>https://dev.to/tercelyi/the-11-step-execution-pipeline-a-secured-journey-for-every-call-2k25</link>
      <guid>https://dev.to/tercelyi/the-11-step-execution-pipeline-a-secured-journey-for-every-call-2k25</guid>
      <description>&lt;p&gt;When an AI Agent calls a tool, we often think of it as a simple "request-response" event. But in the &lt;strong&gt;apcore&lt;/strong&gt; world, every call is a mission-critical journey. Whether you are invoking a Python module or a Rust microservice, that call passes through a rigorous, &lt;strong&gt;11-step Execution Pipeline&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This pipeline is the "Heart" of the apcore engine. It ensures that every interaction is validated, authorized, and perfectly traceable. In this eleventh article, we’re going to open the hood and see exactly how apcore ensures reliability at scale.&lt;/p&gt;




&lt;h2&gt;
  
  
  The apcore Execution Pipeline
&lt;/h2&gt;

&lt;p&gt;Every call through the &lt;code&gt;Executor.call()&lt;/code&gt; method follows this deterministic path:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Context Processing&lt;/strong&gt;: Create or update the &lt;code&gt;Context&lt;/code&gt;. Generate a &lt;code&gt;trace_id&lt;/code&gt; (if one doesn't exist) and update the &lt;code&gt;caller_id&lt;/code&gt; and &lt;code&gt;call_chain&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Safety Checks&lt;/strong&gt;: Verify the maximum call depth (default 8) to prevent circular calls from crashing the system.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Module Lookup&lt;/strong&gt;: Find the target module in the &lt;strong&gt;Registry&lt;/strong&gt; using its Canonical ID.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;ACL Check&lt;/strong&gt;: Perform the first-match-wins &lt;strong&gt;Access Control List&lt;/strong&gt; check. Does the caller have permission to invoke the target?&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Approval Gate&lt;/strong&gt;: Check if the module is marked as &lt;code&gt;requires_approval&lt;/code&gt;. If so, pause execution and wait for a human or automated response.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Input Validation&lt;/strong&gt;: Validate the incoming &lt;code&gt;dict&lt;/code&gt; against the module's &lt;code&gt;input_schema&lt;/code&gt; (JSON Schema Draft 2020-12).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Middleware: &lt;code&gt;before()&lt;/code&gt;&lt;/strong&gt;: Execute all registered middleware's &lt;code&gt;before()&lt;/code&gt; hooks in sequence (e.g., logging, metrics, caching).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Module Execution&lt;/strong&gt;: The actual &lt;code&gt;module.execute(inputs, context)&lt;/code&gt; call. This is where your business logic runs.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Output Validation&lt;/strong&gt;: Validate the returned result against the &lt;code&gt;output_schema&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Middleware: &lt;code&gt;after()&lt;/code&gt;&lt;/strong&gt;: Execute all middleware's &lt;code&gt;after()&lt;/code&gt; hooks in reverse order.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Return Result&lt;/strong&gt;: Hand the validated and enriched result back to the caller.&lt;/li&gt;
&lt;/ol&gt;
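&lt;p&gt;Condensed into Python, the eleven steps read as one function (every helper used here, from the registry to the schemas and middlewares, is a placeholder for illustration, not the real apcore API):&lt;/p&gt;

```python
# Condensed sketch of the 11-step pipeline; all collaborators are placeholders.
def execute(module_id, inputs, ctx, registry, acl, middlewares, max_depth=8):
    ctx = ctx.child(module_id)                          # 1. context processing
    if len(ctx.call_chain) > max_depth:                 # 2. safety check
        raise RuntimeError("max call depth exceeded")
    module = registry[module_id]                        # 3. module lookup
    if not acl(ctx.caller_id, module_id):               # 4. ACL check
        raise PermissionError(module_id)
    if module.annotations.get("requires_approval"):     # 5. approval gate
        module.await_approval(ctx)
    module.input_schema.validate(inputs)                # 6. input validation
    for mw in middlewares:                              # 7. middleware before()
        inputs = mw.before(module_id, inputs, ctx)
    result = module.execute(inputs, ctx)                # 8. module execution
    module.output_schema.validate(result)               # 9. output validation
    for mw in reversed(middlewares):                    # 10. middleware after()
        result = mw.after(module_id, result, ctx)
    return result                                       # 11. return result
```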




&lt;h2&gt;
  
  
  Why 11 Steps? (The Real-World Case of &lt;code&gt;apflow&lt;/code&gt;)
&lt;/h2&gt;

&lt;p&gt;You might wonder: "Isn't 11 steps overkill?"&lt;/p&gt;

&lt;p&gt;The answer lies in products like &lt;strong&gt;apflow&lt;/strong&gt;, our distributed task orchestration framework. In a cluster environment, where tasks are moving between nodes, you cannot afford "fuzzy" execution.&lt;/p&gt;

&lt;h3&gt;
  
  
  Traceability at Scale
&lt;/h3&gt;

&lt;p&gt;By enforcing &lt;strong&gt;Step 1 (Context Processing)&lt;/strong&gt;, apflow ensures that a task triggered by a user's web request keeps the same &lt;code&gt;trace_id&lt;/code&gt; even as it moves from the Leader node to a remote Worker node. This is the only way to debug a "hallucinating" Agent in a distributed environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Governance in Autonomy
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step 5 (Approval Gate)&lt;/strong&gt; is critical for apflow's A2A (Agent-to-Agent) support. If an "Analyst Agent" wants to call a "Payment Agent," apflow uses this step to pause the workflow and wait for a human "Manager" to click "Approve" in the dashboard. Without this step, the system would lack a "Safety Valve."&lt;/p&gt;

&lt;h3&gt;
  
  
  Security Without Borders
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step 4 (ACL Check)&lt;/strong&gt; allows apflow to enforce "Role-Based" security. A &lt;code&gt;RestExecutor&lt;/code&gt; node might only be allowed to call &lt;code&gt;common.*&lt;/code&gt; modules, while a &lt;code&gt;SystemInfoExecutor&lt;/code&gt; node might have broader access.&lt;/p&gt;




&lt;h2&gt;
  
  
  Technical Rigor: Middleware &amp;amp; Error Guidance
&lt;/h2&gt;

&lt;p&gt;The pipeline isn't just a set of checks; it’s an extension point. In &lt;strong&gt;Steps 7 and 10&lt;/strong&gt;, you can inject custom logic via Middleware. &lt;/p&gt;

&lt;p&gt;And if any step fails? apcore doesn't just throw a traceback. It provides &lt;strong&gt;Self-Healing Guidance&lt;/strong&gt;. If validation fails at &lt;strong&gt;Step 6&lt;/strong&gt;, the pipeline returns an error with &lt;code&gt;ai_guidance&lt;/code&gt;, telling the Agent exactly how to fix the input and retry.&lt;/p&gt;
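&lt;p&gt;A middleware slots into Steps 7 and 10 through its two hooks. The shape below is an assumed sketch of that contract, using request timing as the example concern:&lt;/p&gt;

```python
import time

# Assumed sketch of the middleware contract (hook names follow the article);
# before() runs ahead of execution, after() runs on the way back out.
class TimingMiddleware:
    def before(self, module_id, inputs, ctx):
        ctx.data.setdefault("timings", {})[module_id] = time.monotonic()
        return inputs  # may also rewrite inputs here (e.g. inject defaults)

    def after(self, module_id, result, ctx):
        started = ctx.data["timings"][module_id]
        result["elapsed_ms"] = (time.monotonic() - started) * 1000.0
        return result
```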




&lt;h2&gt;
  
  
  Conclusion: The Backbone of Trust
&lt;/h2&gt;

&lt;p&gt;Reliability in AI systems is not an accident; it is a structural property of the execution pipeline. By enforcing an 11-step journey, &lt;strong&gt;apcore&lt;/strong&gt; ensures that every AI call is as secure and predictable as a high-performance database transaction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next, we’ll dive into the technical details of Article #12: Strict Schema Enforcement: The Bedrock of AI Reliability.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Article #11 of the &lt;strong&gt;apcore: Building the AI-Perceivable World&lt;/strong&gt; series. Join us as we build the engine of the Agentic era.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/aiperceivable/apcore" rel="noopener noreferrer"&gt;aiperceivable/apcore&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>security</category>
    </item>
    <item>
      <title>The Execution Pipeline: A Secured Journey for Every Call</title>
      <dc:creator>tercel</dc:creator>
      <pubDate>Thu, 23 Apr 2026 23:42:40 +0000</pubDate>
      <link>https://dev.to/tercelyi/the-11-step-execution-pipeline-a-secured-journey-for-every-call-50lp</link>
      <guid>https://dev.to/tercelyi/the-11-step-execution-pipeline-a-secured-journey-for-every-call-50lp</guid>
      <description>&lt;p&gt;When an AI Agent calls a tool, we often think of it as a simple "request-response" event. But in the &lt;strong&gt;apcore&lt;/strong&gt; world, every call is a mission-critical journey. Whether you are invoking a Python module or a Rust microservice, that call passes through a rigorous, &lt;strong&gt;11-step Execution Pipeline&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This pipeline is the "Heart" of the apcore engine. It ensures that every interaction is validated, authorized, and perfectly traceable. In this eleventh article, we’re going to open the hood and see exactly how apcore ensures reliability at scale.&lt;/p&gt;




&lt;h2&gt;
  
  
  The apcore Execution Pipeline
&lt;/h2&gt;

&lt;p&gt;Every call through the &lt;code&gt;Executor.call()&lt;/code&gt; method follows this deterministic path:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Context Processing&lt;/strong&gt;: Create or update the &lt;code&gt;Context&lt;/code&gt;. Generate a &lt;code&gt;trace_id&lt;/code&gt; (if one doesn't exist) and update the &lt;code&gt;caller_id&lt;/code&gt; and &lt;code&gt;call_chain&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Safety Checks&lt;/strong&gt;: Verify the maximum call depth (default 8) to prevent circular calls from crashing the system.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Module Lookup&lt;/strong&gt;: Find the target module in the &lt;strong&gt;Registry&lt;/strong&gt; using its Canonical ID.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;ACL Check&lt;/strong&gt;: Perform the first-match-wins &lt;strong&gt;Access Control List&lt;/strong&gt; check. Does the caller have permission to invoke the target?&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Approval Gate&lt;/strong&gt;: Check if the module is marked as &lt;code&gt;requires_approval&lt;/code&gt;. If so, pause execution and wait for a human or automated response.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Input Validation&lt;/strong&gt;: Validate the incoming &lt;code&gt;dict&lt;/code&gt; against the module's &lt;code&gt;input_schema&lt;/code&gt; (JSON Schema Draft 2020-12).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Middleware: &lt;code&gt;before()&lt;/code&gt;&lt;/strong&gt;: Execute all registered middleware's &lt;code&gt;before()&lt;/code&gt; hooks in sequence (e.g., logging, metrics, caching).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Module Execution&lt;/strong&gt;: The actual &lt;code&gt;module.execute(inputs, context)&lt;/code&gt; call. This is where your business logic runs.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Output Validation&lt;/strong&gt;: Validate the returned result against the &lt;code&gt;output_schema&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Middleware: &lt;code&gt;after()&lt;/code&gt;&lt;/strong&gt;: Execute all middleware's &lt;code&gt;after()&lt;/code&gt; hooks in reverse order.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Return Result&lt;/strong&gt;: Hand the validated and enriched result back to the caller.&lt;/li&gt;
&lt;/ol&gt;
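&lt;p&gt;The eleven steps above can be condensed into a single sketch. This is illustrative pseudocode of the flow, not the actual apcore source; every name and signature here is an assumption:&lt;/p&gt;

```python
# Illustrative sketch of the 11-step pipeline. All names and signatures
# are assumptions for this article, not the real apcore API.
import uuid

MAX_CALL_DEPTH = 8  # step 2: default depth limit

def call(registry, acl_check, middlewares, module_id, inputs, context):
    # 1. Context processing: ensure a trace_id, extend the call_chain.
    context.setdefault("trace_id", str(uuid.uuid4()))
    context.setdefault("call_chain", []).append(module_id)

    # 2. Safety check: a bounded call depth stops circular calls.
    if len(context["call_chain"]) > MAX_CALL_DEPTH:
        raise RuntimeError("max call depth exceeded")

    # 3. Module lookup in the Registry by Canonical ID.
    module = registry[module_id]

    # 4. ACL check: does the caller have permission to invoke the target?
    if not acl_check(context.get("caller_id"), module_id):
        raise PermissionError(f"{module_id}: access denied")

    # 5. Approval gate: pause for a human if the module demands it.
    if module.get("requires_approval") and not context.get("approved"):
        raise RuntimeError("approval required")

    # 6. Input validation against input_schema (elided in this sketch).
    # 7. Middleware before() hooks, in registration order.
    for mw in middlewares:
        mw.before(module_id, inputs, context)

    # 8. Module execution: the business logic itself.
    result = module["execute"](inputs, context)

    # 9. Output validation against output_schema (elided in this sketch).
    # 10. Middleware after() hooks, in reverse order.
    for mw in reversed(middlewares):
        mw.after(module_id, result, context)

    # 11. Return the validated result to the caller.
    return result
```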




&lt;h2&gt;
  
  
  Why 11 Steps? (The Real-World Case of &lt;code&gt;apflow&lt;/code&gt;)
&lt;/h2&gt;

&lt;p&gt;You might wonder: "Isn't 11 steps overkill?"&lt;/p&gt;

&lt;p&gt;The answer lies in products like &lt;strong&gt;apflow&lt;/strong&gt;, our distributed task orchestration framework. In a cluster environment, where tasks move between nodes, you cannot afford "fuzzy" execution.&lt;/p&gt;

&lt;h3&gt;
  
  
  Traceability at Scale
&lt;/h3&gt;

&lt;p&gt;By enforcing &lt;strong&gt;Step 1 (Context Processing)&lt;/strong&gt;, apflow ensures that a task triggered by a user's web request keeps the same &lt;code&gt;trace_id&lt;/code&gt; even as it moves from the Leader node to a remote Worker node. This is the only way to debug a "hallucinating" Agent in a distributed environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Governance in Autonomy
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step 5 (Approval Gate)&lt;/strong&gt; is critical for apflow's A2A (Agent-to-Agent) support. If an "Analyst Agent" wants to call a "Payment Agent," apflow uses this step to pause the workflow and wait for a human "Manager" to click "Approve" in the dashboard. Without this step, the system would lack a "Safety Valve."&lt;/p&gt;

&lt;h3&gt;
  
  
  Security Without Borders
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step 4 (ACL Check)&lt;/strong&gt; allows apflow to enforce "Role-Based" security. A &lt;code&gt;RestExecutor&lt;/code&gt; node might only be allowed to call &lt;code&gt;common.*&lt;/code&gt; modules, while a &lt;code&gt;SystemInfoExecutor&lt;/code&gt; node might have broader access.&lt;/p&gt;
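&lt;p&gt;A first-match-wins check like this can be sketched with glob-style patterns. The rule format below is an illustrative assumption, not apcore's actual ACL syntax:&lt;/p&gt;

```python
# Hypothetical first-match-wins ACL check using glob-style patterns.
# The (caller_pattern, target_pattern, allow) rule format is an assumption.
from fnmatch import fnmatch

def acl_check(rules, caller_id, target_id):
    """Return the verdict of the first rule matching (caller, target)."""
    for caller_pat, target_pat, allow in rules:
        if fnmatch(caller_id, caller_pat) and fnmatch(target_id, target_pat):
            return allow  # first match wins; later rules are ignored
    return False  # default deny

rules = [
    ("rest_executor", "common.*", True),   # RestExecutor: common.* only
    ("rest_executor", "*", False),         # everything else denied
    ("system_info_executor", "*", True),   # broader access
]
```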




&lt;h2&gt;
  
  
  Technical Rigor: Middleware &amp;amp; Error Guidance
&lt;/h2&gt;

&lt;p&gt;The pipeline isn't just a set of checks; it’s an extension point. In &lt;strong&gt;Steps 7 and 10&lt;/strong&gt;, you can inject custom logic via Middleware.&lt;/p&gt;
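&lt;p&gt;A middleware is simply an object with two hooks. The &lt;code&gt;before()&lt;/code&gt;/&lt;code&gt;after()&lt;/code&gt; names come from the pipeline description; everything else in this minimal sketch is an assumption:&lt;/p&gt;

```python
# Minimal middleware sketch; hook names follow the pipeline description,
# the class itself is a hypothetical example.
import time

class TimingMiddleware:
    """Records how long each module call takes."""

    def before(self, module_id, inputs, context):
        # Runs in step 7, before module execution.
        context["_start"] = time.perf_counter()

    def after(self, module_id, result, context):
        # Runs in step 10, after output validation, in reverse order.
        elapsed = time.perf_counter() - context.pop("_start")
        context.setdefault("timings", {})[module_id] = elapsed
```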

&lt;p&gt;And if any step fails? apcore doesn't just throw a traceback. It provides &lt;strong&gt;Self-Healing Guidance&lt;/strong&gt;. If validation fails at &lt;strong&gt;Step 6&lt;/strong&gt;, the pipeline returns an error with &lt;code&gt;ai_guidance&lt;/code&gt;, telling the Agent exactly how to fix the input and retry.&lt;/p&gt;
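&lt;p&gt;The exact error structure is defined by apcore, but a self-healing validation error might look something like this sketch (all field names here are assumptions):&lt;/p&gt;

```python
# Hypothetical shape of a self-healing validation error. The ai_guidance
# concept comes from the article; the exact structure is an assumption.
def validation_error(module_id, field, expected, got):
    return {
        "ok": False,
        "error": "input_validation_failed",
        "module": module_id,
        "detail": f"field '{field}' expected {expected}, got {got}",
        # Machine-readable repair hint the Agent can act on directly.
        "ai_guidance": f"Resend the call with '{field}' as {expected}.",
    }

err = validation_error("executor.email.send", "to", "a string email address", "null")
```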




&lt;h2&gt;
  
  
  Conclusion: The Backbone of Trust
&lt;/h2&gt;

&lt;p&gt;Reliability in AI systems is not an accident; it is a structural property of the execution pipeline. By enforcing an 11-step journey, &lt;strong&gt;apcore&lt;/strong&gt; ensures that every AI call is as secure and predictable as a high-performance database transaction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next, we’ll dive into the technical details of Article #12: Strict Schema Enforcement: The Bedrock of AI Reliability.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Article #11 of the &lt;strong&gt;apcore: Building the AI-Perceivable World&lt;/strong&gt; series. Join us as we build the engine of the Agentic era.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/aiperceivable/apcore" rel="noopener noreferrer"&gt;aiperceivable/apcore&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>security</category>
    </item>
    <item>
      <title>Directory-as-ID: Scaling Module Discovery Without Configuration</title>
      <dc:creator>tercel</dc:creator>
      <pubDate>Thu, 23 Apr 2026 10:42:20 +0000</pubDate>
      <link>https://dev.to/tercelyi/directory-as-id-scaling-module-discovery-without-configuration-492p</link>
      <guid>https://dev.to/tercelyi/directory-as-id-scaling-module-discovery-without-configuration-492p</guid>
      <description>&lt;p&gt;In the previous volume, we explored the vision of an "AI-Perceivable" world. Now, it’s time to go under the hood. The first technical pillar of the &lt;strong&gt;apcore&lt;/strong&gt; protocol is a deceptively simple idea: &lt;strong&gt;Directory-as-ID&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In a traditional microservices or modular architecture, you often have a central registry, a massive YAML configuration file, or a complex dependency injection container. As your system grows from 10 modules to 1,000, this central "phonebook" becomes a bottleneck. It’s the source of merge conflicts, naming collisions, and "Scaling Rot."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;apcore&lt;/strong&gt; solves this by making the file system the source of truth. In this tenth article, we’ll look at the algorithm behind Directory-as-ID and why it’s essential for scaling AI-ready systems.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Algorithm: From Path to Canonical ID
&lt;/h2&gt;

&lt;p&gt;The principle is straightforward: &lt;strong&gt;The relative path of a module file is its unique identity.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you have a module root directory (e.g., &lt;code&gt;extensions/&lt;/code&gt;), apcore scans the files and applies a deterministic mapping:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Remove the Root&lt;/strong&gt;: &lt;code&gt;extensions/executor/email/send.py&lt;/code&gt; -&amp;gt; &lt;code&gt;executor/email/send.py&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Remove Extension&lt;/strong&gt;: &lt;code&gt;executor/email/send.py&lt;/code&gt; -&amp;gt; &lt;code&gt;executor/email/send&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Normalize Separators&lt;/strong&gt;: &lt;code&gt;executor/email/send&lt;/code&gt; -&amp;gt; &lt;code&gt;executor.email.send&lt;/code&gt; (The Canonical ID)&lt;/li&gt;
&lt;/ol&gt;
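&lt;p&gt;The three-step mapping can be written as a small function. This is a sketch of the algorithm as described above, not apcore's actual implementation:&lt;/p&gt;

```python
# Sketch of the Directory-as-ID mapping; illustrative, not the real
# apcore Registry code.
from pathlib import PurePosixPath

def path_to_canonical_id(path, root="extensions"):
    p = PurePosixPath(path)
    # 1. Remove the root directory if present.
    parts = p.parts[1:] if p.parts and p.parts[0] == root else p.parts
    # 2. Remove the file extension from the last component.
    parts = parts[:-1] + (PurePosixPath(parts[-1]).stem,)
    # 3. Normalize separators: join path segments with dots.
    return ".".join(parts)
```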

&lt;h3&gt;
  
  
  Why this matters for AI:
&lt;/h3&gt;

&lt;p&gt;AI Agents are highly sensitive to names. By using a hierarchical, directory-based naming convention, you naturally create &lt;strong&gt;Namespaces&lt;/strong&gt;. An Agent can quickly differentiate between &lt;code&gt;executor.user.delete&lt;/code&gt; and &lt;code&gt;admin.user.delete&lt;/code&gt; because the hierarchy provides the context.&lt;/p&gt;




&lt;h2&gt;
  
  
  Case Study: Zero-Config in &lt;code&gt;apexe&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;The power of Directory-as-ID is best seen in real-world products like &lt;strong&gt;apexe&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  apexe: AI-fying the CLI Universe
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;apexe&lt;/strong&gt; is a tool that scans existing CLIs (like &lt;code&gt;git&lt;/code&gt; or &lt;code&gt;docker&lt;/code&gt;) and wraps them into apcore modules. When you run &lt;code&gt;apexe scan git&lt;/code&gt;, it generates a hierarchy of modules under your &lt;code&gt;~/.apexe/modules/&lt;/code&gt; directory. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;git commit&lt;/code&gt; becomes &lt;code&gt;cli.git.commit&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;git push&lt;/code&gt; becomes &lt;code&gt;cli.git.push&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because of Directory-as-ID, apexe doesn't need to manage a database of IDs. It simply writes the files to the right folders, and the apcore Registry "perceives" the entire CLI command tree instantly. This enables &lt;strong&gt;Dynamic Skill Discovery&lt;/strong&gt;: if you install a new CLI tool and scan it, your Agent can perceive it immediately without a single server restart.&lt;/p&gt;




&lt;h2&gt;
  
  
  Technical Rigor: Handling Multi-Language Drift
&lt;/h2&gt;

&lt;p&gt;A core challenge of a language-agnostic standard is that different languages have different naming conventions. Python likes &lt;code&gt;snake_case&lt;/code&gt;, while TypeScript prefers &lt;code&gt;camelCase&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The apcore protocol defines strict &lt;strong&gt;ID Normalization Rules&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Normalization&lt;/strong&gt;: All IDs are converted to a "Canonical" form (lowercase, snake_case) for the Registry.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Language Mapping&lt;/strong&gt;: Each SDK (Python, TS, Rust) handles the translation between the Canonical ID and the local file name (e.g., &lt;code&gt;SendEmail.ts&lt;/code&gt; maps to &lt;code&gt;send_email&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;
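&lt;p&gt;A normalization step like this is straightforward to sketch. The exact rules belong to the apcore protocol; the regex below is an illustrative assumption:&lt;/p&gt;

```python
# Sketch of canonical ID normalization for multi-language file names;
# the precise rules are an assumption based on the description above.
import re

def to_canonical(name):
    # Strip a known source extension, then convert CamelCase/camelCase
    # to lowercase snake_case for the Registry.
    stem = re.sub(r"\.(py|ts|rs|go)$", "", name)
    snake = re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", stem)
    return snake.lower()
```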

&lt;p&gt;This ensures that even in a polyglot enterprise, the AI Agent sees a single, consistent address space.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion: Scale is a Design Constraint
&lt;/h2&gt;

&lt;p&gt;Directory-as-ID is more than a convenience; it’s a design constraint for the Agentic Era. It enables &lt;strong&gt;Zero-Config Discovery&lt;/strong&gt;, eliminates registry bottlenecks, and provides a natural namespace for AI perception.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In the next article, we’ll dive into the heart of the engine: The 11-Step Execution Pipeline.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Article #10 of the &lt;strong&gt;apcore: Building the AI-Perceivable World&lt;/strong&gt; series. Join us as we go deep into the protocol.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/aiperceivable/apcore" rel="noopener noreferrer"&gt;aiperceivable/apcore&lt;/a&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>microservices</category>
      <category>softwareengineering</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Building for the Next 10 Years: The Design Principles of apcore</title>
      <dc:creator>tercel</dc:creator>
      <pubDate>Wed, 22 Apr 2026 12:40:38 +0000</pubDate>
      <link>https://dev.to/tercelyi/building-for-the-next-10-years-the-design-principles-of-apcore-3777</link>
      <guid>https://dev.to/tercelyi/building-for-the-next-10-years-the-design-principles-of-apcore-3777</guid>
      <description>&lt;p&gt;We’ve reached the end of Volume I of our series. We’ve explored the problems with "Vibe-based" engineering, the rise of the Cognitive Interface, and the immediate power of the apcore Adapter ecosystem. &lt;/p&gt;

&lt;p&gt;But as any experienced engineer knows, a standard is only as good as its foundational principles. In the fast-moving world of AI, where frameworks disappear every six months, how do we build something that will still be relevant in 2030?&lt;/p&gt;

&lt;p&gt;In this ninth and final post of our &lt;strong&gt;Manifesto&lt;/strong&gt;, we outline the five design principles that guide the apcore standard.&lt;/p&gt;




&lt;h2&gt;
  
  
  Principle #1: Schema-Enforced Everything
&lt;/h2&gt;

&lt;p&gt;In the early days of the web, "Postel’s Law" (be conservative in what you send, liberal in what you accept) was the rule. For AI Agents, we believe the opposite is true: &lt;strong&gt;Be strict in everything.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Reliability in an Agentic system is impossible without strict contracts. In apcore, a module without a schema is not a module—it is a bug. By enforcing &lt;strong&gt;JSON Schema&lt;/strong&gt; at the protocol level, we ensure that the AI Agent and the code always speak a deterministic, validated language.&lt;/p&gt;

&lt;h2&gt;
  
  
  Principle #2: Directory-as-ID (Zero-Config DX)
&lt;/h2&gt;

&lt;p&gt;Developer Experience (DX) is not a luxury; it is a security feature. When developers have to manually register every tool in a central configuration file, "Scaling Rot" sets in. People forget to update the config, IDs conflict, and the system becomes a mess.&lt;/p&gt;

&lt;p&gt;apcore uses the file system as the source of truth. The path &lt;code&gt;extensions/executor/email/send.py&lt;/code&gt; automatically becomes the Canonical ID &lt;code&gt;executor.email.send&lt;/code&gt;. This makes discovery natural, scalable, and impossible to "forget."&lt;/p&gt;

&lt;h2&gt;
  
  
  Principle #3: Progressive Disclosure of Metadata
&lt;/h2&gt;

&lt;p&gt;AI Agents have limited context windows and expensive tokens. You cannot dump your entire 5,000-page API manual into every prompt. &lt;/p&gt;

&lt;p&gt;apcore is designed for &lt;strong&gt;Progressive Disclosure&lt;/strong&gt;. The Agent scans short &lt;code&gt;descriptions&lt;/code&gt; for discovery, checks &lt;code&gt;annotations&lt;/code&gt; for planning, and only reads the full &lt;code&gt;documentation&lt;/code&gt; when it’s ready to execute. This "just-in-time" metadata delivery is essential for large-scale Agentic systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Principle #4: Language-Agnostic &amp;amp; Protocol-First
&lt;/h2&gt;

&lt;p&gt;An AI-Perceivable module should not be a "Python feature." It is a structural property of the software. Whether your backend is in Rust, TypeScript, or Go, it must project the same "Cognitive Interface." &lt;/p&gt;

&lt;p&gt;By prioritizing the &lt;strong&gt;Protocol Specification&lt;/strong&gt; over any individual SDK, we ensure that an enterprise can build a heterogeneous, multi-language workforce that speaks a single semantic language.&lt;/p&gt;

&lt;h2&gt;
  
  
  Principle #5: Governance as a First-Class Citizen
&lt;/h2&gt;

&lt;p&gt;Safety is not an "add-on" you prompt-engineer at the end. It must be baked into the lifecycle. &lt;strong&gt;Access Control Lists (ACL)&lt;/strong&gt;, &lt;strong&gt;Human-in-the-Loop (Approval Gates)&lt;/strong&gt;, and &lt;strong&gt;Structured Error Guidance&lt;/strong&gt; are core primitives of the apcore protocol. &lt;/p&gt;

&lt;p&gt;We don't "ask" the AI to be safe; we enforce safety at the runtime level.&lt;/p&gt;




&lt;h2&gt;
  
  
  Summary of Volume I: The Manifesto
&lt;/h2&gt;

&lt;p&gt;We started this series with a goal: to move from "Prompting Tools" to "Engineering Modules." Over these nine articles, we’ve laid out the vision for an &lt;strong&gt;AI-Perceivable World&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We’ve shown that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reliability comes from &lt;strong&gt;Enforcement&lt;/strong&gt;, not better prompts.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Cognitive Interface&lt;/strong&gt; is the next layer of the stack.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Adapter Pattern&lt;/strong&gt; allows one module to serve MCP, A2A, CLI, and Web.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What’s Next: Volume II — The Core Protocol Deep Dive
&lt;/h2&gt;

&lt;p&gt;Now that we’ve established the &lt;em&gt;Why&lt;/em&gt;, it’s time to look at the &lt;em&gt;How&lt;/em&gt;. In our next volume, we will go "Under the Hood." We’ll look at the actual algorithms behind Directory-as-ID, the 11-step Execution Pipeline, and the math of pattern-based ACL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stay tuned. The deep dive begins.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Article #9 of the &lt;strong&gt;apcore: Building the AI-Perceivable World&lt;/strong&gt; series. Join us in building the architecture for the next decade.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/aiperceivable/apcore" rel="noopener noreferrer"&gt;aiperceivable/apcore&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>The Death of "String-Based" Descriptions in AI Integration</title>
      <dc:creator>tercel</dc:creator>
      <pubDate>Mon, 20 Apr 2026 11:26:32 +0000</pubDate>
      <link>https://dev.to/tercelyi/the-death-of-string-based-descriptions-in-ai-integration-327l</link>
      <guid>https://dev.to/tercelyi/the-death-of-string-based-descriptions-in-ai-integration-327l</guid>
      <description>&lt;p&gt;In the early days of building AI tools, we all followed the same pattern:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Define a function.&lt;/li&gt;
&lt;li&gt; Write a clever docstring like: &lt;em&gt;"This tool is very fast and deletes users securely."&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt; Pass it to the LLM.&lt;/li&gt;
&lt;li&gt; Pray it understands what "very fast" and "securely" mean.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As we move into 2026, it’s time to admit the truth: &lt;strong&gt;Free-form string descriptions are the #1 reason AI Agents fail.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you rely on "Vibes" (Prompt Engineering) to define your tools, you are essentially asking the AI to guess your intent. In this eighth post of our &lt;strong&gt;apcore&lt;/strong&gt; series, we explain why we’re moving beyond "String-Based" descriptions toward a world of &lt;strong&gt;Structured Metadata&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Illusion of Clarity
&lt;/h2&gt;

&lt;p&gt;LLMs are semantic engines. They look for patterns in language. If you have two tools in your system—&lt;code&gt;remove_user&lt;/code&gt; and &lt;code&gt;delete_account&lt;/code&gt;—an LLM might treat them as synonyms. Even if you add a sentence explaining the difference, the model’s internal weights might still bias it toward one over the other based on its training data.&lt;/p&gt;

&lt;p&gt;This is what we call &lt;strong&gt;"Description Drift."&lt;/strong&gt; As your engineering team grows, different developers write descriptions in different styles. One uses emojis, another uses technical jargon, and a third uses vague adjectives. &lt;/p&gt;

&lt;p&gt;To the AI, your "Cognitive Interface" starts to look like a messy, unorganized library. The result? The Agent calls the wrong tool at the wrong time, leading to security breaches or data loss.&lt;/p&gt;




&lt;h2&gt;
  
  
  apcore’s Dual-Layered Metadata
&lt;/h2&gt;

&lt;p&gt;At &lt;strong&gt;apcore&lt;/strong&gt;, we’ve replaced the single-string description with a &lt;strong&gt;Dual-Layered Metadata Model&lt;/strong&gt;. We borrow this concept from the way humans learn complex skills: we scan the table of contents first, and only read the detailed manual when we’re ready to act.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. The Discovery Layer (&lt;code&gt;description&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;A mandatory, short string (max 200 characters). This is optimized for the AI's "Search" phase. It tells the AI: &lt;em&gt;"This tool exists, and this is its high-level purpose."&lt;/em&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Purpose&lt;/strong&gt;: Discovery and RAG (Retrieval-Augmented Generation).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. The Cognitive Layer (&lt;code&gt;documentation&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;A long-form, Markdown-supported field (up to 5000 characters). This is the "Manual." It contains detailed use cases, constraints, business logic, and "What NOT to do" warnings. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Purpose&lt;/strong&gt;: Planning and Precision. The AI only "reads" this once it has tentatively selected the tool for the task.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;PaymentModule&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Module&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Discovery: Fast, cheap to read
&lt;/span&gt;    &lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Process credit card payments via Stripe.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="c1"&gt;# Cognition: Detailed, read only when needed
&lt;/span&gt;    &lt;span class="n"&gt;documentation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    ## Usage Rules
    - Only use for &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;USD&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; transactions. For others, use `executor.fx.payment`.
    - Maximum single charge: $5,000.
    - Requires `stripe_api_key` in the system configuration.

    ## Common Pitfalls
    - Do NOT call this twice for the same order_id; it is not idempotent by default.
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Schema-Enforced Descriptions
&lt;/h2&gt;

&lt;p&gt;In apcore, having a description and a schema isn't a "best practice"—it's an &lt;strong&gt;Enforcement&lt;/strong&gt;. You literally cannot register a module in an apcore Registry without providing these metadata fields.&lt;/p&gt;

&lt;p&gt;This ensures that your system is &lt;strong&gt;Self-Documenting&lt;/strong&gt; for AI. Every time a new developer adds a module, they are forced to define its "Cognitive Interface." This prevents "invisible tools" from cluttering your system and ensures that the AI always has the information it needs to succeed.&lt;/p&gt;




&lt;h2&gt;
  
  
  Moving Beyond "Adjective Engineering"
&lt;/h2&gt;

&lt;p&gt;Instead of telling the AI that a tool is "safe," we use &lt;strong&gt;Behavioral Annotations&lt;/strong&gt;. We mark it as &lt;code&gt;destructive=False&lt;/code&gt; and &lt;code&gt;requires_approval=True&lt;/code&gt;. These aren't just strings; they are structured primitives that the apcore Executor uses to govern the call.&lt;/p&gt;

&lt;p&gt;By moving from "Strings" to "Structures," we reduce the cognitive load on the LLM. It no longer has to "guess" the intent of your code; it simply "perceives" the contract.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In our next article, we wrap up Volume I by looking at the "Design Principles of apcore" that will guide us for the next 10 years.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Article #8 of the &lt;strong&gt;apcore: Building the AI-Perceivable World&lt;/strong&gt; series. Engineering metadata is the foundation of reliable AI.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/aiperceivable/apcore" rel="noopener noreferrer"&gt;aiperceivable/apcore&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>llm</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Standardizing "Intelligence": The 3-Layer Metadata Philosophy</title>
      <dc:creator>tercel</dc:creator>
      <pubDate>Sat, 18 Apr 2026 00:12:45 +0000</pubDate>
      <link>https://dev.to/tercelyi/standardizing-intelligence-the-3-layer-metadata-philosophy-392i</link>
      <guid>https://dev.to/tercelyi/standardizing-intelligence-the-3-layer-metadata-philosophy-392i</guid>
      <description>&lt;p&gt;In our previous posts, we’ve discussed why AI Agents fail when they rely on "vibes" and why they need a "Cognitive Interface." But what does "Intelligence" actually look like at the code level? &lt;/p&gt;

&lt;p&gt;If you ask ten developers how to describe a tool to an AI, you’ll get ten different answers. Some will focus on technical types, others on flowery descriptions, and some on security. &lt;/p&gt;

&lt;p&gt;At &lt;strong&gt;apcore&lt;/strong&gt;, we’ve standardized this "Intelligence" into a &lt;strong&gt;3-Layer Metadata Stack&lt;/strong&gt;. By separating technical syntax from behavioral governance and tactical wisdom, we ensure that an AI Agent perceives your module with 360-degree clarity.&lt;/p&gt;




&lt;h2&gt;
  
  
  The apcore 3-Layer Stack
&lt;/h2&gt;

&lt;p&gt;We visualize the "Intelligence" of a module as a stack that moves from &lt;strong&gt;Required&lt;/strong&gt; to &lt;strong&gt;Tactical&lt;/strong&gt;:&lt;/p&gt;

&lt;h3&gt;
  
  
  Layer 1: The Core (Syntax &amp;amp; Discovery)
&lt;/h3&gt;

&lt;p&gt;This is the "bare minimum" for a module to exist in the apcore ecosystem. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;input_schema&lt;/code&gt;&lt;/strong&gt;: Exactly what the AI must send.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;output_schema&lt;/code&gt;&lt;/strong&gt;: Exactly what the AI will receive.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;description&lt;/code&gt;&lt;/strong&gt;: A short "blurb" for the AI's search engine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Goal&lt;/strong&gt;: Precision. If the AI doesn't get the syntax right, nothing else matters. By enforcing &lt;strong&gt;JSON Schema Draft 2020-12&lt;/strong&gt;, we provide a universal language that any LLM can understand.&lt;/p&gt;

&lt;h3&gt;
  
  
  Layer 2: The Annotations (Governance &amp;amp; Behavior)
&lt;/h3&gt;

&lt;p&gt;Once the AI understands &lt;em&gt;how&lt;/em&gt; to call the module, it needs to understand &lt;em&gt;whether&lt;/em&gt; it should call it. This layer defines the "Personality" and "Safety Profile" of your code.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;readonly&lt;/code&gt;&lt;/strong&gt;: Is it safe to call this multiple times for information?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;destructive&lt;/code&gt;&lt;/strong&gt;: Will this delete or overwrite data?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;requires_approval&lt;/code&gt;&lt;/strong&gt;: Does a human need to click "Yes" before this runs?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;idempotent&lt;/code&gt;&lt;/strong&gt;: Can the AI safely retry if the connection drops?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Goal&lt;/strong&gt;: Governance. We move security and policy from the prompt into the protocol.&lt;/p&gt;

&lt;h3&gt;
  
  
  Layer 3: The Extensions (Tactical Wisdom)
&lt;/h3&gt;

&lt;p&gt;This is where the "Senior Engineer" lives. This layer provides the subtle context that prevents the AI from making logical mistakes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;x-when-to-use&lt;/code&gt;&lt;/strong&gt;: Positive guidance for the Agent's planner.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;x-when-not-to-use&lt;/code&gt;&lt;/strong&gt;: Negative guidance to prevent common misfires.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;x-common-mistakes&lt;/code&gt;&lt;/strong&gt;: Pitfalls discovered during development.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Goal&lt;/strong&gt;: Tactical Wisdom. We inject human experience directly into the module's metadata.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why a "Stacked" Approach?
&lt;/h2&gt;

&lt;p&gt;Traditional AI tools often dump all of this into a single &lt;code&gt;description&lt;/code&gt; string. This creates &lt;strong&gt;Cognitive Overload&lt;/strong&gt;. The LLM has to parse the syntax, the security rules, and the usage tips all at once.&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;apcore&lt;/strong&gt;, we use &lt;strong&gt;Progressive Disclosure&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Agent's "Discovery" phase only sees &lt;strong&gt;Layer 1&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The Agent's "Planning" phase loads &lt;strong&gt;Layer 2&lt;/strong&gt; to check for safety and retries.&lt;/li&gt;
&lt;li&gt;The Agent's "Execution" phase loads &lt;strong&gt;Layer 3&lt;/strong&gt; to ensure it doesn't fall into known traps.&lt;/li&gt;
&lt;/ol&gt;
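&lt;p&gt;In code, progressive disclosure amounts to serving a different slice of the metadata per phase. The field groupings below are assumptions based on the three layers described above:&lt;/p&gt;

```python
# Sketch of progressive disclosure: each agent phase receives only the
# metadata layers it needs. Field groupings are assumptions based on the
# 3-Layer Stack described in this article.
LAYER_1 = ["id", "description", "input_schema", "output_schema"]

LAYERS = {
    "discovery": LAYER_1,                                  # Layer 1 only
    "planning": LAYER_1 + ["annotations"],                 # + Layer 2
    "execution": LAYER_1 + ["annotations", "metadata"],    # + Layer 3
}

def disclose(module, phase):
    """Return the slice of module metadata appropriate for the phase."""
    return {k: module[k] for k in LAYERS[phase] if k in module}
```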

&lt;p&gt;By stacking the metadata, we reduce token usage and significantly increase the reliability of the Agent's reasoning.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Complete "Intelligent" Module
&lt;/h2&gt;

&lt;p&gt;Here is what a fully-realized apcore module looks like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;SensitiveTransferModule&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Module&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Layer 1: Core
&lt;/span&gt;    &lt;span class="n"&gt;input_schema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;TransferInput&lt;/span&gt;
    &lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Transfer funds to an external IBAN.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="c1"&gt;# Layer 2: Annotations
&lt;/span&gt;    &lt;span class="n"&gt;annotations&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ModuleAnnotations&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;destructive&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;requires_approval&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# Safety gate
&lt;/span&gt;        &lt;span class="n"&gt;idempotent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Layer 3: Extensions (AI Wisdom)
&lt;/span&gt;    &lt;span class="n"&gt;metadata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;x-when-not-to-use&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Do not use for internal account transfers.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;x-common-mistakes&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Ensure the IBAN includes the country code.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;x-preconditions&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User must be MFA authenticated.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion: Engineering Intelligence
&lt;/h2&gt;

&lt;p&gt;"Intelligence" in the Agentic era is not a magic property of the model; it is an &lt;strong&gt;Engineering Standard&lt;/strong&gt; of the module. When you build with the apcore 3-Layer Philosophy, you aren't just writing code—you are engineering a "Skill" that any AI can perceive and use with professional precision.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In our next article, we’ll tackle the root cause of AI hallucinations: "The Death of 'String-Based' Descriptions in AI Integration."&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Article #7 of the &lt;strong&gt;apcore: Building the AI-Perceivable World&lt;/strong&gt; series. Join us in standardizing the future of AI interaction.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/aiperceivable/apcore" rel="noopener noreferrer"&gt;aiperceivable/apcore&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>"Beyond the Brain: Exposing AI Modules to REST, gRPC, and GraphQL"</title>
      <dc:creator>tercel</dc:creator>
      <pubDate>Fri, 17 Apr 2026 12:03:51 +0000</pubDate>
      <link>https://dev.to/tercelyi/beyond-the-brain-exposing-ai-modules-to-rest-grpc-and-graphql-1goh</link>
      <guid>https://dev.to/tercelyi/beyond-the-brain-exposing-ai-modules-to-rest-grpc-and-graphql-1goh</guid>
      <description>&lt;p&gt;We’ve seen how &lt;strong&gt;apcore&lt;/strong&gt; modules can be called by Claude via MCP, by humans via the CLI, and by other Agents via A2A. But in the enterprise, there is a massive world of legacy systems, mobile apps, and web frontends that still speak the traditional languages of the web: &lt;strong&gt;REST, gRPC, and GraphQL&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The challenge for most developers is "Interface Duplication." You build a tool for your AI Agent, and then you have to rewrite the same validation, security, and documentation logic to expose it as a REST API for your React frontend.&lt;/p&gt;

&lt;p&gt;In this sixth article of our series, we look at how apcore turns your AI-Perceivable modules into universal web services using framework adapters like &lt;code&gt;fastapi-apcore&lt;/code&gt; and &lt;code&gt;flask-apcore&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The "Surface" Philosophy
&lt;/h2&gt;

&lt;p&gt;In apcore, we treat different protocols as &lt;strong&gt;Surfaces&lt;/strong&gt;. A surface is a thin layer that projects the internal logic of an apcore module into a specific network format. &lt;/p&gt;

&lt;p&gt;By using framework-specific integrations, you can project your entire module &lt;strong&gt;Registry&lt;/strong&gt; onto the web with zero duplication.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. RESTful Auto-Mapping
&lt;/h3&gt;

&lt;p&gt;With adapters like &lt;code&gt;flask-apcore&lt;/code&gt;, your module &lt;code&gt;executor.user.get_profile&lt;/code&gt; automatically becomes a REST endpoint:&lt;br&gt;
&lt;code&gt;GET /api/v1/executor/user/get_profile&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;input_schema&lt;/code&gt; is used to validate the incoming JSON body or query parameters, and the &lt;code&gt;output_schema&lt;/code&gt; ensures the response is consistent. This is done at the middleware level, ensuring your business logic stays "pure."&lt;/p&gt;
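&lt;p&gt;The ID-to-path mapping described above can be sketched in a few lines of plain Python (a hypothetical illustration of the convention, not the adapter's actual routing code):&lt;/p&gt;

```python
# Sketch: project a dotted canonical module ID onto a REST path.
# The prefix and segment rules are assumptions based on the example above.

def module_id_to_path(module_id: str, prefix: str = "/api/v1") -> str:
    """Turn a dotted canonical ID into a URL path, segment by segment."""
    return prefix + "/" + module_id.replace(".", "/")

# The example from the article:
path = module_id_to_path("executor.user.get_profile")
# -> "/api/v1/executor/user/get_profile"
```

&lt;p&gt;Because the mapping is mechanical, every registered module gets a predictable endpoint without any per-route configuration.&lt;/p&gt;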
&lt;h3&gt;
  
  
  2. gRPC for High Performance
&lt;/h3&gt;

&lt;p&gt;For internal microservices, apcore integrations can generate Protobuf definitions on the fly, allowing your legacy Java or Go services to call your AI-Perceivable Python modules with binary efficiency.&lt;/p&gt;


&lt;h2&gt;
  
  
  Why Web Exposure is Better with apcore
&lt;/h2&gt;

&lt;p&gt;When you use an apcore framework adapter, you aren't just getting an auto-generated router. You are getting the entire apcore &lt;strong&gt;Execution Pipeline&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Unified ACL&lt;/strong&gt;: The same pattern-based access control that protects your modules from AI hallucinations also protects them from unauthorized web requests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trace Propagation&lt;/strong&gt;: If a web request triggers an apcore module, the adapter captures the &lt;code&gt;trace-id&lt;/code&gt; (or generates a new one) and propagates it through any internal module-to-module calls.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Schema-Driven Documentation&lt;/strong&gt;: Your apcore &lt;code&gt;description&lt;/code&gt; and &lt;code&gt;documentation&lt;/code&gt; fields can be automatically exported as Swagger/OpenAPI docs for your frontend team.&lt;/li&gt;
&lt;/ul&gt;
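&lt;p&gt;The trace-propagation behavior in the list above amounts to a simple capture-or-generate rule. A minimal sketch (the header name and ID format are assumptions, not the adapter's actual implementation):&lt;/p&gt;

```python
# Reuse the caller's trace-id if the request carries one; otherwise mint
# a fresh one so internal module-to-module calls stay correlated.
import uuid

def resolve_trace_id(headers: dict) -> str:
    existing = headers.get("trace-id")
    return existing if existing else str(uuid.uuid4())

# A request that already carries a trace keeps it:
resolve_trace_id({"trace-id": "abc-123"})  # -> "abc-123"
```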


&lt;h2&gt;
  
  
  Code Showcase: The Universal Backend
&lt;/h2&gt;

&lt;p&gt;Imagine you are building a simple "Weather Module" using FastAPI. Here is how you expose it to the world:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# The AI-Perceivable Module
&lt;/span&gt;&lt;span class="nd"&gt;@module&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;common.weather.get&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Get current weather for a city.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_weather&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;city&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;temp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;condition&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Sunny&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# The Web Surface (FastAPI)
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;fastapi&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;FastAPI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;fastapi_apcore&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;register_routes&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;FastAPI&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nf"&gt;register_routes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;registry&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# All modules now have REST endpoints automatically
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One definition. Your AI Agent calls it. Your CLI tool calls it. Your React app calls it. &lt;strong&gt;Zero duplication.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion: The Bridge to the Future
&lt;/h2&gt;

&lt;p&gt;The "Agentic Era" doesn't mean we throw away the web. It means we upgrade the web to be AI-Perceivable. Framework adapters ensure that your AI investment is also a "Web 2.0" investment, creating a bridge between your legacy infrastructure and your autonomous future.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now that we’ve seen the power of the apcore Adapter Ecosystem, it’s time to go under the hood. In the next article, we’ll dive into the 3-Layer Metadata Philosophy that makes all of this possible.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Article #6 of the &lt;strong&gt;apcore: Building the AI-Perceivable World&lt;/strong&gt; series. Build once, serve everywhere.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/aiperceivable/apcore" rel="noopener noreferrer"&gt;aiperceivable/apcore&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>api</category>
      <category>architecture</category>
      <category>python</category>
    </item>
    <item>
      <title>The Agent Workforce: Enabling Autonomous Agent-to-Agent Collaboration</title>
      <dc:creator>tercel</dc:creator>
      <pubDate>Thu, 16 Apr 2026 10:54:38 +0000</pubDate>
      <link>https://dev.to/tercelyi/the-agent-workforce-enabling-autonomous-agent-to-agent-collaboration-17o1</link>
      <guid>https://dev.to/tercelyi/the-agent-workforce-enabling-autonomous-agent-to-agent-collaboration-17o1</guid>
      <description>&lt;p&gt;Until now, most AI interactions have followed a human-to-AI pattern: you type a prompt, and the AI calls a tool. But as we move toward the next phase of the Agentic era, a new pattern is emerging: &lt;strong&gt;Agent-to-Agent (A2A) Collaboration.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine a "CEO Agent" that needs to file a legal report. Instead of doing everything itself, it discovers a "Legal Specialist Agent," perceives its skills, and delegates the task.&lt;/p&gt;

&lt;p&gt;The challenge? How do these Agents speak the same language? How does one Agent "perceive" the capabilities and safety boundaries of another?&lt;/p&gt;

&lt;p&gt;In this fifth article of our series, we move beyond humans and look at how apcore provides the "Social Contract" for an autonomous Agentic workforce via the &lt;strong&gt;apcore-a2a&lt;/strong&gt; adapter.&lt;/p&gt;




&lt;h2&gt;
  
  
  From Tools to Skills: The Agent Card
&lt;/h2&gt;

&lt;p&gt;In the traditional world, we think of code as "Tools" (functions). In the Agent-to-Agent world, we think of code as &lt;strong&gt;"Skills"&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;apcore-a2a&lt;/strong&gt; solves this by automatically generating a standards-compliant &lt;strong&gt;Agent Card&lt;/strong&gt; (&lt;code&gt;/.well-known/agent.json&lt;/code&gt;) from your apcore module metadata. This card tells other Agents on the network:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Identity&lt;/strong&gt;: "I am the &lt;code&gt;legal.document_analyzer&lt;/code&gt; Agent."&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Perception&lt;/strong&gt;: "I handle PDF and Word docs. My &lt;code&gt;description&lt;/code&gt; is X, and my &lt;code&gt;documentation&lt;/code&gt; is Y."&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Governance&lt;/strong&gt;: "I am &lt;code&gt;readonly&lt;/code&gt; and do not require approval for discovery, but I do require approval for &lt;code&gt;destructive&lt;/code&gt; edits."&lt;/li&gt;
&lt;/ol&gt;
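&lt;p&gt;To make the three parts concrete, here is a rough sketch of the shape such a card could take (field names are illustrative; the actual &lt;code&gt;/.well-known/agent.json&lt;/code&gt; generated by apcore-a2a may differ):&lt;/p&gt;

```python
# Illustrative Agent Card structure covering Identity, Perception,
# and Governance from the list above. Field names are assumptions.
agent_card = {
    "name": "legal.document_analyzer",                       # Identity
    "description": "Analyzes PDF and Word documents.",       # Perception
    "skills": [
        {
            "id": "legal.document_analyzer.review",
            "description": "Review a document and flag risky clauses.",
            "tags": ["readonly"],        # Governance: safe to discover
        },
        {
            "id": "legal.document_analyzer.redact",
            "description": "Redact clauses in place.",
            "tags": ["destructive"],     # Governance: requires approval
        },
    ],
}
```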




&lt;h2&gt;
  
  
  The A2A Task Lifecycle
&lt;/h2&gt;

&lt;p&gt;Unlike simple API calls, Agentic tasks are often long-running. &lt;strong&gt;apcore-a2a&lt;/strong&gt; manages the entire &lt;strong&gt;Task Lifecycle&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Submitted&lt;/strong&gt;: The task is received and validated against the &lt;code&gt;input_schema&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Working&lt;/strong&gt;: The task is executing in the background.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Completed/Failed&lt;/strong&gt;: The final result or error is captured.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Input-Required&lt;/strong&gt;: The task pauses if it needs additional information from the caller.&lt;/li&gt;
&lt;/ul&gt;
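&lt;p&gt;The lifecycle above can be modeled as a small state machine. The states come from the list; the transition rules here are an illustrative assumption, not apcore-a2a's actual code:&lt;/p&gt;

```python
# Toy state machine for the A2A task lifecycle.
VALID_TRANSITIONS = {
    "submitted": {"working"},
    "working": {"completed", "failed", "input-required"},
    "input-required": {"working"},   # resumes once the caller responds
    "completed": set(),              # terminal
    "failed": set(),                 # terminal
}

def advance(state: str, new_state: str) -> str:
    if new_state not in VALID_TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = advance("submitted", "working")
state = advance(state, "input-required")  # pause for more input
state = advance(state, "working")         # caller responded
state = advance(state, "completed")       # terminal
```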

&lt;p&gt;Using the &lt;code&gt;apcore-a2a&lt;/code&gt; client, you can submit a message and track its status in real-time, or use &lt;strong&gt;SSE Streaming&lt;/strong&gt; (&lt;code&gt;message/stream&lt;/code&gt;) to get live status and artifact updates as they happen.&lt;/p&gt;




&lt;h2&gt;
  
  
  Deep Dive: Bridging Authentication
&lt;/h2&gt;

&lt;p&gt;Security in an Agent-to-Agent network is critical. You can't have a "hallucinating" Agent calling your &lt;code&gt;bank.transfer&lt;/code&gt; module without permission.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;apcore-a2a&lt;/strong&gt; provides a sophisticated &lt;strong&gt;JWTAuthenticator&lt;/strong&gt;. It bridges incoming A2A tokens directly into apcore's &lt;code&gt;Identity&lt;/code&gt; context. By using a &lt;code&gt;ClaimMapping&lt;/code&gt;, you can map claims like &lt;code&gt;sub&lt;/code&gt;, &lt;code&gt;roles&lt;/code&gt;, and &lt;code&gt;org&lt;/code&gt; to apcore's Role-Based ACL.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;auth&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;JWTAuthenticator&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-secret-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;claim_mapping&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;ClaimMapping&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id_claim&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sub&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;roles_claim&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;roles&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;serve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;registry&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, every cross-agent call is authenticated and governed by the same pattern-based security that protects your internal modules.&lt;/p&gt;
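&lt;p&gt;Conceptually, the claim mapping is a projection from token payload to identity. A minimal sketch of what &lt;code&gt;ClaimMapping&lt;/code&gt; could do (the identity structure here is an assumption for illustration):&lt;/p&gt;

```python
# Project verified JWT claims onto an apcore-style identity dict.
def map_claims(claims: dict, id_claim: str = "sub", roles_claim: str = "roles") -> dict:
    return {
        "id": claims[id_claim],
        "roles": claims.get(roles_claim, []),  # missing claim -> no roles
    }

identity = map_claims({"sub": "agent-42", "roles": ["reader"], "org": "acme"})
# identity["id"] == "agent-42"; identity["roles"] == ["reader"]
```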




&lt;h2&gt;
  
  
  The A2A Explorer: Visualizing Autonomy
&lt;/h2&gt;

&lt;p&gt;Building collaborative Agents shouldn't be a "black box." When you run apcore-a2a with the &lt;code&gt;explorer=True&lt;/code&gt; flag, it launches a browser-based UI.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;A2A Explorer&lt;/strong&gt; allows you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Discover Skills&lt;/strong&gt;: Browse the Agent Card and see what skills are available.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Send Messages&lt;/strong&gt;: Manually invoke skills to test the response format.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stream Status&lt;/strong&gt;: Watch the task lifecycle in real-time as the Agent moves from "Submitted" to "Completed."&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Conclusion: The Language of Agents
&lt;/h2&gt;

&lt;p&gt;Agent-to-Agent collaboration is the next frontier of productivity. But for it to work, we need more than just a communication pipe; we need a &lt;strong&gt;Perception Standard&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;apcore&lt;/strong&gt; provides that standard, and &lt;code&gt;apcore-a2a&lt;/code&gt; is the bridge that turns your code into a professional Agentic workforce.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next, we’ll look at "Beyond the Brain: Exposing AI Modules to REST, gRPC, and GraphQL."&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Article #5 of the &lt;strong&gt;apcore: Building the AI-Perceivable World&lt;/strong&gt; series. Join us in building the infrastructure for autonomous collaboration.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/aiperceivable/apcore-a2a" rel="noopener noreferrer"&gt;aiperceivable/apcore-a2a&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>automation</category>
    </item>
    <item>
      <title>Bridging the Terminal Gap: Instant CLI Tools for the Agentic Era</title>
      <dc:creator>tercel</dc:creator>
      <pubDate>Sat, 28 Mar 2026 06:39:20 +0000</pubDate>
      <link>https://dev.to/tercelyi/bridging-the-terminal-gap-instant-cli-tools-for-the-agentic-era-5gn5</link>
      <guid>https://dev.to/tercelyi/bridging-the-terminal-gap-instant-cli-tools-for-the-agentic-era-5gn5</guid>
      <description>&lt;p&gt;If you’re building AI Agents, you probably spend a lot of time in two places: the LLM’s chat window and your terminal.&lt;/p&gt;

&lt;p&gt;One of the most frustrating parts of Agent development is the "Black Box" problem. Your Agent calls a tool, the tool fails, and you’re left digging through logs to understand why. Was it the parameters? The environment? Or the tool logic itself?&lt;/p&gt;

&lt;p&gt;At &lt;strong&gt;apcore&lt;/strong&gt;, we believe that if a module is "AI-Perceivable," it should also be "Developer-Perceivable." This is why we prioritize terminal accessibility as a first-class citizen of our ecosystem.&lt;/p&gt;

&lt;p&gt;In this fourth article of our series, we move from high-level vision to the terminal, showing how you can instantly transform any apcore module into a powerful CLI tool for human debugging and system interaction via &lt;strong&gt;apcore-cli&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Convention-over-Configuration (§5.14)
&lt;/h2&gt;

&lt;p&gt;The "Zero Import" way to build CLI tools is one of the most powerful features of apcore-cli. By using the &lt;code&gt;ConventionScanner&lt;/code&gt; from the &lt;strong&gt;apcore-toolkit&lt;/strong&gt;, apcore-cli can turn a directory of plain Python files into a professional terminal application.&lt;/p&gt;

&lt;p&gt;No base classes. No decorators. No imports from apcore. Just functions with PEP 484 type hints.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# commands/deploy.py
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;deploy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tag&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;latest&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Deploy the app to the given environment.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;deployed&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;env&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By dropping this file into a &lt;code&gt;commands/&lt;/code&gt; directory, apcore-cli automatically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Infers the &lt;code&gt;input_schema&lt;/code&gt; and &lt;code&gt;output_schema&lt;/code&gt; from the type hints.&lt;/li&gt;
&lt;li&gt;Generates the CLI command: &lt;code&gt;apcore-cli deploy deploy --env prod&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Provides the &lt;code&gt;--help&lt;/code&gt; text from the function's docstring.&lt;/li&gt;
&lt;/ul&gt;
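&lt;p&gt;The first bullet, schema inference from type hints, can be approximated with the standard library. This is a rough sketch; the real &lt;code&gt;ConventionScanner&lt;/code&gt; is more sophisticated, and the type mapping below is an assumption:&lt;/p&gt;

```python
# Infer a JSON-Schema-like input_schema from PEP 484 hints via reflection.
import inspect

PY_TO_JSON = {str: "string", int: "integer", bool: "boolean", dict: "object"}

def infer_input_schema(fn) -> dict:
    props, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        props[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default -> required field
    return {"type": "object", "properties": props, "required": required}

def deploy(env: str, tag: str = "latest") -> dict:
    """Deploy the app to the given environment."""
    return {"status": "deployed", "env": env}

schema = infer_input_schema(deploy)
# "env" is required; "tag" is optional because it has a default
```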




&lt;h2&gt;
  
  
  Deep Dive: The Magic of Reflection
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;apcore-cli&lt;/code&gt; tool uses the &lt;strong&gt;Registry&lt;/strong&gt; to discover all available modules and their schemas. It then uses reflection to map &lt;strong&gt;Canonical IDs&lt;/strong&gt; directly to subcommands.&lt;/p&gt;

&lt;p&gt;For example, if you have a module with the ID &lt;code&gt;executor.email.send_email&lt;/code&gt;, apcore-cli will automatically generate the following command structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;apcore-cli executor email send-email &lt;span class="nt"&gt;--to&lt;/span&gt; &lt;span class="s2"&gt;"dev@example.com"&lt;/span&gt; &lt;span class="nt"&gt;--subject&lt;/span&gt; &lt;span class="s2"&gt;"Hello"&lt;/span&gt; &lt;span class="nt"&gt;--body&lt;/span&gt; &lt;span class="s2"&gt;"World"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Boolean Flag Pairs &amp;amp; Enum Choices
&lt;/h3&gt;

&lt;p&gt;Because every apcore module &lt;strong&gt;must&lt;/strong&gt; have an &lt;code&gt;input_schema&lt;/code&gt;, the CLI doesn't guess. It uses that schema to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generate &lt;code&gt;--verbose&lt;/code&gt; / &lt;code&gt;--no-verbose&lt;/code&gt; pairs for boolean fields.&lt;/li&gt;
&lt;li&gt;Provide shell-validated enum choices (e.g., &lt;code&gt;--format json&lt;/code&gt;) from JSON Schema &lt;code&gt;enum&lt;/code&gt; properties.&lt;/li&gt;
&lt;li&gt;Enforce required fields, showing them as &lt;code&gt;[required]&lt;/code&gt; in the &lt;code&gt;--help&lt;/code&gt; text.&lt;/li&gt;
&lt;/ul&gt;
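&lt;p&gt;Using &lt;code&gt;argparse&lt;/code&gt; from the standard library, the first two bullets can be sketched directly (how apcore-cli actually builds its parser is assumed, not shown by the source):&lt;/p&gt;

```python
# Derive flag pairs and enum choices the way a schema-driven CLI could.
import argparse

parser = argparse.ArgumentParser()
# boolean field -> --verbose / --no-verbose pair (Python 3.9+)
parser.add_argument("--verbose", action=argparse.BooleanOptionalAction, default=False)
# JSON Schema enum -> shell-validated choices
parser.add_argument("--format", choices=["json", "table"], default="json")

args = parser.parse_args(["--no-verbose", "--format", "table"])
# args.verbose is False; args.format == "table"
```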




&lt;h2&gt;
  
  
  STDIN Piping (The Unix Way)
&lt;/h2&gt;

&lt;p&gt;A module in apcore is a universal unit of functionality. By exposing your modules via a CLI, you gain the power of Unix pipes. You can pipe JSON input directly into a module, and CLI flags will override specific keys:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Pipe JSON input and override a parameter&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'{"a": 100, "b": 200}'&lt;/span&gt; | apcore-cli math.add &lt;span class="nt"&gt;--input&lt;/span&gt; - &lt;span class="nt"&gt;--a&lt;/span&gt; 999
&lt;span class="c"&gt;# {"sum": 1199}&lt;/span&gt;

&lt;span class="c"&gt;# Chain with other tools&lt;/span&gt;
apcore-cli sysutil.info | jq &lt;span class="s1"&gt;'.os, .hostname'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  TTY-Adaptive Output
&lt;/h2&gt;

&lt;p&gt;One of the most powerful features of &lt;code&gt;apcore-cli&lt;/code&gt; is its ability to adapt its output based on the environment. If you’re in a terminal (TTY), it renders a rich, human-readable table. If you’re in a pipe, it outputs raw JSON for further processing.&lt;/p&gt;
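&lt;p&gt;The branching logic is simple to picture. In this sketch the TTY check is the standard &lt;code&gt;sys.stdout.isatty()&lt;/code&gt; call, while the two-column table layout is purely illustrative:&lt;/p&gt;

```python
# Render a result as a human-readable table on a TTY, raw JSON in a pipe.
import json
import sys

def render(result: dict, is_tty=None) -> str:
    if is_tty is None:
        is_tty = sys.stdout.isatty()  # auto-detect when not forced
    if is_tty:
        # human-readable two-column layout
        return "\n".join(f"{k:<12} {v}" for k, v in result.items())
    return json.dumps(result)  # machine-readable for pipes and jq

render({"os": "linux", "hostname": "dev01"}, is_tty=False)
# -> '{"os": "linux", "hostname": "dev01"}'
```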

&lt;p&gt;This makes apcore-cli the perfect tool for developers to debug their Agent's skills and for sysadmins to automate their workflows.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion: Bridging the Gap
&lt;/h2&gt;

&lt;p&gt;Reliable AI systems are built by developers who can "see" what their Agents are doing. &lt;strong&gt;apcore-cli&lt;/strong&gt; bridges the gap between the terminal and the LLM, giving you a shared interface to inspect, test, and control your AI-Perceivable modules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now that we’ve mastered the terminal, it’s time to go back to the Agentic workforce. In the next article, we’ll dive into apcore-a2a: How Agents use these same modules to collaborate autonomously.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Article #4 of the &lt;strong&gt;apcore: Building the AI-Perceivable World&lt;/strong&gt; series. Join us in making the terminal a first-class citizen of the Agentic Era.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/aiperceivable/apcore-cli" rel="noopener noreferrer"&gt;aiperceivable/apcore-cli&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>cli</category>
      <category>tooling</category>
    </item>
  </channel>
</rss>
