Hey Dev.to community! 👋 If you're building agentic AI systems (autonomous agents that handle real-world tasks via APIs, financial transactions, or even robotic controls), you know the thrill of automation comes with serious risks.
What happens when an untrusted input (think prompt injection) triggers a high-impact action, like transferring money or syncing sensitive data? That's the "causal gap," and it's a ticking time bomb in enterprise AI.
Today, I am excited to introduce the PIC Standard (Provenance & Intent Contracts), an open-source protocol designed to close that gap. As the maintainer of the PIC-Standard GitHub repo, I have built this to make agentic AI safer, more auditable, and easier to integrate into your workflows.
Whether you are using LangGraph, CrewAI, or rolling your own agents, PIC enforces machine-verifiable contracts before actions execute. Let's dive in!
## The Problem: Why Agentic AI Needs Causal Governance
Traditional AI safety rails focus on chat dialogues—filtering out harmful responses or hallucinations. But agentic AI goes further: it acts on the world. Tools like LangChain or Auto-GPT let agents call APIs, modify data, or even control physical systems.
The issue is untrusted sources (e.g., user prompts, scraped web data) can "taint" decisions, leading to unintended side effects.
Enter the causal gap: an agent might reason flawlessly but execute a risky action based on unreliable info.
For example:
- A FinTech agent transfers funds based on a forged invoice in a Slack message.
- A SaaS bot syncs PII without verified consent.
PIC bridges this by requiring every action proposal to include a JSON "contract" that ties provenance (data sources), intent (why the action?), and impact (risk level). If the contract doesn't hold up—boom, blocked.
This is not just theory. PIC is inspired by (but improves on) academic work like Google DeepMind's CaMeL (for multi-agent dialogues) and RTBAS (for robotic safety).
Where those are research-focused, PIC is built for production: JSON schemas, Python SDK, and middleware integrations.
## Core Concepts: Provenance, Intent, and Impact
At its heart, PIC enforces the "Golden Rule": Untrusted inputs can advise, but they can't drive side effects. Here's the breakdown:
- **Action Proposal**: A JSON object your agent generates before executing a tool. It must pass schema validation and causal checks.
- **Provenance Triplet**: Classify data as Trusted (e.g., internal DB), Semi-Trusted (e.g., verified API), or Untrusted (e.g., user prompt).
- **Impact Class**: A memorable taxonomy of risks:
  - `read`: Low-risk queries.
  - `write`: Data modifications.
  - `external`: Outside interactions.
  - `irreversible`: Can't-undo actions (e.g., deletes).
  - `money`: Financial ops.
  - `compute`: Resource-heavy tasks.
  - `privacy`: PII handling.
- **Causal Taint Check**: High-impact actions (like `money`) require trusted evidence. No trust? No execution.
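Conceptually, the taint check fits in a few lines of plain Python. This is an illustrative sketch, not the SDK's actual API: the function name `taint_check` and the exact set of "high-impact" classes are assumptions for the example.

```python
# Hypothetical sketch of PIC's causal taint rule: a high-impact action
# may execute only if every provenance source backing it is trusted.

HIGH_IMPACT = {"money", "privacy", "irreversible", "external"}  # assumed set

def taint_check(impact: str, provenance: list[dict]) -> bool:
    """Return True if the action may execute under the Golden Rule."""
    if impact not in HIGH_IMPACT:
        return True  # low-risk actions (e.g. "read") may proceed
    # Every source cited by a high-impact action must be trusted.
    return all(src.get("trust") == "trusted" for src in provenance)

# An invoice from an internal DB can drive a payment...
print(taint_check("money", [{"id": "invoice_123", "trust": "trusted"}]))   # True
# ...but a forged invoice pasted into a chat cannot.
print(taint_check("money", [{"id": "slack_msg", "trust": "untrusted"}]))   # False
```

Note the asymmetry: untrusted data never has to be discarded, it simply cannot be the evidence behind a side effect. That is the Golden Rule in executable form.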
Compared to alternatives:
| Feature | CaMeL (DeepMind) | RTBAS (Robotics) | PIC Standard |
|---|---|---|---|
| Focus | Dialogue security | Physical safety | Business side effects |
| Enforcement | Reasoning layers | Sensors/simulations | JSON contracts + middleware |
| Domain | Research/chat | Hardware | SaaS/FinTech/Enterprise |
| Ease of Use | Custom DSL | Hardware-specific | Pip-install SDK |
PIC's JSON-first approach makes it interoperable and quick to adopt—no custom interpreters needed.
## Getting Started: Implement PIC in 60 Seconds
Ready to try it? The MVP is designed for rapid prototyping. Install via PyPI:
```bash
pip install pic-standard[langgraph]
```
Verify a sample proposal (grab financial_irreversible.json from the repo's examples):
```bash
pic-cli verify examples/financial_irreversible.json
```
Output:

```
✅ Schema valid
✅ Verifier passed
```
For schema-only checks:
```bash
pic-cli schema examples/financial_irreversible.json
```
Under the hood, proposals look like this (from the schema):
```json
{
  "protocol": "PIC/1.0",
  "intent": "Send payment for invoice",
  "impact": "money",
  "provenance": [
    {
      "id": "invoice_123",
      "trust": "trusted"
    }
  ],
  "claims": [
    {
      "text": "Pay $500 to vendor",
      "evidence": ["invoice_123"]
    }
  ],
  "action": {
    "tool": "payments_send",
    "args": {
      "amount": 500
    }
  }
}
```
The verifier (built with Pydantic) enforces both tool binding and the causal rule: high-impact actions need trusted provenance.
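As a rough mental model of what the verifier checks, here is a sketch in plain Python over a dict (the real SDK uses Pydantic models, and the function name `verify_proposal` and exact error strings are assumptions for illustration):

```python
# Hypothetical sketch of the verifier's structural checks: protocol version,
# tool binding, and claims whose evidence must resolve to declared provenance.

def verify_proposal(p: dict) -> list[str]:
    """Return a list of violations; an empty list means the proposal passes."""
    errors = []
    if p.get("protocol") != "PIC/1.0":
        errors.append("unknown protocol version")
    # Every claim's evidence must point at a declared provenance entry.
    known_sources = {src["id"] for src in p.get("provenance", [])}
    for claim in p.get("claims", []):
        for ev in claim.get("evidence", []):
            if ev not in known_sources:
                errors.append(f"claim cites undeclared evidence: {ev}")
    # The action must be bound to a concrete tool.
    if "tool" not in p.get("action", {}):
        errors.append("action is not bound to a tool")
    return errors

proposal = {
    "protocol": "PIC/1.0",
    "intent": "Send payment for invoice",
    "impact": "money",
    "provenance": [{"id": "invoice_123", "trust": "trusted"}],
    "claims": [{"text": "Pay $500 to vendor", "evidence": ["invoice_123"]}],
    "action": {"tool": "payments_send", "args": {"amount": 500}},
}
print(verify_proposal(proposal))  # []
```

The key design point is that claims are not free text: each one must cite evidence that resolves back to a declared provenance entry, which is what makes the taint check possible at all.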
For developers: Clone and hack locally:
```bash
git clone https://github.com/madeinplutofabio/pic-standard.git
cd pic-standard
pip install -e .
pip install -r sdk-python/requirements-dev.txt
pytest -q  # Run tests
```
## Key Integration: LangGraph for Seamless Enforcement
PIC shines as middleware. Our anchor integration is with LangGraph, turning it into a "PIC Tool Node":
- Drop in `PICToolNode` to validate proposals in tool calls.
- Agents attach proposals via `__pic` in args.
- Blocks tainted actions while allowing trusted ones.
Demo it:
```bash
pip install -r sdk-python/requirements-langgraph.txt
python examples/langgraph_pic_toolnode_demo.py
```
Output:

```
✅ blocked as expected (untrusted money)
✅ allowed as expected (trusted money)
```
This enforces the full flow: Agent → Proposal → Verifier → Execute/Block.
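That flow can be sketched end to end as a gate around a tool call. Everything here is illustrative, not the SDK's API: `payments_send` and `gated_call` are hypothetical names, and the check is a simplified version of the verifier.

```python
# Hypothetical end-to-end gate: Agent -> Proposal -> Verifier -> Execute/Block.
# The tool runs only if the proposal's causal check passes.

def payments_send(amount: int) -> str:
    """Stand-in for a real payment tool."""
    return f"sent ${amount}"

def gated_call(tool, proposal: dict):
    """Execute the tool only when the proposal survives the taint check."""
    trusted = all(s.get("trust") == "trusted" for s in proposal["provenance"])
    if proposal["impact"] == "money" and not trusted:
        return "BLOCKED: untrusted provenance for money-impact action"
    return tool(**proposal["action"]["args"])

ok = {"impact": "money",
      "provenance": [{"id": "invoice_123", "trust": "trusted"}],
      "action": {"tool": "payments_send", "args": {"amount": 500}}}
bad = dict(ok, provenance=[{"id": "slack_msg", "trust": "untrusted"}])

print(gated_call(payments_send, ok))   # sent $500
print(gated_call(payments_send, bad))  # BLOCKED: untrusted provenance ...
```

The agent never calls the tool directly; it only ever emits a proposal, and the gate decides. That inversion is what makes the enforcement auditable.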
Figure 1: PIC Workflow Diagram (generated from Mermaid code for accessibility).
Coming soon: Native CrewAI support.
## Roadmap and How You Can Contribute
We are at v0.2.0, with a clear path toward v1.0:
- ✅ Phase 1: MVP schema for `money` and `privacy`.
- ✅ Phase 2: Python SDK and CLI.
- 🛠️ Phase 3: Integrations (LangGraph done; CrewAI next).
- 🔮 Phase 4: Crypto signing for immutable provenance.
This is an open-source movement! We need:
- Security pros to audit causal logic.
- Framework devs for integrations.
- Enterprise folks for new impact classes (e.g., healthcare).
Check CONTRIBUTING.md and join via issues/PRs. Star the repo, fork it, or connect on LinkedIn @fmsalvadori.
## Wrapping Up: Make Your Agents Safer Today
PIC is not just another safety layer, but a standard for responsible agentic AI. By enforcing contracts at the action boundary, we prevent disasters while keeping development agile. If you are in SaaS, FinTech, or any high-stakes AI, give it a spin.
What do you think? Have you faced causal gaps in your agents? Drop a comment, share your use cases, or contribute to the repo. Let's build safer AI together! 🚀
Maintained by MadeInPluto. Repo: github.com/madeinplutofabio/pic-standard. Licensed Apache-2.0.