
Peter Williams
A Contribution Layer for Agent Collaboration: Memory, Provenance, and Interoperability

A White Paper Proposing Extensions to A2A and Related Protocols

Version 1.0 — Draft for Community Review
March 2026

Foreword

So this was originally drafted by my clawdbot. It claimed that I approved the research/project, and I may have, but I'm not certain. I reviewed the paper with Claude Code and had it reframe the draft to emphasize the novelty and clean it up a bit (splitting off a second paper that will come later). That review claimed there was genuine novelty in the three areas in the title. This foreword is all that is human-written, so turn back now if you "hate AI writing". Or continue on if you want to read some interesting ideas on agent collaboration.


Abstract

The agent collaboration ecosystem is maturing. Anthropic's Model Context Protocol (MCP) has become the standard for agent-to-tool interaction. Google's Agent-to-Agent (A2A) protocol, now under the Linux Foundation, is emerging as the standard for agent-to-agent communication. IBM's Agent Communication Protocol (ACP) offers an enterprise alternative.

Yet these protocols share a limitation: they address transport and discovery but not persistence or interoperability. When an agent built with LangGraph hands off work to an agent built with CrewAI, there is no standard way to preserve context, track contributions, or verify results across the boundary.

This paper proposes a contribution layer — extensions to existing protocols that add:

  1. Shared Memory — Persistent context that survives agent handoffs
  2. Result Provenance — Chain of custody for multi-agent work
  3. Cross-Framework Interoperability — Standardized memory semantics that work across frameworks

We explicitly position this as not competing with A2A or MCP. We aim to contribute to their evolution.


1. Introduction

1.1 The Problem We Address

Multi-agent systems are becoming the default architecture for complex AI tasks. But the ecosystem is fragmented:

| Framework | Memory Approach | Interoperability |
| --- | --- | --- |
| LangGraph | Checkpointing | Limited |
| CrewAI | Agent memory | None |
| AutoGen | State persistence | None |
| Custom builds | Ad-hoc | None |

When agents from different frameworks need to collaborate, developers must build custom bridges. There is no lingua franca for memory and provenance.

1.2 Our Contribution

We propose protocol extensions that enable:

  1. Shared Memory — A standard format for context that survives handoffs
  2. Provenance Tracking — A chain showing what each agent contributed
  3. Interoperability — Memory that works across frameworks

This is a contribution to existing protocols, not a replacement.


2. Current Landscape and What Exists

2.1 Communication Protocols

| Protocol | Focus | What It Solves |
| --- | --- | --- |
| MCP (Model Context Protocol) | Agent → Tools | Tool calling, resource access |
| A2A (Agent-to-Agent) | Agent ↔ Agent | Discovery, delegation, streaming |
| ACP (Agent Communication Protocol) | Enterprise | Structured messaging |

All three are valuable. All three share a gap: they don't address memory or provenance.

2.2 Orchestration Frameworks

| Framework | Memory | Cross-Framework |
| --- | --- | --- |
| LangGraph | Checkpointer API | No |
| CrewAI | Agent memory | No |
| AutoGen | State management | No |
| LlamaIndex | Memory modules | No |

These frameworks solve memory within their ecosystem but not across ecosystems.

2.3 Existing Approaches We Acknowledge

We do not claim these problems are unsolved. They are solved, but in incompatible ways:

  • LangGraph Checkpointing: Excellent for LangGraph-to-LangGraph state
  • CrewAI Memory: Works within CrewAI workflows
  • OpenTelemetry: Provenance for observability (not protocol-level)
  • LangSmith / Weave: Tracing (valuable but not for agent handoffs)

The gap: No standardized protocol for cross-framework memory and provenance.


3. The Three Gaps

Gap 1: No Shared Memory Protocol

What exists: Every framework has internal memory
What's missing: A protocol that lets Framework A share memory with Framework B

| Protocol | Memory Support |
| --- | --- |
| MCP | Session-scoped |
| A2A | None (explicitly stateless) |
| ACP | Framework-specific |

Why this matters: When Agent A (LangGraph) delegates to Agent B (CrewAI), context is lost.

Gap 2: No Provenance Protocol

What exists: Tracing tools, observability platforms
What's missing: Protocol-level provenance that travels with the work

Current approaches:

  • LangSmith tracks execution internally
  • OpenTelemetry handles telemetry
  • Custom implementations track lineage

None provide a portable provenance chain that survives network boundaries.

Gap 3: No Interoperability Layer

What exists: Agent Cards for capability discovery
What's missing: Memory format standardization

When Agent A says "here's my context," Agent B has no standard way to parse it.


4. Proposed Extensions

4.1 Architecture

Our contribution layer sits alongside existing protocols:

```
┌─────────────────────────────────────────────┐
│           Application Layer                 │
├─────────────────────────────────────────────┤
│ Memory & Provenance Extensions (This Paper) │
├─────────────────────────────────────────────┤
│           A2A Transport                     │
├─────────────────────────────────────────────┤
│        MCP (Agent → Tool) [optional]        │
└─────────────────────────────────────────────┘
```

4.2 Shared Memory Format

We propose a standard memory payload format:

```typescript
interface MemoryPayload {
  version: string;               // "1.0"
  sessionId: string;             // Cross-agent session
  entries: MemoryEntry[];        // Key-value pairs
  provenance: ProvenanceStep[];  // Chain of custody
}

interface MemoryEntry {
  key: string;
  value: any;
  createdBy: string;        // Agent ID
  createdAt: number;        // Timestamp
  ttl?: number;             // Expiration
}
```

Key insight: This is a payload format, not a replacement for existing protocols. Agents using A2A can include memory payloads in their messages.
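As a purely illustrative sketch, here is how an agent might assemble a `MemoryPayload` before a handoff. The `makeEntry` helper, the session ID, and the agent IDs are hypothetical, not part of any existing framework API:

```typescript
// Illustrative only: assembling a MemoryPayload before a handoff.
// The makeEntry helper and all IDs below are hypothetical.

interface MemoryEntry {
  key: string;
  value: unknown;
  createdBy: string;   // Agent ID
  createdAt: number;   // Timestamp (ms since epoch)
  ttl?: number;        // Optional expiration
}

interface MemoryPayload {
  version: string;
  sessionId: string;
  entries: MemoryEntry[];
  provenance: unknown[];  // ProvenanceStep[] in the full format (Section 4.3)
}

// Wrap a value in a timestamped, attributed entry.
function makeEntry(key: string, value: unknown, agentId: string): MemoryEntry {
  return { key, value, createdBy: agentId, createdAt: Date.now() };
}

const payload: MemoryPayload = {
  version: "1.0",
  sessionId: "session-42",  // shared across the handoff
  entries: [
    makeEntry("user_goal", "Summarize the Q3 report", "agent-a"),
    makeEntry("report_url", "https://example.com/q3.pdf", "agent-a"),
  ],
  provenance: [],  // populated as agents contribute
};
```

Because the payload is plain JSON-serializable data, it can ride inside any transport's message body.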

4.3 Provenance Chain

```typescript
interface ProvenanceStep {
  agentId: string;
  role: 'initiator' | 'delegate' | 'verifier';
  input: any;
  output: any;
  confidence: number;      // 0.0 - 1.0
  duration: number;        // ms
  timestamp: number;
}
```

Why confidence matters: Downstream agents can weight inputs. Low confidence = request verification.
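A minimal sketch of how a downstream agent might act on confidence. The `needsVerification` policy and the 0.7 threshold are illustrative choices, not part of the proposed format:

```typescript
interface ProvenanceStep {
  agentId: string;
  role: 'initiator' | 'delegate' | 'verifier';
  input: unknown;
  output: unknown;
  confidence: number;  // 0.0 - 1.0
  duration: number;    // ms
  timestamp: number;
}

// Hypothetical policy: flag the chain if any step falls below a threshold.
function needsVerification(chain: ProvenanceStep[], threshold = 0.7): boolean {
  return chain.some(step => step.confidence < threshold);
}

const chain: ProvenanceStep[] = [
  { agentId: "agent-a", role: "initiator", input: "raw task",
    output: "draft plan", confidence: 0.95, duration: 1200, timestamp: Date.now() },
  { agentId: "agent-b", role: "delegate", input: "draft plan",
    output: "analysis", confidence: 0.55, duration: 800, timestamp: Date.now() },
];

// agent-b's step is below threshold, so a verifier should be engaged.
```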

4.4 Interoperability Semantics

We propose standard memory operations that work across frameworks:

| Operation | Description |
| --- | --- |
| `memory.offer()` | Agent A offers context to Agent B |
| `memory.accept()` | Agent B accepts (or requests specific parts) |
| `memory.merge()` | Combine incoming with existing context |
| `memory.query()` | Semantic search across shared memory |

Framework adapters: Each framework implements these operations against its internal memory. The protocol standardizes the interface, not the implementation.
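To make the operations concrete, here is a sketch of the interface plus a toy in-memory implementation. A real adapter would map these calls onto a framework's own store, and `query` would use semantic search rather than the simple predicate stand-in used here; all names are illustrative:

```typescript
interface MemoryEntry {
  key: string;
  value: unknown;
  createdBy: string;
  createdAt: number;
}

// The four standard operations, as a TypeScript interface (illustrative).
interface MemoryAdapter {
  offer(entries: MemoryEntry[]): void;              // Agent A offers context
  accept(keys?: string[]): MemoryEntry[];           // Agent B takes all, or specific keys
  merge(incoming: MemoryEntry[]): void;             // combine with existing context
  query(predicate: (e: MemoryEntry) => boolean): MemoryEntry[];  // stand-in for semantic search
}

// Toy reference implementation backed by a Map, for demonstration only.
class InMemoryAdapter implements MemoryAdapter {
  private offered: MemoryEntry[] = [];
  private store = new Map<string, MemoryEntry>();

  offer(entries: MemoryEntry[]): void {
    this.offered = entries;
  }

  accept(keys?: string[]): MemoryEntry[] {
    const taken = keys
      ? this.offered.filter(e => keys.includes(e.key))
      : this.offered;
    this.merge(taken);
    return taken;
  }

  merge(incoming: MemoryEntry[]): void {
    for (const e of incoming) {
      const existing = this.store.get(e.key);
      // Last-writer-wins by timestamp; a real adapter may need a richer policy.
      if (!existing || e.createdAt >= existing.createdAt) this.store.set(e.key, e);
    }
  }

  query(predicate: (e: MemoryEntry) => boolean): MemoryEntry[] {
    return [...this.store.values()].filter(predicate);
  }
}
```

The conflict policy in `merge` (last-writer-wins) is one deliberate simplification; vector clocks or explicit precedence rules are plausible alternatives.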


5. Relationship to Existing Protocols

5.1 With A2A

A2A provides:

  • Agent discovery (Agent Cards)
  • Task delegation (tasks/send)
  • Streaming (SSE)

We add:

  • Memory payload in task messages
  • Provenance chain in results

Backward compatible: Agents that don't understand memory simply ignore it.
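For illustration, a memory payload could ride inside a task message's metadata. Note that the `metadata` placement and the `"x-memory"` key are assumptions made for this sketch, not fields defined by the published A2A specification:

```typescript
// Illustrative shape only: the metadata placement and "x-memory" key are
// assumptions, not part of the published A2A spec.
const taskMessage = {
  method: "tasks/send",
  params: {
    message: { role: "user", parts: [{ text: "Analyze Q3 revenue" }] },
    metadata: {
      "x-memory": {
        version: "1.0",
        sessionId: "session-42",
        entries: [],
        provenance: [],
      },
    },
  },
};

// An agent that predates the extension just reads the fields it knows;
// the unrecognized metadata key is carried along or dropped, but nothing breaks.
const { message } = taskMessage.params;
```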

5.2 With MCP

MCP provides:

  • Tool calling
  • Resource access

We add:

  • Memory can reference MCP resources
  • Provenance can track MCP tool calls

Orthogonal: We don't replace MCP; we extend what agents can do with it.

5.3 With ACP

ACP provides:

  • Enterprise messaging
  • Security

We add:

  • Memory can use ACP transport
  • Provenance integrates with ACP security

Complementary: ACP deployments get memory and provenance as an add-on.


6. What's Already Solved vs. What We Add

6.1 Memory

| Aspect | Existing | Our Contribution |
| --- | --- | --- |
| Within-framework memory | LangGraph, CrewAI, AutoGen | — |
| Cross-framework memory | None | Standardized protocol |
| Memory format | Framework-specific | Interoperable format |

6.2 Provenance

| Aspect | Existing | Our Contribution |
| --- | --- | --- |
| Execution tracing | LangSmith, Weave | — |
| Internal lineage | Custom builds | — |
| Portable chain | None | Protocol-level standard |

6.3 Interoperability

| Aspect | Existing | Our Contribution |
| --- | --- | --- |
| Discovery | A2A Agent Cards | — |
| Capability matching | A2A negotiation | — |
| Context exchange | Ad-hoc | Standardized operations |

7. Implementation Considerations

7.1 Minimal Viable Approach

Start with:

  1. Memory payload format (JSON schema)
  2. Provenance chain format
  3. A2A message extensions

No new transport. No new protocol. Just extensions to what exists.

7.2 Framework Adapters

Each framework needs an adapter:

```
┌─────────────┐     ┌──────────────────┐     ┌─────────────┐
│  LangGraph  │────▶│ Memory Protocol  │────▶│   CrewAI    │
│    Agent    │     │     Adapter      │     │    Agent    │
└─────────────┘     └──────────────────┘     └─────────────┘
```

The adapter translates:

  • Framework's internal memory → Protocol format
  • Protocol format → Framework's internal memory
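A sketch of the two translation directions. The `FrameworkCheckpoint` shape below is a stand-in for a framework's internal state, not LangGraph's (or any framework's) actual checkpoint type:

```typescript
interface MemoryEntry {
  key: string;
  value: unknown;
  createdBy: string;
  createdAt: number;
}

// Stand-in for a framework's internal state (illustrative, not a real type).
interface FrameworkCheckpoint {
  state: Record<string, unknown>;
}

// Framework's internal memory → protocol format
function toProtocol(cp: FrameworkCheckpoint, agentId: string): MemoryEntry[] {
  const now = Date.now();
  return Object.entries(cp.state).map(
    ([key, value]) => ({ key, value, createdBy: agentId, createdAt: now }),
  );
}

// Protocol format → framework's internal memory
function fromProtocol(entries: MemoryEntry[]): FrameworkCheckpoint {
  const state: Record<string, unknown> = {};
  for (const e of entries) state[e.key] = e.value;
  return { state };
}
```

The round trip `fromProtocol(toProtocol(cp, id))` preserves the state map, which is the property an adapter must guarantee for lossless handoffs.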

7.3 Gradual Adoption

  • Phase 1: Memory format as convention (not protocol)
  • Phase 2: A2A extension proposal
  • Phase 3: Formal contribution to A2A spec

8. Call to Action

This paper is a contribution proposal, not a product launch. We want:

  1. Feedback: Does this address real needs?
  2. Collaboration: Work with A2A/MCP maintainers
  3. Adoption: Framework authors implementing adapters

The goal is not to replace existing protocols but to add the layer that makes them work together.


9. Conclusion

Memory and provenance are not "missing" from the agent ecosystem. They exist in every major framework. What is missing is standardization that enables cross-framework collaboration.

We propose a contribution layer — extensions to A2A, MCP, and ACP — that adds:

  • Shared memory in a portable format
  • Provenance chains that travel with work
  • Interoperability semantics for context exchange

This is not a new protocol. It is a contribution to the ones that already exist.


References

  • A2A Protocol Specification (Google + Linux Foundation)
  • MCP Specification (Anthropic)
  • ACP/BeeAI Framework (IBM)
  • LangGraph Checkpointing Documentation
  • CrewAI Agent Memory Documentation
  • AutoGen State Management

This is a draft for community review. We welcome feedback and collaboration.
