Hi DEV community,
I’m building Protolink, a Python framework for agent systems, inspired by Google’s A2A protocol. It’s currently at the prototype stage.
Key points:
• Centralized agent model: a single agent handles both the client and server roles.
• Reduced boilerplate: focus on logic, not infrastructure.
• LLM integration: API backends (OpenAI, Anthropic), local models via llama.cpp, or LangChain chains.
• Tool integration: native Python, MCP, or LangChain tools via adapters.
• Flexible transport layer: HTTP, WebSocket, or in-memory (see the sketch just below).
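The intent is that any of these pieces can be swapped with a one-line change in the agent’s wiring. Here is a rough sketch of the transport swap, assuming an InMemoryTransport class exists alongside the HTTPTransport used in the snippet further down (that class name is my guess, so check the repo for the actual API):
from protolink.transport import HTTPTransport
# Serve the agent over HTTP so other processes can reach it...
transport = HTTPTransport()
# ...or keep everything in-process for local tests (InMemoryTransport is an assumed name):
# from protolink.transport import InMemoryTransport
# transport = InMemoryTransport()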
Yes, that’s a lot of buzzwords, but the goal is practical: make these capabilities work together in one low-boilerplate framework.
GitHub: https://github.com/nMaroulis/protolink
Code Snippet:
from protolink.agents import Agent
from protolink.models import AgentCard
from protolink.transport import HTTPTransport
from protolink.tools import MCPToolAdapter
from protolink.llms.api import OpenAILLM
# Define the agent card
agent_card = AgentCard(
    name="example_agent",
    description="A dummy agent",
)
# Initialize the transport
transport = HTTPTransport()
# OpenAI API LLM
llm = OpenAILLM(model="gpt-5.1")
# Initialize the agent
agent = Agent(agent_card, transport, llm)
# Add Native tool
@agent.tool(name="add", description="Add two numbers")
async def add_numbers(a: int, b: int):
    return a + b
# Add MCP tool
mcp_tool = MCPToolAdapter(mcp_client, "multiply")  # mcp_client: an already-initialized MCP client (not shown here)
agent.add_tool(mcp_tool)
# Start the agent
agent.start()
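The snippet covers native and MCP tools; a LangChain tool should slot in the same way through an adapter. Treat the following as a hypothetical sketch: LangChainToolAdapter and its import path are assumptions based on the feature list above, only the LangChain side is real.
from langchain_core.tools import tool
from protolink.tools import LangChainToolAdapter  # assumed import path

# A plain LangChain tool defined with the @tool decorator
@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

# Assumed to mirror the MCPToolAdapter usage above
agent.add_tool(LangChainToolAdapter(word_count))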
Feedback, ideas, or early experiments are very welcome! 🙌