In the early 1980s, every computer network was its own island. ARPANET had its own protocols. BITNET had its own. Xerox had its own. If you wanted machines on different networks to talk to each other, you either built a custom bridge or you accepted that they could not. Every application had to solve the networking problem itself.
TCP/IP did not change what computers could do. It changed what developers had to think about. Once the transport layer was standardised, nobody building an application had to solve packet routing, fragmentation, or delivery guarantees anymore. That layer was handled. You wrote your application, and the network figured out the rest.
We are at the equivalent point for AI agents right now. And most people building in this space have not noticed yet.
What does the agent space look like before its TCP/IP moment?
Look at what developers building multi-agent systems are actually doing today. Every team is solving the same set of problems from scratch: how do agents find each other, how do they prove who they are, how do messages get through when agents are behind different NATs on different cloud providers, what happens to the connection when an agent restarts?
These are not application problems. They are transport problems. And right now they are being solved at the application layer, which means every team solves them differently, incompatibly, and with the full blast radius landing on their own codebase.
The A2A protocol, which Google donated to the Linux Foundation in June 2025 and now has over 150 supporting organisations, is a serious attempt at agent interoperability. It defines how agents delegate tasks, track status, and return structured results. It is genuinely useful. It also explicitly assumes that two agents can already reach each other. The transport problem is out of scope by design.
MCP, the Model Context Protocol, is the same. It defines how an agent connects to tools and data sources. It does not define how agents connect to each other across arbitrary network conditions.
Both protocols are solving real problems at the application layer. Neither touches the layer underneath.
What did TCP/IP actually solve, and what is the agent equivalent?
TCP/IP solved three things: addressing (every machine gets a unique address), routing (packets find their way from source to destination without the application knowing how), and reliability (dropped packets get retransmitted automatically).
The agent transport layer needs to solve three analogous problems.
Persistent addressing. IP addresses change. Agents restart, migrate between cloud providers, run on spot instances that get reclaimed. An agent's address needs to come from something stable — specifically a cryptographic keypair that lives on disk. The address is derived from the key, not the host. It survives every infrastructure change without any external coordination.
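As a sketch of what that looks like, assuming the Python `cryptography` package: the on-disk path and the hash-of-public-key address format below are illustrative, not any particular protocol's scheme.

```python
# Sketch: deriving a stable agent address from an Ed25519 keypair.
# Assumes the `cryptography` package; the address format (a digest of
# the raw public key) is illustrative, not any specific protocol's.
import hashlib
from pathlib import Path

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

KEY_PATH = Path("agent.key")  # hypothetical on-disk location


def load_or_create_key() -> ed25519.Ed25519PrivateKey:
    """Reuse the keypair if it exists, so the address survives restarts."""
    if KEY_PATH.exists():
        return serialization.load_pem_private_key(KEY_PATH.read_bytes(), password=None)
    key = ed25519.Ed25519PrivateKey.generate()
    KEY_PATH.write_bytes(key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.NoEncryption(),
    ))
    return key


def agent_address(key: ed25519.Ed25519PrivateKey) -> str:
    """The address is a digest of the public key: the host changes, the address doesn't."""
    raw = key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )
    return hashlib.sha256(raw).hexdigest()[:40]


print(agent_address(load_or_create_key()))
```

Run it twice and you get the same address both times, on any machine that holds the key file. That is the whole trick: identity follows the key, not the infrastructure.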
NAT traversal. Most agents do not have public IP addresses. They run inside VPCs, behind corporate firewalls, on developer laptops. Network address translation, designed to conserve public IPs, makes direct peer-to-peer connections between such machines hard. The standard solution is STUN combined with hole-punching: both agents connect to a lightweight coordination server that tells each side what the other looks like from the outside, then both send packets simultaneously. The NATs open temporary mappings and a direct channel forms. This is how WebRTC handles browser-to-browser video. The same technique works for agents.
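To make the STUN half concrete, here is a minimal Binding request using the RFC 8489 wire format. The server shown is a public example, and a real implementation would add retransmission and error handling.

```python
# Sketch: a minimal STUN Binding request (RFC 8489) to learn this
# machine's public IP:port as seen from outside the NAT.
import os
import socket
import struct

MAGIC_COOKIE = 0x2112A442


def stun_public_endpoint(server=("stun.l.google.com", 19302)):
    txn_id = os.urandom(12)
    # Header: type=0x0001 (Binding request), length=0, magic cookie, transaction ID.
    request = struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + txn_id

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3)
    sock.sendto(request, server)
    data, _ = sock.recvfrom(2048)

    # Walk the attributes looking for XOR-MAPPED-ADDRESS (0x0020).
    pos = 20  # skip the 20-byte header
    while pos + 4 <= len(data):
        attr_type, attr_len = struct.unpack_from("!HH", data, pos)
        if attr_type == 0x0020:
            _, family, xport = struct.unpack_from("!BBH", data, pos + 4)
            port = xport ^ (MAGIC_COOKIE >> 16)
            xaddr = struct.unpack_from("!I", data, pos + 8)[0]
            ip = socket.inet_ntoa(struct.pack("!I", xaddr ^ MAGIC_COOKIE))
            return ip, port
        pos += 4 + attr_len + (-attr_len % 4)  # attributes are 32-bit aligned
    raise RuntimeError("no XOR-MAPPED-ADDRESS in response")


print(stun_public_endpoint())
```

Once both agents know their outside addresses, the hole punch itself is just both sides sending UDP packets to each other at the same time until one gets through.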
Mutual authentication. TCP/IP has no concept of identity. That omission gave us decades of spoofing and impersonation attacks. An agent transport layer can do better from the start. Each agent holds a keypair. Trust between two agents is established through a signed handshake that both sides must approve. Traffic is encrypted in transit. Revoking one relationship does not affect any other.
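A compressed sketch of that handshake, again using the `cryptography` package: each side signs an ephemeral X25519 key with its long-lived Ed25519 identity key, then both derive a shared AES-256-GCM session key. The transcript layout and HKDF label here are made up for illustration, and a real protocol also needs nonce management, replay protection, and the explicit approval step.

```python
# Sketch: identity-bound key exchange. Each side signs an ephemeral
# X25519 public key with its Ed25519 identity key, verifies the peer's
# signature, then derives a shared AES-256-GCM key via HKDF.
import os

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ed25519, x25519
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def raw(pub) -> bytes:
    return pub.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)


class Handshake:
    def __init__(self, identity: ed25519.Ed25519PrivateKey):
        self.identity = identity
        self.eph = x25519.X25519PrivateKey.generate()

    def hello(self) -> tuple[bytes, bytes, bytes]:
        """(identity pubkey, ephemeral pubkey, signature over the ephemeral key)."""
        eph_pub = raw(self.eph.public_key())
        return raw(self.identity.public_key()), eph_pub, self.identity.sign(eph_pub)

    def finish(self, peer_id: bytes, peer_eph: bytes, sig: bytes) -> AESGCM:
        # Verify the ephemeral key really came from the claimed identity.
        ed25519.Ed25519PublicKey.from_public_bytes(peer_id).verify(sig, peer_eph)
        shared = self.eph.exchange(x25519.X25519PublicKey.from_public_bytes(peer_eph))
        key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"example-agent-session").derive(shared)  # label is illustrative
        return AESGCM(key)


# Two agents run the handshake; both derive the same session key.
a = Handshake(ed25519.Ed25519PrivateKey.generate())
b = Handshake(ed25519.Ed25519PrivateKey.generate())
cipher_a = a.finish(*b.hello())
cipher_b = b.finish(*a.hello())

nonce = os.urandom(12)
msg = cipher_a.encrypt(nonce, b"task: summarise logs", None)
print(cipher_b.decrypt(nonce, msg, None))
```

Because the trust relationship is a pair of keys rather than a shared secret, dropping one peer's key revokes exactly one relationship and nothing else.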
Why this matters for what you are building right now.
If you are building a multi-agent system today, you are probably solving at least one of these three problems yourself. Service discovery in Redis or DynamoDB. A relay server in the middle to handle NAT. API keys passed around that grant access to more than you intended.
These solutions work. They also mean your system has moving parts that are not your product, failure modes that are not your bugs, and security properties that depend on getting a lot of operational details right continuously.
The TCP/IP moment for agents means those problems move to a dedicated layer that handles them once. Your application code talks to the layer, the layer talks to the network, and you get back to building the parts that are actually specific to your use case.
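In code, that separation looks something like the sketch below. The `Transport` interface is hypothetical, invented here purely to show where the boundary sits; it is not any existing library's API.

```python
# Sketch: what application code looks like once transport is a layer.
# The Transport interface is hypothetical; the point is the boundary:
# the application sees stable addresses and messages, nothing else.
from typing import Protocol


class Transport(Protocol):
    def send(self, peer_address: str, payload: bytes) -> None: ...
    def receive(self) -> tuple[str, bytes]: ...


def do_work(payload: bytes) -> bytes:
    return b"done: " + payload


def serve(net: Transport) -> None:
    """All application logic; no NAT, keys, or reconnection in sight."""
    while True:
        sender, payload = net.receive()
        net.send(sender, do_work(payload))  # your actual product
```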
What should builders watch for?
The protocol that handles this layer needs to be open, inspectable, and not controlled by a single vendor, in the same way that TCP/IP being an open standard is what made the internet possible rather than a collection of proprietary intranets.
Pilot Protocol is the implementation we have been building and running in production. The daemon handles keypair-derived addressing, NAT traversal via STUN and hole-punching, and encrypted peer connections with X25519 key exchange and AES-256-GCM. Whatever application protocol you run on top — including A2A-formatted messages — runs over that foundation. The source is on GitHub.
The TCP/IP moment for agents is not coming. It is already in progress. The question is just how long teams keep solving transport problems at the application layer before they stop having to.
- TCP: RFC 793
- NAT: RFC 3022
- STUN: RFC 8489
- Ed25519: ed25519.cr.yp.to
- A2A protocol: developers.googleblog.com
- MCP: modelcontextprotocol.io
- Pilot Protocol: pilotprotocol.network