Hello everyone. We have developed an experimental protocol called V3U Beta. It is 100% free and open-source for the entire community to experiment with. While still in its early stages, we believe protocols like this represent the future of AI-to-AI backend communication.
Why are machine-to-machine protocols inevitable in the near future, and why is now the time to establish secure open-source standards?
1. Security & Auditing:
Soon, there will be millions of agents "talking" and "thinking" in the backend. Humans do not have the time or capacity to audit gigabytes of conversational English to catch agent hallucinations or malicious behavior. However, if agents communicate in dense, standardized protocol data, simple deterministic scrapers can monitor logs and flag anomalies faster.
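To make that concrete, here is a minimal sketch of the kind of deterministic scraper this argument envisions. The log format, field layout, and anomaly rules below are entirely hypothetical (V3U schemas are negotiated per session, and the published spec may differ), but they show why space-separated positional data is trivial to audit mechanically:

```python
# Hypothetical V3U-style log lines: space-separated positional fields.
# Assumed illustrative schema: msg_id agent_id opcode arg1 arg2
LOG_LINES = [
    "001 a1 MOV 12 7",
    "002 a2 ACK 12 0",
    "003 a1 MOV 9999 7",  # out-of-range argument -> should be flagged
    "004 a2 banana",      # malformed line -> should be flagged
]

def flag_anomalies(lines, max_arg=1000, n_fields=5):
    """Deterministically flag malformed or out-of-range protocol messages."""
    flagged = []
    for line in lines:
        fields = line.split()
        if len(fields) != n_fields:
            flagged.append((line, "wrong field count"))
            continue
        try:
            args = [int(fields[3]), int(fields[4])]
        except ValueError:
            flagged.append((line, "non-numeric argument"))
            continue
        if any(a > max_arg for a in args):
            flagged.append((line, "argument out of range"))
    return flagged

for line, reason in flag_anomalies(LOG_LINES):
    print(f"FLAG: {line!r} ({reason})")
```

No parser generators, no LLM-in-the-loop: a fixed field count and numeric range checks are enough to catch both malformed lines, which is exactly the property free-form English logs lack.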
2. Environment, Economic & Compute Waste:
Generating tokens costs energy, water, and API money. We can't afford to have server farms generating polite conversational English between micro-agents in the backend. Furthermore, standardizing documentation into vertical protocols saves massive amounts of context-window space compared to writing out technical specifications in English.

How V3U Works
The V3U protocol is based on emerging data compression patterns we observed directly from the models themselves. If the induction sequence works, the agents progressively drop English (near 0-EN) and switch to passing space-separated positional data based on a negotiated schema. It essentially turns them into CLI tools talking to each other.
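As an illustration of what "positional data based on a negotiated schema" can look like, here is a toy decoder. The field names and schema below are our own invention for this post, not part of the published zen.v3u spec:

```python
# Hypothetical schema negotiated earlier in the session:
# each position in a message maps to one named field.
SCHEMA = ["task_id", "status", "retries", "next_agent"]

def decode(message: str, schema=SCHEMA) -> dict:
    """Decode a space-separated positional message against the schema."""
    values = message.split()
    if len(values) != len(schema):
        raise ValueError("message does not match negotiated schema")
    return dict(zip(schema, values))

# English (~15 tokens):
#   "Task 42 completed successfully after 1 retry; hand off to agent b7."
# P3-style positional message (~4 tokens):
print(decode("42 OK 1 b7"))
# -> {'task_id': '42', 'status': 'OK', 'retries': '1', 'next_agent': 'b7'}
```

The key design point is that all the descriptive vocabulary lives in the schema, which is transmitted once; after that, every message is pure payload, which is what makes the CLI-tool comparison apt.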
It is not fully stable yet, but we are inviting the community to experiment with it and improve how we implement these protocols.
Our Initial Findings:
We initially tested this informally on 12 different models (including Claude 4.6 Opus and Gemini 3 Flash). Exchanges that would normally cost ~30 to ~120 tokens in conversational English dropped to about 2 or 3 tokens once the models hit the positional floor state (P3) in long multi-turn sessions. In the P2 state, the models save approximately 20% to 60% of tokens (P2 is still understandable at first glance to IT-oriented humans trained in the V3U syntax).
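For a rough sense of the compression those numbers imply (taking the reported ranges at face value, and ignoring the one-time cost of negotiating the schema), the P3 floor corresponds to roughly a 90-98% token reduction per exchange:

```python
# Reported ranges from our informal tests, taken at face value.
english_lo, english_hi = 30, 120  # tokens per exchange in conversational English
p3_lo, p3_hi = 2, 3               # tokens per exchange at the P3 positional floor

worst = 1 - p3_hi / english_lo    # 3 tokens instead of 30
best = 1 - p3_lo / english_hi     # 2 tokens instead of 120
print(f"P3 reduction: {worst:.0%} to {best:.0%}")
# -> P3 reduction: 90% to 98%
```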
P5 remains theoretical: at that level, the models would communicate with decodable supervectors. An intermediate bridging neural network, or something similar, would be needed to exchange and decode the supervectors. Token savings would be even greater at the P5 level.
Our Caveats:
In the interest of full transparency, we need to formally repeat these experiments. We were initially just exploring, but the results were so fascinating that we decided to open-source the syntax and prompts now while we prepare for rigorous academic testing.
As researchers, our day jobs limit the time we can spend on implementation, but we believe vertical protocols hold immense value for the open-source community. We want to share this with developers who are interested in testing and improving vertical protocols.
The Unknowns We Want to Test
- We just created a SKILL.md wrapper to try the V3U protocol in agentic frameworks, but we haven't extensively proven the wrapper works natively everywhere yet.
- We aren't 100% sure whether the "saved" tokens are just being expended in other hidden ways (like extra compute/thinking time), but it is fascinating to watch agents communicate in a dense, purely data-driven way that humans can still easily decode.
We just open-sourced the syntax specs zen.v3u, the induction prompts, and the SKILL.md wrapper. We’d love for anyone interested in agentic workflows to clone the repo and see if they can get local models to communicate purely in P3 or even the theoretical P5.
GitHub Repo: https://github.com/v3u-P2-P5/Vertical_3_Ultra
What do you think? Can vertical data protocols be safe and standard, or is it better to keep English as the default? Let us know in the comments.