I stopped chatting with LLMs and built Synt-E, a protocol to make them faster, cheaper, and more reliable. And it all runs locally.
We’ve all gotten used to treating ChatGPT and other LLMs like digital colleagues. We write polite, complete sentences, full of “hellos,” “pleases,” and conversational fluff. It works, but it’s a terribly inefficient habit. It’s like driving on the highway stuck in first gear.
Every word we write to an AI has a cost. A cost in tokens (the currency of APIs), in latency (the time you wait), and in ambiguity (the risk that the AI misunderstands). After spending hours optimizing my prompts, I realized the problem wasn’t what I was asking, but how I was asking it.
The solution? Stop speaking our language and start speaking theirs.
The Hidden Cost of Natural Language
Imagine you want to ask an LLM to write a simple script.
The Human Way (and the expensive way):
“Hello, would you be so kind as to write me a Python script that allows me to analyze the data contained in a CSV file?” (25 words, ~35 tokens)
This request is filled with “noise” — words that a human appreciates but are just extra data for a computer to process.
The Efficient Way (Synt-E):
task:code lang:python action:analyze_data format:csv (5 words, roughly a dozen tokens)
The result is the same, but the second command uses 80% fewer words and roughly two-thirds fewer tokens. At an industrial scale, this difference translates into thousands of dollars saved and a dramatically faster user experience.
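You don’t have to take those counts on faith. Here is a quick sketch for measuring them yourself; I’m assuming OpenAI’s tiktoken library, and the exact numbers will shift from tokenizer to tokenizer, but the ratio stays just as lopsided:

import tiktoken

# cl100k_base is the GPT-4 tokenizer; other models tokenize slightly differently.
enc = tiktoken.get_encoding("cl100k_base")

chatty = ("Hello, would you be so kind as to write me a Python script "
          "that allows me to analyze the data contained in a CSV file?")
synte = "task:code lang:python action:analyze_data format:csv"

for label, prompt in (("chatty", chatty), ("synt-e", synte)):
    print(f"{label}: {len(enc.encode(prompt))} tokens")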
The True Native Language of LLMs
The secret behind Synt-E is simple: an LLM’s true native language isn’t conversational English. It’s structured, technical English.
These models have been trained on billions of documents, but most importantly, on:
- Source code (Python, Java, etc.)
- Configuration files (JSON, YAML)
- Terminal commands
- Technical documentation

For an AI, key:value syntax isn’t an invention; it’s a pattern it has seen countless times. It is the fundamental structure of its “thought process.”

Natural language: a winding country road. The AI gets to its destination, but it has to slow down, interpret, and might get lost.

Synt-E: a six-lane highway. The path is direct, the speed is maximum, and the risk of error is almost zero.

Building a Thought Compiler with Ollama

To prove the concept, I wrote a simple Python script that acts as a “compiler.” It takes a request in plain English (or any other language) and translates it into the Synt-E protocol, using an LLM that runs 100% locally thanks to Ollama.

The most interesting part was choosing the right model. I started with Llama 3.1 Instruct, a powerful model trained by Meta to be a perfect assistant. It failed miserably: it was so “helpful” that when I asked it to translate a request to write code, it ignored my instructions and wrote the code instead.

The breakthrough came with a “rawer” model, gpt-oss:20b. Being less “domesticated,” it was far more obedient to my SYSTEM_PROMPT, which forced it into a single role: that of a compiler.
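The full script is on GitHub, but the heart of it is small enough to sketch here. What follows is a minimal approximation, assuming the official ollama Python client; the SYSTEM_PROMPT text below is illustrative, not the project’s verbatim prompt:

import ollama

# Illustrative system prompt; the project's actual prompt lives in the repo.
SYSTEM_PROMPT = (
    "You are a compiler, not an assistant. Translate the user's request "
    "into a single line of Synt-E: space-separated key:value pairs. "
    "Never fulfill the request itself. No code, no explanations."
)

def compile_to_synte(request: str, model: str = "gpt-oss:20b") -> str:
    # Ask the local model to act as a translator, not a helper.
    response = ollama.chat(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": request},
        ],
    )
    return response["message"]["content"].strip()

while True:
    print("AI >", compile_to_synte(input("YOU > ")))

The whole job of the system prompt is to strip the model of its assistant reflexes: it may only translate, never fulfill.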
Here is the result of the test that made all other models fail:
YOU > Write a Python script that uses Keras to train an RNN for sentiment analysis.
AI > task:write_script language:python libraries:keras model:RNN dataset:movie_reviews task:sentiment_analysis
No code. No explanations. Just a pure, dense command, immediately usable by another AI agent.
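That density is the point: a flat key:value line needs almost no machinery to consume. As a purely hypothetical illustration (this parser is not part of the project), a receiving agent could split the command with a few lines of Python. Note that keys can repeat, as task: does above, so the pairs are kept in order rather than collapsed into a dict:

def parse_synte(command: str) -> list[tuple[str, str]]:
    # Split space-separated key:value fields, preserving duplicate keys.
    pairs = []
    for field in command.split():
        key, _, value = field.partition(":")
        pairs.append((key, value))
    return pairs

cmd = ("task:write_script language:python libraries:keras "
       "model:RNN dataset:movie_reviews task:sentiment_analysis")
print(parse_synte(cmd))
# [('task', 'write_script'), ('language', 'python'), ('libraries', 'keras'), ...]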
The Future is Structured
This experiment has convinced me that the future of AI interaction, especially in professional and automated contexts, will not be conversational. It will be structured.
Synt-E is just a prototype, but it represents a paradigm shift:
- From Prompt to Protocol: We stop “whispering” to the AI and start giving it clear commands.
- Efficiency by Design: We design our systems to minimize tokens and latency from the ground up.
- M2M Reliability: We create a standard language that allows AI agents to communicate with each other without ambiguity, making complex and testable pipelines possible.
If this idea fascinates you and you want to try the compiler yourself, I’ve put all the code and instructions on GitHub. It’s open-source, easy to run, and ready to be explored.
➡️ Find the complete project here:
https://github.com/NeuroTinkerLab/synt-e-project
Let’s stop chatting. Let’s start compiling.