A few months ago, I hit a wall with AI coding tools.
Not because they were bad. In fact, some of them were incredibly good. The problem was that each one came with an invisible asterisk: great, as long as you stay inside this vendor's world.
Use this model, this API shape, this tool format, this auth flow, this worldview.
As a developer, that started to bother me more and more.
I wanted a coding agent CLI that felt serious enough for real work, but open enough that I could swap providers, inspect the runtime, and build on top of it without fighting a black box.
So I built cloclo: an open-source MIT-licensed CLI that can talk to 13 LLM providers through one runtime.
npx cloclo
Why I started: vendor lock-in was slowing me down
If you build deeply around one API, you inherit all of its assumptions: how system instructions are passed, how tools are defined, how streaming behaves, how auth works.
The moment you try another provider, the "portable" abstraction starts leaking everywhere.
So instead of building a thin compatibility layer, I started building a runtime that treats providers as interchangeable where possible, but explicitly models their differences where necessary.
The hard part: making 13 APIs feel like one
The biggest challenge wasn't adding provider number 13. It was deciding what the abstraction should not hide.
What worked better was a contract-based approach, where every provider implements the same explicit contract: provider detection, auth resolution, model normalization, capability flags, instruction placement, tool-calling conversion, and streaming adaptation.
In other words: one runtime, but not one delusion.
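To make the contract idea concrete, here's a minimal TypeScript sketch of three of those concerns: detection, model normalization, and capability flags. Every name here (`ProviderAdapter`, the `alpha`/`beta` providers, `route`) is illustrative, not cloclo's actual internals:

```typescript
// Hypothetical per-provider contract; names are illustrative, not cloclo's real API.
type Capability = "tools" | "streaming" | "vision";

interface ProviderAdapter {
  name: string;
  detect(model: string): boolean;        // does this model id belong to me?
  normalizeModel(model: string): string; // canonical name -> provider's name
  capabilities(): Set<Capability>;       // what the abstraction must NOT hide
}

const adapters: ProviderAdapter[] = [
  {
    name: "alpha",
    detect: (m) => m.startsWith("alpha/"),
    normalizeModel: (m) => m.slice("alpha/".length),
    capabilities: () => new Set<Capability>(["tools", "streaming"]),
  },
  {
    name: "beta",
    detect: (m) => m.startsWith("beta/"),
    normalizeModel: (m) => m.slice("beta/".length),
    capabilities: () => new Set<Capability>(["streaming"]),
  },
];

// Route by detection, then check capabilities explicitly instead of
// pretending every provider supports everything.
function route(model: string): ProviderAdapter {
  const adapter = adapters.find((a) => a.detect(model));
  if (!adapter) throw new Error(`no adapter for ${model}`);
  return adapter;
}

function supportsTools(model: string): boolean {
  return route(model).capabilities().has("tools");
}
```

The point of the capability set is that the runtime can refuse or degrade gracefully up front, instead of discovering mid-conversation that a provider can't do tool calls.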
npx cloclo -p "review this codebase and suggest simplifications"
That should feel like one command. But under the hood, the runtime transforms prompts differently depending on the provider.
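Instruction placement is a good example of those differences. Here's an illustrative sketch of the same prompt rendered into two hypothetical wire shapes; neither function mirrors a real provider's schema:

```typescript
// Illustrative only: two hypothetical wire formats for the same prompt.
interface Prompt {
  system: string;
  user: string;
}

// Chat-completions style: instructions travel as a "system" message.
function toChatPayload(p: Prompt) {
  return {
    messages: [
      { role: "system", content: p.system },
      { role: "user", content: p.user },
    ],
  };
}

// Top-level-field style: instructions sit beside the message list.
function toTopLevelPayload(p: Prompt) {
  return {
    system: p.system,
    messages: [{ role: "user", content: p.user }],
  };
}
// Same intent, two shapes; the runtime picks one per provider.
```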
Why I designed AICL
As the CLI evolved, I realized I was building an environment where agents needed to communicate with each other in a structured way.
Plain text gets you surprisingly far, until it doesn't.
So I designed AICL: an agent-to-agent notation for structured semantic exchange.
ω:cloclo | ψ:fix(auth_bug) | ◊:missing_null_guard σ:0.88 | λ:read→patch→test | ∇:ship
That line encodes: who owns the message, the intent, confidence level, planned actions, and direction. If you want agents to collaborate without smearing everything into natural-language mush, you need some shared structure.
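Part of the appeal of a notation like that is that it's cheap to parse mechanically. Here's a minimal sketch of a parser, assuming fields are pipe-separated `symbol:value` pairs; this is my reading of the example above, not a formal AICL spec:

```typescript
// Minimal sketch: parse an AICL line into symbol -> value fields.
// Assumes "|"-separated segments, each holding one or more
// space-separated symbol:value tokens (e.g. "◊:missing_null_guard σ:0.88").
function parseAICL(line: string): Record<string, string> {
  const fields: Record<string, string> = {};
  for (const segment of line.split("|")) {
    for (const token of segment.trim().split(/\s+/)) {
      const i = token.indexOf(":");
      if (i > 0) fields[token.slice(0, i)] = token.slice(i + 1);
    }
  }
  return fields;
}

const msg = parseAICL(
  "ω:cloclo | ψ:fix(auth_bug) | ◊:missing_null_guard σ:0.88 | λ:read→patch→test | ∇:ship"
);
// msg["σ"] === "0.88"; msg["λ"] === "read→patch→test"
```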
What surprised me about models in agent loops
A model can look brilliant in a single-turn demo and fall apart in an agent loop.
Agent loops reward: instruction fidelity over charisma, consistency over cleverness, stable tool use over eloquent prose, and willingness to say "I don't know" over confident improvisation.
The best chat model is not always the best agent model
Some models sound amazing but become unreliable when they need to follow a loop like: inspect → choose tool → wait for result → revise plan → continue.
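That loop can be sketched schematically. Everything below (`runAgent`, the `Model` and `Tool` shapes, the turn budget) is a hypothetical simplification, not cloclo's runtime:

```typescript
// Schematic agent loop: inspect -> choose tool -> wait -> revise -> continue.
interface ToolCall {
  tool: string;
  args: unknown;
}
interface ModelStep {
  done: boolean;
  call?: ToolCall;
  answer?: string;
}

// The model sees the transcript and either requests a tool or finishes.
type Model = (transcript: string[]) => ModelStep;
type Tool = (args: unknown) => string;

function runAgent(
  model: Model,
  tools: Record<string, Tool>,
  task: string,
  maxTurns = 8
): string {
  const transcript = [`task: ${task}`];
  for (let turn = 0; turn < maxTurns; turn++) {
    const step = model(transcript);          // inspect + choose
    if (step.done) return step.answer ?? ""; // model decides it is finished
    if (!step.call || !(step.call.tool in tools)) {
      transcript.push("error: unknown tool"); // discipline failure feeds back in
      continue;
    }
    const result = tools[step.call.tool](step.call.args); // wait for result
    transcript.push(`observation(${step.call.tool}): ${result}`); // revise next turn
  }
  return "gave up: turn budget exhausted";
}
```

Even in this toy version, the failure modes from the list above show up: a model that hallucinates tool names burns turns on `error: unknown tool`, and one that never says it's done exhausts the budget.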
Tool discipline matters more than raw intelligence
A model that uses tools conservatively and accurately is often more useful than a "creative" model that keeps making assumptions.
What's next
The more interesting work is: improving routing between providers, making multi-agent workflows more dependable, sharpening tool safety, expanding the skill system, and making the runtime easier for others to extend.
If AI agents are going to become real software components, they need better ways to express intent, uncertainty, verification, and handoff.
That's what cloclo is really exploring.
npx cloclo
If you're building in this space too, I'd genuinely love to hear what breaks first for you: provider abstraction, tool calling, or agent reliability.
My bet: all three, just in a different order.