Everyone is building **multi-agent systems**. Nobody agrees on how to describe them.
SQL worked because it abstracted the data, not the engine. SLANG abstracts the intent, not the agent. That's the right layer. The frameworks you're replacing all optimize for the developer writing the code. You're optimizing for the person who has to read it six months later, including the LLM.
Thanks so much for this analysis! I'm looking for people to start experimenting with SLANG. If you'd like to try it and need support, you can find everything on the GitHub repo!
Alright, I'll take a look.
I've been running a CrewAI setup with a RAG layer where Bright Data handles live web scraping, so the coordination problem is familiar territory. Going to give SLANG a shot.
For what it's worth, in my setup the coordination overhead was manageable once I got the crew structure right. The bigger blocker ended up being the context layer, i.e. what each agent actually knows, when it knows it, and how that degrades as you chain more steps. That might just be a function of my use case, though.
Thanks for giving SLANG a shot!
Your point about the "context layer" and degradation across chained steps is spot on. That is arguably the hardest part of building reliable multi-agent systems right now. In frameworks that rely on a shared global state or automatic memory pooling, the context window bloat and the "telephone game" degradation happen incredibly fast.
SLANG tries to mitigate this by making data passing strictly explicit rather than implicit.
Because agents only receive exactly what is passed to them via a `stake`, and because you can enforce strict `output: { ... }` schemas at every step, you are naturally pushed to distill the context. Agent B doesn't inherit Agent A's entire messy scratchpad or full interaction history; it only receives the clean, structured payload Agent A was explicitly instructed to output. It forces a separation of concerns at the context level.
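The idea is easiest to see in a toy sketch. This is illustrative Python, not SLANG's actual API; the `validate` helper and the payload contents are hypothetical stand-ins for the output-schema enforcement described above:

```python
# Conceptual sketch: Agent B receives only Agent A's validated payload,
# never A's full scratchpad or interaction history.

def validate(payload: dict, schema: dict) -> dict:
    """Keep only the keys declared in the schema; fail on missing ones."""
    missing = [k for k in schema if k not in payload]
    if missing:
        raise ValueError(f"payload missing required keys: {missing}")
    return {k: payload[k] for k in schema}

# Agent A produces a messy result plus internal reasoning...
agent_a_result = {
    "summary": "Q3 revenue grew 12%",
    "sources": ["report.pdf"],
    "scratchpad": "step 1: read doc... step 2: ...",  # internal noise
}

# ...but its declared output schema is the contract between nodes.
OUTPUT_SCHEMA = {"summary": str, "sources": list}

clean_payload = validate(agent_a_result, OUTPUT_SCHEMA)
# Agent B only ever sees the clean, structured payload.
```

The point is the drop in context size: the scratchpad never crosses the boundary, so degradation can't compound as you chain steps.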
I'd really love to hear how this explicit routing feels compared to your CrewAI setup, especially once you hook up your Bright Data/RAG pipeline. Let me know if you run into any friction!
This resonates a lot.
The biggest pain I’ve seen isn’t building agents — it’s maintaining and sharing the workflow logic. Every framework ends up becoming its own “language”, but none of them are actually portable.
The SQL analogy is spot on. We don’t need more power — we need a constrained, readable layer that separates intent from execution.
Curious: how do you see this evolving alongside emerging standards like MCP or Agent Protocol? Do you think we’ll converge to one DSL, or multiple “interoperable” ones?
Hi! You can have a SLANG workflow where an agent uses an MCP server in its `tools: [...]` array to fetch data, and the SLANG runtime handles routing that data to the next agent in the pipeline. My bet with SLANG is that a highly constrained, declarative, Actor-Model-style language is the right abstraction for 80% of business use cases.
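As a rough mental model of that routing, here is a plain-Python sketch. The function names (`fetch_via_mcp`, the two agents) are hypothetical placeholders, not SLANG or MCP APIs:

```python
# Conceptual sketch of tool use plus runtime routing between agents.

def fetch_via_mcp(query: str) -> dict:
    # Stands in for a tool call to an MCP server declared in tools: [...]
    return {"query": query, "rows": [{"title": "result 1"}]}

def research_agent(task: str) -> dict:
    data = fetch_via_mcp(task)           # agent uses its MCP tool
    return {"findings": data["rows"]}    # structured output, per schema

def summarizer_agent(payload: dict) -> str:
    return f"{len(payload['findings'])} finding(s) collected"

# The runtime's job: route one agent's structured output to the next.
result = summarizer_agent(research_agent("latest pricing data"))
```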
The SQL analogy is compelling, but SQL succeeded partly because relational algebra gave it a formal foundation. What’s the equivalent formal model for agent coordination — does SLANG have one, or is it more convention-based?
Instead of relational algebra, SLANG’s formal foundation is rooted strictly in the Actor Model of concurrent computation combined with Finite State Machines:
- **Message passing:** There is no shared memory or global state. Agents only communicate through explicit message passing (`stake`).
- **Mailboxes:** The `await` primitive acts as the actor's mailbox, blocking execution until the required inputs (messages) arrive.
- **State transitions:** The workflow is essentially a directed graph (with cyclic capabilities). The strict `output: { ... }` schemas act as the typed contracts between the nodes in that graph.
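Those ingredients map onto a few lines of plain Python. This is a toy sketch of the semantics, not the SLANG runtime: each actor owns a mailbox, `await` corresponds to a blocking get, and `stake` to a put:

```python
from queue import Queue

# Toy actor-model sketch: no shared state, only message passing.
class Actor:
    def __init__(self):
        self.mailbox = Queue()         # the actor's only input channel

    def stake(self, message: dict):
        self.mailbox.put(message)      # explicit message passing

    def await_input(self) -> dict:
        return self.mailbox.get()      # blocks until a message arrives

planner, writer = Actor(), Actor()

# The planner sends a typed payload to the writer's mailbox...
writer.stake({"outline": ["intro", "body", "conclusion"]})

# ...and the writer's execution is driven purely by what it receives.
msg = writer.await_input()
sections = len(msg["outline"])
```

Because the mailbox is the only channel, there is no global state for two agents to clobber, which is exactly the property the orchestration layer needs.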
So, while we don't have an "algebra for intelligence," SLANG enforces a strict, formal grammar (we have the EBNF spec in the repo) for how those probabilistic nodes interact. It prevents the system from entering undefined orchestration states, even if the node's output itself is generated by an LLM.
It's a formalization of the routing, not the reasoning.
Do you think the Actor Model provides a strong enough foundation for this kind of orchestration?
Love the 'zero-ghosting' philosophy. You've built more than an HR system—you've encoded empathy into workflow design. The MCP integration strategy (Slack for humans, Figma for design, AI for summaries) is elegant. One-command demo setup? Judges will love that. This is how HR tech should feel: human-first, not compliance-first.
I will steal this and add a layer to github.com/sebs/rewelo
What about self assembly?
Please steal it! That is exactly why the spec is open source. I’ll definitely check out rewelo.
Regarding self-assembly, this touches on a core design philosophy of SLANG: predictability over dynamic topology.
Right now, a .slang file defines a static, deterministic graph before execution starts. There is no native spawn() command for an agent to create another random agent mid-flight. This is intentional. Allowing unbounded self-assembly inside the runtime is the fastest way to get infinite loops and wake up to a massive API bill.
However, the "meta" self-assembly is completely possible.
Because SLANG is just plain text with a microscopic grammar, the easiest way to achieve self-assembly is to push it one layer up. You can simply have an "Architect" agent (or an initial prompt) whose exact task is to output a valid .slang string tailored to the user's specific problem.
1. The user requests a complex task.
2. The Architect LLM generates a custom .slang workflow to solve it.
3. Your system intercepts that string, passes it to the interpreter, and executes the bespoke flow.
It keeps the execution layer safe and predictable, while still allowing the system to dynamically assemble workflows on the fly.
Let me know how the integration with rewelo goes; I'd love to see what you build with it!