Most languages that grow a concurrency story eventually grow a second vocabulary: async, await, tasks, futures, channels, executors.
That stack can be powerful, but it also leaks into everything. It changes APIs, changes mental models, and tends to spread once it arrives.
I wanted to try something narrower.
What if the language already had the shape I needed?
The idea
Aver already has tuples.
A tuple is a product of values:
(a, b)
So I added one operator:
(a, b)!
That means:
"These computations are independent. The runtime may evaluate them sequentially or in parallel."
And then one more:
(a, b)?!
That means:
"These computations are independent Result computations. If they all succeed, unwrap them. If one fails, propagate an error."
That is the whole user-facing feature.
No futures.
No task handles.
No scheduler API.
No user-visible thread model.
Just products and independence.
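For intuition, here is a minimal Rust sketch of the contract `(a, b)?!` makes, assuming two branches that each return a Result. This is an illustration, not Aver's actual lowering: the runtime may run both branches in parallel, and the pair is unwrapped only if both succeed.

```rust
use std::thread;

// Hypothetical lowering of `(a, b)?!` for two independent Result
// computations: evaluate both branches (here, one on a scoped helper
// thread), then unwrap the pair, or propagate the leftmost error.
fn independent_pair(
    a: impl FnOnce() -> Result<i64, String> + Send,
    b: impl FnOnce() -> Result<i64, String> + Send,
) -> Result<(i64, i64), String> {
    let (ra, rb) = thread::scope(|s| {
        let handle = s.spawn(a); // branch 0 on a helper thread
        let rb = b();            // branch 1 on the current thread
        (handle.join().expect("branch panicked"), rb)
    });
    // `?!` semantics: both Ok => unwrapped tuple, otherwise an error.
    Ok((ra?, rb?))
}
```

The result is deterministic regardless of which branch finishes first, because the tuple positions, not the completion order, decide where each value lands.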
Why this fits Aver
Aver is intentionally explicit and structurally simple.
It already leans on:
- immutable data
- pattern matching
- recursion instead of loops
- explicit effects
So ! and ?! fit the language surprisingly well.
For pure code, independence is almost boring:
(fib(30), fib(31))!
For effectful code, independence is a declaration by the author:
(fetchProfile(userId), loadSettings(userId))?!
That means the semantics stay small:
- the language says the branches are independent
- the runtime chooses an execution strategy
- the result stays deterministic
The nice part: recursion composes naturally
Once you have independent products, recursive fan-out becomes natural.
fn fetchStep(url: String, rest: List<String>) -> Result<List<String>, String>
  ! [Http.get]
  data = (fetchOne(url), fetchAll(rest))?!
  match data
    (body, others) -> Result.Ok(List.prepend(body, others))
That gives you recursive fork/join without inventing another abstraction.
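The same fork/join shape can be sketched in Rust with scoped threads. Here fetch_one is a hypothetical stand-in for a real HTTP call, and each recursive step forks "this url" against "the rest", mirroring the Aver code above:

```rust
use std::thread;

// Stand-in for an HTTP fetch; returns a fake body for the url.
fn fetch_one(url: &str) -> Result<String, String> {
    Ok(format!("body:{url}"))
}

// Recursive fork/join: fetch the head on a helper thread while the
// current thread recurses into the tail, then prepend the head's body.
fn fetch_all(urls: &[String]) -> Result<Vec<String>, String> {
    match urls {
        [] => Ok(Vec::new()),
        [url, rest @ ..] => {
            let (body, others) = thread::scope(|s| {
                let h = s.spawn(|| fetch_one(url));
                let others = fetch_all(rest);
                (h.join().expect("branch panicked"), others)
            });
            let mut out = vec![body?];
            out.extend(others?);
            Ok(out)
        }
    }
}
```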
And for bounded concurrency, I added List.take and List.drop, so windowing stays simple:
fn processInWindows(urls: List<String>, windowSize: Int) -> Result<Unit, String>
  ! [Http.get, Console.print]
  match urls
    [] -> Result.Ok(Unit)
    _ ->
      bodies = fetchAllParallel(List.take(urls, windowSize))?!
      processAll(bodies)?
      processInWindows(List.drop(urls, windowSize), windowSize)
That is still "just products plus recursion".
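A rough Rust equivalent of the windowing pattern, with a hypothetical fetch_window stand-in for the parallel fetch, and assuming a window size of at least 1. List.take and List.drop become slice indexing:

```rust
use std::thread;

// Fetch one window in parallel: spawn all branches first, then join.
fn fetch_window(urls: &[String]) -> Result<Vec<String>, String> {
    thread::scope(|s| {
        let handles: Vec<_> = urls
            .iter()
            .map(|u| s.spawn(move || Ok::<_, String>(format!("body:{u}"))))
            .collect();
        handles
            .into_iter()
            .map(|h| h.join().expect("branch panicked"))
            .collect()
    })
}

// Bounded concurrency: fully handle one window before starting the next.
// Assumes window_size >= 1, otherwise the recursion would not shrink.
fn process_in_windows(urls: &[String], window_size: usize) -> Result<Vec<String>, String> {
    if urls.is_empty() {
        return Ok(Vec::new());
    }
    let n = window_size.min(urls.len());
    let mut bodies = fetch_window(&urls[..n])?; // List.take
    bodies.extend(process_in_windows(&urls[n..], window_size)?); // List.drop
    Ok(bodies)
}
```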
The same idea gives you pipeline parallelism:
fn pipelineContinue(ready: String, remaining: List<String>) -> Result<Unit, String>
  ? "Processes one ready item while fetching the next."
  ! [Http.get, Console.print]
  match remaining
    [] -> process(ready)
    [url, ..rest] ->
      data = (process(ready), fetchOne(url))?!
      match data
        (_, nextBody) -> pipelineContinue(nextBody, rest)
Item N is being processed while item N+1 is fetched. One independent product inside a recursive step, and you get production and consumption overlapping.
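Sketched in Rust, the pipeline step looks like this; process and fetch_one are hypothetical stand-ins, and results are collected into a vector instead of printed:

```rust
use std::thread;

// Stand-in "consumer": pretend processing is measuring the body length.
fn process(item: &str) -> Result<usize, String> {
    Ok(item.len())
}

// Stand-in "producer": pretend fetching returns a fake body.
fn fetch_one(url: &str) -> Result<String, String> {
    Ok(format!("body:{url}"))
}

// One independent pair per recursive step: process the ready item on a
// helper thread while fetching the next item on the current thread.
fn pipeline(ready: String, remaining: &[String], out: &mut Vec<usize>) -> Result<(), String> {
    match remaining {
        [] => {
            out.push(process(&ready)?);
            Ok(())
        }
        [url, rest @ ..] => {
            let (done, next) = thread::scope(|s| {
                let h = s.spawn(|| process(&ready));
                let next = fetch_one(url);
                (h.join().expect("branch panicked"), next)
            });
            out.push(done?);
            pipeline(next?, rest, out)
        }
    }
}
```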
The hard part was not syntax
The syntax was tiny.
The hard part was defining a narrow semantic envelope and then making every backend actually respect it.
The first boundary was soundness. For pure terms, ! is sound by construction: tuple elements have no data dependency on each other, and Aver does not give them mutation or shared state to fight over. For effectful terms, ! is not proven. It is an unchecked contract from the author: "all schedules the runtime is allowed to choose are acceptable here." That turns out to be enough for the cases I wanted, without pretending I solved effect commutativity in general.
The second boundary was replay. If branches may reorder, replay cannot match by position. A branch that was "second" in one run might be "first" in another. So replay records grouped effects under a key shaped like:
(group_id, branch_path, effect_occurrence, effect_type, effect_args)
branch_path is the branch identity inside nested products. "0.1" means: outer branch 0, then inner branch 1. effect_occurrence disambiguates repeated effects inside the same branch. Without it, "the second Http.get in branch 0.1" collapses into "some Http.get in branch 0.1", which is not enough to replay nested or recursive programs deterministically.
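As a sketch, the key can be modeled as a plain struct; the field types and names here are assumptions, but they show why the lookup is order-independent: identity comes from branch path and occurrence count, never from which branch happened to finish first.

```rust
use std::collections::HashMap;

// Hypothetical replay key, shaped like the tuple described above.
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct ReplayKey {
    group_id: u64,
    branch_path: String,    // "0.1" = outer branch 0, then inner branch 1
    effect_occurrence: u32, // second Http.get in this branch => 1
    effect_type: String,
    effect_args: String,
}

// Replay looks up a recorded effect by key, not by position in the log,
// so reordered schedules still find the same recorded result.
fn replay_lookup(log: &HashMap<ReplayKey, String>, key: &ReplayKey) -> Option<String> {
    log.get(key).cloned()
}
```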
The third boundary was cancellation priority. cancel is an execution artifact, not the primary failure. So when ?! unwraps results, a real Result.Err always beats a sibling cancellation error. That sounds like a small detail, but it is exactly the kind of rule that decides whether a feature feels principled or improvised.
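A minimal sketch of that rule, with a hypothetical Failure type standing in for the runtime's error representation: a real error from any branch wins over a sibling's cancellation artifact, and among real errors the leftmost wins.

```rust
// Hypothetical failure type: Cancelled is an execution artifact,
// Real carries the program's own Result.Err payload.
#[derive(Clone, PartialEq, Debug)]
enum Failure {
    Cancelled,
    Real(String),
}

// Combine branch outcomes under the priority rule: the leftmost real
// error wins; Cancelled is reported only when no real error exists.
fn combine_failures(branches: &[Result<i64, Failure>]) -> Result<Vec<i64>, Failure> {
    let mut saw_cancel = false;
    let mut values = Vec::new();
    for branch in branches {
        match branch {
            Ok(v) => values.push(*v),
            Err(Failure::Real(msg)) => return Err(Failure::Real(msg.clone())),
            Err(Failure::Cancelled) => saw_cancel = true,
        }
    }
    if saw_cancel {
        Err(Failure::Cancelled)
    } else {
        Ok(values)
    }
}
```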
I also had to be honest about cancellation
cancel sounds great until you ask what it actually means.
In Rust, there is no safe "kill this thread now" button. And honestly, that is good.
So Aver's cancel mode is cooperative.
In aver.toml:
[independence]
mode = "complete"
# mode = "cancel"
- complete: all branches run to completion, then the leftmost real error wins
- cancel: once one branch fails, siblings are signaled to stop at checkpoints
That is an important distinction. Cancel mode reduces wasted work; it does not promise to roll back the universe.
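Cooperative cancel can be sketched as a shared flag that branches poll at checkpoints; this is an illustration of the idea, not Aver's runtime. Work already in flight finishes its current step, and the branch simply stops starting new ones.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// A branch doing `steps` units of work, polling a shared cancel flag
// between units. The flag is set when a sibling branch fails.
fn run_branch(cancel: &AtomicBool, steps: u32) -> Result<u32, String> {
    let mut done = 0;
    for _ in 0..steps {
        if cancel.load(Ordering::Relaxed) {
            // Checkpoint reached after cancellation: stop, do not roll back.
            return Err("cancelled at checkpoint".to_string());
        }
        done += 1; // one unit of work between checkpoints
    }
    Ok(done)
}
```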
Backend reality
One of the most satisfying parts of this work was getting beyond "the docs say this should work" and forcing the backends to prove it.
Today the picture is:
- interpreter: sequential, but semantically valid
- VM: parallel independent products with cooperative cancel
- compiled Rust: parallel independent products with cooperative cancel
I also added an end-to-end compiled regression test that compiles an Aver program to Rust, builds the generated Cargo project, runs it in both modes, and verifies that sibling work is shortened only in cancel. That test found real bugs, which is exactly why I wanted it.
Safety checks matter more than syntax sugar
The language lets authors declare effectful independence.
That is powerful, but it is also easy to misuse.
So aver check now emits independence-hazard warnings for likely bad branch pairs.
Today it warns for things like:
- Console.* with Console.*
- Console.* with Terminal.*
- Tcp.* with Tcp.*
- HttpServer.* with HttpServer.*
- mutating Disk.*
- mutating Http.*
- mutating Env.*
And it shows up as a reviewable warning:
warning[independence-hazard]: Independent product branches 1 and 2 use potentially conflicting effects [Console.print, Terminal.flush] (shared terminal/output hazard)
"You can do this, but do not lie to yourself about the risks."
That turned out to be exactly the right tone for the feature.
What I like about this design
The best part is not that Aver now has parallelism.
The best part is that it got a concurrency story without becoming "an async language".
The feature stayed:
- small in syntax
- explicit in semantics
- compositional with recursion
- honest about effects
And that feels rare.
Too many language features start small and then explode into a second language inside the language.
Independent products still feel like they belong to the same design.
What is still intentionally missing
I did not add:
- channels
- streams
- backpressure
- task handles
- futures as a surface concept
- a general-purpose scheduler model
That is deliberate.
If this feature ever becomes a hidden path toward rebuilding worse async/await, I will have missed the point.
Its strength is that it is narrow.
The bigger takeaway
This project reminded me of something I keep relearning:
good concurrency features are often not about exposing more machinery.
They are about exposing the right semantic claim.
In Aver, that claim is:
"These computations are independent."
Once that is explicit, a lot of useful behavior can emerge from one small operator.
And if your runtime, replay model, diagnostics, and tests are strong enough, that one operator can carry a lot more weight than it first appears to.
If you want to look deeper
The core docs live here:
If I write a follow-up, it will probably be about the unglamorous part that mattered most:
how replay semantics, cancellation, and backend parity ended up being more important than the surface syntax.