<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Szymon Teżewski</title>
    <description>The latest articles on DEV Community by Szymon Teżewski (@jasisz).</description>
    <link>https://dev.to/jasisz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3819804%2F365bdb6c-33f2-4cc9-9740-965b6d840eef.jpeg</url>
      <title>DEV Community: Szymon Teżewski</title>
      <link>https://dev.to/jasisz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jasisz"/>
    <language>en</language>
    <item>
      <title>I Added Parallelism to a Language Without Adding async/await</title>
      <dc:creator>Szymon Teżewski</dc:creator>
      <pubDate>Thu, 02 Apr 2026 21:34:06 +0000</pubDate>
      <link>https://dev.to/jasisz/i-added-parallelism-to-a-language-without-adding-asyncawait-48dp</link>
      <guid>https://dev.to/jasisz/i-added-parallelism-to-a-language-without-adding-asyncawait-48dp</guid>
      <description>&lt;p&gt;Most languages that grow a concurrency story eventually grow a second vocabulary: &lt;code&gt;async&lt;/code&gt;, &lt;code&gt;await&lt;/code&gt;, tasks, futures, channels, executors.&lt;/p&gt;

&lt;p&gt;That stack can be powerful, but it also leaks into everything. It changes APIs, changes mental models, and tends to spread once it arrives.&lt;/p&gt;

&lt;p&gt;I wanted to try something narrower.&lt;/p&gt;

&lt;p&gt;What if the language already had the shape I needed?&lt;/p&gt;

&lt;h2&gt;The idea&lt;/h2&gt;

&lt;p&gt;Aver already has tuples.&lt;/p&gt;

&lt;p&gt;A tuple is a product of values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(a, b)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So I added one operator:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(a, b)!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That means:&lt;/p&gt;

&lt;p&gt;"These computations are independent. The runtime may evaluate them sequentially or in parallel."&lt;/p&gt;

&lt;p&gt;And then one more:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(a, b)?!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That means:&lt;/p&gt;

&lt;p&gt;"These computations are independent &lt;code&gt;Result&lt;/code&gt; computations. If they all succeed, unwrap them. If one fails, propagate an error."&lt;/p&gt;

&lt;p&gt;That is the whole user-facing feature.&lt;/p&gt;

&lt;p&gt;No futures.&lt;br&gt;
No task handles.&lt;br&gt;
No scheduler API.&lt;br&gt;
No user-visible thread model.&lt;/p&gt;

&lt;p&gt;Just products and independence.&lt;/p&gt;
&lt;h2&gt;Why this fits Aver&lt;/h2&gt;

&lt;p&gt;Aver is intentionally explicit and structurally simple.&lt;/p&gt;

&lt;p&gt;It already leans on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;immutable data&lt;/li&gt;
&lt;li&gt;pattern matching&lt;/li&gt;
&lt;li&gt;recursion instead of loops&lt;/li&gt;
&lt;li&gt;explicit effects&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So &lt;code&gt;!&lt;/code&gt; and &lt;code&gt;?!&lt;/code&gt; fit the language surprisingly well.&lt;/p&gt;

&lt;p&gt;For pure code, independence is almost boring:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(fib(30), fib(31))!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For effectful code, independence is a declaration by the author:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(fetchProfile(userId), loadSettings(userId))?!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That means the semantics stay small:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the language says the branches are independent&lt;/li&gt;
&lt;li&gt;the runtime chooses an execution strategy&lt;/li&gt;
&lt;li&gt;the result stays deterministic&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The nice part: recursion composes naturally&lt;/h2&gt;

&lt;p&gt;Once you have independent products, recursive fan-out becomes natural.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fn fetchStep(url: String, rest: List&amp;lt;String&amp;gt;) -&amp;gt; Result&amp;lt;List&amp;lt;String&amp;gt;, String&amp;gt;
    ! [Http.get]
    data = (fetchOne(url), fetchAll(rest))?!
    match data
        (body, others) -&amp;gt; Result.Ok(List.prepend(body, others))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That gives you recursive fork/join without inventing another abstraction.&lt;/p&gt;
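&lt;p&gt;For completeness, a &lt;code&gt;fetchAll&lt;/code&gt; that closes the recursion could be a two-case match. This is a sketch in the style of the snippets here, not code from the Aver repo; using &lt;code&gt;Result.Ok([])&lt;/code&gt; as the empty base case is an assumption:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fn fetchAll(urls: List&amp;lt;String&amp;gt;) -&amp;gt; Result&amp;lt;List&amp;lt;String&amp;gt;, String&amp;gt;
    ! [Http.get]
    match urls
        [] -&amp;gt; Result.Ok([])
        [url, ..rest] -&amp;gt; fetchStep(url, rest)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Each &lt;code&gt;fetchStep&lt;/code&gt; forks one fetch against the rest of the list, so the whole list fans out without any task-handle bookkeeping.&lt;/p&gt;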

&lt;p&gt;And for bounded concurrency, I added &lt;code&gt;List.take&lt;/code&gt; and &lt;code&gt;List.drop&lt;/code&gt;, so windowing stays simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fn processInWindows(urls: List&amp;lt;String&amp;gt;, windowSize: Int) -&amp;gt; Result&amp;lt;Unit, String&amp;gt;
    ! [Http.get, Console.print]
    match urls
        [] -&amp;gt; Result.Ok(Unit)
        _ -&amp;gt;
            bodies = fetchAllParallel(List.take(urls, windowSize))?!
            processAll(bodies)?
            processInWindows(List.drop(urls, windowSize), windowSize)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is still "just products plus recursion".&lt;/p&gt;

&lt;p&gt;The same idea gives you pipeline parallelism:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fn pipelineContinue(ready: String, remaining: List&amp;lt;String&amp;gt;) -&amp;gt; Result&amp;lt;Unit, String&amp;gt;
    ? "Processes one ready item while fetching the next."
    ! [Http.get, Console.print]
    match remaining
        [] -&amp;gt; process(ready)
        [url, ..rest] -&amp;gt;
            data = (process(ready), fetchOne(url))?!
            match data
                (_, nextBody) -&amp;gt; pipelineContinue(nextBody, rest)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Item N is being processed while item N+1 is fetched. One independent product inside a recursive step, and you get production and consumption overlapping.&lt;/p&gt;

&lt;p&gt;&lt;iframe height="600" src="https://codepen.io/jasisz/embed/XJjYrQz?height=600&amp;amp;default-tab=result&amp;amp;embed-version=2"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;The hard part was not syntax&lt;/h2&gt;

&lt;p&gt;The syntax was tiny.&lt;/p&gt;

&lt;p&gt;The hard part was defining a narrow semantic envelope and then making every backend actually respect it.&lt;/p&gt;

&lt;p&gt;The first boundary was soundness. For pure terms, &lt;code&gt;!&lt;/code&gt; is sound by construction: tuple elements have no data dependency on each other, and Aver does not give them mutation or shared state to fight over. For effectful terms, &lt;code&gt;!&lt;/code&gt; is not proven. It is an unchecked contract from the author: "all schedules the runtime is allowed to choose are acceptable here." That turns out to be enough for the cases I wanted, without pretending I solved effect commutativity in general.&lt;/p&gt;

&lt;p&gt;The second boundary was replay. If branches may reorder, replay cannot match by position. A branch that was "second" in one run might be "first" in another. So replay records grouped effects under a key shaped like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(group_id, branch_path, effect_occurrence, effect_type, effect_args)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;branch_path&lt;/code&gt; is the branch identity inside nested products. &lt;code&gt;"0.1"&lt;/code&gt; means: outer branch 0, then inner branch 1. &lt;code&gt;effect_occurrence&lt;/code&gt; disambiguates repeated effects inside the same branch. Without it, "the second &lt;code&gt;Http.get&lt;/code&gt; in branch 0.1" collapses into "some &lt;code&gt;Http.get&lt;/code&gt; in branch 0.1", which is not enough to replay nested or recursive programs deterministically.&lt;/p&gt;
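&lt;p&gt;As an illustration only (the field values below are invented, and zero-based occurrence counting is an assumption), the second &lt;code&gt;Http.get&lt;/code&gt; issued by inner branch 1 of outer branch 0 would be keyed roughly as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(group_id = 7, branch_path = "0.1", effect_occurrence = 1, effect_type = Http.get, effect_args = ["https://example.com/b"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;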

&lt;p&gt;The third boundary was cancellation priority. &lt;code&gt;cancel&lt;/code&gt; is an execution artifact, not the primary failure. So when &lt;code&gt;?!&lt;/code&gt; unwraps results, a real &lt;code&gt;Result.Err&lt;/code&gt; always beats a sibling cancellation error. That sounds like a small detail, but it is exactly the kind of rule that decides whether a feature feels principled or improvised.&lt;/p&gt;

&lt;h2&gt;I also had to be honest about cancellation&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;cancel&lt;/code&gt; sounds great until you ask what it actually means.&lt;/p&gt;

&lt;p&gt;In Rust, there is no safe "kill this thread now" button. And honestly, that is good.&lt;/p&gt;

&lt;p&gt;So Aver's &lt;code&gt;cancel&lt;/code&gt; mode is cooperative.&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;aver.toml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[independence]&lt;/span&gt;
&lt;span class="py"&gt;mode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"complete"&lt;/span&gt;
&lt;span class="c"&gt;# mode = "cancel"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;complete&lt;/code&gt;: all branches run to completion, then the leftmost real error wins&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cancel&lt;/code&gt;: once one branch fails, siblings are signaled to stop at checkpoints&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is an important distinction: cancellation reduces wasted work; it does not promise to roll back the universe.&lt;/p&gt;

&lt;h2&gt;Backend reality&lt;/h2&gt;

&lt;p&gt;One of the most satisfying parts of this work was getting beyond "the docs say this should work" and forcing the backends to prove it.&lt;/p&gt;

&lt;p&gt;Today the picture is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;interpreter: sequential, but semantically valid&lt;/li&gt;
&lt;li&gt;VM: parallel independent products with cooperative cancel&lt;/li&gt;
&lt;li&gt;compiled Rust: parallel independent products with cooperative cancel&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I also added an end-to-end compiled regression test that compiles an Aver program to Rust, builds the generated Cargo project, runs it in both modes, and verifies that sibling work is shortened only in &lt;code&gt;cancel&lt;/code&gt;. That test found real bugs, which is exactly why I wanted it.&lt;/p&gt;

&lt;h2&gt;Safety checks matter more than syntax sugar&lt;/h2&gt;

&lt;p&gt;The language lets authors declare effectful independence.&lt;/p&gt;

&lt;p&gt;That is powerful, but it is also easy to misuse.&lt;/p&gt;

&lt;p&gt;So &lt;code&gt;aver check&lt;/code&gt; now emits &lt;code&gt;independence-hazard&lt;/code&gt; warnings for likely bad branch pairs.&lt;/p&gt;

&lt;p&gt;Today it warns for things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Console.*&lt;/code&gt; with &lt;code&gt;Console.*&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Console.*&lt;/code&gt; with &lt;code&gt;Terminal.*&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Tcp.*&lt;/code&gt; with &lt;code&gt;Tcp.*&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;HttpServer.*&lt;/code&gt; with &lt;code&gt;HttpServer.*&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;mutating &lt;code&gt;Disk.*&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;mutating &lt;code&gt;Http.*&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;mutating &lt;code&gt;Env.*&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And it shows up as a reviewable warning:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;warning[independence-hazard]: Independent product branches 1 and 2 use potentially conflicting effects [Console.print, Terminal.flush] (shared terminal/output hazard)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;"You can do this, but do not lie to yourself about the risks."&lt;/p&gt;

&lt;p&gt;That turned out to be exactly the right tone for the feature.&lt;/p&gt;

&lt;h2&gt;What I like about this design&lt;/h2&gt;

&lt;p&gt;The best part is not that Aver now has parallelism.&lt;/p&gt;

&lt;p&gt;The best part is that it got a concurrency story without becoming "an async language".&lt;/p&gt;

&lt;p&gt;The feature stayed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;small in syntax&lt;/li&gt;
&lt;li&gt;explicit in semantics&lt;/li&gt;
&lt;li&gt;compositional with recursion&lt;/li&gt;
&lt;li&gt;honest about effects&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And that feels rare.&lt;/p&gt;

&lt;p&gt;Too many language features start small and then explode into a second language inside the language.&lt;/p&gt;

&lt;p&gt;Independent products still feel like they belong to the same design.&lt;/p&gt;

&lt;h2&gt;What is still intentionally missing&lt;/h2&gt;

&lt;p&gt;I did not add:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;channels&lt;/li&gt;
&lt;li&gt;streams&lt;/li&gt;
&lt;li&gt;backpressure&lt;/li&gt;
&lt;li&gt;task handles&lt;/li&gt;
&lt;li&gt;futures as a surface concept&lt;/li&gt;
&lt;li&gt;a general-purpose scheduler model&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is deliberate.&lt;/p&gt;

&lt;p&gt;If this feature ever becomes a hidden path toward rebuilding worse &lt;code&gt;async/await&lt;/code&gt;, I will have missed the point.&lt;/p&gt;

&lt;p&gt;Its strength is that it is narrow.&lt;/p&gt;

&lt;h2&gt;The bigger takeaway&lt;/h2&gt;

&lt;p&gt;This project reminded me of something I keep relearning:&lt;/p&gt;

&lt;p&gt;good concurrency features are often not about exposing more machinery.&lt;/p&gt;

&lt;p&gt;They are about exposing the right semantic claim.&lt;/p&gt;

&lt;p&gt;In Aver, that claim is:&lt;/p&gt;

&lt;p&gt;"These computations are independent."&lt;/p&gt;

&lt;p&gt;Once that is explicit, a lot of useful behavior can emerge from one small operator.&lt;/p&gt;

&lt;p&gt;And if your runtime, replay model, diagnostics, and tests are strong enough, that one operator can carry a lot more weight than it first appears to.&lt;/p&gt;

&lt;h2&gt;If you want to look deeper&lt;/h2&gt;

&lt;p&gt;The core docs live here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/jasisz/aver/blob/main/docs/independence.md" rel="noopener noreferrer"&gt;Independent products&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/jasisz/aver/blob/main/docs/language.md" rel="noopener noreferrer"&gt;Language guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/jasisz/aver/blob/main/docs/services.md" rel="noopener noreferrer"&gt;Services and stdlib&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If I write a follow-up, it will probably be about the unglamorous part that mattered most:&lt;/p&gt;

&lt;p&gt;how replay semantics, cancellation, and backend parity ended up being more important than the surface syntax.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>languages</category>
      <category>aver</category>
      <category>concurrency</category>
    </item>
    <item>
      <title>Aver: Your codebase has no API for AI</title>
      <dc:creator>Szymon Teżewski</dc:creator>
      <pubDate>Tue, 31 Mar 2026 10:21:11 +0000</pubDate>
      <link>https://dev.to/jasisz/aver-your-codebase-has-no-api-for-ai-d2n</link>
      <guid>https://dev.to/jasisz/aver-your-codebase-has-no-api-for-ai-d2n</guid>
      <description>&lt;p&gt;A few projects have started calling themselves "AI-native" or "AI-first" languages. The pitch is usually the same: fewer tokens, one way to write things, simpler syntax. The metric is input cost — how cheaply can an LLM produce a file.&lt;/p&gt;

&lt;p&gt;The bottleneck is not generation cost. It's comprehension cost: can the next agent that touches this code understand &lt;em&gt;what it does&lt;/em&gt;, &lt;em&gt;what it's allowed to do&lt;/em&gt;, and &lt;em&gt;why it was written this way&lt;/em&gt; — without reading every line? Generation is fast and getting cheaper. Understanding is slow and getting more expensive, because the information an agent needs is almost never in the source text itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/jasisz/aver" rel="noopener noreferrer"&gt;Aver&lt;/a&gt; is an AI-native language built around this problem. Not "fewer tokens in." &lt;strong&gt;More understanding out.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;Every codebase is an archaeology site&lt;/h2&gt;

&lt;p&gt;Give Claude or GPT a Python project and ask it to add a feature. It reads files — lots of files. It guesses which ones matter. It infers intent from variable names and comments that may be outdated. It has no reliable way to know which functions talk to the network, which ones are pure, and which architectural choices were made deliberately vs. inherited from a Stack Overflow answer in 2019.&lt;/p&gt;

&lt;p&gt;The AI is &lt;em&gt;reconstructing&lt;/em&gt; information that the original author had but didn't encode in the artifact. Intent, constraints, design rationale — all of it exists only as implicit patterns in the code, if it exists at all.&lt;/p&gt;

&lt;p&gt;The code has no API for the thing that reads it most often.&lt;/p&gt;

&lt;h2&gt;Token efficiency is real. It's just not the whole problem.&lt;/h2&gt;

&lt;p&gt;There's a popular thesis in the AI-first language space: fewer options, fewer libraries, a shorter spec for the model to memorize. Reduce choice paralysis, reduce token cost, and the language becomes better for AI.&lt;/p&gt;

&lt;p&gt;That's a real optimization target. A language that takes 900 tokens to produce a CRUD endpoint instead of 1,800 is genuinely cheaper to generate. But a short program is not a legible program. You can win on token efficiency and still end up with code where the next agent:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;doesn't know the intent behind a function&lt;/li&gt;
&lt;li&gt;can't tell which calls have side effects&lt;/li&gt;
&lt;li&gt;doesn't know why one approach was chosen over another&lt;/li&gt;
&lt;li&gt;has no expected behavior to compare against&lt;/li&gt;
&lt;li&gt;has to regex-parse error output to figure out what went wrong&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Token efficiency helps code get &lt;em&gt;written&lt;/em&gt;. A semantic surface helps code get &lt;em&gt;understood&lt;/em&gt;, &lt;em&gt;reviewed&lt;/em&gt;, &lt;em&gt;repaired&lt;/em&gt;, and &lt;em&gt;evolved&lt;/em&gt;. The Aver language doesn't compete primarily on fewest tokens to produce a program. It competes on most preserved meaning after the program exists.&lt;/p&gt;

&lt;h2&gt;What an AI-first language exposes to agents&lt;/h2&gt;

&lt;p&gt;The language encodes intent, effects, architectural decisions, and expected behavior as part of its grammar — parsed, type-checked, enforced, and exportable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;fetchUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;HttpResponse&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="s"&gt;"Fetches a user record by ID from the external API."&lt;/span&gt;
    &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Http&lt;/span&gt;&lt;span class="py"&gt;.get&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;Http&lt;/span&gt;&lt;span class="nf"&gt;.get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"https://api.example.com/users/{id}"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;?&lt;/code&gt; line is a description literal — part of the function's signature, parsed by the compiler, exported by tooling. It's not a comment next to code; it's a declaration &lt;em&gt;about&lt;/em&gt; the code that the toolchain knows about.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;! [Http.get]&lt;/code&gt; is a declared effect — enforced statically and at runtime. If the function body calls &lt;code&gt;Disk.writeText&lt;/code&gt; without declaring it, that's a compile error. An agent reading this signature knows the complete set of side effects without reading the body.&lt;/p&gt;
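&lt;p&gt;A hedged sketch of what that enforcement catches (the &lt;code&gt;Disk.writeText&lt;/code&gt; arguments here are guesses, not the real stdlib signature):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fn fetchUser(id: String) -&amp;gt; Result&amp;lt;HttpResponse, String&amp;gt;
    ? "Fetches a user record by ID from the external API."
    ! [Http.get]
    Disk.writeText("cache.json", id)
    Http.get("https://api.example.com/users/{id}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With only &lt;code&gt;[Http.get]&lt;/code&gt; declared, the &lt;code&gt;Disk.writeText&lt;/code&gt; call fails the check; the declared effect list is a contract, not documentation.&lt;/p&gt;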

&lt;p&gt;These ideas aren't novel individually. What matters is that they compose into a single exportable surface — and that there's a concrete command that exports it.&lt;/p&gt;

&lt;h2&gt;&lt;code&gt;aver context&lt;/code&gt;: where an agent enters the codebase&lt;/h2&gt;

&lt;p&gt;This is the central piece.&lt;/p&gt;

&lt;p&gt;When an AI agent starts working on an Aver project, it doesn't read source files. It runs &lt;code&gt;aver context&lt;/code&gt;. This is the intended interface — the front door to the codebase for any agent.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aver context examples/core/calculator.av
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Module: Calculator&lt;/span&gt;
&lt;span class="gt"&gt;
&amp;gt; Safe calculator demonstrating Result types, match expressions,&lt;/span&gt;
&lt;span class="gt"&gt;&amp;gt; and co-located verification. Errors are values, not exceptions.&lt;/span&gt;

&lt;span class="gu"&gt;### `safeDivide(a: Int, b: Int) -&amp;gt; Result&amp;lt;Int, String&amp;gt;`&lt;/span&gt;
&lt;span class="gt"&gt;&amp;gt; Safe integer division. Returns Err when divisor is zero.&lt;/span&gt;
verify: &lt;span class="sb"&gt;`safeDivide(7, 0) =&amp;gt; Result.Err("Division by zero")`&lt;/span&gt;,
        &lt;span class="sb"&gt;`safeDivide(0, 5) =&amp;gt; Result.Ok(0)`&lt;/span&gt;,
        &lt;span class="sb"&gt;`safeDivide(9, 3) =&amp;gt; Result.Ok(3)`&lt;/span&gt;

&lt;span class="gu"&gt;### `safeRoot(n: Int) -&amp;gt; Result&amp;lt;Int, String&amp;gt;`&lt;/span&gt;
&lt;span class="gt"&gt;&amp;gt; Returns Err for negative input, Ok otherwise. Uses match on a bool expression.&lt;/span&gt;
verify: &lt;span class="sb"&gt;`safeRoot(0 - 1) =&amp;gt; Result.Err("Cannot take root of negative number")`&lt;/span&gt;,
        &lt;span class="sb"&gt;`safeRoot(0 - 99) =&amp;gt; Result.Err("Cannot take root of negative number")`&lt;/span&gt;,
        &lt;span class="sb"&gt;`safeRoot(0) =&amp;gt; Result.Ok(0)`&lt;/span&gt;

&lt;span class="gu"&gt;### Decision: NoExceptions (2024-01-15)&lt;/span&gt;
&lt;span class="gs"&gt;**Chosen:**&lt;/span&gt; "Result" — &lt;span class="gs"&gt;**Rejected:**&lt;/span&gt; "Exceptions", "Nullable"
&lt;span class="gt"&gt;&amp;gt; Exceptions make error paths invisible at the call site.&lt;/span&gt;
&lt;span class="gt"&gt;&amp;gt; Result forces the caller to acknowledge failure explicitly,&lt;/span&gt;
&lt;span class="gt"&gt;&amp;gt; which is essential when AI tooling reads cod…&lt;/span&gt;
impacts: &lt;span class="sb"&gt;`safeDivide`&lt;/span&gt;, &lt;span class="sb"&gt;`safeRoot`&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No implementation details. Signatures, descriptions, effects, expected behavior from verify blocks, and the design decisions that constrain the module. In ~2k tokens the agent gets the contracts before the implementation. Compare that with dumping 50k tokens of raw source into a context window.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aver context app.av &lt;span class="nt"&gt;--budget&lt;/span&gt; 10kb
aver context app.av &lt;span class="nt"&gt;--focus&lt;/span&gt; processOrder
aver context app.av &lt;span class="nt"&gt;--decisions-only&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start high, zoom in. The agent reads the contract map first, then drills into the functions that actually need attention.&lt;/p&gt;

&lt;h2&gt;Failures are parseable, not just readable&lt;/h2&gt;

&lt;p&gt;When something breaks, &lt;code&gt;aver check&lt;/code&gt; emits structured diagnostics with repair suggestions and source snippets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;error[type-error]: Function 'wrongReturn': body returns String but declared return type is Int
  at: test_errors.av:30:1
     |
  30 | fn wrongReturn() -&amp;gt; Int
     |                     ^^^  declared Int
  31 |     ? "Returns wrong type (type checker error)."
  32 |     "oops"
     |     ^^^^^^  returns String

warning[perf-string-concat]: string concatenation with `acc` in recursive call
  at: lint_demo.av:20:31
  in-fn: repeat
  repair: O(n²) per iteration; consider collecting into a list and joining
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every diagnostic has a machine-readable slug, a source location down to the column, and a repair suggestion. &lt;code&gt;--json&lt;/code&gt; emits the same diagnostics as NDJSON — same schema across &lt;code&gt;check&lt;/code&gt;, &lt;code&gt;verify&lt;/code&gt;, and &lt;code&gt;replay&lt;/code&gt; — so an agent can categorize errors and apply fixes programmatically.&lt;/p&gt;
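&lt;p&gt;For instance, the type error above might surface through &lt;code&gt;--json&lt;/code&gt; as one NDJSON line shaped roughly like this (the field names are illustrative, not the actual schema):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{"severity": "error", "slug": "type-error", "file": "test_errors.av", "line": 30, "column": 1, "message": "Function 'wrongReturn': body returns String but declared return type is Int"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;One object per line means an agent can stream, filter, and group failures without a bespoke parser.&lt;/p&gt;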

&lt;h2&gt;Decisions survive across sessions&lt;/h2&gt;

&lt;p&gt;The hardest information to preserve across AI sessions is &lt;em&gt;why&lt;/em&gt;. An agent can re-derive what the code does by reading it. It cannot re-derive why it was written that way.&lt;/p&gt;

&lt;p&gt;Aver's &lt;code&gt;decision&lt;/code&gt; blocks encode rationale as first-class syntax:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hocon"&gt;&lt;code&gt;&lt;span class="l"&gt;decision&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;TailRecurrenceForPerformance&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;date&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-02-24"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;reason&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"Naive fib(n-1)+fib(n-2) is exponential and easy for AI to generate."&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"Tail recursion makes fib linear time and predictable."&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;chosen&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TailRecursion"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;rejected&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"NaiveRecursion"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;impacts&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="l"&gt;fib&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;fibTR&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Parsed, validated, exported via &lt;code&gt;aver context --decisions-only&lt;/code&gt;. When an agent touches &lt;code&gt;fib&lt;/code&gt; three months from now, it reads this block and knows the exponential version was considered and rejected — with an explicit reason.&lt;/p&gt;

&lt;p&gt;I haven't seen many languages treat architectural rationale as grammar rather than convention. That doesn't mean it's a solved problem — enforcing decision quality is still on the author. But the toolchain exposes rationale the same way it exposes types and effects.&lt;/p&gt;

&lt;h2&gt;Record/replay closes the loop on effects&lt;/h2&gt;

&lt;p&gt;Pure functions have verify blocks. Effectful code has recordings. &lt;code&gt;aver run --record&lt;/code&gt; captures every effectful interaction with caller, arguments, and outcome. &lt;code&gt;aver replay --test --diff&lt;/code&gt; re-executes against that recording deterministically. If the code drifts, you get a structured diagnostic — which effect changed, in which function, at which step.&lt;/p&gt;

&lt;h2&gt;What this adds up to&lt;/h2&gt;

&lt;p&gt;Aver composes intent, effects, decisions, expected behavior, and structured failures into one exportable surface. &lt;code&gt;aver context&lt;/code&gt; is the entry point. The rest of the toolchain feeds into it. The agent gets contracts before implementation, rationale before refactoring, and parseable failures when things break.&lt;/p&gt;

&lt;p&gt;Aver is early and incomplete. But this semantic surface already works today. The &lt;a href="https://github.com/jasisz/aver" rel="noopener noreferrer"&gt;repo is here&lt;/a&gt;, the &lt;a href="https://jasisz.github.io/aver-language/" rel="noopener noreferrer"&gt;manifesto is here&lt;/a&gt;. &lt;code&gt;cargo install aver-lang&lt;/code&gt;, point &lt;code&gt;aver context&lt;/code&gt; at a module, and read what comes out.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Previous posts: &lt;a href="https://dev.to/jasisz/a-prompt-is-a-request-a-language-is-the-law-aver-and-ai-written-code-532b"&gt;A prompt is a request, a language is the law&lt;/a&gt; | &lt;a href="https://dev.to/jasisz/the-most-boring-games-you-have-aver-seen-18hl"&gt;The most boring games you have Aver seen&lt;/a&gt; | &lt;a href="https://dev.to/jasisz/i-gave-my-language-vm-four-memory-lanes-instead-of-a-normal-heap-1hgh"&gt;I gave my language VM four memory lanes&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>ainative</category>
      <category>aifirst</category>
      <category>aver</category>
    </item>
    <item>
      <title>I Gave My Language VM Four Memory Lanes Instead of a Normal Heap</title>
      <dc:creator>Szymon Teżewski</dc:creator>
      <pubDate>Fri, 20 Mar 2026 14:14:34 +0000</pubDate>
      <link>https://dev.to/jasisz/i-gave-my-language-vm-four-memory-lanes-instead-of-a-normal-heap-1hgh</link>
      <guid>https://dev.to/jasisz/i-gave-my-language-vm-four-memory-lanes-instead-of-a-normal-heap-1hgh</guid>
      <description>&lt;p&gt;Most language runtimes eventually converge on the same story: allocate objects into a heap, add a garbage collector, spend the next few years arguing about generations, barriers, and pause times.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;Aver&lt;/strong&gt;, that story started to feel wrong.&lt;/p&gt;

&lt;p&gt;Aver is a small language designed for AI-assisted development. It is intentionally narrow — immutable data, &lt;code&gt;match&lt;/code&gt; as the only branching construct, recursion and tail calls instead of loops, no closures, explicit effects, and a lot of very small helper functions.&lt;/p&gt;

&lt;p&gt;That shape matters. Once I added a real bytecode VM, it became obvious that a generic "heap + GC everywhere" design would leave a lot of performance and clarity on the table. The control flow of the language was already telling me something about lifetime.&lt;/p&gt;

&lt;p&gt;So instead of treating all heap-backed values the same, the VM now uses &lt;strong&gt;four memory lanes&lt;/strong&gt;: &lt;code&gt;young&lt;/code&gt; for local scratch work, &lt;code&gt;yard&lt;/code&gt; for tail-call survivors, &lt;code&gt;handoff&lt;/code&gt; for ordinary return survivors, and &lt;code&gt;stable&lt;/code&gt; for real escapes.&lt;/p&gt;

&lt;p&gt;This post is about why that model emerged and why I think it is one of the most Aver-shaped parts of the runtime.&lt;/p&gt;

&lt;p&gt;

&lt;iframe height="600" src="https://codepen.io/jasisz/embed/vEXJmXm?height=600&amp;amp;default-tab=result&amp;amp;embed-version=2"&gt;
&lt;/iframe&gt;


&lt;/p&gt;

&lt;h2&gt;
  
  
  The premise
&lt;/h2&gt;

&lt;p&gt;One of the traps in runtime work is assuming that the natural unit of memory management is "a function call".&lt;/p&gt;

&lt;p&gt;In Aver, it is not.&lt;/p&gt;

&lt;p&gt;Aver programs have a lot of small functions. A helper might exist just to wrap a value in &lt;code&gt;Result.Ok&lt;/code&gt;, reshape a record, do one small &lt;code&gt;match&lt;/code&gt;, or delegate to another helper. If the VM pays full boundary-management cost on every one of those, you spend too much time being correct about memory and not enough time doing work.&lt;/p&gt;

&lt;p&gt;So I pushed the runtime toward a more semantic model. Temporary scratch data dies aggressively. Loop-carried state survives tail-call reuse. Helper return values survive into the caller — but do not pretend to be globally long-lived. Only real escapes become truly stable.&lt;/p&gt;

&lt;p&gt;Four lanes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The four lanes
&lt;/h2&gt;

&lt;p&gt;One thing first: the lanes hold &lt;strong&gt;heap-backed values only&lt;/strong&gt;. Small ints, bools, floats, and many VM-known function references stay inline as NaN-boxed 8-byte handles and never touch the arena. A lot of the traffic in Aver programs is scalar, and scalar traffic avoids arena churn entirely.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;young&lt;/code&gt; — scratch
&lt;/h3&gt;

&lt;p&gt;The default lane. Most temporary work starts here — string building, temporary records and tuples, list intermediates, wrapper cells. At a frame boundary, the VM knows exactly what part of &lt;code&gt;young&lt;/code&gt; belongs to the current frame. It truncates that suffix in one shot.&lt;/p&gt;

&lt;p&gt;The important thing about &lt;code&gt;young&lt;/code&gt; is what it &lt;em&gt;doesn't&lt;/em&gt; do. It does not pretend that temporary work might be long-lived. It does not hedge. When the frame is done, everything that was not explicitly moved somewhere else is gone.&lt;/p&gt;
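&lt;p&gt;A sketch of what lives and dies there (assuming a &lt;code&gt;String.length&lt;/code&gt; builtin; the helper itself is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fn labelLength(name: String) -&amp;gt; Int
    ? "Length of the decorated label"
    label = String.concat("item-", name)
    String.length(label)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The intermediate &lt;code&gt;Str&lt;/code&gt; from &lt;code&gt;concat&lt;/code&gt; lands in &lt;code&gt;young&lt;/code&gt;. The returned &lt;code&gt;Int&lt;/code&gt; is an inline handle, so nothing needs to survive: the frame's &lt;code&gt;young&lt;/code&gt; suffix is truncated and the &lt;code&gt;Str&lt;/code&gt; is gone.&lt;/p&gt;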

&lt;h3&gt;
  
  
  &lt;code&gt;yard&lt;/code&gt; — tail state
&lt;/h3&gt;

&lt;p&gt;If a frame is being reused by &lt;code&gt;TAIL_CALL_*&lt;/code&gt;, loop-carried values need to survive the reset.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fn countdown(n, acc)
    match n
        0 -&amp;gt; acc
        _ -&amp;gt; countdown(n - 1, List.prepend(n, acc))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here &lt;code&gt;n&lt;/code&gt; is an inline int — never touches the arena. But &lt;code&gt;acc&lt;/code&gt; is a growing &lt;code&gt;List&lt;/code&gt;, heap-backed. It is not "global". Not even "ordinary return state". It is data that must survive the next tail-call iteration and nothing more. Each time through, scratch work lands in &lt;code&gt;young&lt;/code&gt; and dies in bulk. &lt;code&gt;acc&lt;/code&gt; persists in &lt;code&gt;yard&lt;/code&gt; until the recursion bottoms out.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;handoff&lt;/code&gt; — return lane
&lt;/h3&gt;

&lt;p&gt;Suppose a helper returns a &lt;code&gt;Record&lt;/code&gt; to its caller. That value should not stay in the callee's &lt;code&gt;young&lt;/code&gt; — the callee's scratch memory is about to die. But it also should not go straight into some globally long-lived space.&lt;/p&gt;

&lt;p&gt;Return survivors get their own lane.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;handoff&lt;/code&gt; is not always a copy destination. If the compiler sees a value in an obvious return position, it may build it directly in &lt;code&gt;handoff&lt;/code&gt;. Otherwise it starts in &lt;code&gt;young&lt;/code&gt; and gets evacuated at return time. Either way, the return path ends in &lt;code&gt;handoff&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;stable&lt;/code&gt; — real escape
&lt;/h3&gt;

&lt;p&gt;Values go here only when they truly escape — globals, host-facing values, top-level-completed results. Any source lane can feed &lt;code&gt;stable&lt;/code&gt;: a value in &lt;code&gt;young&lt;/code&gt;, &lt;code&gt;yard&lt;/code&gt;, or &lt;code&gt;handoff&lt;/code&gt; that crosses an escape boundary gets compacted into &lt;code&gt;stable&lt;/code&gt; by an explicit root-driven walk.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;stable&lt;/code&gt; is &lt;strong&gt;not&lt;/strong&gt; a generational GC old-gen. It is not "the place where everything eventually goes because the runtime got scared". It is a canonical space for values that have genuinely outlived the current call-chain story.&lt;/p&gt;

&lt;p&gt;That distinction is what keeps the model clean.&lt;/p&gt;

&lt;h2&gt;
  
  
  What actually happens
&lt;/h2&gt;

&lt;p&gt;Here is what a helper return looks like at runtime. The syntax is pseudo-Aver (the real thing would look a bit different), but the shape is right:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;build_label&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;prefix&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="nf"&gt;.concat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"item-"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;label&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Record&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;tag&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;prefix&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;label&lt;/span&gt;

&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;build_label&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"alpha"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;"alpha"&lt;/code&gt; and &lt;code&gt;42&lt;/code&gt; are inline — they sit in the value stack as NaN-boxed handles, no arena allocation. &lt;code&gt;String.concat&lt;/code&gt; produces a new &lt;code&gt;Str&lt;/code&gt; in &lt;code&gt;young&lt;/code&gt;, local scratch work. The &lt;code&gt;Record&lt;/code&gt; is heap-backed — if it is in an obvious return position, it may get built directly in &lt;code&gt;handoff&lt;/code&gt;. Otherwise it starts in &lt;code&gt;young&lt;/code&gt; and gets evacuated on return.&lt;/p&gt;

&lt;p&gt;When &lt;code&gt;build_label&lt;/code&gt; returns: the &lt;code&gt;Str&lt;/code&gt; from &lt;code&gt;concat&lt;/code&gt; is in &lt;code&gt;young&lt;/code&gt;, nothing in &lt;code&gt;handoff&lt;/code&gt; or &lt;code&gt;yard&lt;/code&gt; points to it, &lt;code&gt;young&lt;/code&gt; gets truncated, the &lt;code&gt;Str&lt;/code&gt; is gone. The &lt;code&gt;Record&lt;/code&gt; in &lt;code&gt;handoff&lt;/code&gt; survives into &lt;code&gt;main&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;No GC pass. No mark phase. No full generational barrier machinery. Scratch dies in bulk, the return value was already in the right lane, and there is coarse global dirty tracking for the cases that need it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why not just use a GC
&lt;/h2&gt;

&lt;p&gt;GC is not bad. But for this language shape, a lot of memory death is obvious from control flow.&lt;/p&gt;

&lt;p&gt;When values die because a frame is done, or because a tail-call iteration resets scratch state, a full general-purpose collector is overkill. What the VM does instead is closer to region-style allocation for local scratch, relocation of the live graph when something must survive, and explicit canonicalization only at real escape boundaries.&lt;/p&gt;

&lt;p&gt;Runtime cost ends up tied to live survivors, not to total historical allocation volume.&lt;/p&gt;

&lt;p&gt;That is what I wanted.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tiny helpers were still too expensive
&lt;/h2&gt;

&lt;p&gt;At one point the VM was technically correct, but real workloads still felt slower than they should have.&lt;/p&gt;

&lt;p&gt;Not the four lanes. &lt;strong&gt;Granularity.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If every function behaves like a full-blown memory boundary, you pay too much bookkeeping overhead for helpers that exist mostly for readability. That led to two extra ideas.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thin functions.&lt;/strong&gt; If a function returns without growing &lt;code&gt;young&lt;/code&gt;, &lt;code&gt;yard&lt;/code&gt;, or &lt;code&gt;handoff&lt;/code&gt;, and without dirtying globals, the VM skips the boundary relocation path entirely. A few comparisons, pop frame, continue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Parent-thin functions.&lt;/strong&gt; The more Aver-specific trick. Some wrapper-like helpers borrow the caller's &lt;code&gt;young&lt;/code&gt; lane directly. Normal call frame, but no separate scratch lifetime. If they stay out of &lt;code&gt;yard&lt;/code&gt; and &lt;code&gt;handoff&lt;/code&gt;, their temporary work lives in the caller's scratch space and dies with the caller.&lt;/p&gt;

&lt;p&gt;A very weird optimization in generic VM terms. Also exactly the kind of thing that becomes available when the language is small and constrained enough.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benchmarks
&lt;/h2&gt;

&lt;p&gt;This is not just a story anymore. In recent local benchmark runs, the VM is consistently faster than the tree-walking interpreter on real workloads — often by a lot.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Benchmark&lt;/th&gt;
&lt;th&gt;Interpreter&lt;/th&gt;
&lt;th&gt;VM&lt;/th&gt;
&lt;th&gt;Speedup&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;sum_tco(1M)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;1998.911ms&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;668.600ms&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;3.0x&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;countdown(1M)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;1434.012ms&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;563.250ms&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;2.5x&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;result_chain(40K)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;330.243ms&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;83.964ms&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;3.9x&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;shapes(30K)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;378.891ms&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;71.828ms&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;5.3x&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;list_builtins(40K)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;135.254ms&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;53.888ms&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;2.5x&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mixed_real(20K)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;233.029ms&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;58.676ms&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;4.0x&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The memory side matters just as much. In many of those runs, the VM finishes with &lt;code&gt;live+ = 0&lt;/code&gt; in places where the interpreter still keeps large amounts of data alive. Scratch memory is getting reclaimed at the right boundaries.&lt;/p&gt;

&lt;p&gt;End-to-end app benchmarks: &lt;code&gt;workflow_engine seed_tasks&lt;/code&gt; dropped from 44s to 33s, &lt;code&gt;list_tasks&lt;/code&gt; from 544ms to 332ms. &lt;code&gt;payment_ops show_payment&lt;/code&gt; barely moved — 14ms to 13ms — but it was already fast.&lt;/p&gt;

&lt;p&gt;The story is not "the VM always beats everything". The story is: once there is enough real work, the VM starts paying back its representation and lifetime model.&lt;/p&gt;

&lt;h2&gt;
  
  
  The honest trade-off
&lt;/h2&gt;

&lt;p&gt;The model is good. It is not free.&lt;/p&gt;

&lt;p&gt;The implementation gets subtle fast. Four lanes, thin fast paths, parent-thin fast paths, direct allocation into non-default lanes — that is not "a simple arena" anymore. That is a real memory system. I had to clean up a large amount of duplicated traversal and relocation code recently, because that kind of duplication becomes dangerous fast in a runtime like this.&lt;/p&gt;

&lt;p&gt;My honest take: the architecture is right, but the implementation has to be maintained aggressively. The model earns its complexity — but only because the benchmarks and real workloads actually moved.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I like this
&lt;/h2&gt;

&lt;p&gt;The best runtime ideas feel inevitable in retrospect.&lt;/p&gt;

&lt;p&gt;Aver is immutable, recursion-heavy, explicit, small enough that semantics still matter more than compatibility baggage. So instead of copying a generic memory story from a mainstream VM, the runtime can follow the language.&lt;/p&gt;

&lt;p&gt;Not "I implemented a fancy allocator". More:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;the memory model reflects the control-flow model of the language&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;code&gt;young&lt;/code&gt;, &lt;code&gt;yard&lt;/code&gt;, &lt;code&gt;handoff&lt;/code&gt;, &lt;code&gt;stable&lt;/code&gt; are not just implementation tricks. They are the runtime version of a language design decision.&lt;/p&gt;

&lt;p&gt;That is the kind of systems work I want more of.&lt;/p&gt;




&lt;p&gt;Aver is open source: &lt;a href="https://github.com/jasisz/aver" rel="noopener noreferrer"&gt;github.com/jasisz/aver&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want the lower-level design note, the repo includes a technical VM document covering the bytecode model, list representation, and memory lanes in detail.&lt;/p&gt;

</description>
      <category>aver</category>
      <category>programming</category>
      <category>compilers</category>
      <category>performance</category>
    </item>
    <item>
      <title>The most boring games you have Aver seen</title>
      <dc:creator>Szymon Teżewski</dc:creator>
      <pubDate>Wed, 18 Mar 2026 08:42:17 +0000</pubDate>
      <link>https://dev.to/jasisz/the-most-boring-games-you-have-aver-seen-18hl</link>
      <guid>https://dev.to/jasisz/the-most-boring-games-you-have-aver-seen-18hl</guid>
      <description>&lt;p&gt;Yesterday Claude built Snake and Tetris in Aver. I did not write the implementations. I reviewed the decisions.&lt;/p&gt;

&lt;p&gt;That distinction is the whole point.&lt;/p&gt;

&lt;p&gt;No if/else. No loops. No mutable state. No exceptions. Game loops through tail-recursive calls. Every side effect declared in the function signature. The games work. You can play them. But playability is not the point. The review loop is.&lt;/p&gt;

&lt;h2&gt;
  
  
  What "boring" means here
&lt;/h2&gt;

&lt;p&gt;Here is the Snake game loop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fn gameLoop(state: GameState) -&amp;gt; Result&amp;lt;String, String&amp;gt;
    ? "Render, sleep, read input, tick — TCO recursive"
    ! [
        Random.int,
        Terminal.clear, Terminal.flush, Terminal.moveTo, Terminal.print,
        Terminal.readKey, Terminal.resetColor, Terminal.setColor,
        Time.sleep,
    ]
    render(state)
    Time.sleep(state.tickMs)
    tick(readInput(state))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No surprises. You can see the intent (&lt;code&gt;?&lt;/code&gt;). You can see every effect the function is allowed to perform (&lt;code&gt;!&lt;/code&gt;). You can see the data flow: render, sleep, read input, tick. State goes in, state comes out. If this function tried to make an HTTP call, the type checker would reject it.&lt;/p&gt;

&lt;p&gt;Here is collision detection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fn checkCollision(state: GameState) -&amp;gt; Bool
    ? "True if the snake hit a wall or itself"
    match List.get(state.snake, 0)
        Option.Some(head) -&amp;gt; match checkWall(head, state.width, state.height)
            true -&amp;gt; true
            false -&amp;gt; bodyContains(state.snake, head, 1)
        Option.None -&amp;gt; true

verify checkCollision
    checkCollision(gs([pt(5, 5)], 1, 0, pt(0, 0))) =&amp;gt; false
    checkCollision(gs([pt(0, 5)], 1, 0, pt(0, 0))) =&amp;gt; true
    checkCollision(gs([pt(5, 5), pt(6, 5), pt(5, 5)], 1, 0, pt(0, 0))) =&amp;gt; true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No &lt;code&gt;if&lt;/code&gt;. Every branch is a &lt;code&gt;match&lt;/code&gt;. The &lt;code&gt;verify&lt;/code&gt; block sits right next to the code it tests. You do not need to open a second file. You read the function, you read the examples, you move on.&lt;/p&gt;

&lt;p&gt;This is boring. That is the point.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three layers of review
&lt;/h2&gt;

&lt;p&gt;Claude wrote both games. But "I reviewed them" understates what actually happened. There are three layers, and I am only the last one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layer 1: The compiler.&lt;/strong&gt; Claude's first attempts had wrong syntax — &lt;code&gt;match&lt;/code&gt; with colons, records with braces, &lt;code&gt;||&lt;/code&gt; instead of &lt;code&gt;Bool.or&lt;/code&gt;. The parser rejected them. Claude fixed them. I never saw these errors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layer 2: &lt;code&gt;aver check&lt;/code&gt;.&lt;/strong&gt; Missing &lt;code&gt;verify&lt;/code&gt; blocks, undeclared effects, uncovered match arms — the static checker caught structural problems that compiled but violated Aver's contracts. Again, Claude fixed them without my input.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layer 3: Me.&lt;/strong&gt; By the time I looked at the code, it was syntactically correct, structurally complete, and passing all &lt;code&gt;verify&lt;/code&gt; blocks. I was not debugging. I was making design calls.&lt;/p&gt;

&lt;p&gt;Claude's first Tetris draft used &lt;code&gt;List&amp;lt;List&amp;lt;Int&amp;gt;&amp;gt;&lt;/code&gt; for the board. It worked. It even dutifully recorded this choice in a &lt;code&gt;decision&lt;/code&gt; block. But in review I immediately saw the problem: an Int tells you nothing. Is 1 a T-piece or an L-piece? I said "make a PieceKind sum type" — and Claude rebuilt it, replacing the old decision with a new one explaining why the sum type is better.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79hzv703vdevm1uicsot.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79hzv703vdevm1uicsot.gif" alt="Tetris written in Aver in Aver interpreter" width="480" height="682"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That is the beauty of the loop: the AI documents &lt;em&gt;every&lt;/em&gt; design choice, even the bad ones. The reviewer's job is not to find bugs — it is to upgrade the decisions.&lt;/p&gt;

&lt;p&gt;Then I told it to split the single file into modules — separate the piece definitions from the board logic from the rendering. Tetris ended up as four modules (board, pieces, logic, main), each under 250 lines. Claude did it, adjusted the imports, kept the &lt;code&gt;verify&lt;/code&gt; blocks passing.&lt;/p&gt;

&lt;p&gt;The toolchain catches mechanical errors. The checker catches structural gaps. The human makes architectural calls: better types, better structure, better boundaries. The kind of feedback that takes thirty seconds to give and would take thirty minutes to implement by hand.&lt;/p&gt;

&lt;h2&gt;
  
  
  And then the proofs check themselves
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;aver proof&lt;/code&gt; exports the pure subset of an Aver module to Lean 4 or Dafny. Verify cases become proof obligations — confirmed by the Lean kernel or Z3, not by a test runner.&lt;/p&gt;

&lt;p&gt;What passes today, out-of-the-box, zero manual fixes: 72 example-level proofs across Snake and Tetris (collision detection, grid operations, piece rotations, movement). Plus 3 universal theorems on Tetris scoring:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;theorem computeScore_law_tetrisDoublesSingle :
    ∀ (level : Int), computeScore 4 level = 8 * computeScore 1 level
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is not "for these inputs." That is "for every level in the game." Lean unfolds the definition, &lt;code&gt;omega&lt;/code&gt; closes the arithmetic. Formal proof, zero axioms, zero &lt;code&gt;sorry&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Not everything is covered yet. Functions using index-based recursion lack termination proofs, which currently blocks some Tetris cases. The limitation is in the code generator, not the language.&lt;/p&gt;

&lt;p&gt;Even with those gaps, the same source file that Claude wrote and I reviewed also generates formal proofs that neither of us wrote by hand. Three outputs from one source: interpreter for iteration, Rust for deployment, Lean or Dafny for proof.&lt;/p&gt;

&lt;h2&gt;
  
  
  Design by inconvenience
&lt;/h2&gt;

&lt;p&gt;Aver is inconvenient for humans to write and convenient for humans to review. That tradeoff used to look irrational. In an AI-first workflow, it starts to look obvious.&lt;/p&gt;

&lt;p&gt;AI does not care about constraints. The model generates the verbose version in milliseconds. The cost of verbosity falls on the author — and when the author is a language model, that cost is zero. The benefit falls on the reviewer — and when the reviewer is a human under time pressure, that benefit is everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  Boring code, not boring programs
&lt;/h2&gt;

&lt;p&gt;The games are playable. The Snake moves, eats food, grows, dies. The Tetris pieces rotate, drop, clear lines, keep score. The programs are not boring at all.&lt;/p&gt;

&lt;p&gt;The code is aggressively boring. And that is the whole thesis: boring is a property of the source, not the output. You want the source boring so you can review it. You want the output to do whatever it needs to do.&lt;/p&gt;

&lt;p&gt;There are sibling efforts approaching this from the other direction. &lt;a href="https://botwork.se/2026/03/17/ai-cant-generate-code/" rel="noopener noreferrer"&gt;Prove&lt;/a&gt; makes code hard for AI to generate correctly — the compiler rejects until intent is demonstrated. Aver makes code easy for AI to generate but boring enough for humans to review. Same diagnosis, different treatment.&lt;/p&gt;

&lt;p&gt;In a world where AI writes the first draft, review time is the bottleneck. Boring code compresses review time. That is the rational response.&lt;/p&gt;




&lt;p&gt;Aver is open source and experimental: &lt;a href="https://github.com/jasisz/aver" rel="noopener noreferrer"&gt;github.com/jasisz/aver&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Snake source: &lt;a href="https://github.com/jasisz/aver/tree/main/examples/games/snake.av" rel="noopener noreferrer"&gt;examples/games/snake.av&lt;/a&gt;&lt;br&gt;
Tetris source: &lt;a href="https://github.com/jasisz/aver/tree/main/examples/games/tetris/" rel="noopener noreferrer"&gt;examples/games/tetris/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>rust</category>
      <category>aver</category>
    </item>
    <item>
      <title>A prompt is a request. A language is the law: Aver and AI-written code</title>
      <dc:creator>Szymon Teżewski</dc:creator>
      <pubDate>Thu, 12 Mar 2026 09:18:17 +0000</pubDate>
      <link>https://dev.to/jasisz/a-prompt-is-a-request-a-language-is-the-law-aver-and-ai-written-code-532b</link>
      <guid>https://dev.to/jasisz/a-prompt-is-a-request-a-language-is-the-law-aver-and-ai-written-code-532b</guid>
      <description>&lt;p&gt;I did not start Aver because I think AI changes everything.&lt;/p&gt;

&lt;p&gt;I started it because, if AI is going to write more of our software, syntax is not the problem.&lt;/p&gt;

&lt;p&gt;The problem is intent.&lt;/p&gt;

&lt;p&gt;AI can already generate code that looks plausible. What it often fails to preserve is the information reviewers actually need: what a function is allowed to do, why a design was chosen, what behavior must not regress, and where side effects cross the boundary of the process.&lt;/p&gt;

&lt;p&gt;That is why I built &lt;a href="https://github.com/jasisz/aver" rel="noopener noreferrer"&gt;Aver&lt;/a&gt;, a statically typed language with a Rust interpreter and a Rust code generation path for deployment.&lt;/p&gt;

&lt;p&gt;One line from the manifesto I wrote captures the philosophy better than any pitch:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A prompt is a request. A language is the law.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Most AI tooling wraps existing languages with better prompts, agents, and editors. When I started building Aver, I asked a different question:&lt;/p&gt;

&lt;p&gt;What if the language itself changed once we assume AI writes a large share of the code?&lt;/p&gt;

&lt;h2&gt;
  
  
  Make intent part of the program
&lt;/h2&gt;

&lt;p&gt;The core move in Aver is simple: stop treating intent, effects, tests, and design rationale as optional conventions spread across comments, wikis, ADR folders, and chat logs.&lt;/p&gt;

&lt;p&gt;Make them part of the source.&lt;/p&gt;

&lt;p&gt;In Aver:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;functions can carry a prose description with &lt;code&gt;?&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;effects are part of the function signature&lt;/li&gt;
&lt;li&gt;pure behavior lives in colocated &lt;code&gt;verify&lt;/code&gt; blocks&lt;/li&gt;
&lt;li&gt;architectural rationale can live in &lt;code&gt;decision&lt;/code&gt; blocks next to the code it affects&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;aver context&lt;/code&gt; can export a contract-level view of a module graph for humans or LLMs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the kind of code I wanted Aver to make natural:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Cmd
    Burn(Int)
    Rotate(Int)
    DeploySolar
    Tick(Int)
    Abort

decision EffectsAtTheEdge
    date = "2026-03-02"
    reason =
        "State transition logic should be deterministic and testable with verify."
        "Console output belongs to the outer shell only."
        "Replay mode then captures behavior without mocking."
    chosen = "PureCoreEffectfulShell"
    rejected = ["PrintInsideCore", "GlobalMutableState"]
    impacts = [runPlan, main]

fn parseCommand(line: String) -&amp;gt; Result&amp;lt;Cmd, String&amp;gt;
    ? "Parse one command line: BURN n, ROTATE n, DEPLOY, TICK n, ABORT."
    clean = String.trim(line)
    match clean == ""
        true -&amp;gt; Result.Err("Empty command")
        false -&amp;gt; parseCommandTokens(String.split(clean, " "))

verify parseCommand
    parseCommand("BURN 12") =&amp;gt; Result.Ok(Cmd.Burn(12))
    parseCommand("ABORT") =&amp;gt; Result.Ok(Cmd.Abort)
    parseCommand("JUMP") =&amp;gt; Result.Err("Unknown command: JUMP")

fn runMission(cmds: List&amp;lt;Cmd&amp;gt;) -&amp;gt; Unit
    ? "Effectful wrapper around pure runPlan."
    ! [Console.print]
    match runPlan(cmds)
        Result.Err(msg) -&amp;gt; Console.print("MISSION FAILED: " + msg)
        Result.Ok(state) -&amp;gt; reportMission(state)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is the point of Aver in one screen.&lt;/p&gt;

&lt;p&gt;You can see the rationale. You can see the pure part that gets &lt;code&gt;verify&lt;/code&gt;. You can see the effect boundary where printing is allowed. You can see the architectural choice to keep the core deterministic and push I/O to the edge. You do not have to reconstruct that from a function body and three adjacent tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Design by inconvenience
&lt;/h2&gt;

&lt;p&gt;I made Aver intentionally inconvenient to write by hand.&lt;/p&gt;

&lt;p&gt;That is not an accidental rough edge. It is the strategy.&lt;/p&gt;

&lt;p&gt;My bet is that AI generates the verbose version and humans review the explicit one.&lt;/p&gt;

&lt;p&gt;So Aver removes a lot of familiar escape hatches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;no &lt;code&gt;if&lt;/code&gt;/&lt;code&gt;else&lt;/code&gt;, only &lt;code&gt;match&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;no loops, use recursion and explicit list operations&lt;/li&gt;
&lt;li&gt;no exceptions, use &lt;code&gt;Result&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;no &lt;code&gt;null&lt;/code&gt;, use &lt;code&gt;Option&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;no mutable bindings&lt;/li&gt;
&lt;li&gt;no closures or lambda-heavy style&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you optimize for author comfort, this looks hostile.&lt;/p&gt;

&lt;p&gt;If you optimize for auditability, it starts to look coherent.&lt;/p&gt;

&lt;p&gt;Closures hide capture. Exceptions hide control flow. Mutable state hides causality. Wide-open branching makes verification harder. That is why Aver ended up looking functionally constrained, but my argument is not "FP is elegant." My argument is that constrained code is easier to inspect, check, replay, compile, and explain.&lt;/p&gt;

&lt;h2&gt;
  
  
  A workflow that actually closes
&lt;/h2&gt;

&lt;p&gt;For me, the philosophy only becomes interesting when it turns into an actual loop.&lt;/p&gt;

&lt;p&gt;The workflow I have in mind looks like this: give Claude or Codex a prompt for a module that fetches JSON over HTTP and prints a summary.&lt;/p&gt;

&lt;p&gt;The first draft may look fine, but if it calls &lt;code&gt;Http.get&lt;/code&gt; without declaring &lt;code&gt;! [Http.get]&lt;/code&gt;, Aver rejects it at type-check time. If the pure parsing helper has no clear examples, that is an obvious gap for &lt;code&gt;verify&lt;/code&gt;. If the flow is effectful, the repo's record/replay support gives you a deterministic way to exercise it.&lt;/p&gt;

&lt;p&gt;That leads to a workflow like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aver check file.av
aver verify file.av
aver compile file.av &lt;span class="nt"&gt;-o&lt;/span&gt; out/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For pure logic, the center of gravity is &lt;code&gt;verify&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For effectful code, it is record/replay:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aver run file.av &lt;span class="nt"&gt;--record&lt;/span&gt; recordings/
aver replay recordings/ &lt;span class="nt"&gt;--test&lt;/span&gt; &lt;span class="nt"&gt;--diff&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That split is one of Aver's better ideas. Pure behavior is specified locally with examples. Effectful behavior is captured once against the real world, then replayed deterministically offline as a regression suite.&lt;/p&gt;
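
&lt;p&gt;The replay half of that split can be sketched in ordinary Python (my own illustration, not Aver's actual machinery): effectful calls go through a handler, and a replay handler answers from a recording instead of the network.&lt;/p&gt;

```python
# Illustration of the record/replay idea (my sketch, not Aver's machinery):
# effectful calls go through a handler; the replay handler serves canned
# responses from a recording, so the run is deterministic and offline.

class ReplayHttp:
    def __init__(self, recording):
        self.recording = recording  # maps url to canned response body

    def get(self, url):
        # KeyError here means the program requested something never recorded,
        # which is itself a useful regression signal.
        return self.recording[url]

def summarize(http, url):
    # Program logic depends only on the handler's interface, so the same
    # code runs live (record mode) or offline (replay mode).
    body = http.get(url)
    return len(body)
```

&lt;p&gt;Swap the handler and the same program becomes its own regression suite.&lt;/p&gt;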

&lt;p&gt;&lt;code&gt;aver context&lt;/code&gt; is not part of execution; it is a discovery tool. An agent can inspect a module within a fixed budget, for example &lt;code&gt;aver context file.av --budget 10kb&lt;/code&gt;, get the contract-level map first, and only then decide what deserves deeper reading. That is a better fit for AI-assisted code exploration than forcing every model to ingest raw source from the start.&lt;/p&gt;

&lt;p&gt;The interesting part for me is not that Aver helps AI generate code.&lt;/p&gt;

&lt;p&gt;It is that Aver narrows what counts as acceptable generated code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why &lt;code&gt;aver compile -&amp;gt; Rust&lt;/code&gt; matters
&lt;/h2&gt;

&lt;p&gt;This is one of the most practical parts of the project for me.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aver compile&lt;/code&gt; turns an Aver module graph into a normal Rust/Cargo project. That gives me the split I wanted from the start:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;use Aver as the generation and review surface&lt;/li&gt;
&lt;li&gt;use Rust as the deployment surface&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That matters because it answers the obvious objection: "Nice philosophy, but how does this ship?"&lt;/p&gt;

&lt;p&gt;It also makes some of Aver's restrictions look less ideological and more pragmatic. No closures means simpler variable resolution and fewer translation headaches. Immutable data and pattern matching map cleanly. Explicit effects make the source easier to reason about before you ever compile it.&lt;/p&gt;

&lt;p&gt;There is also a proof export path for the pure subset. That is interesting. But the Rust target is the killer feature here, because it makes the workflow operational, not just conceptual.&lt;/p&gt;

&lt;h2&gt;
  
  
  The actual bet
&lt;/h2&gt;

&lt;p&gt;I would not sell Aver as a universal replacement for general-purpose languages.&lt;/p&gt;

&lt;p&gt;It is repetitive on purpose. Many developers will hate the lack of loops, closures, and other familiar shortcuts. It is also fair to look at some examples and think "this still resembles a very strict small FP language."&lt;/p&gt;

&lt;p&gt;But I do think Aver asks a more serious question than most AI programming discourse:&lt;/p&gt;

&lt;p&gt;If AI writes the first draft, should language design still optimize mostly for keystrokes?&lt;/p&gt;

&lt;p&gt;Or should it optimize for auditability?&lt;/p&gt;

&lt;p&gt;I made Aver pick auditability and push that choice into the grammar.&lt;/p&gt;

&lt;p&gt;That is what makes it interesting to me. It treats hidden behavior as the real liability in AI-written code. Not as a tolerable side effect of convenience, but as a thing to design against.&lt;/p&gt;

&lt;p&gt;If generated code becomes normal, that tradeoff may stop looking extreme and start looking necessary.&lt;/p&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/jasisz/aver" rel="noopener noreferrer"&gt;github.com/jasisz/aver&lt;/a&gt;&lt;br&gt;
Manifesto: &lt;a href="https://jasisz.github.io/aver-language/" rel="noopener noreferrer"&gt;jasisz.github.io/aver-language&lt;/a&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>languages</category>
      <category>rust</category>
    </item>
  </channel>
</rss>
