DEV Community

Szymon Teżewski
The most boring games you have Aver seen

Yesterday Claude built Snake and Tetris in Aver. I did not write the implementations. I reviewed the decisions.

That distinction is the whole point.

No if/else. No loops. No mutable state. No exceptions. Game loops through tail-recursive calls. Every side effect declared in the function signature. The games work. You can play them. But playability is not the point. The review loop is.

What "boring" means here

Here is the Snake game loop:

fn gameLoop(state: GameState) -> Result<String, String>
    ? "Render, sleep, read input, tick — TCO recursive"
    ! [
        Random.int,
        Terminal.clear, Terminal.flush, Terminal.moveTo, Terminal.print,
        Terminal.readKey, Terminal.resetColor, Terminal.setColor,
        Time.sleep,
    ]
    render(state)
    Time.sleep(state.tickMs)
    tick(readInput(state))

No surprises. You can see the intent (?). You can see every effect the function is allowed to perform (!). You can see the data flow: render, sleep, read input, tick. State goes in, state comes out. If this function tried to make an HTTP call, the type checker would reject it.

Here is collision detection:

fn checkCollision(state: GameState) -> Bool
    ? "True if the snake hit a wall or itself"
    match List.get(state.snake, 0)
        Option.Some(head) -> match checkWall(head, state.width, state.height)
            true -> true
            false -> bodyContains(state.snake, head, 1)
        Option.None -> true

verify checkCollision
    checkCollision(gs([pt(5, 5)], 1, 0, pt(0, 0))) => false
    checkCollision(gs([pt(0, 5)], 1, 0, pt(0, 0))) => true
    checkCollision(gs([pt(5, 5), pt(6, 5), pt(5, 5)], 1, 0, pt(0, 0))) => true

No if. Every branch is a match. The verify block sits right next to the code it tests. You do not need to open a second file. You read the function, you read the examples, you move on.

This is boring. That is the point.

Three layers of review

Claude wrote both games. But "I reviewed them" understates what actually happened. There are three layers, and I am only the last one.

Layer 1: The compiler. Claude's first attempts had wrong syntax — match with colons, records with braces, || instead of Bool.or. The parser rejected them. Claude fixed them. I never saw these errors.

Layer 2: aver check. Missing verify blocks, undeclared effects, uncovered match arms — the static checker caught structural problems that compiled but violated Aver's contracts. Again, Claude fixed them without my input.

Layer 3: Me. By the time I looked at the code, it was syntactically correct, structurally complete, and passing all verify blocks. I was not debugging. I was making design calls.

Claude's first Tetris draft used List<List<Int>> for the board. It worked. It even dutifully recorded this choice in a decision block. But in review I immediately saw the problem: an Int tells you nothing. Is 1 a T-piece or an L-piece? I said "make a PieceKind sum type" — and Claude rebuilt it, replacing the old decision with a new one explaining why the sum type is better.
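The shape of that change, sketched in Aver-style pseudocode (the sum-type declaration syntax here is illustrative, not the real Tetris source; `PieceKind` is from the review above, `pieceChar` is a hypothetical consumer):

```
type PieceKind = I | O | T | S | Z | J | L

fn pieceChar(kind: PieceKind) -> String
    ? "Each piece kind gets its own glyph"
    match kind
        PieceKind.I -> "I"
        PieceKind.O -> "O"
        PieceKind.T -> "T"
        PieceKind.S -> "S"
        PieceKind.Z -> "Z"
        PieceKind.J -> "J"
        PieceKind.L -> "L"
```

A bare Int cell cannot be checked for exhaustiveness; a match over PieceKind can. Add an eighth piece and every match like this one becomes an uncovered-arm error in aver check, instead of a silently reused number.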

Tetris, written in Aver, running in the Aver interpreter

That is the beauty of the loop: the AI documents every design choice, even the bad ones. The reviewer's job is not to find bugs — it is to upgrade the decisions.

Then I told it to split the single file into modules — separate the piece definitions from the board logic from the rendering. Tetris ended up as four modules (board, pieces, logic, main), each under 250 lines. Claude did it, adjusted the imports, kept the verify blocks passing.
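The resulting layout looks roughly like this (file names are assumed to mirror the stated module names; the annotations paraphrase the split described above):

```
examples/games/tetris/
    pieces.av    piece definitions
    board.av     board and grid logic
    logic.av     game rules
    main.av      rendering and the game loop
```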

The toolchain catches mechanical errors. The checker catches structural gaps. The human makes architectural calls: better types, better structure, better boundaries. The kind of feedback that takes thirty seconds to give and would take thirty minutes to implement by hand.

And then the proofs check themselves

aver proof exports the pure subset of an Aver module to Lean 4 or Dafny. Verify cases become proof obligations — confirmed by the Lean kernel or Z3, not by a test runner.

What passes today, out of the box, with zero manual fixes: 72 example-level proofs across Snake and Tetris (collision detection, grid operations, piece rotations, movement), plus 3 universal theorems on Tetris scoring:

theorem computeScore_law_tetrisDoublesSingle :
    ∀ (level : Int), computeScore 4 level = 8 * computeScore 1 level

That is not "for these inputs." That is "for every level in the game." Lean unfolds the definition, omega closes the arithmetic. Formal proof, zero axioms, zero sorry.
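To get a feel for what such an obligation looks like, here is a self-contained Lean 4 toy. The scoring table below is an assumed stand-in (the real `computeScore` lives in the generated Tetris module), but the theorem statement matches the one above, and the proof follows the same pattern: unfold the definition, close the arithmetic.

```lean
-- Assumed stand-in scoring table, NOT the exported definition;
-- chosen so that four lines score 8x a single, as the theorem states.
def computeScore : Nat → Int → Int
  | 1, level => 30 * level
  | 2, level => 60 * level
  | 3, level => 90 * level
  | 4, level => 240 * level
  | _, _     => 0

theorem computeScore_law_tetrisDoublesSingle :
    ∀ (level : Int), computeScore 4 level = 8 * computeScore 1 level := by
  intro level
  simp only [computeScore]  -- unfold both applications
  omega                     -- 240 * level = 8 * (30 * level)
```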

Not everything is covered yet. Functions using index-based recursion lack termination proofs, which currently blocks some Tetris cases. The limitation is in the code generator, not the language.

Even with those gaps, the same source file that Claude wrote and I reviewed also generates formal proofs that neither of us wrote by hand. Three outputs from one source: interpreter for iteration, Rust for deployment, Lean or Dafny for proof.

Design by inconvenience

Aver is inconvenient for humans to write and convenient for humans to review. That tradeoff used to look irrational. In an AI-first workflow, it starts to look obvious.

AI does not care about constraints. The model generates the verbose version in milliseconds. The cost of verbosity falls on the author — and when the author is a language model, that cost is zero. The benefit falls on the reviewer — and when the reviewer is a human under time pressure, that benefit is everything.

Boring code, not boring programs

The games are playable. The Snake moves, eats food, grows, dies. The Tetris pieces rotate, drop, clear lines, keep score. The programs are not boring at all.

The code is aggressively boring. And that is the whole thesis: boring is a property of the source, not the output. You want the source boring so you can review it. You want the output to do whatever it needs to do.

There are sibling efforts approaching this from the other direction. Prove makes code hard for AI to generate correctly — the compiler rejects until intent is demonstrated. Aver makes code easy for AI to generate but boring enough for humans to review. Same diagnosis, different treatment.

In a world where AI writes the first draft, review time is the bottleneck. Boring code compresses review time. That is the rational response.


Aver is open source and experimental: github.com/jasisz/aver

Snake source: examples/games/snake.av
Tetris source: examples/games/tetris/
