Hello, I'm Shrijith Venkatramana. I'm building git-lrc, an AI code reviewer that runs on every commit. Star Us to help devs discover the project. Do give it a try and share your feedback for improving the product.
Most authorization systems begin simple.
Then reality happens.
Over time:
- more roles get added,
- exceptions accumulate,
- workflows become stateful,
- permissions become inherited,
- AI assistants start generating handlers and refactors,
- and eventually nobody is fully certain what combinations are actually possible anymore.
This is where many discussions around “AI-generated code safety” become unsatisfying.
People often talk about:
- better prompts,
- more tests,
- stronger reviews,
- static analysis,
- or safer languages.
Those help.
But there is another direction worth exploring:
What if some critical invariants were not merely tested, but mathematically enforced?
Not:
- “the code probably works,”
- or “the tests passed,”
but:
“certain invalid states are mechanically impossible.”
That is the interesting promise behind Lean.
And permission systems are one of the best places to start because:
- humans understand them intuitively,
- they are security-critical,
- and they become surprisingly difficult to reason about once complexity grows.
This tutorial walks through:
- installing Lean 4,
- understanding the core mathematical ideas,
- building a permission model,
- proving security invariants,
- intentionally breaking them,
- and seeing how Lean prevents unsafe changes.
The goal is not academic theorem proving.
The goal is:
designing systems where important security assumptions become hard to accidentally violate.
1. Installing Lean 4
Lean 4 is unusual because it is simultaneously:
- a programming language,
- a compiler,
- and a theorem prover.
Install it using elan, the Lean toolchain manager.
Linux/macOS
curl https://raw.githubusercontent.com/leanprover/elan/master/elan-init.sh -sSf | sh
Verify installation:
lean --version
lake --version
2. Install the VSCode Extension
Install:
- “Lean 4”
from the VSCode marketplace.
This gives:
- live proof checking,
- inline errors,
- theorem goals,
- and interactive feedback.
This interactivity matters a lot.
Lean is less like:
- writing static code,
and more like:
- continuously negotiating with a mathematical verifier.
3. Create a Lean Project
Create a project with Mathlib support:
lake new VerifiedPermissions math
cd VerifiedPermissions
code .
Open:
VerifiedPermissions/Basic.lean
This file will contain both:
- executable programs,
- and mathematical proofs about those programs.
That duality is the central idea behind Lean.
4. First Lean Program
Replace the file contents with:
def greet (name : String) : String :=
  s!"Hello, {name}"

#eval greet "world"
Let’s unpack this carefully.
def
def greet
def means:
define a function or value.
This is ordinary programming.
(name : String)
(name : String)
This means:
- the function accepts a parameter called name,
- whose type is String.
Lean is statically typed.
But unlike many languages:
- types in Lean are deeply connected to logic itself.
That becomes important later.
: String
: String
This declares:
the function returns a string.
So mathematically:
greet : String → String
Meaning:
- greet maps one string into another string.
Functions in Lean are treated very mathematically.
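You can ask Lean to display this type directly. As a quick sketch, paste #check lines under the definition and Lean's infoview shows the type of each expression:

```lean
def greet (name : String) : String :=
  s!"Hello, {name}"

-- #check reports the type of an expression without running it
#check greet        -- greet : String → String
#check greet "Ada"  -- greet "Ada" : String
```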
:=
:=
Means:
is defined as.
#eval
#eval greet "world"
Actually runs the program.
This is important because Lean is not just:
- a proof notation system,
- or symbolic logic language.
It is executable.
5. A Small Verified Function
Now replace the file with:
def increment (x : Nat) : Nat :=
  x + 1

theorem increment_is_larger (x : Nat) :
    increment x > x := by
  exact Nat.lt_succ_self x
This is where things become interesting.
You are no longer just writing code.
You are writing:
- code,
- and mathematical claims about the code.
6. Understanding the Mathematics Line by Line
Nat
Nat
Means:
natural numbers.
So:
- 0, 1, 2, 3…
Lean treats mathematics as native objects.
increment
def increment (x : Nat) : Nat :=
  x + 1
This is an executable function.
Nothing unusual yet.
theorem
theorem increment_is_larger
This changes everything conceptually.
You are no longer saying:
“I hope this property holds.”
You are saying:
“This property must be proven.”
And Lean will refuse to continue unless the proof is valid.
(x : Nat)
The theorem applies universally.
Meaning:
For every natural number x
not:
- "for tested examples,"
- "for likely inputs,"
but for literally all possible values.
This is one of the biggest conceptual differences from testing.
Tests are existential:
These cases worked.
Proofs are universal:
All valid inputs satisfy this property.
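The contrast can be made concrete in Lean itself. A sketch: #eval checks one input, while an anonymous example theorem quantifies over every input at once:

```lean
def increment (x : Nat) : Nat :=
  x + 1

-- Existential: this checks exactly one input.
#eval increment 41  -- 42

-- Universal: this covers every natural number at once.
example (x : Nat) : increment x > x :=
  Nat.lt_succ_self x
```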
increment x > x
increment x > x
This is the claim being proven.
Meaning:
increment always returns a larger number.
:= by
:= by
This begins a proof block.
You are now constructing evidence that the statement is true.
exact
exact Nat.lt_succ_self x
This says:
use an existing theorem directly.
Nat.lt_succ_self is a theorem already known to Lean:
x < x + 1
So Lean verifies:
- your theorem,
- by reducing it to already-proven mathematics.
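You are not limited to citing a library lemma by name. As a sketch (assuming a recent Lean 4 toolchain, where the omega linear-arithmetic tactic is built in), automation can close the same goal:

```lean
def increment (x : Nat) : Nat :=
  x + 1

-- Unfold the definition, then let linear-arithmetic
-- automation discharge the goal x + 1 > x.
example (x : Nat) : increment x > x := by
  unfold increment
  omega
```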
7. Breaking the Proof Intentionally
Now change:
increment x > x
to:
increment x < x
You now claim:
increment makes numbers smaller.
Lean immediately rejects this.
This is the first important moment.
The theorem is not:
- documentation,
- comments,
- or developer intent.
It is mechanically enforced logic.
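Concretely, the broken version looks like the sketch below. It does not compile: Lean reports that the supplied proof no longer matches the goal.

```lean
def increment (x : Nat) : Nat :=
  x + 1

theorem increment_is_smaller (x : Nat) :
    increment x < x := by
  exact Nat.lt_succ_self x  -- error: this proves x < x + 1, not x + 1 < x
```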
8. Building a Permission Model
Now we move toward authorization systems.
Replace the file with:
inductive Role
| Guest
| User
| Admin
9. Understanding inductive
This line introduces a very important mathematical idea.
inductive Role
This defines a finite set of possible values.
Mathematically:
r ∈ {Guest, User, Admin} for every r : Role
This is powerful because:
- impossible states cannot exist,
- invalid roles cannot appear accidentally,
- and all cases must be handled explicitly.
This already improves reliability substantially.
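A sketch of what that buys in practice: any function over Role must account for exactly these three constructors, and no fourth value can ever be constructed (roleName below is a hypothetical helper, not part of the tutorial's model):

```lean
inductive Role
| Guest
| User
| Admin

-- Pattern matching must cover all three constructors;
-- omitting one is a compile error, not a runtime surprise.
def roleName : Role → String
| Role.Guest => "guest"
| Role.User  => "user"
| Role.Admin => "admin"

#eval roleName Role.Admin  -- "admin"
```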
10. Defining Permissions
Now add:
def canDelete : Role → Bool
| Role.Guest => false
| Role.User => false
| Role.Admin => true
This means:
canDelete maps a Role into a boolean
or mathematically:
Role → Bool
Meaning:
- every role deterministically maps to a permission decision.
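Because canDelete is an ordinary executable function, you can evaluate it directly while developing:

```lean
inductive Role
| Guest
| User
| Admin

def canDelete : Role → Bool
| Role.Guest => false
| Role.User  => false
| Role.Admin => true

-- Sanity checks via evaluation
#eval canDelete Role.Admin  -- true
#eval canDelete Role.Guest  -- false
```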
11. Why This Is Safer Than It Looks
Notice something subtle.
Lean forces all role cases to be handled.
If you later add:
| Moderator
Lean immediately complains that:
- canDelete is incomplete.
This is extremely valuable operationally.
In many production systems:
- new authorization states get introduced,
- old logic silently becomes incomplete,
- edge cases appear months later.
Lean forces exhaustive handling.
That alone prevents many categories of policy drift.
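A sketch of the drift scenario: add a Moderator constructor without touching canDelete, and the file stops compiling (the exact error text varies by Lean version, but it amounts to "missing case"):

```lean
inductive Role
| Guest
| User
| Admin
| Moderator  -- newly introduced role

def canDelete : Role → Bool
| Role.Guest => false
| Role.User  => false
| Role.Admin => true
-- error: the match is non-exhaustive; Role.Moderator is unhandled
```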
12. Adding Security Invariants
Now add:
theorem guests_cannot_delete :
    canDelete Role.Guest = false := by
  rfl

theorem users_cannot_delete :
    canDelete Role.User = false := by
  rfl
13. Understanding rfl
rfl
means:
this is true by direct reduction.
Lean computes:
canDelete Role.User
→ false
So the theorem becomes:
false = false
which is trivially true.
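Since both sides reduce to the same value, rfl also works as a plain term, without a by block:

```lean
inductive Role
| Guest
| User
| Admin

def canDelete : Role → Bool
| Role.Guest => false
| Role.User  => false
| Role.Admin => true

-- Both sides compute to `true`, so reflexivity closes the goal.
example : canDelete Role.Admin = true := rfl
```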
14. Introducing a Security Bug
Now simulate a future refactor.
Change:
| Role.User => false
to:
| Role.User => true
Immediately:
users_cannot_delete
fails.
This is where the practical value starts appearing.
The proof acts like:
- a permanently active security assertion.
Not:
- documentation,
- not review guidelines,
- not tribal knowledge.
An enforced invariant.
15. Why This Matters More with AI-Generated Code
The interesting part is not tiny examples like this.
The interesting part is what happens later when:
- AI assistants generate handlers,
- rewrite permission logic,
- refactor workflows,
- or modify state transitions.
The problem is no longer:
“Will the code compile?”
The problem becomes:
“Did the generated system preserve critical invariants?”
Formal models become interesting because:
- implementations can change repeatedly,
- while the invariants remain fixed and machine-checked.
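A sketch of that workflow: suppose a refactor (human or AI-generated) rewrites canDelete in terms of a numeric privilege level. The level function below is a hypothetical refactor, not part of the tutorial's model; the point is that the invariant theorems stay the same while Lean re-checks them against the new implementation:

```lean
inductive Role
| Guest
| User
| Admin

-- Hypothetical refactor: permissions derived from a privilege level.
def level : Role → Nat
| Role.Guest => 0
| Role.User  => 1
| Role.Admin => 2

def canDelete (r : Role) : Bool :=
  decide (level r ≥ 2)

-- The invariants are unchanged; Lean verifies them
-- against the new implementation by computation.
theorem guests_cannot_delete :
    canDelete Role.Guest = false := by decide

theorem users_cannot_delete :
    canDelete Role.User = false := by decide
```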
16. What Lean Is Actually Buying
Lean does not magically create bug-free software.
What it can realistically provide is:
- machine-checked invariants,
- exhaustive handling of states,
- prevention of silent policy drift,
- stronger guarantees around transitions,
- and continuous enforcement of critical assumptions.
That is a narrower claim than:
“formally verified applications.”
But it is also much more practical.
And for authorization-heavy systems, even small mechanically enforced guarantees can become surprisingly valuable over time.
*AI agents write code fast. They also silently remove logic, change behavior, and introduce bugs -- without telling you. You often find out in production.
git-lrc fixes this. It hooks into git commit and reviews every diff before it lands. 60-second setup. Completely free.*
Any feedback or contributors are welcome! It's online, source-available, and ready for anyone to use.