Hi Dev.to
I am Olebeng, a solo founder based in Johannesburg, South Africa, and this is the first post from the IntentGuard account.
I want to start by being direct about what we are, what we are not, and why I think the problem we are solving matters to this community specifically.
What IntentGuard is
IntentGuard is an automated Intent Audit platform.
That is a category that does not exist yet. We are building it.
The core question we answer is one that no tool has ever been able to answer automatically:
Does your code do what it was supposed to do?
Not "does your code have vulnerabilities?" Not "does your code pass your linting rules?" Those questions already have excellent tools answering them.
The question nobody has answered automatically is whether your code still reflects the intent behind it — the product description, the architecture decisions, the compliance obligations, the promises made to users.
That gap is what IntentGuard audits.
Why this matters right now
If you have been building with Cursor, Copilot, Claude, or any AI coding assistant, you already know the speed is extraordinary. You can go from idea to working prototype in hours.
What you might not know yet - but will find out at the worst possible moment - is that AI-generated code has a specific failure mode that no existing tool catches: intent drift.
The code works. The tests pass. The CI pipeline is green.
But the code no longer reflects what the product was designed to do. Data flows that were never supposed to exist. Compliance obligations that were stated in the spec and silently dropped in implementation. Architecture decisions that made sense in week one and were quietly reversed by an AI assistant in week six.
This is not a criticism of AI coding tools. It is the next problem to solve.
What we have built so far
IntentGuard is eight sessions into a ten-session build. Here is where we are:
- A two-pass Intent Agent that constructs a model of what a product was supposed to do — before reading a single line of code
- Five specialist agents (Architecture, Security, Compliance, AI Governance, Dependency) that each independently audit the codebase against that intent model
- A multi-LLM consensus pipeline — up to 4 independent models per finding, so no single model's hallucination makes it into a report
- Four persona-specific reports from one scan: Executive, Developer, Auditor, Investor
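The consensus step above can be sketched in miniature. This is a minimal, hypothetical illustration of majority-vote filtering across independent model reports, not IntentGuard's actual pipeline: the function name, the string-keyed findings, and the 3-of-4 threshold are all assumptions for the example.

```python
from collections import Counter

def consensus_findings(model_reports, min_agreement=3):
    """Keep only findings that at least `min_agreement` of the
    independent model reports agree on. A finding reported by a
    single model is treated as a possible hallucination and dropped."""
    counts = Counter()
    for report in model_reports:
        # De-duplicate within one model's report so a single model
        # cannot "vote" twice for the same finding.
        for finding in set(report):
            counts[finding] += 1
    return {f for f, n in counts.items() if n >= min_agreement}

# Four independent models audit the same codebase (toy data).
reports = [
    {"PII logged in analytics", "Unpinned dependency: requests"},
    {"PII logged in analytics", "Unpinned dependency: requests", "Phantom issue A"},
    {"PII logged in analytics", "Unpinned dependency: requests"},
    {"PII logged in analytics", "Phantom issue B"},
]

agreed = consensus_findings(reports, min_agreement=3)
# Both phantom findings are dropped: each has only one model's vote.
```

In a real system the hard part is the de-duplication step: two models rarely emit byte-identical findings, so matching them usually needs normalization or semantic comparison before any voting can happen.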
I am building this in public because I think the architecture decisions we have made - particularly around the intent reconstruction pipeline and the zero-data-retention sandbox - are worth discussing openly.
What I will be posting here
Technical articles. How the Intent Agent actually works. How we do deterministic diffing without hallucinated PRs. How we enforce multi-LLM consensus without producing contradictory outputs. Real architecture decisions with real trade-offs.
No marketing. No "10 reasons you need IntentGuard." If the technical work is not interesting enough to stand on its own, no amount of copy will fix that.
If you are building with AI coding tools, dealing with vibe-coded codebases, investing in startups, or thinking about the intent-vs-implementation gap - I would like to hear from you.
What is the hardest part of maintaining alignment between what you intended to build and what the code actually does?
Olebeng
Founder, IntentGuard · intentguard.dev