A VS Code extension that scores your code like a doctor scores your health. No AI, no cloud, no guessing.
Every developer has that moment. You open a file you wrote three months ago and it's 400 lines long, has six nested if statements, three unused imports, and a console.log you forgot to clean up before the last deploy. You knew it was messy. You just didn't know how messy.
And that's the gap Iris was built to fill.
Why I built it
I kept moving through large codebases with no real sense of which files were actually problematic. Linters catch syntax errors. Formatters handle style. But nothing answered the question I kept asking: is this file fine, or is this file a problem?
Iris answers that question. Every time you open a file, in under a second, you know where you stand.
What Iris is
Iris is a VS Code extension that gives your code a health score — the same way a doctor gives you one after a checkup.
Open any file, and you get a score from 0 to 100, backed by actual measurements: cyclomatic complexity, code smell detection, unused import analysis, and function-level breakdowns. Not vibes. Numbers.
Run a workspace scan, and you get all of that aggregated across your entire project — a ranked list of your most complex files, a Problems tab with every finding sorted by severity, and TODOs surfaced from every corner of the codebase. Every finding is clickable and takes you directly to the line.
How the analysis actually works
No AI. No cloud. Everything runs locally.
When you open or save a file, Iris runs a full static analysis pass on it — usually under a second. There's a version-based cache, so it's not re-running on every keystroke, only when the file actually changes.
Each language has its own analyser with language-specific rules:
For TypeScript and JavaScript, Iris tracks things like any usage, @ts-ignore count, missing return types on exported functions, non-null assertions, and type assertion patterns — metrics that actually matter for TypeScript quality beyond what a linter gives you. It also detects unused variables and unused imports by checking whether each binding appears in the rest of the file after stripping out the import lines themselves.
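The "strip the import lines, then search for each binding" idea can be shown in miniature. This is a simplified sketch under stated assumptions (only plain default and named `import ... from` forms, identifier-only bindings); the real analyser would need to handle namespace imports, re-exports, and strings that merely mention a name.

```typescript
// Simplified sketch of unused-import detection: collect imported bindings,
// remove the import lines themselves, then check whether each binding
// still appears anywhere in the remaining source.
function findUnusedImports(source: string): string[] {
  const importRe = /^import\s+(?:(\w+)|\{([^}]+)\})\s+from\s+['"][^'"]+['"];?/gm;
  const bindings: string[] = [];
  let m: RegExpExecArray | null;
  while ((m = importRe.exec(source)) !== null) {
    if (m[1]) bindings.push(m[1]); // default import
    if (m[2]) {
      // Named imports; "x as y" binds the local name y.
      bindings.push(...m[2].split(",").map(s => s.trim().split(/\s+as\s+/).pop()!.trim()));
    }
  }
  // Strip the import lines so they don't count as "usage" of the binding.
  const body = source.replace(importRe, "");
  return bindings.filter(name => !new RegExp(`\\b${name}\\b`).test(body));
}
```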
For Go, the analyser understands the difference between exported and unexported functions — it won't flag a capitalised function as unused just because it isn't called within the file, since it may be the whole point of the package. It parses go.mod for workspace-level unused dependency detection.
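The export rule is easy to state precisely, because Go encodes visibility in the name itself: an identifier starting with an upper-case letter is exported. A sketch of the check (the function names here are illustrative, not Iris's real API):

```typescript
// In Go, a capitalised first letter means the identifier is exported,
// so the absence of in-file callers proves nothing about it being dead.
function isExportedGoIdent(name: string): boolean {
  return name.length > 0 && /[A-Za-z]/.test(name[0]) && name[0] === name[0].toUpperCase();
}

// Only unexported, uncalled functions are candidates for "unused".
function unusedGoFunctions(funcNames: string[], callSites: Set<string>): string[] {
  return funcNames.filter(name => !isExportedGoIdent(name) && !callSites.has(name));
}
```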
For Python, function boundaries are tracked by indentation depth rather than braces, deep nesting is flagged at four indent levels rather than two, and unused function detection is scoped to _private functions only — public functions may be imported anywhere.
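The indentation-based flagging can be sketched as follows. This is an assumption-laden toy (four spaces per indent level, blank lines ignored), not the actual analyser, but it shows how "deep nesting at four indent levels" falls out of counting leading whitespace:

```typescript
// Sketch: infer Python nesting depth from leading spaces and flag lines
// at or beyond the threshold. Assumes 4 spaces per indent level.
function deepNestingLines(pySource: string, threshold = 4, indentSize = 4): number[] {
  const flagged: number[] = [];
  pySource.split("\n").forEach((line, i) => {
    if (line.trim() === "") return; // blank lines carry no depth
    const spaces = line.length - line.trimStart().length;
    const depth = Math.floor(spaces / indentSize);
    if (depth >= threshold) flagged.push(i + 1); // 1-based line numbers
  });
  return flagged;
}
```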
Complexity scoring starts at 1 and caps at 10, factoring in function density, max indentation depth, control flow count, and third-party import volume. The health score starts at 100, with deductions for actual findings (any usages, console logs, unused variables, deep nesting, long parameter lists) down to a floor of 0.
This is all deterministic. Same code, same score, every time.
Why the scores are weighted the way they are
A health score is only useful if the penalties actually reflect real-world impact. Here's the thinking behind the numbers.
Health score deductions
Not all code problems are equal, and the deductions reflect that.
@ts-ignore gets a heavier penalty than a stray console.log because it's an active decision to suppress the type system — you're not just leaving debugging code in, you're telling TypeScript to look away. Similarly, unused functions carry a larger penalty than unused variables. A dead variable is noise. A dead function is a maintenance trap: someone will eventually wonder if it's safe to delete, spend time tracing it, and find out it does nothing.
any usage sits in the middle — it's not always wrong, but it's a signal that type coverage has a hole. Each usage chips away at the score rather than tanking it, because one any in a large file is very different from twenty.
Errors deduct more than warnings, and warnings deduct more than info-level findings. That sounds obvious, but it means a file with one long function isn't necessarily unhealthy — it gets flagged, but it doesn't crater the score the way multiple suppressed type errors would.
Here's the full breakdown:
| Finding | Deduction |
|---|---|
| Error-level finding | -5 each |
| Warning-level finding | -3 each |
| any usage | -2 each |
| @ts-ignore | -3 each |
| console.log | -1 each |
| Deep nesting (per function) | -2 each |
| Long parameter list (per function) | -1 each |
| Unused variable | -1 each |
| Unused function | -2 each |
Score floor is 0 — it won't go negative.
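Put together, the table reduces to a small weighted sum. The field names below are assumptions, but the weights match the published table:

```typescript
// Sketch of the deduction model from the table above.
interface FileFindings {
  errors: number; warnings: number; anyUsages: number; tsIgnores: number;
  consoleLogs: number; deepNesting: number; longParamLists: number;
  unusedVars: number; unusedFuncs: number;
}

function healthScore(f: FileFindings): number {
  const deductions =
    f.errors * 5 + f.warnings * 3 + f.anyUsages * 2 + f.tsIgnores * 3 +
    f.consoleLogs * 1 + f.deepNesting * 2 + f.longParamLists * 1 +
    f.unusedVars * 1 + f.unusedFuncs * 2;
  return Math.max(0, 100 - deductions); // floor at 0, never negative
}
```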
Complexity score
Complexity is a 1–10 scale that answers a different question than health: not "is this code problematic?" but "how hard is this code to hold in your head?"
Four things contribute to it. Function density — how many functions exist relative to the file's size — reflects how much is being packed into one place. Max indentation depth is a proxy for nesting hell: deeply indented code usually means deeply nested conditionals, which means deeply nested logic you have to mentally unwind to follow. Control flow count tracks the actual branching — every if, for, while, switch, catch, and ternary is a decision point that multiplies the paths through the code. Third-party import volume is the loosest signal of the four, but a file pulling in a lot of external dependencies tends to be doing a lot of things, which usually means more to reason about.
Each factor has a cap on its contribution, so no single thing can max out the score alone. A file can be large and have many functions, but still score low on complexity if the logic itself is straightforward.
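The capped-contribution idea can be sketched concretely. The 1-10 scale and the four inputs come from the article; the per-factor caps and scaling constants below are illustrative assumptions, not Iris's real weights:

```typescript
// Hypothetical sketch of capped complexity factors on a 1-10 scale.
interface ComplexityInputs {
  functionCount: number; lineCount: number;
  maxIndentDepth: number; controlFlowCount: number; thirdPartyImports: number;
}

function complexityScore(c: ComplexityInputs): number {
  const density = c.lineCount > 0 ? c.functionCount / c.lineCount : 0;
  const densityPts = Math.min(3, density * 30);             // function density, cap 3
  const depthPts = Math.min(3, c.maxIndentDepth * 0.5);     // nesting depth, cap 3
  const flowPts = Math.min(2, c.controlFlowCount * 0.1);    // branching, cap 2
  const importPts = Math.min(1, c.thirdPartyImports * 0.2); // dependencies, cap 1
  // Caps sum to 9, so starting at 1 the score can never exceed 10,
  // and no single factor can max it out alone.
  return Math.min(10, Math.round(1 + densityPts + depthPts + flowPts + importPts));
}
```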
Workspace rankings
When you scan a full workspace, Iris surfaces two ranked lists: largest files by line count and most complex files by complexity score. These are intentionally separate because they answer different questions.
A 600-line file with simple, flat logic is a different kind of problem than a 200-line file with a complexity score of 9. The first is probably just overdue for a split. The second is the one that will slow down every developer who has to touch it.
The Problems tab sorts by severity — errors first, then warnings, then info. The goal is that the first thing you see when you open it is the thing most worth fixing, not just the most recent finding.
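Severity-first ordering amounts to a simple stable sort. A minimal sketch, where `Finding` is a stand-in shape rather than Iris's real type:

```typescript
// Minimal sketch of severity-first ordering for a Problems view.
type Severity = "error" | "warning" | "info";
interface Finding { severity: Severity; message: string; line: number; }

const rank: Record<Severity, number> = { error: 0, warning: 1, info: 2 };

function sortFindings(findings: Finding[]): Finding[] {
  // Errors first, warnings next, info last; sort() is stable, so
  // findings within a severity keep their original order.
  return [...findings].sort((a, b) => rank[a.severity] - rank[b.severity]);
}
```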
What it supports
JavaScript, TypeScript (including JSX/TSX), Go, and Python. Each language has its own analysis rules built for how that language actually works — not a one-size-fits-all pass applied to everything.
How to get it
Search Iris — Code Health on the VS Code Marketplace, or go to iriscode.co.
Free to install. The free plan covers full file-level analysis — health score, complexity, code smells, status bar, and code lens. Pro unlocks workspace and folder-wide scans, the Problems tab, TODOs aggregation, and .irisconfig.json for team-wide config. Pricing is regionally adjusted — Nigerian developers get an early adopter rate of ₦2,000/month until March 30.
What's next
The core is solid — file health, workspace scans, code smell detection, and unused import analysis. There's more coming. I'm not going to put a full roadmap here, but if you've ever wanted Iris in a CI/CD pipeline, or wanted to track whether your codebase is getting healthier or worse over time, that's the kind of direction things are heading.
If you try it and something feels off or missing, reply to this post or reach me at hello@iriscode.co. I read everything.
