Designing a scoring system after the build is convenient. The dimensions settle around what you happened to encounter; the weights drift toward what turned out to matter. The score feels right because the rubric was shaped around the result.
This one was fixed before Week 1 started.
A scoring system designed after the build will justify what happened rather than measure it. Building it first forces the criteria to be general enough to apply to every chain — EVM and non-EVM, polished and experimental — before you know which one you're deploying on.
Eight dimensions. Three weights.
| # | Dimension | Weight | Max |
|---|---|---|---|
| D1 | Getting Started | ×1.0 | 5 |
| D2 | Developer Tooling | ×2.0 | 5 |
| D3 | Contract Authoring | ×2.0 | 5 |
| D4 | Documentation Quality | ×1.5 | 5 |
| D5 | Frontend / Wallet | ×2.0 | 5 |
| D6 | Deployment Experience | ×1.5 | 5 |
| D7 | Transaction Cost | ×1.0 | 5 |
| D8 | Community & Ecosystem | ×1.0 | 5 |
Maximum score: 60 (the weights sum to 12, and 12 × 5 = 60). Each dimension scores 1–5. The weight reflects how much that dimension determines whether you'd actually build a production app on the chain.
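The arithmetic is nothing fancy, but a sketch makes the ceiling obvious. The weights below are copied from the table; the scores passed in are illustrative, not real results:

```typescript
// Weights from the rubric table. Each raw dimension score is 1–5.
const WEIGHTS: Record<string, number> = {
  D1: 1.0, D2: 2.0, D3: 2.0, D4: 1.5,
  D5: 2.0, D6: 1.5, D7: 1.0, D8: 1.0,
};

// Weighted total: sum of (raw score × weight). Weights sum to 12, so max = 60.
function weightedTotal(scores: Record<string, number>): number {
  return Object.entries(scores).reduce(
    (sum, [dim, raw]) => sum + raw * (WEIGHTS[dim] ?? 0),
    0,
  );
}

// Straight fives hit the 60-point ceiling.
const perfect = Object.fromEntries(Object.keys(WEIGHTS).map((d) => [d, 5]));
console.log(weightedTotal(perfect)); // 60
```

One design consequence worth noticing: a single point lost on a ×2.0 dimension costs as much as two points lost on a ×1.0 dimension, which is exactly the "daily surface dominates" argument in weight form.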
D2, D3, and D5 carry the most weight because they are the daily surface: the tools you use every hour, the language you write in, the wallet integration you fight with on every frontend. Get those wrong and no amount of documentation or community enthusiasm compensates.
D4 and D6 are mid-weight — important but recoverable. Bad documentation can be worked around with research; a difficult deploy flow can be scripted.
D1, D7, and D8 are single-weight. Getting started friction matters, but you only experience it once. Transaction cost matters for production apps, but at this scale the variance between chains is more interesting than the absolute number. Community is context, not infrastructure.
## What I'm measuring
Each dimension is scored from the perspective of a developer building this specific app — a social tip jar with a straightforward contract, wallet connect, and a message wall — on this specific chain, this specific week. Not a permanent rating. A snapshot of the developer experience at build time.
- D1: everything before the first line of code — wallet setup, testnet funds, network configuration. The faucet experience lives here.
- D2: the toolchain — Foundry or equivalent, CLI tools, local node, testing. How much of the standard EVM workflow translates unchanged?
- D3: the contract side — EVM equivalence, library support, Solidity version compatibility, anything that required rewriting logic.
- D4: documentation — official quickstarts, deployment guides, API references. Do the official docs get you to a working deploy, or do you end up in forum threads?
- D5: frontend and wallet layer — chain imports in wagmi/viem, wallet connector support, anything that required custom integration.
- D6: deploy and verify workflow — commands, manual steps, verification speed, block explorer quality.
- D7: transaction cost on mainnet — deploy cost and per-transaction cost for the app's primary function. For a tip jar, a transaction that costs more than the tip is broken.
- D8: broader ecosystem — community size, documentation freshness, signs of active development versus stagnation.
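The D7 "costs more than the tip" test is just a comparison of the transaction fee against the tip amount. A toy sketch, with placeholder gas numbers rather than measured ones:

```typescript
// Is a tip economically sane on this chain? The fee shouldn't exceed the tip.
// All values are in wei. The example numbers below are hypothetical.
function tipIsViable(tipWei: bigint, gasUsed: bigint, gasPriceWei: bigint): boolean {
  const feeWei = gasUsed * gasPriceWei;
  return feeWei < tipWei;
}

const smallTip = 3_000_000_000_000n; // 0.000003 ETH, illustrative
// fee = 50,000 gas × 10,000,000 wei/gas = 5e11 wei, well under the tip
console.log(tipIsViable(smallTip, 50_000n, 10_000_000n)); // true
```

The same comparison run with L1-mainnet gas prices is where many chains fail this dimension outright.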
## The score bands
- 55–60: Outstanding — build here with confidence, caveats worth naming but none are blockers.
- 45–54: Strong — solid foundation, specific gaps worth understanding before committing.
- 35–44: Mixed — viable, but with meaningful friction or risk you need to plan around.
- Below 35: Challenged — fundamental issues that affect the build directly.
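The bands translate directly into a threshold lookup. A minimal sketch (the function and type names are mine):

```typescript
type Band = "Outstanding" | "Strong" | "Mixed" | "Challenged";

// Map a weighted total (0–60) onto the rubric's score bands.
function band(score: number): Band {
  if (score >= 55) return "Outstanding";
  if (score >= 45) return "Strong";
  if (score >= 35) return "Mixed";
  return "Challenged";
}

console.log(band(56)); // "Outstanding" — Week 1's score lands in the top band
```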
## Week-by-week scores
- Week 1 scored 56/60 (Outstanding). Base is the base case. The full retrospective can be found here.
- Week 2 has not been scored yet; stay tuned.