From "AI garbage" accusations to a trusted calculator — the technical journey of validating every price on AzureCalc.uk
I built an Azure cost calculator https://www.azure-calc.co.uk/ as a weekend project. The first piece of feedback I got was: “this looks AI-generated.”
They weren't entirely wrong.
Ouch, but fair: I had hardcoded pricing constants, generic guide content, and no methodology transparency.
The antidote to "AI garbage" is showing your working. This article explains exactly how I did that.
Layer 1: The Data Pipeline — Azure Retail Prices API to Cloudflare D1
The foundation is the Azure Retail Prices API (prices.azure.com). Unlike scraping or manual spreadsheets, this is Microsoft's official price feed. But it's 50,000+ rows for UK South alone, updated monthly. You can't query this in real-time for every calculator request.
A scheduled Worker filters the feed to UK South, GBP only, then inserts the rows into a D1 table. That gives the calculator a local, queryable snapshot of Microsoft's official prices instead of a live dependency on a 50,000-row feed.
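The sync can be sketched as two pure pieces: building the feed URL and mapping API items to row tuples. The endpoint and `$filter` syntax below follow the public Retail Prices API docs; the row shape is my own assumption, and the real Worker additionally pages through `NextPageLink` and batch-inserts via the D1 binding:

```typescript
// Shape of one item in the Azure Retail Prices API response
// (field names match the public API's JSON).
interface RetailPriceItem {
  skuName: string;
  meterName: string;
  retailPrice: number;
  unitOfMeasure: string;
  armRegionName: string;
  currencyCode: string;
}

// Build the feed URL: currency is a query parameter, region goes in $filter.
function buildFeedUrl(region: string, currency: string): string {
  const filter = encodeURIComponent(`armRegionName eq '${region}'`);
  return `https://prices.azure.com/api/retail/prices?currencyCode='${currency}'&$filter=${filter}`;
}

// Map an API item to the row shape the D1 INSERT expects (columns assumed).
function toRow(item: RetailPriceItem): [string, string, number, string] {
  return [item.skuName, item.meterName, item.retailPrice, item.unitOfMeasure];
}

// In the Worker's scheduled handler (sketch, not the actual code):
//   const rows = items.map(toRow);
//   await env.DB.batch(rows.map(r =>
//     env.DB.prepare("INSERT INTO prices VALUES (?, ?, ?, ?)").bind(...r)));
```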
Layer 2: The Verification Trap — Hardcoded Constants vs. Live Data
Here's where I went wrong in Sprint 0. I hardcoded the Log Analytics PAYG rate as £2.76/GB, taken directly from Azure's documentation. But documentation lags reality: when I queried the actual pricing data in D1, the live rate differed from the documented figure by 27%.
The Fix: D1-First Development
Now, no price enters the codebase unless it exists in D1.
Every calculator now follows:
1. Query D1 for available SKUs
2. Match frontend inputs to actual SKU names
3. Store the query used for verification
4. Surface the same value in the UI
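Steps 1 and 2 reduce to a small lookup. This is a sketch with assumed column and interface names, after the rows have been fetched from D1 (e.g. `SELECT sku_name, meter_name, retail_price FROM prices WHERE ...`):

```typescript
// Row shape returned by the D1 query (columns are illustrative).
interface PriceRow {
  skuName: string;
  meterName: string;
  retailPrice: number;
}

// Match a frontend input to an actual meter from D1 — never a constant.
// Failing loudly here is the point: if the meter isn't in D1, the fix is
// the sync pipeline, not a hardcoded fallback.
function findRate(rows: PriceRow[], meterName: string): number {
  const row = rows.find(r => r.meterName === meterName);
  if (!row) {
    throw new Error(`No D1 row for meter "${meterName}" — update the sync, not the code`);
  }
  return row.retailPrice;
}
```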
Layer 3: Real Invoice Validation — The Ultimate Ground Truth
API prices are theoretical. Invoices are reality. The gap between them is where discount programs (EA, CSP, MACC) live.
To reconcile this, every quarter I run a controlled Invoice vs. Calculator reconciliation:
- Deploy a known workload (e.g. Log Analytics + App Service)
- Let it run for a billing cycle
- Capture Azure invoice data
- Run identical inputs through the calculator
- Compare line-by-line
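The final comparison step reduces to a small pure function. This is a sketch with an assumed invoice-line shape, not the actual reconciliation script:

```typescript
// One billed line item, from either the Azure invoice export or the
// calculator output (shape assumed for illustration).
interface Line {
  meter: string;
  cost: number;
}

// Compare line-by-line; returns variance per meter as a fraction
// (0 = exact match, NaN = meter missing from the calculator output).
function reconcile(invoice: Line[], calc: Line[]): Map<string, number> {
  const out = new Map<string, number>();
  for (const inv of invoice) {
    const match = calc.find(c => c.meter === inv.meter);
    if (!match) {
      out.set(inv.meter, NaN);
      continue;
    }
    out.set(inv.meter, Math.abs(match.cost - inv.cost) / inv.cost);
  }
  return out;
}
```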
Note: These are PAYG rates. EA/CSP discounts would show variance here — that's expected and documented on the /methodology page.
Layer 4: The Formula Disclosure — Showing Your Working
The most effective trust signal I added was the FormulaDisclosure component. Every calculator result shows the exact arithmetic:
Tier: Pay-as-you-go · Region: UK South
Price fetched: 08 Apr 2026 from Azure Retail Prices API
This serves two purposes:
- Verification — Engineers can check the unit price against their own sources
- Education — Shows how Azure billing actually works (unit price × quantity × time)
The implementation is a React component that takes the raw API response and the user's inputs, then generates the formula string dynamically. No hardcoded example text — real data only.
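The string-building core looks roughly like this. Prop names are my own assumptions; the real component also renders the tier, region, and fetch date shown above:

```typescript
// Inputs assembled from the raw API response and the user's inputs
// (prop names are illustrative, not the component's actual interface).
interface FormulaInput {
  unitPrice: number;  // live rate from D1, e.g. retailPrice
  quantity: number;   // user input, e.g. GB ingested per month
  unit: string;       // e.g. "GB"
  currency: string;   // e.g. "£"
}

// Generate the disclosed arithmetic dynamically — no hardcoded example text.
function formatFormula({ unitPrice, quantity, unit, currency }: FormulaInput): string {
  const total = unitPrice * quantity;
  return `${currency}${unitPrice.toFixed(2)}/${unit} × ${quantity} ${unit} = ${currency}${total.toFixed(2)}/mo`;
}
```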
Layer 5: Price History & Alerts — Proving the Data Is Live
Static pricing pages are the hallmark of abandoned tools. Live data needs evidence of life.
The Price History Page
At https://www.azure-calc.co.uk/history/, every price change is logged to a price_history table.
The page shows the last 10 price movements. This proves:
- The data isn't static
- Someone is monitoring it
- Price changes are tracked with timestamps
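The change log can be produced by diffing consecutive snapshots during the nightly sync. The shapes below are assumptions; in practice each detected change would be inserted into price_history with a timestamp:

```typescript
// A snapshot of the feed at one point in time: skuName -> retailPrice.
type Snapshot = Map<string, number>;

interface PriceChange {
  sku: string;
  oldPrice: number;
  newPrice: number;
}

// Compare yesterday's snapshot against today's fetch; only actual
// movements are returned (new SKUs are handled separately).
function detectChanges(prev: Snapshot, next: Snapshot): PriceChange[] {
  const changes: PriceChange[] = [];
  for (const [sku, newPrice] of next) {
    const oldPrice = prev.get(sku);
    if (oldPrice !== undefined && oldPrice !== newPrice) {
      changes.push({ sku, oldPrice, newPrice });
    }
  }
  return changes;
}
```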
The TrustBar
Every page shows:
- Last update timestamp
- Number of prices cached
- UK South + GBP
This is not a UI feature.
It is evidence that the system is actively maintained.
Layer 6: Zod Schema Validation — Preventing Frontend/Backend Drift
Another failure mode: the frontend adds a new SKU, but the API Worker's Zod schema doesn't recognize it. Result: HTTP 400 errors and "Error /mo" on the calculator.
The Pattern
Every calculator has a shared Zod schema in workers/api/validation.ts. When the frontend adds a new tier or SKU, the schema must be updated first. This is now part of the end-of-sprint checklist.
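A minimal sketch of the pattern, with illustrative names. In the real validation.ts the same constant would feed `z.enum(...)`, so a SKU added in one place updates both the compile-time type and the runtime check; the plain-TypeScript version below shows the single-source-of-truth idea without the Zod dependency:

```typescript
// One source of truth for allowed SKUs (values are illustrative).
// Frontend option lists and Worker validation both derive from this array,
// so they cannot drift apart silently.
const ALLOWED_SKUS = ["payg", "commitment-100gb", "commitment-200gb"] as const;
type Sku = (typeof ALLOWED_SKUS)[number];

// Runtime check mirroring what z.enum(ALLOWED_SKUS).parse(input) would do.
function parseSku(input: string): Sku {
  if ((ALLOWED_SKUS as readonly string[]).includes(input)) {
    return input as Sku;
  }
  throw new Error(`Unknown SKU "${input}" — add it to the shared schema first`);
}
```

Because the frontend imports the same constant for its dropdown options, forgetting to update the schema now shows up as a TypeScript error rather than an HTTP 400 in production.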
Technical Tips for Building Your Own Verified Calculator
- [ ] Query D1 for every price used in the calculator — verify exact match
- [ ] Test every API endpoint with real frontend requests (DevTools → Network)
- [ ] Verify FormulaDisclosure shows live D1 rate
- [ ] Hard refresh the live site, check TrustBar timestamp updates
- [ ] Zero TypeScript errors under strict mode
The Results: From "AI Garbage" to "Tight Loop No Other Tool Does"
Three sprints later, the same Reddit thread had this comment from a cloud architect:
"The KQL query builder is the part I'd lean into hardest... If you can pair the calculator output with the KQL query that surfaces what it actually costs in the user's workspace, you've got a really tight loop that no other tool in this space does well right now."
That's the difference between a tool that feels generated and one that feels maintained. The methodology isn't just documentation — it's the actual process I follow every night at 02:00 UTC when the cron trigger fires.
The "AI garbage" label sticks to tools that feel generated rather than maintained. The antidote isn't better copy — it's evidence of ongoing operational work.
For https://www.azure-calc.co.uk/, that evidence is:
- Live price data refreshed nightly, with history
- Real invoice reconciliation showing 0% variance on PAYG rates
- Formula disclosure on every result showing the exact arithmetic
- Verified KQL queries tested against real workspaces
- Public changelog documenting actual fixes, not feature lists
If you're building a data-driven tool, apply the same rigor. Your users might not read your methodology page, but they'll feel the difference between a static page and a living system.
Try it: https://www.azure-calc.co.uk/methodology — see the exact API query and verification process.
Open source: The methodology is public. If something looks wrong, email me and I'll check it against the Azure API within 24 hours.
Series
This is post 1 of "Building AzureCalc.uk" — a technical series on building credible infrastructure tools. Follow for Sprint 3 (Networking), the Price Alerts feature, and the Invoice Reconciliation deep-dive.