CTO & Co-Founder, Atoa. Building open banking payments for the UK. Half the cost of cards. Instant settlement. FCA-authorised. I hire for intent, not resumes. I write about payments, AI, and the messy
The non-determinism reframe is the takeaway I didn't know I needed.
I run engineering for a payments company — FCA-regulated, processing real money. When my team first pushed back on AI tooling, the argument was always "but it's non-deterministic." And I'd nod, because in fintech, determinism feels like a requirement.
But then I started keeping track. Our human-written payment retry logic had 3 edge-case bugs in 6 months. Our manually configured environment variables had naming conflicts across services that went unnoticed for weeks. Our "deterministic" code reviews missed a race condition that cost us real money.
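To make the "edge-case bugs in retry logic" concrete, here's a minimal sketch of the shape of code where ours lived. Everything here is illustrative, not our actual codebase: the subtle part is never the loop itself, it's classifying which failures are safe to retry, and making a retry unable to double-charge.

```python
import time

# Illustrative failure classes — the real taxonomy is longer and is
# exactly where our human-written edge-case bugs hid.
RETRYABLE = {"timeout", "rate_limited"}

def submit_with_retry(submit, payment_id, max_attempts=3, base_delay=0.5):
    """Retry a payment submission with exponential backoff.

    One idempotency key is reused across attempts, so a retried
    request can never create a duplicate charge.
    """
    idempotency_key = f"pay-{payment_id}"  # stable across attempts
    last_error = None
    for attempt in range(1, max_attempts + 1):
        result = submit(payment_id, idempotency_key)
        if result["status"] == "ok":
            return result
        last_error = result.get("error")
        if last_error not in RETRYABLE:
            break  # e.g. auth_declined: retrying is wrong, not just wasteful
        if attempt < max_attempts:
            time.sleep(base_delay * 2 ** (attempt - 1))
    return {"status": "failed", "error": last_error}
```

The bug we actually shipped was a cousin of the `break` line: a failure class that looked transient but wasn't.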
Humans were never deterministic. We just had better marketing.
The 80/20 loop is real too. We've been using AI agents to audit our service configurations, and the pattern is exactly what you describe: the first pass catches the obvious stuff, a second pass with a different prompt angle catches the subtle inconsistencies, and the third pass is where you need a human who understands why a config exists, not just what it says.
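Structurally, the multi-pass audit looks something like the sketch below. The passes here are plain rule checks standing in for the AI agent calls, and the service names and rules are made up; the point is the shape: each pass looks for a different class of problem, and everything found is queued for a human rather than auto-fixed.

```python
def pass_obvious(configs):
    """Pass 1: mechanical checks any linter could catch."""
    findings = []
    for svc, cfg in configs.items():
        for key in cfg:
            if key != key.upper():
                findings.append((svc, key, "env var not UPPER_CASE"))
    return findings

def pass_cross_service(configs):
    """Pass 2: the same logical setting spelled differently across services."""
    findings = []
    seen = {}  # canonical name -> first spelling encountered
    for svc, cfg in configs.items():
        for key in cfg:
            canonical = key.upper().replace("-", "_")
            if canonical in seen and seen[canonical] != key:
                findings.append((svc, key, f"conflicts with {seen[canonical]}"))
            else:
                seen.setdefault(canonical, key)
    return findings

def audit(configs):
    findings = pass_obvious(configs) + pass_cross_service(configs)
    # Pass 3 is a human who knows *why* each config exists:
    # we only queue findings, we never auto-fix.
    return {"findings": findings, "needs_human_review": bool(findings)}
```

The naming-conflict check in pass 2 is exactly the class of drift that sat unnoticed in our configs for weeks.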
The key insight for regulated industries: don't try to make AI deterministic. Build the verification layer that you should have been building for your human processes all along. The non-determinism isn't the risk — the lack of guardrails is.
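What a verification layer means in practice: the proposal can come from an AI agent or a tired human, and it doesn't matter, because the same deterministic checks gate it either way. A minimal sketch, with invariants that are illustrative rather than an exhaustive fintech checklist:

```python
def verify_change(proposal):
    """Gate a proposed config change. Deterministic: same input, same verdict.

    Returns (ok, reasons). The proposer's non-determinism is irrelevant;
    the guardrail is where the guarantees live.
    """
    reasons = []
    amount = proposal.get("max_payment_pence")
    if not isinstance(amount, int) or amount <= 0:
        reasons.append("max_payment_pence must be a positive integer")
    if proposal.get("settlement") not in {"instant", "t+1"}:
        reasons.append("settlement must be 'instant' or 't+1'")
    if proposal.get("author") is None:
        reasons.append("every change needs an accountable author")
    return (not reasons, reasons)
```

Notice the checks say nothing about who or what produced the proposal — that's the whole point.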