Ryan Swift
3 Takeaways from All Things AI: 80/20 Rule, Non-Deterministic Humans, and Why We're Still Early

Vibe coding and human-AI adoption friction

Last week, I attended All Things AI in Durham, NC. The event was geared toward technical AI practitioners with workshops on Day 1 and standard conference talks on Day 2. A few takeaways from the conference are below for y'all!

MLH AI Roadshow

On Day 1, I helped Major League Hacking (MLH) run an AI Roadshow. We're hosting about one of these a month right now. Some are standalone events. Others have been at partner conferences. I think you'll see us again at All Things Open in the fall.

At the AI Roadshow, we run ~6 practical, hands-on-keyboard activities. You walk up and use a ready-to-go laptop to learn a new skill or technique with AI. We typically have snacks, drinks, and networking throughout the events. If you want to keep an eye on the next stop, check out our Luma.

The 80/20 Rule in Practice

(Inspired by Justin Jeffress: Vibe Coding in Action)

I enjoyed Justin's whole talk, but he had a specific slide on the 80/20 rule that is still sticking with me. AI feels really good at getting you 80% of the way there on most problems.

What do you do with that remaining 20%, though? Justin's advice was to simply repeat. Send the "remaining" 20% back to your AI tool of choice.

I've been doing this myself for a while with coding agents, but this talk has me thinking again about the effectiveness of simple loops and repetition. I imagine this advice hasn't penetrated to folks who mostly use AI through chat applications. It might also not work as well for outcomes that can't be tested. But the framing really resonated with me.
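The repeat loop is simple enough to sketch in a few lines. This is a minimal illustration, not anyone's actual implementation: `ask_agent` and `passes_checks` are hypothetical stand-ins for your AI tool of choice and whatever tests or review you run on its output.

```python
def ask_agent(prompt: str) -> str:
    """Stand-in for a real agent or LLM API call (hypothetical)."""
    # For illustration only: pretend the agent improves the draft each round.
    return prompt + " [revised]"

def passes_checks(draft: str) -> bool:
    """Your tests, linters, or eyeball review go here (toy criterion below)."""
    return draft.count("[revised]") >= 2

def refine(task: str, max_rounds: int = 5) -> str:
    draft = ask_agent(task)            # first pass: ~80% of the way there
    for _ in range(max_rounds):
        if passes_checks(draft):       # good enough? stop looping
            return draft
        # otherwise, hand the "remaining 20%" straight back to the tool
        draft = ask_agent(f"Fix the remaining issues in:\n{draft}")
    return draft

result = refine("write a retry helper")
```

The point isn't the code itself, it's the shape: generate, check, feed the failures back in, and stop when your checks pass. This only works when you have some way to test the outcome, which is exactly the caveat above.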

Humans Are Also Non-Deterministic

(Inspired by Calvin Hendryx-Parker: Orchestrate Agentic AI)

Through my work at MLH, I'm constantly talking to software developers. The last ~18 months have been even more intense, driven by all the AI development.

A common concern I haven't felt ready to address is that AI tools are non-deterministic. I get it. When you're writing code, you're building for specific outcomes. I think devs like the certainty.

Calvin made a small comment during his talk, reminding the audience that humans are also non-deterministic. The observation really hit home. I knew this. I'm sure you know it, too. But it was a good reframe. Process-driven thinking and guardrails suddenly feel important.

We're Still So So Early

(Just a personal reflection)

Like many of you, I've been following AI heavily recently. It's been important for the work I'm doing at MLH. It's so easy to get caught up in the fad-of-the-week on Twitter. But the reality is that most people still aren't leveraging AI tools. Especially not to their maximum capabilities.

Even at a conference for AI practitioners, I met people who hadn't used a coding agent. There were jokes about OpenClaw and Mac Minis, but not everyone was "in the know".

We're still so damn early in this wave of change. No time like the present to roll up your sleeves and give something a try yourself. I still think learning by doing is the right methodology with AI tools.

Happy hacking!

Top comments (10)

Swift

I love the point about humans being non-deterministic as well. Great write up, thanks for sharing!

Victor Okefie

"The observation really hit home. I knew this. I'm sure you know it, too." Most conversations about non-deterministic AI assume humans are the stable baseline. Calvin's reframe flips it: humans bring variance, fatigue, and context shifts. That doesn't make AI less risky. It means the bar for "reliable" was never as high as we pretended.

Nikita

That's an interesting idea, that the human world is non-deterministic and so, to some degree, is AI. I never thought about it that way either. I'm wondering: is the current focus in building AI tools on delivering totally deterministic output, even when some factors/variables/circumstances will always be out of our hands?

Ryan Swift

I think there's a big push towards more determinism, but I don't think anyone is looking for fully deterministic outputs. The 'creativity' and flexibility that AI provides are really valuable.

But I've been hearing so many folks use determinism as a counterargument to AI usage and coding. It was a good reality check to remember that human-written code suffers from similar problems.

arun rajkumar

The non-determinism reframe is the takeaway I didn't know I needed.

I run engineering for a payments company — FCA-regulated, processing real money. When my team first pushed back on AI tooling, the argument was always "but it's non-deterministic." And I'd nod, because in fintech, determinism feels like a requirement.

But then I started keeping track. Our human-written payment retry logic had 3 edge-case bugs in 6 months. Our manually configured environment variables had naming conflicts across services that went unnoticed for weeks. Our "deterministic" code reviews missed a race condition that cost us real money.

Humans were never deterministic. We just had better marketing.

The 80/20 loop is real too. We've been using AI agents to audit our service configurations, and the pattern is exactly what you describe — first pass catches the obvious stuff, second pass with a different prompt angle catches the subtle inconsistencies, third pass is where you need a human who understands why a config exists, not just what it says.

The key insight for regulated industries: don't try to make AI deterministic. Build the verification layer that you should have been building for your human processes all along. The non-determinism isn't the risk — the lack of guardrails is.

Nova Elvaris

The 80/20 loop is solid advice, but I think the missing piece is that each iteration should evaluate against different criteria. First pass: does the structure make sense? Second pass: do the edge cases hold? Third pass: would this break under real-world inputs? When you send the same "fix this" prompt back, the model tends to patch symptoms instead of root causes. Changing the evaluation angle on each loop is what turns the 80/20 into 95/5. The non-determinism reframe is also really underrated — I've started treating AI outputs like I treat human code reviews: expect variance, build verification into the process, and don't assume the first answer is the final answer.

MergeShield

the non-deterministic humans point is the underrated one. we've spent decades building deterministic systems and now we're building non-deterministic ones operated by humans who expect predictable outputs. that mismatch is where most AI adoption friction actually lives.

Mykola Kondratiuk

the non-deterministic humans point lands differently when you mix human reviews and AI steps in the same workflow. we assumed humans would consistently approve/reject at a certain rate, then found the variance was huge depending on time of day, workload, who was oncall. kind of the same reliability problem as the model, just slower. curious which of the 3 takeaways surprised you most at the conference?

Apex Stack

The 80/20 loop observation is really practical. I've been running automated agents that do exactly this pattern — take the output, validate it, feed the failures back in, repeat. The key insight I've found is that each iteration needs a different prompt angle, not just "try again." The first pass gets structure right, the second fixes edge cases, the third catches hallucinations. Same loop, different evaluation criteria each time.

The "we're still early" point really resonates. I talk to developers daily who've never set up an MCP server or used a coding agent beyond basic autocomplete. Meanwhile there are people running entire production pipelines with 10+ autonomous agents. The gap between early adopters and the mainstream is still enormous — which is actually encouraging if you're building in this space right now.

Your non-determinism reframe is the most underrated point. I'd push it even further: the determinism we think we have in traditional software is partly an illusion. Anyone who's debugged a race condition, dealt with flaky integration tests, or watched a caching layer serve stale data knows that "deterministic" systems fail in very non-deterministic ways at scale. AI just makes the non-determinism explicit rather than hidden.

Travis Drake

The 80/20 part makes a lot of sense. I tend to work in layers because I know where the 20% will appear, so I pre-plan to head it off before I review. Gets it much closer to 95/5.