Bayesian Thinking in Everyday Decisions
A senior engineer on my team once told me: "I'm 70% sure the performance issue is in the database layer." We profiled the application. The bottleneck was in a serialization step. He updated: "Okay, now I'm 90% sure it's serialization."
He didn't know he was being Bayesian. He was just being a good engineer.
Bayesian thinking -- starting with a belief, then updating it proportionally as evidence arrives -- is one of the most powerful thinking tools available. You don't need to know the math. You need the mindset.
The Core Idea
Traditional thinking is binary. You believe something or you don't. You're right or you're wrong.
Bayesian thinking is probabilistic. You hold beliefs with varying degrees of confidence. New evidence shifts your confidence up or down -- not all at once, but proportionally to the strength of the evidence.
The formula (simplified to its essence):
- Start with a prior belief (your current estimate of how likely something is)
- Observe new evidence
- Update your belief based on how much that evidence should shift it
- The result is your posterior belief
The key insight: how much you should update depends on two things -- how strong the evidence is, and how confident you were before.
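The steps above can be sketched in a few lines of Python. The likelihood ratio here -- how many times more likely the evidence is if your belief is true than if it is false -- is an illustrative number, not something from the anecdote:

```python
def bayes_update(prior, likelihood_ratio):
    """Return the posterior probability after seeing evidence.

    likelihood_ratio: how many times more likely the evidence is
    if the belief is true than if it is false.
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Start 70% confident; strong evidence (4x more likely under the hypothesis).
print(round(bayes_update(0.70, 4.0), 2))  # ~0.90

# Same prior, weak evidence (only 1.2x more likely) barely moves you.
print(round(bayes_update(0.70, 1.2), 2))  # ~0.74
```

Note how the same prior lands in very different places depending on the strength of the evidence -- that's the "two things" the update depends on.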
Why Most People Get This Wrong
People make two systematic errors:
Error 1: Not updating at all (anchoring). You form an initial opinion and stick with it regardless of new information. "I thought the bug was in the authentication module, and even though three debugging sessions point elsewhere, I still think it's auth." This is stubbornness masquerading as conviction.
Error 2: Updating too much (overreaction). One data point completely reverses your view. "A single user complained about performance, so clearly we need to rewrite the entire system." This is reactivity masquerading as responsiveness.
Bayesian thinking finds the middle ground. A single complaint doesn't mean a rewrite, but it also doesn't mean nothing. It's a small piece of evidence that should slightly shift your confidence about whether a performance problem exists.
Applied to Debugging
Debugging is naturally Bayesian, even if nobody calls it that.
You start with a hypothesis: "The bug is probably in the new code we deployed yesterday." This is your prior, based on experience (most bugs come from recent changes).
You check the deployment log. The change was to a CSS file, and the bug is a data corruption issue. That evidence strongly contradicts your prior. You update: "Probably not the recent deployment. Maybe it's a race condition in the data pipeline."
You add logging to the pipeline. You see that two processes are writing to the same record. Strong evidence for the race condition hypothesis. You update again: now you're 90% confident.
The formal process is: hypothesis, evidence, update. The informal process is what every experienced debugger does intuitively. Making it explicit just makes you faster and more systematic.
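One way to make the process explicit is a posterior over competing hypotheses. The priors and likelihoods below are illustrative numbers I've invented for the debugging story, not measured values:

```python
# Prior belief about where the bug lives (illustrative numbers).
priors = {"recent_deploy": 0.60, "race_condition": 0.25, "other": 0.15}

# How likely the observed evidence (data corruption after a CSS-only
# deploy) would be under each hypothesis -- assumed for illustration.
likelihoods = {"recent_deploy": 0.02, "race_condition": 0.60, "other": 0.30}

# Bayes' rule over discrete hypotheses: multiply, then normalize.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: unnormalized[h] / total for h in priors}

for h, p in posterior.items():
    print(f"{h}: {p:.0%}")
```

The "recent deploy" hypothesis collapses even though it started at 60%, because the evidence is far less likely under it than under the race-condition hypothesis.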
Applied to Hiring
Here's where Bayesian thinking gets practically useful.
Prior: Before the interview, what's the base rate for this role? If you typically hire 1 in 10 candidates, your prior for any individual candidate being a good hire is about 10%.
Evidence from resume: Strong resume from a relevant background. Update upward, maybe to 25%.
Evidence from technical interview: Solved the problem but struggled with edge cases. Modest update. Maybe 30%.
Evidence from system design interview: Impressive depth, asked the right questions, acknowledged trade-offs. Significant update. Maybe 55%.
Evidence from reference check: Former manager gives a lukewarm reference. Update downward. Maybe 40%.
Notice what Bayesian thinking prevents: it prevents you from falling in love with a candidate after one great interview (overreaction) and also prevents you from dismissing someone because of one weak signal (anchoring on the negative).
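Chaining the updates is easiest in odds form. The likelihood ratios below are assumptions chosen to roughly track the numbers in the walkthrough above (a ratio above 1 is positive evidence, below 1 is negative):

```python
def update(p, likelihood_ratio):
    """Odds-form Bayesian update: posterior from prior and evidence strength."""
    odds = p / (1 - p) * likelihood_ratio
    return odds / (1 + odds)

p = 0.10  # prior: base rate of 1 good hire in 10 candidates

# Illustrative likelihood ratios for each signal, in order:
# resume, technical interview, system design, lukewarm reference.
for signal, lr in [("resume", 3.0), ("technical", 1.3),
                   ("system design", 2.9), ("reference", 0.55)]:
    p = update(p, lr)
    print(f"after {signal}: {p:.0%}")
```

The lukewarm reference (ratio 0.55) pulls the estimate down without erasing the earlier signals -- exactly the middle ground between anchoring and overreaction.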
Applied to Technical Decisions
"Should we use Kubernetes?"
Prior: Most teams our size (15 engineers, moderate traffic) are well-served by simpler deployment solutions. Starting confidence that K8s is the right call: 20%.
Evidence: Our deployment frequency is increasing, and we're planning to move to microservices. Update to 35%.
Evidence: Three engineers on the team have production K8s experience. Update to 50%.
Evidence: We talk to a similar-sized company that adopted K8s and spent six months on infrastructure instead of product features. Update down to 35%.
Evidence: Our current deployment system is causing two incidents per month. Update to 45%.
The final number isn't the point. The process is. You've systematically considered evidence from multiple angles, updated proportionally, and arrived at an informed position rather than a snap judgment.
Practical Tips for Bayesian Thinking
Assign numbers, even rough ones. "I'm 60% confident" is more useful than "I think so." Numbers force precision and make updates tractable.
Distinguish between strong and weak evidence. A production load test is strong evidence about performance. A blog post benchmark is weak evidence. Strong evidence should move your confidence more than weak evidence.
Watch for base rate neglect. The most common Bayesian error is ignoring the prior. If a test has a 5% false positive rate and the condition you're testing for occurs in 1% of the population, a positive result doesn't mean 95% chance of having the condition. The math is counterintuitive, and in everyday decisions, we make this mistake constantly.
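The arithmetic behind that example is worth doing once. Assuming, for simplicity, a test that never misses a true case (the text doesn't state a sensitivity, so perfect sensitivity is my assumption):

```python
prevalence = 0.01           # condition occurs in 1% of the population
false_positive_rate = 0.05  # 5% of negatives test positive anyway
sensitivity = 1.0           # assumed: the test catches every true case

# Total probability of a positive result (true positives + false positives).
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive_rate

# Bayes' rule: P(condition | positive).
p_condition_given_positive = prevalence * sensitivity / p_positive
print(round(p_condition_given_positive, 3))  # ~0.168
```

A positive result means roughly a 17% chance of having the condition, not 95% -- the false positives from the huge healthy majority swamp the true positives from the tiny affected minority.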
Keep a calibration record. When you say "80% confident," how often are you right? If you're right 60% of the time, you're overconfident. Track your predictions and confidence levels. Over time, you'll calibrate.
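A calibration record can be as simple as a list of (stated confidence, outcome) pairs bucketed by confidence level. The predictions below are made-up sample data; in practice you'd log real ones over time:

```python
from collections import defaultdict

# (stated confidence, whether the prediction came true) -- sample data.
predictions = [
    (0.8, True), (0.8, True), (0.8, False), (0.8, False), (0.8, True),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False), (0.6, False),
]

buckets = defaultdict(lambda: [0, 0])  # confidence -> [hits, total]
for conf, correct in predictions:
    buckets[conf][1] += 1
    if correct:
        buckets[conf][0] += 1

for conf in sorted(buckets):
    hits, total = buckets[conf]
    print(f"stated {conf:.0%}: right {hits}/{total} ({hits / total:.0%})")
```

In this sample, "80% confident" predictions came true only 60% of the time -- the overconfidence signal the tip describes.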
Update incrementally, not in leaps. The hallmark of Bayesian thinking is proportional updating. A single data point shouldn't flip your belief from 20% to 90%. If it does, either the evidence is extraordinarily strong or you're overreacting.
The Connection to Investment Thinking
The best investors are natural Bayesians. They form a thesis, then update it as earnings reports, market data, and competitive dynamics provide new evidence. The ones who succeed long-term are the ones who update proportionally -- neither too sticky nor too reactive.
For a structured collection of these probabilistic thinking principles applied to real decisions, the principles library on KeepRule organizes these frameworks in a way that makes them practically useful day to day.
The Takeaway
You don't need Bayes' theorem. You need three habits:
- Express beliefs as probabilities, not certainties
- Seek evidence that could change your mind
- Update your beliefs proportionally to the evidence
Do this consistently, and you'll make better decisions than people who are smarter but less systematic.
Start with your next technical debate. Instead of arguing for a position, state your confidence level and ask: "What evidence would change my mind?" Then go find that evidence.