Warren Buffett has a simple framework that's earned him over $100 billion: only invest in things you truly understand. He calls it the Circle of Competence. Everything you have deep, earned knowledge about sits inside the circle. Everything else is outside it. The key isn't making your circle bigger — it's knowing exactly where the edge is and refusing to pretend otherwise.
Buffett has said he doesn't invest in technology companies he can't fully evaluate. This isn't false modesty. It's discipline. And watching his returns over six decades, it's hard to argue with the results.
I've been thinking about this principle for two years, and I'm now convinced it's the most underused mental model in software engineering. Because the pattern of failure Buffett identifies in investing — people getting into trouble because they operate outside their competence without realizing it — maps almost perfectly onto how engineering teams choose technology.
The Circle of Competence, Defined
Buffett's partner Charlie Munger puts it plainly: "Knowing what you don't know is more useful than being brilliant."
Your circle of competence isn't about intelligence. Plenty of brilliant people make terrible decisions outside their circle. It's about the specific, earned knowledge that comes from years of hands-on experience with something. You can read every blog post about Kubernetes. That doesn't put Kubernetes inside your circle. Running Kubernetes in production for three years, dealing with its failure modes, understanding its operational characteristics from direct experience — that's what earns it a place inside your circle.
The boundary is critical. Inside your circle, you have intuition backed by experience. You know the gotchas. You can smell problems before they manifest. Outside your circle, you have impressions backed by marketing materials. You know the features. You can't smell the problems because you've never encountered them.
How Teams Violate the Circle
I've watched this pattern play out at four different companies:
The Conference-Driven Architecture. A senior engineer attends a conference. They see a talk about a technology that solved an impressive problem at an impressive company. They come back energized and advocate for adopting the technology. The team evaluates it based on conference talks, blog posts, and the "getting started" tutorial. Everyone is impressed. Nobody on the team has production experience with it. They adopt it. Six months later, they're dealing with operational problems that never appear in conference talks because speakers don't present their failures.
I saw this happen with a graph database adoption. The conference talk was compelling. The demo was gorgeous. The team adopted it for a use case that turned out to be a terrible fit for graph databases. Eighteen months and a complete rewrite later, they were back on PostgreSQL. Total cost: roughly $400,000 in engineering time.
The Resume-Driven Development. An engineer advocates for a technology not because it's the best fit but because they want it on their resume. They frame the advocacy in technical terms — scalability, performance, developer experience — but the actual motivation is career advancement. The team lacks the expertise to evaluate the claims critically, and the enthusiasm of the advocate is mistaken for expertise.
The Google/Netflix/Spotify Fallacy. "Netflix uses it, so it must be good." Netflix also has a dedicated platform engineering team of 200 people supporting their infrastructure choices. Your team of twelve doesn't have that. Technologies that work beautifully at scale with dedicated support teams can be nightmares at smaller scale without that support.
The Bleeding Edge Trap. There's a persistent belief in software engineering that newer is better. That staying on the cutting edge is how you maintain competitiveness. In practice, the cutting edge is where you encounter the most unknown unknowns — the exact environment where operating outside your circle of competence is most dangerous.
Applying the Circle to Tech Decisions
Here's how I now evaluate technology choices using the Circle of Competence principle:
Step 1: Honestly Map Your Team's Circle
For every technology your team currently uses, categorize it:
Deep Competence (inside the circle): Your team has multiple people with 2+ years of production experience. You've dealt with failure modes. You know the operational characteristics. You could troubleshoot a production issue at 3 AM without consulting documentation.
Working Knowledge (edge of the circle): One or two people have production experience. The team can build with it but hasn't dealt with the hard problems yet. You can develop with it but you'd need documentation for non-trivial troubleshooting.
Surface Knowledge (outside the circle): Nobody has production experience. Knowledge comes from tutorials, blog posts, and side projects. You can build a proof of concept but have no experience operating it under real conditions.
Be honest. Most teams I've worked with dramatically overestimate where their actual competence boundary is. "We know React" might mean "three people have built production React apps" or it might mean "everyone did the tutorial." Those are very different things.
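To make the audit concrete, you can capture it as a tiny script rather than a vague gut feeling. This is a minimal sketch, not a standard tool — the field names, thresholds, and example technologies are all illustrative assumptions mirroring the three categories above.

```python
from dataclasses import dataclass

@dataclass
class TechAssessment:
    """One row of a hypothetical team competence audit."""
    name: str
    prod_engineers: int      # people with real production experience
    prod_years: float        # longest individual production tenure
    handled_incidents: bool  # has the team debugged real failures with it?

def competence_level(t: TechAssessment) -> str:
    """Classify a technology into the three zones described above.

    Thresholds are illustrative: 2+ people with 2+ years and real
    incident experience counts as deep competence.
    """
    if t.prod_engineers >= 2 and t.prod_years >= 2 and t.handled_incidents:
        return "deep"       # inside the circle
    if t.prod_engineers >= 1:
        return "working"    # edge of the circle
    return "surface"        # outside the circle

# Example audit for a hypothetical team
stack = [
    TechAssessment("PostgreSQL", prod_engineers=4, prod_years=6, handled_incidents=True),
    TechAssessment("React", prod_engineers=1, prod_years=1.5, handled_incidents=False),
    TechAssessment("Kubernetes", prod_engineers=0, prod_years=0, handled_incidents=False),
]

for t in stack:
    print(f"{t.name}: {competence_level(t)}")
```

The point isn't the code — it's that forcing yourself to fill in honest numbers makes "we know React" impossible to leave ambiguous.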
Step 2: Match Risk to Competence
Not every technology choice carries equal risk. A new linting tool? Low risk. The wrong choice costs you a day of configuration. A new database? High risk. The wrong choice can cost months of migration work.
The rule: the higher the cost of being wrong, the more firmly the technology should sit inside your circle of competence.
For low-risk decisions, experimenting outside your circle is fine. That's how the circle grows. For high-risk decisions — database choices, core framework selections, infrastructure platforms — default to technologies inside your circle unless you have a compelling reason not to.
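The rule above can be sketched as a simple decision function. The risk tiers and recommendation strings here are assumptions for illustration, not a formal methodology:

```python
def adoption_guidance(risk: str, competence: str) -> str:
    """Map (cost of being wrong, competence zone) to a default stance.

    risk: "low" (e.g. a linting tool) or "high" (e.g. a database)
    competence: "deep", "working", or "surface"
    """
    if risk == "low":
        # Low-stakes experiments are how the circle grows.
        return "adopt freely; treat it as circle expansion"
    if competence == "deep":
        return "adopt; the technology is inside the circle"
    if competence == "working":
        return "adopt cautiously; budget for unknown failure modes"
    # High risk plus surface knowledge is the danger zone.
    return "default to a known technology, or invest in real competence first"

print(adoption_guidance("high", "surface"))
```

Note the asymmetry: competence only matters when the stakes are high, which is exactly the point of the rule.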
Step 3: Evaluate the Cost of Expanding Your Circle
Sometimes you genuinely need a technology outside your current circle. The question isn't "should we never try new things?" — it's "what will it actually cost to develop real competence, and are we willing to pay that price?"
Developing real competence means:
- Dedicated learning time (not squeezed into spare moments)
- Building and operating non-critical systems first
- Accepting a period of reduced productivity
- Having a fallback plan if the technology doesn't work out
- Hiring or consulting someone who's inside their circle for this technology
If you can't invest in real competence development, you shouldn't adopt the technology for critical systems. Full stop.
Step 4: Hire for Circle Expansion, Not Just Skills
When you need to move into a new technology area, the most effective approach is hiring someone for whom the technology is inside their circle. One engineer with three years of Kubernetes production experience is worth more than five engineers who've earned a Kubernetes certification but never operated it in production.
This is the Buffett approach in hiring form. When Buffett wants exposure to an industry he doesn't understand, he doesn't try to learn it from scratch. He acquires or partners with someone who already has deep competence. Engineering teams should do the same.
Where I Got This Wrong
Four years ago, I advocated for my team to adopt a microservices architecture. I'd read the books. I'd watched the talks. I'd built toy microservices on weekends. I was convinced I understood the trade-offs.
I didn't.
I understood the benefits because they're prominently featured in every microservices blog post. I didn't understand the operational costs because those only become apparent after months of production experience. Distributed tracing, service mesh configuration, cascading failure patterns, deployment coordination across dozens of services — none of this was in my circle of competence. I was operating on conference-talk knowledge and mistaking it for real understanding.
The migration took eighteen months instead of the six I estimated. We ended up with a hybrid architecture that was more complex than either a pure monolith or pure microservices. And the team spent the better part of a year developing the operational competence I'd assumed we already had.
If I'd honestly assessed my circle of competence before advocating for the change, I would have either stuck with the monolith (which was working fine) or hired someone with real microservices operational experience before starting. Either option would have saved roughly a year of pain.
The Meta-Principle
Buffett's Circle of Competence isn't really about technology, investing, or any specific domain. It's about intellectual honesty. It's about the willingness to say "I don't know this well enough to make a high-stakes decision" — which is one of the hardest things to say in a professional environment that rewards confidence.
The engineers I respect most aren't the ones who have the broadest knowledge. They're the ones who know exactly where their knowledge gets shallow and are transparent about it.
I've been exploring this concept through writing that collects investment wisdom from Buffett, Munger, and other careful thinkers. What strikes me most is how consistent the principle is across domains. Whether it's investing, engineering, medicine, or military strategy, the experts who outperform over the long term are the ones who respect the boundaries of their own knowledge.
Practical Takeaways
Map your team's circle of competence honestly. Put it in a document. Review it quarterly. It should grow over time, but slowly — real competence takes years, not weeks.
Match technology risk to competence depth. Low-risk experiments outside the circle: yes. High-risk production choices outside the circle: no, unless you're investing in real competence development.
Be skeptical of conference-driven enthusiasm. Including your own. A great talk creates the feeling of understanding without the substance. That's a dangerous combination.
Hire for competence expansion. When you need a technology outside your circle, bring in someone for whom it's inside theirs.
Make "I don't know" a respected answer. If your team culture penalizes admitting ignorance, people will pretend to have competence they lack. That's how expensive mistakes get made.
The circle of competence won't make you a better coder. But it might save you from the kind of expensive, time-consuming mistakes that no amount of coding skill can fix.
What's the most expensive tech decision you've seen that was driven by enthusiasm rather than genuine competence? I'd love to hear war stories — especially from teams that recovered successfully.