10 Developer Habits That Separate Good Programmers From Great Ones
There's a moment in every developer's career when they realize that writing code that works isn't enough. It happens differently for everyone. Maybe you're staring at a pull request you submitted six months ago, horrified by the decisions your past self made. Maybe you're debugging a production issue at 2 AM, surrounded by energy drink cans, wondering how something so simple could have gone so catastrophically wrong. Or maybe you're pair programming with someone who makes everything look effortless—solving in minutes what would have taken you hours—and you're left wondering what separates you from them.
I've been writing code professionally for over a decade, and I can tell you with certainty: the difference between good programmers and great ones has very little to do with knowing more algorithms or memorizing syntax. It's not about graduating from a prestigious university or working at a FAANG company. The real separation happens in the invisible places—in the daily habits, the tiny decisions made a thousand times over, the discipline to do the unglamorous work that nobody sees.
This isn't about natural talent. I've watched brilliant developers flame out because they relied solely on raw intelligence. I've also watched average programmers transform into exceptional ones through deliberate practice and habit formation. The great ones aren't born—they're built, one habit at a time.
What follows isn't a collection of productivity hacks or keyboard shortcuts. These are the deep, fundamental habits that compound over time, the practices that will still matter whether you're writing Python microservices today or quantum computing algorithms fifteen years from now. Some of these habits will challenge you. Some will feel counterintuitive. All of them will require effort to develop.
But if you commit to them, you won't just become a better programmer. You'll become the kind of developer others want on their team, the one who gets pulled into the hardest problems, the one who shapes how engineering gets done.
Let's begin.
Habit 1: They Read Code Far More Than They Write It
When I mentor junior developers, I often ask them: "How much time do you spend reading other people's code compared to writing your own?" The answer is almost always the same: not much. Maybe they glance at documentation or skim through a library's source when something breaks, but intentional, deep code reading? Rarely.
This is the first habit that separates the good from the great: great developers are voracious code readers.
Think about it this way. If you wanted to become a great novelist, you wouldn't just write all day. You'd read—extensively, critically, analytically. You'd study how Hemingway constructs sentences, how Ursula K. Le Guin builds worlds, how Toni Morrison uses language to evoke emotion. Programming is no different. The craft of software engineering is learned as much through observation as through practice.
But here's what makes this habit so powerful: reading code teaches you things that writing code alone never will. When you write, you're trapped in your own mental models, your own patterns, your own biases. You'll naturally reach for the solutions you already know. Reading other people's code exposes you to different ways of thinking, different approaches to problems, different levels of abstraction.
I remember the first time I read through the source code of Redux, the popular state management library. I was intermediate-level at the time, comfortable with JavaScript but not what I'd call advanced. What struck me wasn't just how the code worked—it was how simple it was. The core Redux implementation is just a few hundred lines. The creators had taken a complex problem (managing application state) and distilled it down to its essence. Reading that code changed how I thought about software design. I realized that complexity isn't a badge of honor; simplicity is.
Great developers make code reading a regular practice. They don't wait for a reason to dive into a codebase. They do it because they're curious, because they want to learn, because they know that buried in those files are lessons that took someone years to learn.
Here's how to develop this habit practically:
Set aside dedicated reading time. Just like you might schedule time for coding side projects, schedule time for reading code. Start with 30 minutes twice a week. Pick a library or framework you use regularly and read through its source. Don't skim—actually read, line by line. When you encounter something you don't understand, resist the urge to skip over it. Pause. Research. Figure it out.
Read with purpose. Don't just read passively. Ask questions as you go: Why did they structure it this way? What problem were they solving with this abstraction? What would I have done differently? Are there patterns I can adopt? What makes this code easy or hard to understand?
Read code from different domains and languages. If you're a web developer, read embedded systems code. If you work in Python, read Rust. The patterns and principles often transcend the specific technology. I've applied lessons from reading Erlang's OTP framework to architecting Node.js microservices, even though the languages are wildly different. The underlying principles of fault tolerance and supervision trees were universally applicable.
Join the reading club movement. Some development teams have started "code reading clubs" where developers meet regularly to read through and discuss interesting codebases together. If your team doesn't have one, start it. Pick a well-regarded open-source project and work through it together. The discussions that emerge from these sessions are gold—you'll hear how different people interpret the same code, what they notice, what they value.
Study the masters. There are certain programmers whose code is worth studying specifically. John Carmack's game engine code. Rich Hickey's Clojure. Linus Torvalds' Git. DHH's Rails. These aren't perfect (nothing is), but they represent thousands of hours of refinement and deep thinking. Reading their work is like studying under a master craftsperson.
The transformation this habit creates is subtle but profound. You'll start to develop intuition about code quality. You'll recognize patterns more quickly. You'll build a mental library of solutions that you can draw from. When you encounter a new problem, instead of Googling immediately, you'll remember: "Oh, this is similar to how React handles reconciliation" or "This is the strategy pattern I saw in that Python library."
I've interviewed hundreds of developers, and I can usually tell within the first few technical questions whether someone is a serious code reader. They reference implementations they've studied. They compare approaches across different libraries. They have opinions informed by actual examination of alternatives, not just Stack Overflow answers.
Reading code won't make you a great developer by itself. But it's the foundation. Everything else builds on this. Because you can't write great code if you haven't seen what great code looks like.
Habit 2: They Invest Deeply in Understanding the 'Why' Behind Every Decision
Good programmers implement features. Great programmers understand the business context, user needs, and systemic implications of what they're building.
This might sound obvious, but it's one of the most commonly neglected habits, especially among developers who pride themselves on their technical skills. I've worked with brilliant engineers who could implement any algorithm, optimize any query, architect any system—but who treated requirements like gospel, never questioning whether what they were asked to build was actually the right solution.
Here's a story that illustrates this perfectly. A few years ago, I was working on a fintech platform, and we received a feature request to add "pending transaction" functionality. The product manager wanted users to see transactions that were authorized but not yet settled. Straightforward enough.
A good developer would have taken that requirement and implemented it. Created a new status field in the database, added some UI components, written the business logic. Done. Ship it.
But one of our senior engineers did something different. She scheduled a meeting with the PM and asked: "Why do users need to see pending transactions? What problem are they trying to solve?"
It turned out users were complaining that their account balances seemed wrong—they'd make a purchase, but their balance wouldn't reflect it immediately. They weren't actually asking to see pending transactions; they were confused about their available balance. The real solution wasn't to show pending transactions at all—it was to display two balances: current balance and available balance, accounting for pending authorizations.
This might seem like a small distinction, but it completely changed the implementation. Instead of building a whole new UI section for pending transactions (which would have added cognitive load), we refined the existing balance display. The solution was simpler, better aligned with user needs, and took half the time to implement.
This is what investing in the "why" looks like in practice.
Great developers treat every feature request, every bug report, every technical decision as an opportunity to understand the deeper context. They don't just ask "What needs to be built?" They ask:
- What problem is this solving? Not the technical problem—the human problem. Who is affected? What pain are they experiencing?
- What are the constraints? Is this urgent because of a regulatory deadline? Because of competitive pressure? Because a major client threatened to leave? Understanding urgency helps you make better tradeoff decisions.
- What are the second-order effects? How will this change user behavior? How will it affect the system's complexity? What maintenance burden are we taking on?
- Is this the right solution? Sometimes the best code is no code. Could we solve this problem through better UX? Through configuration instead of programming? Through fixing the root cause instead of treating symptoms?
I once spent three hours in a technical design review for a caching layer that would have solved our performance problems. The engineer who proposed it had done excellent work—detailed benchmarks, solid architecture, clear migration plan. But then someone asked: "Why are we having these performance problems in the first place?"
We dug deeper. Turned out a poorly optimized query was the root cause, making millions of unnecessary database calls. We'd been about to build a caching system to work around a problem that could be fixed with a two-line SQL optimization. Understanding the "why" saved us from weeks of unnecessary work.
This habit requires courage, especially when you're early in your career. It feels risky to question requirements, to push back on product managers or senior engineers, to suggest that maybe the planned approach isn't optimal. But here's what I've learned: people respect developers who think critically about what they're building. They want collaborators who catch problems early, who contribute to product thinking, who treat software development as problem-solving rather than ticket-closing.
How to develop this habit:
Make "Why?" your default question. Before starting any significant piece of work, ensure you can articulate why it matters. If you can't, you don't understand the problem well enough yet. Schedule time with whoever requested the work—product managers, other engineers, customer support—and ask questions until the context is clear.
Study the domain you're working in. If you're building healthcare software, learn about healthcare. Read about HIPAA. Understand how hospitals operate. Talk to doctors if you can. The more you understand the domain, the better you'll be at evaluating whether technical solutions actually solve real problems. I've seen developers who treated the domain as background noise, and their code showed it—technically proficient but misaligned with how the business actually worked.
Participate in user research. Watch user testing sessions. Read support tickets. Join customer calls. There's no substitute for seeing real people struggle with your software. It fundamentally changes how you think about what you're building. After watching just one user testing session, you'll never write a cryptic error message again.
Practice systems thinking. Every change you make ripples through the system. That innocent feature addition might increase database load, complicate the deployment process, or create a new edge case that breaks existing functionality. Great developers mentally model these ripples before writing code. They think in systems, not in isolated features.
Document the why, not just the what. When you write code comments, don't explain what the code does (that should be obvious from reading it). Explain why it exists. Why this approach instead of alternatives? What constraint or requirement drove this decision? Future you—and future maintainers—will be grateful.
I'll be honest: this habit can be exhausting. It's mentally easier to just implement what you're told. But here's the thing—great developers aren't great because they chose the easy path. They're great because they took responsibility for outcomes, not just outputs. They understood that their job wasn't to write code; it was to solve problems. And you can't solve problems you don't understand.
The developers who cultivate this habit become trusted advisors. They get invited to planning meetings. They influence product direction. They become force multipliers for their teams because they catch misalignments early, before they turn into wasted sprints and disappointed users.
Understanding the "why" transforms you from a code writer into an engineer. And that transformation is everything.
Habit 3: They Treat Debugging as a Science, Not a Guessing Game
It's 11 PM. Your production system is down. Customers are angry. Your manager is asking for updates every ten minutes. The pressure is overwhelming, and your first instinct is to start changing things—restart the server, roll back the last deploy, tweak some configuration values—anything to make the problem go away.
This is where good developers and great developers diverge most dramatically.
Good developers guess. They rely on intuition, past experience, and hope. They make changes without fully understanding the problem, treating debugging like a game of whack-a-mole. Sometimes they get lucky and stumble on a solution. Often they don't, and hours vanish into frustration.
Great developers treat debugging as a rigorous scientific process. They form hypotheses, gather data, run experiments, and systematically eliminate possibilities until they isolate the root cause. They're patient when patience feels impossible. They're methodical when chaos reigns.
Let me tell you about the worst production bug I ever encountered. Our e-commerce platform started randomly dropping orders—not all orders, just some of them. Maybe 2-3% of transactions would complete on the payment side but never create an order record in our database. Revenue was bleeding. Every hour the bug remained unfixed cost the company thousands of dollars.
The pressure to "just fix it" was immense. The easy move would have been to start deploying patches based on gut feelings. Instead, our lead engineer did something counterintuitive: she made everyone step back and follow a structured debugging process.
First, reproduce the problem. Seems obvious, but many developers skip this step, especially under pressure. She set up a staging environment and hammered it with test transactions until we could reliably reproduce the order drops. This single step was crucial—it meant we could test theories without experimenting on production.
Second, gather data. What do these dropped orders have in common? We pulled logs, traced requests through every system component, analyzed timing, examined user agents, scrutinized payment gateway responses. We weren't looking for the answer yet—we were building a complete picture of the problem.
Third, form hypotheses. Based on the data, we generated a list of possible causes, ranked by likelihood: database connection timeout, race condition in order creation logic, payment gateway webhook failure, API rate limiting, network partition, corrupted state in Redis cache.
Fourth, test systematically. We tested each hypothesis one at a time, starting with the most likely. For each test, we clearly defined what result would prove or disprove the theory. No guessing. No "let's try this and see what happens." Every experiment was deliberate.
It took four hours of methodical investigation, but we found it: a race condition where concurrent payment webhooks could create a state where the payment was marked successful, but the order creation transaction was rolled back. The bug only manifested under high load with specific timing conditions—hence the intermittent nature.
Here's the key insight: we could have easily spent twenty hours flailing around, making random changes, creating new bugs while trying to fix old ones. Instead, systematic debugging found the root cause in a quarter of the time. More importantly, we fixed it correctly, with confidence that it was actually resolved.
This habit—treating debugging as a disciplined practice rather than chaotic troubleshooting—is perhaps the most underestimated skill in software engineering.
How great developers debug:
They resist the urge to jump to solutions. When you see an error, your brain immediately wants to fix it. Fight this instinct. Spend time understanding the problem first. I have a personal rule: spend at least twice as much time understanding a bug as you expect to spend fixing it. This ratio has saved me countless hours of chasing symptoms instead of causes.
They use the scientific method explicitly. Write down your hypothesis. Write down what evidence would confirm or refute it. Run the experiment. Document the results. Move to the next hypothesis if needed. I literally keep a debugging journal where I log this process for complex bugs. It keeps me honest and prevents me from testing the same theory multiple times because I forgot I already tried it.
They make problems smaller. Great debuggers are masters of binary search in debugging. If a bug exists somewhere in 1,000 lines of code, they'll comment out 500 lines and see if the bug persists. Then 250 lines. Then 125. They systematically isolate the problem space until the bug has nowhere to hide.
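Here's a rough sketch of that bisection reflex in Python. The change names and the `reproduces_bug` check are invented for illustration; the point is the shape of the search, which is the same idea `git bisect` automates over commit history.

def find_breaking_change(changes, reproduces_bug):
    """Binary-search an ordered list of changes for the first one that breaks things.

    `changes` is ordered (commits, config edits, feature toggles);
    `reproduces_bug(applied)` rebuilds the system with that prefix of changes
    applied and returns True if the bug shows up. Assumes the bug is absent
    with no changes applied and present with all of them.
    """
    low, high = 0, len(changes)          # bug absent at `low`, present at `high`
    while high - low > 1:
        mid = (low + high) // 2
        if reproduces_bug(changes[:mid]):
            high = mid                   # bug already present: look earlier
        else:
            low = mid                    # bug not yet present: look later
    return changes[high - 1]             # the first change that introduces the bug

# Toy usage: pretend the bug appeared once "bump-orm" landed.
changes = ["add-cache", "refactor-auth", "bump-orm", "tweak-pool-size"]
print(find_breaking_change(changes, lambda applied: "bump-orm" in applied))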
They understand their tools deeply. Debuggers, profilers, log analyzers, network inspectors, database query analyzers—great developers invest time in mastering these tools. They can set conditional breakpoints, analyze memory dumps, trace system calls, interpret flame graphs. These tools multiply their effectiveness exponentially. I've seen senior developers debug issues in minutes that stumped others for days, simply because they knew how to use a profiler effectively.
They build debugging into their code. Great developers write code that's easy to debug. They add meaningful log statements at key decision points. They build observability into their systems from the start—metrics, traces, structured logs. They know that 80% of a bug's lifetime is spent trying to understand what's happening; making that easier is time well invested.
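To make that concrete, here's a minimal sketch using only Python's standard library. The event names and the `charge` function are hypothetical; the idea is to log structured, parseable facts at the decision points you'll care about at 2 AM.

import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("orders")

def log_event(event, **fields):
    """Emit one structured, machine-parseable log line per key decision point."""
    logger.info(json.dumps({"event": event, **fields}))

def charge(order_id, amount_cents, retries_left):
    if retries_left == 0:
        # Record *why* we gave up, not just that something failed.
        log_event("charge_abandoned", order_id=order_id, amount_cents=amount_cents)
        return False
    log_event("charge_attempted", order_id=order_id,
              amount_cents=amount_cents, retries_left=retries_left)
    return True

charge("ord_123", 4999, retries_left=2)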
They reproduce, then fix, then verify. Never fix a bug you can't reproduce—you're just guessing. Once you can reproduce it, fix it. Then verify the fix actually works under the conditions where the bug originally occurred. Too many developers skip this verification step and end up shipping fixes that don't actually fix anything.
They dig for root causes. When you find a bug, ask "Why did this happen?" five times. Each answer leads you deeper. "The server crashed." Why? "Out of memory." Why? "Memory leak." Why? "Objects not being garbage collected." Why? "Event listeners not removed." Why? "No cleanup in component unmount." Now you've found the root cause, not just the symptom.
I've worked with developers who seemed to have an almost supernatural ability to find bugs. Early in my career, I thought they were just smarter or more experienced. Now I know the truth: they had simply internalized a systematic approach. They trusted the process, not their intuition.
This habit has a profound psychological benefit too. Debugging stops being stressful and starts being intellectually engaging. Instead of feeling helpless when bugs occur, you feel confident—you have a process, a methodology, a way forward. The bug might be complex, but you know how to approach complexity.
There's a reason the best developers don't panic during incidents. They've trained themselves to treat every bug as a puzzle with a solution, not a crisis. They know that systematic investigation always wins in the end. That confidence is built through this habit.
And here's something beautiful: when you approach debugging scientifically, you don't just fix bugs faster—you learn more from each one. Every bug becomes a lesson about the system, about edge cases, about your own mental models. Debuggers who just guess and check learn nothing. Scientific debuggers accumulate deep system knowledge with every issue they resolve.
The next time you encounter a bug, resist the temptation to immediately start changing code. Take a breath. Open a notebook. Write down what you know. Form a hypothesis. Test it. Let the scientific method be your guide.
You'll be amazed how much more effective you become.
Habit 4: They Write for Humans First, Machines Second
Here's an uncomfortable truth: most of your career as a developer won't be spent writing new code. It'll be spent reading, understanding, and modifying existing code—code written by other people, or by past versions of yourself who might as well be other people.
Yet when I review code from good developers, I consistently see the same mistake: they optimize for cleverness or brevity instead of clarity. They write code that impresses other developers with its sophistication, but which requires intense concentration to understand. They treat the compiler or interpreter as their primary audience.
Great developers flip this priority. They write code for humans first, machines second.
This might sound like a platitude, but it represents a fundamental shift in mindset that affects every line of code you write. Let me show you what I mean.
Here's a code snippet I found in a production codebase:
def p(x): return sum(1 for i in range(2, int(x**0.5)+1) if x%i==0)==0 and x>1
Can you tell what this function does? If you're experienced with algorithms, you might recognize it as a prime number checker. It works perfectly. The machine executes it just fine. But for a human reading this code? It's a puzzle that needs solving.
Now here's how a great developer would write the same function:
def is_prime(number):
    """
    Returns True if the number is prime, False otherwise.
    A prime number is only divisible by 1 and itself.
    We only need to check divisibility up to the square root of the number
    because if n = a*b, one of those factors must be <= sqrt(n).
    """
    if number <= 1:
        return False
    if number == 2:
        return True
    # Check if number is divisible by any integer from 2 to sqrt(number)
    for potential_divisor in range(2, int(number ** 0.5) + 1):
        if number % potential_divisor == 0:
            return False
    return True
The second version is longer. It's more verbose. The machine doesn't care—both run in O(√n) time. But the human difference is night and day. The second version is self-documenting. A junior developer can understand it. You can understand it six months from now when you've forgotten you wrote it. The intent is crystal clear.
This habit—writing for human comprehension—manifests in many ways:
Naming that reveals intent. Variable names like temp, data, obj, result tell you nothing. Great developers choose names that encode meaning: unprocessed_orders, customer_email_address, successfully_authenticated_user. Yes, these names are longer. That's fine. The extra few characters are worth it. You type code once but read it dozens of times.
I remember reviewing code where someone had named a variable x2. I had to trace through 50 lines of logic to figure out it represented "XML to JSON converter". They'd saved themselves a few keystrokes and cost every future reader minutes of cognitive load. That's a terrible trade.
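A small before-and-after in Python makes the trade concrete (the names here are invented for illustration):

SECONDS_PER_DAY = 86_400

# Before: the reader has to reverse-engineer the intent.
def f(us, n):
    return [u for u in us if u["last_login"] < n - 86_400]

# After: the names carry the meaning, so the body barely needs explaining.
def find_users_inactive_for_a_day(users, current_timestamp):
    cutoff = current_timestamp - SECONDS_PER_DAY
    return [user for user in users if user["last_login"] < cutoff]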
Functions and methods that do one thing. When a function is trying to do multiple things, it becomes hard to name, hard to test, and hard to understand. Great developers extract functionality into well-named functions even when it feels like "overkill." They understand that a sequence of well-named function calls often communicates intent better than the raw implementation.
Strategic comments. Here's a nuance many developers miss: great developers don't comment what the code does—they comment why it does it. If your code needs comments to explain what it does, the code itself isn't clear enough. But comments explaining why certain decisions were made? Those are gold.
"Why" comments might explain:
- "We're using algorithm X instead of the obvious approach Y because Y has O(n²) complexity with our data patterns"
- "This weird timeout value came from extensive testing with the external API—smaller values cause intermittent failures"
- "We're intentionally not handling edge case X because it's impossible given the database constraints enforced by migration Y"
These comments preserve context that would otherwise be lost. They prevent future developers from "optimizing" your carefully chosen approach or removing code they think is unnecessary.
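In code, that kind of comment might look like this (the timeout value, API, and constraint are invented; the pattern is what matters):

# 45 seconds looks arbitrary, but values under 30 caused intermittent failures
# against the partner settlement API during load testing; read the incident
# notes before "optimizing" this number down.
SETTLEMENT_API_TIMEOUT_SECONDS = 45

def fetch_settlement_report(client, report_id):
    # Deliberately no retry: the API treats a repeated request as a duplicate
    # report submission, so retrying here would double-file reports.
    return client.get(f"/reports/{report_id}", timeout=SETTLEMENT_API_TIMEOUT_SECONDS)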
Code structure that mirrors mental models. Great developers organize code the way humans naturally think about the domain. If you're building an e-commerce system, your code structure should reflect concepts like orders, customers, payments, and inventory—not generic abstractions like managers, handlers, and processors.
I once worked on a codebase that had a DataManager, DataHandler, DataProcessor, and DataController. None of these names conveyed what they actually did. When we refactored to OrderValidator, PaymentProcessor, and InventoryTracker, suddenly the codebase became navigable. New team members could find things. The code structure matched their mental model of the business.
Consistent patterns. Humans are pattern-matching machines. When your codebase follows consistent patterns, developers can transfer knowledge from one part to another. When every module does things differently, every context switch requires re-learning. Great developers value consistency even when they might personally prefer a different approach.
Appropriate abstraction levels. This is subtle but crucial. Great developers are careful about mixing abstraction levels in the same function. If you're writing high-level business logic, you shouldn't suddenly drop down to low-level string manipulation details. Extract that into a well-named helper function. Keep each layer of code at a consistent conceptual level.
Here's an example of mixed abstraction levels:
function processOrder(order) {
    // High-level business logic
    validateOrder(order);
    // Suddenly low-level string manipulation
    const cleanEmail = order.email.trim().toLowerCase().replace(/\s+/g, '');
    // Back to high-level
    chargeCustomer(order);
    sendConfirmation(order);
}
Better:
function processOrder(order) {
    validateOrder(order);
    const normalizedOrder = normalizeOrderData(order);
    chargeCustomer(normalizedOrder);
    sendConfirmation(normalizedOrder);
}
Now the function reads like a sequence of business steps, not a mix of business logic and implementation details.
This habit requires discipline because writing for machines is often easier than writing for humans. The machine is forgiving—it doesn't care if your variable name is x or customer_lifetime_value_in_cents. But humans care deeply.
I've seen talented developers handicap themselves by neglecting this habit. They write impressively compact code, demonstrating their mastery of language features. But then they spend hours in code reviews explaining what their code does because nobody else can figure it out. They've optimized for the wrong thing.
There's a famous quote, usually credited to Martin Fowler: "Any fool can write code that a computer can understand. Good programmers write code that humans can understand." The wisdom in this statement becomes more apparent with every year of experience.
When you cultivate the habit of writing for humans first, something remarkable happens: your code becomes maintainable. Teams move faster because understanding is easy. Onboarding new developers takes days instead of weeks. Bugs decrease because the code's intent is clear. Technical debt accumulates more slowly because future modifications don't require archaeological expeditions through cryptic logic.
I can always identify great developers in code reviews by one characteristic: I rarely have to ask "What does this code do?" The code itself tells me. I might ask about trade-offs, about performance implications, about alternative approaches—but I never struggle with basic comprehension.
Write code as if the person maintaining it is a violence-prone psychopath who knows where you live. The person maintaining your code will be you in six months, and you'll thank yourself for the clarity.
Habit 5: They Embrace Constraints as Creative Catalysts
When I was a junior developer, I viewed constraints as problems to be overcome or worked around. Limited time? Frustrating. Legacy system compatibility? Annoying. Memory restrictions? Limiting. I saw my job as defeating these constraints to implement the "proper" solution.
Great developers think about constraints completely differently. They embrace them. They lean into them. They recognize that constraints don't limit creativity—they focus it, channel it, and often produce better solutions than unlimited resources would allow.
This is one of the most counterintuitive habits that separates good from great, and it takes years to internalize.
Let me share a story that crystallized this for me. I was working at a startup building a mobile app for emerging markets. Our target users were on low-end Android devices with spotty 2G connections and limited data plans. Our initial instinct was to treat these constraints as handicaps—we'd build a "lite" version of our real product, stripped down and compromised.
Then our tech lead said something that changed my perspective: "These aren't limitations. These are our design parameters. They're telling us what excellence looks like in this context."
We completely shifted our approach. Instead of asking "How do we cram our features into this constrained environment?", we asked "What's the best possible experience we can create given these parameters?"
We designed offline-first from the ground up. We compressed images aggressively and used SVGs where possible. We implemented delta updates so the app could update itself over flaky connections. We cached intelligently and prefetched predictively. We made every byte count.
The result? An app that felt snappy and responsive even on terrible connections. An experience that was actually better than many apps designed for high-end markets, because we'd been forced to think deeply about performance and efficiency. Our Western competitors who designed for high-bandwidth, powerful devices couldn't compete in that market. Their apps were bloated, slow, and data-hungry.
The constraints didn't handicap us. They made us better.
This principle extends far beyond technical constraints. Consider time constraints. Good developers see tight deadlines as stress. Great developers see them as clarity. When you have unlimited time, you can explore every possible solution, refactor endlessly, polish indefinitely. Sounds great, right? But unlimited time often produces worse results because nothing forces you to prioritize, to identify what really matters, to make hard trade-off decisions.
I've watched projects with loose deadlines drift aimlessly for months, adding feature after feature, refactoring the refactorings, never quite shipping. Then I've seen teams given two weeks to ship an MVP who produced focused, well-scoped products that actually solved user problems. The time constraint forced clarity about what was essential.
Or consider team constraints. Maybe you're the only backend developer on a small team. Good developers see this as overwhelming—too much responsibility, too much to maintain. Great developers see it as an opportunity to shape the entire backend architecture, to make consistent decisions, to build deep expertise. The constraint of being alone forces you to write extremely maintainable code because you'll be the one maintaining it.
Or legacy system constraints. You're integrating with a 15-year-old SOAP API with terrible documentation. Good developers complain about it. Great developers recognize it as an opportunity to build a clean abstraction layer that isolates the rest of the codebase from that complexity. The constraint of the legacy system forces you to think carefully about boundaries and interfaces.
Here's how to cultivate this habit:
Reframe the language. Stop saying "We can't do X because of constraint Y." Start saying "Given constraint Y, what's the best solution we can design?" The linguistic shift creates a mental shift. You move from problem-focused to solution-focused thinking.
Study historical examples. Twitter's original 140-character limit wasn't a bug—it was a constraint that defined the platform's character. Game developers creating for the Super Nintendo worked within 128 kilobytes of main RAM and produced masterpieces. They didn't have unlimited resources, but the constraints forced incredible creativity and efficiency. The Apollo Guidance Computer had less computing power than a modern calculator, but it got humans to the moon. Study how constraints drove innovation in these cases.
Impose artificial constraints. This sounds crazy, but it works. If you're building a web app, challenge yourself: what if it had to work without JavaScript? What if the bundle size had to be under 50KB? What if it had to run on a $30 Android phone? These artificial constraints force you to question assumptions and explore different approaches. You might not ship with these constraints, but the exercise makes you a better developer.
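One way I've seen this done is to let the build enforce the artificial constraint so it can't quietly erode. A tiny sketch, where the path and the 50KB budget are placeholders:

import os
import sys

BUNDLE_PATH = "dist/app.js"     # placeholder: wherever your build writes its output
BUDGET_BYTES = 50 * 1024        # the self-imposed 50KB ceiling

def check_bundle_budget():
    size = os.path.getsize(BUNDLE_PATH)
    if size > BUDGET_BYTES:
        print(f"Bundle is {size} bytes, over the {BUDGET_BYTES}-byte budget.")
        sys.exit(1)
    print(f"Bundle is {size} bytes, within budget.")

if __name__ == "__main__":":
    check_bundle_budget()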
Embrace the "worse is better" philosophy. Sometimes a simpler solution that doesn't handle every edge case is better than a complex solution that handles everything. Constraints force you to make this trade-off explicitly. The UNIX philosophy—small programs that do one thing well—emerged from extreme memory and storage constraints. Those constraints produced better design principles than unlimited resources would have.
Look for the constraint's gift. Every constraint is trying to tell you something. Memory constraints tell you to think about efficiency. Time constraints tell you to focus on impact. Legacy constraints tell you to design clean interfaces. Budget constraints tell you to use proven technologies instead of chasing novelty. What is the constraint teaching you?
I've seen developers waste enormous energy fighting constraints instead of working with them. They'll spend weeks architecting a way to bypass a database query limitation instead of restructuring their data model to work within it. They'll add layers of complexity to work around a framework's design instead of embracing the framework's philosophy.
Great developers pick their battles. Sometimes constraints truly are wrong and should be challenged. But more often, constraints represent real trade-offs in a complex system, and working within them produces better results than fighting them.
This habit also builds character. Embracing constraints requires humility—accepting that you can't have everything, that trade-offs are real, that perfection isn't achievable. It requires creativity—finding elegant solutions within boundaries. It requires focus—distinguishing between what's essential and what's merely nice to have.
The modern development world often feels like it's about having more: more tools, more frameworks, more libraries, more features, more scalability. But some of the most impactful software ever created was built with severe constraints. Redis started as a solution to a specific problem with strict performance requirements. Unix was designed for machines with tiny memory footprints. The web itself was designed to work over unreliable networks with minimal assumptions about client capabilities.
When you embrace constraints, you stop fighting reality and start working with it. You become a pragmatic problem-solver instead of an idealistic perfectionist. You ship solutions instead of endlessly pursuing optimal ones.
And here's the beautiful paradox: by accepting limitations, you often transcend them. The discipline and creativity that constraints force upon you produce solutions that work better, not worse. The app optimized for 2G connections also screams on 5G. The code designed for maintainability by a solo developer remains maintainable as the team grows. The feature set focused by time constraints turns out to be exactly what users needed.
Constraints aren't your enemy. They're your teacher, your focus, your catalyst for creative solutions. Learn to love them.
Habit 6: They Cultivate Deep Focus in an Age of Distraction
The modern developer's environment is a carefully engineered distraction machine. Slack pings, email notifications, endless meetings, "quick questions," and the siren song of social media and news feeds—all conspiring to fragment your attention into a thousand tiny pieces.
Good developers work in these conditions. They context-switch constantly, juggling multiple threads, believing that responsiveness is a virtue. They wear their busyness as a badge of honor.
Great developers build fortresses of focus. They understand that their most valuable asset isn't their knowledge of frameworks or algorithms—it's their ability to concentrate deeply on complex problems for extended periods. They treat uninterrupted time as a non-negotiable resource, more precious than any cloud computing credit.
This isn't just a preference; it's a necessity grounded in the nature of our work. Programming isn't a mechanical task of typing lines of code. It's an act of construction and problem-solving that happens largely in your mind. You build intricate mental models of systems, data flows, and logic. These models are fragile. A single interruption can shatter hours of mental assembly, forcing you to rebuild from scratch.
I learned this the hard way early in my career. I prided myself on being "always on." I had eight different communication apps open, responded to messages within seconds, and hopped between coding, code reviews, and support tickets all day. I was exhausted by 3 PM, yet my output was mediocre. I was putting in the time but not producing my best work.
Everything changed when I paired with a senior engineer named David for a week. David worked in mysterious two-hour blocks. During these blocks, he'd turn off all notifications, close every application except his IDE and terminal, and put on headphones. At first, I thought he was being antisocial. But then I saw his output. In one two-hour focus block, he'd often complete what would take me an entire distracted day. The quality was superior—fewer bugs, cleaner designs, more thoughtful edge-case handling. He wasn't just faster; he was operating at a different level of quality.
David taught me that focus is a skill to be developed, not a trait you're born with. And it's perhaps the highest-leverage skill you can cultivate.
Here's how great developers protect and cultivate deep focus:
They schedule focus time religiously. They don't leave it to chance. They block out multi-hour chunks in their calendar and treat these appointments with themselves as seriously as meetings with the CEO. During this time, they are effectively offline. Some companies even formalize this with policies like "no-meeting Wednesdays" or "focus mornings," but great developers implement these guardrails for themselves regardless of company policy.
They master their tools, but don't fetishize them. Great developers use tools like "Do Not Disturb" modes, website blockers, and full-screen IDEs not as productivity hacks, but as deliberate barriers against interruption. The goal isn't to find the perfect app; it's to create an environment where deep work can occur. They understand that the tool is secondary to the intent.
They practice single-tasking. Multitasking is a myth, especially in programming. What we call multitasking is actually rapid context-switching, and each switch carries a cognitive cost. Great developers train themselves to work on one thing until it reaches a natural stopping point. They might keep a "distraction list" nearby—a notepad to jot down random thoughts or to-dos that pop up—so they can acknowledge the thought without derailing their current task.
They defend their focus courageously. This is the hardest part. It requires saying "no" to well-meaning colleagues, setting boundaries with managers, and resisting the cultural pressure to be constantly available. Great developers learn to communicate these boundaries clearly and politely: "I'm in the middle of a deep work session right now, but I can help you at 3 PM." Most reasonable people will respect this if it's communicated consistently.
They recognize the cost of context switching. Every interruption doesn't just cost the time of the interruption itself; it costs the time to re-immerse yourself in the original problem. A 30-second Slack question can easily derail 15 minutes of productive flow. Great developers make this cost visible to their teams, helping create a culture that respects deep work.
They structure their day around energy levels. Focus is a finite resource. Great developers know when they are at their cognitive best—for many, it's the morning—and guard that time fiercely for their most demanding work. Meetings, administrative tasks, and code reviews are relegated to lower-energy periods. They don't squander their peak mental hours on low-value, shallow work.
They embrace boredom. This sounds strange, but it's critical. In moments of frustration or mental block, the immediate impulse is to reach for your phone—to seek a dopamine hit from Twitter or email. Great developers resist this. They stay with the problem, staring out the window if necessary, allowing their subconscious to work on the problem. Some of the most elegant solutions emerge not in frantic typing, but in quiet contemplation.
The benefits of this habit extend far beyond increased productivity. Deep focus is where mastery lives. It's in these uninterrupted stretches that you encounter the truly hard problems, the ones that force you to grow. You develop the patience to debug systematically, the clarity to see elegant architectures, and the persistence to push through complexity that would overwhelm a distracted mind.
Furthermore, focus begets more focus. Like a muscle, your ability to concentrate strengthens with practice. What starts as a struggle to focus for 30 minutes can, over time, become a reliable two-hour deep work session.
In a world that values shallow responsiveness, choosing deep focus feels countercultural. But it's precisely this choice that separates competent developers from exceptional ones. The developers who can enter a state of flow regularly are the ones who ship complex features, solve the hardest bugs, and produce work that feels almost magical in its quality.
Your most valuable code will be written in focus. Protect that state with your life.
Habit 7: They Practice Strategic Laziness
If "laziness" sounds like a vice rather than a virtue, you're thinking about it wrong. Good developers are often hardworking—they'll pour hours into manual testing, repetitive configuration, and brute-force solutions. They equate effort with value.
Great developers practice strategic laziness. They will happily spend an hour automating a task that takes five minutes to do manually, not because it saves time immediately, but because they hate repetition so much they're willing to invest upfront to eliminate it forever. They are constantly looking for the lever, the shortcut, the abstraction that maximizes output for minimum ongoing effort.
This principle, often attributed to Larry Wall, the creator of Perl, is one of the three great virtues of a programmer (the others being impatience and hubris). Strategic laziness isn't about avoiding work; it's about being profoundly efficient by automating the boring stuff so you can focus your energy on the hard, interesting problems.
I saw a perfect example of this with a DevOps engineer I worked with. Our deployment process involved a 15-step checklist that took about 30 minutes and required intense concentration. A mistake at any step could take down production. Most of us treated it as a necessary, if tedious, part of the job.
She, however, found it intolerable. Over two days, she built a set of scripts that automated the entire process. The initial investment was significant—probably 16 hours of work. But after that, deployments took 2 minutes and were error-free. In a month, she had recouped the time investment for the entire team. In a year, she had saved hundreds of hours and eliminated countless potential outages. That's strategic laziness.
This habit manifests in several key ways:
They automate relentlessly. If they have to do something more than twice, they write a script. Environment setup, database migrations, build processes, testing routines—all are prime candidates for automation. They don't just think about the time saved; they think about the cognitive load eliminated and the errors prevented.
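A sketch of what that reflex produces, with invented commands for a hypothetical Python project; the specifics will differ, but the shape rarely does:

#!/usr/bin/env python3
"""One-command local setup: the kind of chore worth scripting the third time you do it."""
import subprocess
import sys

# Illustrative steps; swap in whatever your project actually needs.
STEPS = [
    ["python", "-m", "venv", ".venv"],
    [".venv/bin/pip", "install", "-r", "requirements.txt"],
    [".venv/bin/python", "manage.py", "migrate"],   # hypothetical migration step
]

def run(step):
    print("->", " ".join(step))
    if subprocess.run(step).returncode != 0:
        sys.exit(f"Setup failed at: {' '.join(step)}")

if __name__ == "__main__":
    for step in STEPS:
        run(step)
    print("Environment ready.")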
They build tools and abstractions. Great developers don't just solve the immediate problem; they solve the class of problems. When they notice a repetitive pattern in their code, they don't copy-paste with minor modifications—they extract a function, create a library, or build a framework. They'd rather spend time designing a clean API than writing the same boilerplate for the tenth time.
They are masters of delegation—to the computer. They constantly ask: "What part of this can the computer handle?" Linting, formatting, dependency updates, performance monitoring—tasks that good developers do manually are delegated to automated systems by great developers. This frees their mental RAM for tasks that genuinely require human intelligence.
They optimize for long-term simplicity, not short-term speed. The strategically lazy developer knows that the easiest code to write is often the hardest to maintain. So they invest a little more time upfront to create a simple, clear design that will be easy to modify later. They're lazy about future work, so they do the hard thinking now.
They leverage existing solutions. The strategically lazy developer doesn't build a custom authentication system when Auth0 exists. They don't write a custom logging framework when structured logging libraries are available. They have a healthy bias for using battle-tested solutions rather than reinventing the wheel. Their goal is to solve the business problem, not to write every line of code themselves.
How to cultivate strategic laziness:
Develop an allergy to repetition. Pay attention to tasks you find yourself doing repeatedly. Does it feel tedious? That's your signal to automate. Start small—a shell script to set up your project, a macro to generate boilerplate code. The satisfaction of eliminating a recurring annoyance is addictive and will fuel further automation.
Ask the lazy question. Before starting any task, ask: "Is there an easier way to do this?" "Will I have to do this again?" "Can I get the computer to do the boring parts?" This simple metacognition separates the habitually lazy from the strategically lazy.
Invest in your toolchain. Time spent learning your IDE's shortcuts, mastering your shell, or configuring your linters isn't wasted—it's compounded interest. A few hours learning Vim motions or VS Code multi-cursor editing can save you days of typing over a year.
Build, then leverage. When you build an automation or abstraction, think about how to make it reusable. A script that's only useful for one project is good; a tool that can be used across multiple projects is great. Write documentation for your tools—future you will thank you.
The beauty of strategic laziness is that it benefits everyone, not just you. The deployment script you write helps the whole team. The well-designed abstraction makes the codebase easier for everyone to work with. The automated test suite prevents bugs for all future developers.
This habit transforms you from a code monkey into a force multiplier. You stop being just a producer of code and become a builder of systems that produce value with less effort. You become the developer who, instead of just working hard, makes the entire team's work easier and more effective.
And in the end, that's the kind of laziness worth cultivating.
Habit 8: They Maintain a Feedback Loop with the Production Environment
Good developers write code, run tests, and push to production. They trust that if the tests pass and the build is green, their job is done. They view production as a distant, somewhat scary place that operations teams worry about.
Great developers have an intimate, ongoing relationship with production. They don't just ship code and forget it; they watch it walk out the door and follow it into the world. They treat the production environment not as a final destination, but as the ultimate source of truth about their code's behavior, performance, and value.
This habit is the difference between theoretical correctness and practical reality. Your code can pass every test, satisfy every requirement, and look beautiful in review, but none of that matters if it fails in production. Great developers understand that production is where their assumptions meet reality, and reality always wins.
I learned this lesson from a catastrophic performance regression early in my career. We had built a new feature with a complex database query. It was elegant, used all the right JOINs, and passed all our unit and integration tests. Our test database had a few hundred rows of synthetic data, and the query was instant.
We shipped it on a Friday afternoon. By Saturday morning, the database was on fire. In production, with millions of rows of real-world data, that "elegant" query was doing full table scans. It timed out, locked tables, and brought the entire application to its knees. We spent our weekend in panic mode, rolling back and writing a fix.
A great developer on our team, Maria, took this personally. Not because she wrote the bad query (she hadn't), but because she saw it as a systemic failure. "We can't just test if our code works," she said. "We have to test if it works under real conditions."
From that day on, she became the guardian of our production feedback loops.
Here's what maintaining a tight production feedback loop looks like in practice:
They instrument everything. Great developers don't just log errors; they measure everything that matters. Response times, throughput, error rates, business metrics, cache hit ratios, database query performance. They bake observability—metrics, logs, and traces—into their code from the very beginning. They know that you can't fix what you can't see.
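As a rough illustration, here's the smallest version of that instinct in Python, using only the standard library and an in-memory sink invented for the example; a real service would export these numbers to Prometheus, StatsD, or whatever backend you already run.

import time
from collections import defaultdict
from contextlib import contextmanager

TIMINGS = defaultdict(list)   # invented in-memory sink for the example

@contextmanager
def timed(metric_name):
    """Record how long the wrapped block takes, in milliseconds."""
    start = time.perf_counter()
    try:
        yield
    finally:
        TIMINGS[metric_name].append((time.perf_counter() - start) * 1000)

# Wrap the operations you'll want to reason about when production misbehaves.
with timed("checkout.total"):
    with timed("checkout.db_query"):
        time.sleep(0.01)      # stand-in for the real query
    with timed("checkout.payment_call"):
        time.sleep(0.02)      # stand-in for the payment gateway call

for name, samples in TIMINGS.items():
    print(f"{name}: {sum(samples) / len(samples):.1f} ms avg over {len(samples)} call(s)")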
They watch deployments like hawks. When their code ships, they don't just move on to the next ticket. They watch the deployment metrics. They monitor error rates. They check performance dashboards. They might even watch real-user sessions for a few minutes to see how the feature is actually being used. This immediate feedback allows them to catch regressions that slip past tests.
They practice "you build it, you run it." This Amazon-originated philosophy means developers are responsible for their code in production. They are on call for their features. They get paged when things break. This might sound punishing, but it's the most powerful feedback loop imaginable. Nothing motivates you to write robust, fault-tolerant code like knowing your phone will ring at 3 AM if you don't.
They use feature flags religiously. Great developers don't deploy big bang releases. They wrap new features in flags and roll them out gradually—to internal users first, then to 1% of customers, then to 10%, and so on. This allows them to get real-world feedback with minimal blast radius. If something goes wrong, they can turn the feature off with a single click instead of a full rollback.
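Under the hood, a percentage rollout is usually just a stable hash of the user's identifier into a bucket, so the same user keeps seeing the same variant as you dial the number up. A minimal sketch with an invented flag name:

import hashlib

def is_enabled(flag_name, user_id, rollout_percent):
    """Deterministically enable a flag for roughly `rollout_percent`% of users.

    Hashing the flag name together with the user id keeps each user's
    experience stable and keeps different flags' rollouts independent.
    """
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < rollout_percent

# Start at 1%, then dial up to 10%, 50%, 100% as confidence grows.
print(is_enabled("new_balance_display", user_id="user_42", rollout_percent=10))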
They analyze production data to make decisions. Should we optimize this query? A good developer might guess. A great developer will look at production metrics to see how often it's called, what its average and p95 latencies are, and what impact it's having on user experience. They let data from production guide their prioritization.
They embrace and learn from incidents. When production breaks, great developers don't play the blame game. They lead and participate in blameless post-mortems. They dig deep to find the root cause, not just the symptom. More importantly, they focus on systemic fixes that prevent the entire class of problem from recurring, rather than just patching the immediate issue.
How to develop this habit:
Make your application observable from day one. Before you write business logic, set up structured logging, metrics collection, and distributed tracing. It's much harder to add this later. Start simple—even just logging key business events and performance boundaries is a huge step forward.
Create a personal dashboard. Build a dashboard that shows the health of the features you own. Make it the first thing you look at in the morning and the last thing you check before a deployment. This habit builds a sense of ownership and connection to your code's real-world behavior.
Volunteer for on-call rotation. If your team has one, join it. If it doesn't, propose it. The experience of being woken up by a pager for code you wrote is transformative. It will change how you think about error handling, logging, and system design forever.
Practice "production debugging." The next time there's a production issue, even if it's not in your code, ask if you can shadow the person debugging it. Watch how they use logs, metrics, and traces to pinpoint the problem. This is a skill that can only be learned by doing.
Ship small, ship often. The more frequently you deploy, the smaller each change is, and the easier it is to correlate changes in the system with changes in its behavior. Frequent deployments reduce the fear of production and turn it into a familiar, manageable place.
Maintaining this feedback loop does more than just prevent bugs—it closes the circle of learning. You write code based on assumptions, and production tells you which of those assumptions were wrong. Maybe users are using your feature in a way you never anticipated. Maybe that "edge case" is actually quite common. Maybe the performance characteristic you assumed is completely different under real load.
This continuous learning is what turns a good coder into a great engineer. You stop designing systems in a vacuum and start building them with a deep, intuitive understanding of how they will actually behave in the wild.
Production is the most demanding and honest code reviewer you will ever have. Listen to it.
Habit 9: They Prioritize Learning Deliberately, Not Accidentally
The technology landscape is a raging river of change. New frameworks, languages, tools, and paradigms emerge, gain fervent adoption, and often fade into obscurity, all within a few years. A good developer swims frantically in this river, trying to keep their head above water. They learn reactively—picking up a new JavaScript framework because their job requires it, skimming a blog post that pops up on Hacker News, watching a tutorial when they're stuck on a specific problem. Their learning is ad-hoc, driven by immediate necessity and the loudest voices in the ecosystem.
A great developer doesn't just swim in the river; they build a boat and chart a course. They understand that in a field where specific technologies have a half-life of mere years, the only sustainable advantage is the ability to learn deeply and efficiently. They don't learn reactively; they learn deliberately. Their learning is a systematic, ongoing investment, guided by a clear understanding of first principles and their long-term goals, not by the whims of tech trends.
This is arguably the most important habit of all, because it's the meta-habit that enables all the others. It's the engine of growth.
I witnessed the power of this habit in two developers who joined my team at the same time, both with similar backgrounds and talent. Let's call them Alex and Ben.
Alex was a classic reactive learner. He was bright and capable. When the team decided to adopt a new state management library, he dove in. He learned just enough to get his tasks done. He Googled specific error messages, copied patterns from existing code, and became functionally proficient. His knowledge was a mile wide and an inch deep—a collection of solutions to specific problems without a unifying mental model.
Ben took a different approach. When faced with the same new library, he didn't just read the "Getting Started" guide. He spent a weekend building a throwaway project with it. Then, he read the official documentation cover-to-cover. He watched a talk by the creator to understand the philosophy behind the library—what problems it was truly designed to solve, and what trade-offs it made. He didn't just learn how to use it; he learned why it was built that way, and when it was the right or wrong tool for the job.
Within six months, the difference was staggering. Alex could complete tasks using the library, but he often wrote code that fought against its core principles, leading to subtle bugs and performance issues. When he encountered a novel problem, he was often stuck.
Ben, on the other hand, had become the team's go-to expert. He could anticipate problems before they occurred. He designed elegant solutions that leveraged the library's strengths. He could explain its concepts to juniors in a way that made them stick. He wasn't just a user of the tool; he was a master of it.
Alex had learned accidentally. Ben had learned deliberately.
Here’s how great developers structure their deliberate learning:
They Learn Fundamentals, Not Just Flavors. The great developer knows that while JavaScript frameworks come and go (Remember jQuery? AngularJS? Backbone.js?), the underlying fundamentals of the web—the DOM, the event loop, HTTP, browser rendering—endure. They invest their time in understanding computer science fundamentals: data structures, algorithms, networking, operating systems, and design patterns. These are the timeless principles that allow them to evaluate and learn any new "flavor" of technology quickly and deeply. Learning React is easy when you already understand the principles of declarative UI, the virtual DOM concept, and one-way data flow. You're not memorizing an API; you're understanding a manifestation of deeper ideas. (A small sketch after this list shows what those deeper ideas look like stripped of any framework.)
They Maintain a "Learning Backlog." Just as they have a backlog of features to build, they maintain a personal backlog of concepts to learn, technologies to explore, and books to read. This isn't a vague "I should learn Go someday." It's a concrete list: "Read 'Designing Data-Intensive Applications,'" "Build a simple Rust CLI tool to understand memory safety," "Complete the 'Networking for Developers' course on Coursera." This transforms learning from an abstract intention into a manageable, actionable project.
They Allocate "Learning Time" and Protect It Ferociously. They don't leave learning to the scraps of time left over after a exhausting day of meetings and coding. They schedule it. Many great developers I know block out one afternoon per week, or a few hours every morning before work, for deliberate learning. This time is sacred. It's not for checking emails or putting out fires. It's for deep, uninterrupted study and practice.
They Learn by Doing, Not Just Consuming. Passive consumption—reading, watching videos—is only the first step. Great developers internalize knowledge by applying it. They don't just read about a new database; they install it, import a dataset, and run queries. They don't just watch a tutorial on a new architecture; they build a toy project that implements it. This practice builds strong, durable neural pathways that theory alone cannot. They understand that true mastery lives in the fingertips as much as in the brain.
They Go to the Source. When a new tool emerges, the reactive learner reads a "10-minute introduction" blog post. The deliberate learner goes straight to the primary source: the official documentation, the original research paper (if one exists), or a talk by the creator. They understand that secondary sources are often simplified, opinionated, or outdated. The truth, in all its nuanced complexity, is usually found at the source. Reading the React documentation or Dan Abramov's blog posts is in a different league from skimming a list of "React tips and tricks" on a random blog.
They Teach What They Learn. The deliberate learner knows that the ultimate test of understanding is the ability to explain a concept to someone else. They write blog posts, give brown bag lunches to their team, contribute to documentation, or simply explain what they've learned to a colleague. The act of organizing their thoughts for teaching forces them to confront gaps in their own understanding and solidify the knowledge. It's the final step in the learning cycle.
They Curate Their Inputs Wisely. The digital world is a firehose of low-quality, repetitive, and often incorrect information. Great developers are ruthless curators of their information diet. They don't try to read everything. They identify a handful of trusted, high-signal sources—specific blogs, journals, podcasts, or people—and ignore the rest. They favor depth over breadth, quality over quantity.
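To ground the first point in this list, here is a tiny, framework-free sketch of declarative UI and one-way data flow in TypeScript; the element id and markup are made up for illustration. Once this pattern is second nature, React's component model and virtual DOM read as refinements of the same idea rather than as an API to memorize.

```typescript
// Declarative UI in miniature: the view is a pure function of state, and user
// events produce new state that re-renders the view (one-way data flow).
type State = { count: number };

// Describe what the UI should look like for a given state; mutate nothing by hand.
function view(state: State): string {
  return `<button id="inc">Count: ${state.count}</button>`;
}

function render(root: HTMLElement, state: State): void {
  // A real framework would diff a virtual DOM here instead of replacing innerHTML.
  root.innerHTML = view(state);
  root.querySelector<HTMLButtonElement>("#inc")?.addEventListener("click", () => {
    render(root, { count: state.count + 1 }); // new state flows back down into the view
  });
}

// Usage, in a browser: render(document.getElementById("app")!, { count: 0 });
```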
How to cultivate this habit:
Conduct a quarterly "skills audit." Every three months, take an honest inventory of your skills. What's getting stronger? What's becoming obsolete? What emerging trend do you need to understand? Based on this audit, update your learning backlog. This transforms your career development from a passive process into an active one you control.
Follow the "20% rule." Dedicate a fixed percentage of your time—even if it's just 5% to start—to learning things that aren't immediately relevant to your current tasks. This is how you avoid technological obsolescence. It's how you serendipitously discover better ways of working and new opportunities.
Build a "personal syllabus." If you wanted to become an expert in distributed systems, what would you need to learn? In what order? A deliberate learner creates a syllabus for themselves, just like a university course. They might start with a textbook, then move to seminal papers, then build a project. This structured approach is infinitely more effective than random exploration.
Find a learning cohort. Learning alone is hard. Find one or two colleagues who share your growth mindset. Start a book club, a study group, or a "tech deep dive" session. The social commitment will keep you accountable, and the discussions will deepen your understanding.
The payoff for this habit is immeasurable. It's the difference between a developer whose value peaks five years into their career and one who becomes more valuable with each passing year. It's the difference between being at the mercy of the job market and being the one that companies fight over.
Deliberate learning is the ultimate career capital. In a world of constant change, the ability to learn how to learn, and to do it with purpose and strategy, isn't just a nice-to-have. It's the single greatest predictor of long-term success. It is the quiet, persistent engine that transforms a good programmer into a great one, and a great one into a true master of the craft.
Habit 10: They Build and Nurture Their Engineering Judgment
You can master every technical skill. You can write pristine code, debug with scientific precision, and architect systems of elegant simplicity. You can have an encyclopedic knowledge of algorithms and an intimate relationship with production. But without the final, most elusive habit, you will never cross the chasm from being a great technician to being a truly great engineer.
That final habit is the cultivation of engineering judgment.
Engineering judgment is the silent, invisible partner to every technical decision you make. It’s the internal compass that guides you when the map—the requirements, the documentation, the best practices—runs out. It’s the accumulated wisdom that tells you when to apply a rule, and, more importantly, when to break it. It’s what separates a technically correct solution from a genuinely wise one.
A good developer, when faced with a problem, asks: "What is the technically optimal solution?" They will find the most efficient algorithm, the most scalable architecture, the most pristine code structure. They are in pursuit of technical perfection.
A great developer asks a more complex set of questions: "What is the right solution for this team, for this business context, for this moment in time?" They weigh technical ideals against a messy reality of deadlines, team skills, business goals, and long-term maintenance. They understand that the best technical solution can be the worst engineering decision.
I learned this not from a success, but from a failure that still haunts me. Early in my career, I was tasked with building a new reporting feature. The existing system was a tangled mess of SQL queries embedded in PHP. It was slow, unmaintainable, and a nightmare to modify.
I saw my chance to shine. I designed a beautiful, event-sourced architecture with a CQRS pattern. It was technically brilliant. It would be infinitely scalable, provide perfect audit trails, and allow for complex historical queries. It was the kind of system you read about in software architecture books. I was immensely proud of it.
It was also a catastrophic failure.
The project took three times longer than estimated. The complexity was so high that only I could understand the codebase. When I eventually left the company, the team struggled for months to maintain it, eventually rewriting the entire feature in a much simpler, cruder way. My "technically optimal" solution was an engineering disaster. It was the wrong solution for the team's skill level, the wrong solution for the business's need for speed, and the wrong solution for the long-term health of the codebase.
I had technical skill, but I had failed the test of engineering judgment.
Engineering judgment is the synthesis of all the other habits into a form of professional wisdom. Here’s how it manifests:
They Understand the Spectrum of "Good Enough." Great developers know that not every piece of the system needs to be a masterpiece. The prototype for a one-off marketing campaign does not need the same level of robustness as the core authentication service. The internal admin tool can tolerate more technical debt than the customer-facing API. They make conscious, deliberate trade-offs. They ask: "What is the minimum level of quality required for this to successfully solve the problem without creating unacceptable future risk?" This isn't laziness; it's strategic allocation of effort.
They See Around Corners. A developer with strong judgment can anticipate the second- and third-order consequences of a decision. They don't just see the immediate feature implementation; they see how it will constrain future changes, what new categories of bugs it might introduce, and how it will affect the system's conceptual integrity. When they choose a library, they don't just evaluate its features; they evaluate its maintenance status, its upgrade path, its community health, and its architectural philosophy. They are playing a long game that others don't even see.
They Balance Idealism with Pragmatism. They hold strong opinions about code quality, but they hold them loosely. They can passionately argue for a clean architecture in a planning meeting, but if the business context demands a quicker, dirtier solution, they can pivot and implement the pragmatic choice without resentment. They document the trade-offs made and the technical debt incurred, creating a ticket to address it later, and then they move on. They understand that software exists to serve a business, not the other way around.
They Make Decisions Under Uncertainty. Requirements are ambiguous. Timelines are tight. Information is incomplete. This is the reality of software development. A good developer freezes, demanding more certainty, more specifications, more time. A great developer uses their judgment to make the best possible decision with the information available. They identify the core risks, make reasonable assumptions, and chart a course. They know that delaying a decision is often more costly than making a slightly wrong one.
They Distinguish Between Symptoms and Diseases. A junior developer treats the symptom: "The page is loading slowly, let's add a cache." A good developer finds the disease: "The page is loading slowly because of an N+1 query problem, let's fix the query." A great developer with sound judgment asks if the disease itself is a symptom: "Why are we making so many queries on this page? Is our data model wrong? Is this feature trying to do too much? Should we be pre-computing this data entirely?" They operate at a higher level of abstraction, solving classes of problems instead of individual instances.
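To make the symptom-versus-disease distinction concrete, here is a sketch of the N+1 query pattern next to a batched alternative; the db.query helper, the table names, and the schema are hypothetical stand-ins rather than a real library's API.

```typescript
// Hypothetical data-access helper and schema, for illustration only.
declare const db: { query<T>(sql: string, params?: unknown[]): Promise<T[]> };

interface Order { id: number; userId: number; }
interface User { id: number; name: string; }

// The disease: one query for the orders, then one more query per order.
// Query count grows with result size -- the classic N+1 pattern. A cache on top
// of this only hides the symptom.
async function loadOrdersNPlusOne(): Promise<Array<Order & { user?: User }>> {
  const orders = await db.query<Order>("SELECT id, user_id AS userId FROM orders");
  return Promise.all(
    orders.map(async (order) => {
      const [user] = await db.query<User>(
        "SELECT id, name FROM users WHERE id = ?",
        [order.userId],
      );
      return { ...order, user };
    }),
  );
}

// The query-level cure: a single round trip via a join (or one IN (...) query),
// so the number of queries no longer depends on the number of rows.
async function loadOrdersBatched(): Promise<Array<Order & { userName: string }>> {
  return db.query<Order & { userName: string }>(
    `SELECT o.id, o.user_id AS userId, u.name AS userName
       FROM orders o
       JOIN users u ON u.id = o.user_id`,
  );
}
```

The judgment-level question still sits one layer up: if this page needs every user joined onto every order in the first place, the real disease may be the data model or the feature's scope, not the query.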
How to Cultivate Engineering Judgment (Because It Can't Be Taught, Only Grown)
Judgment isn't a skill you can learn from a book. It's a form of tacit knowledge, built slowly through experience, reflection, and a specific kind of practice.
Seek Diverse Experiences. Judgment is pattern-matching on a grand scale. The more patterns you have seen, the better your judgment will be. Work at a startup where speed is everything. Work at an enterprise where stability is paramount. Work on front-end, back-end, and infrastructure. Each context teaches you a different set of values and trade-offs. The developer who has only ever worked in one environment has a dangerously narrow basis for judgment.
Conduct Retrospectives on Your Own Decisions. This is the single most powerful practice. Don't just move on after a project finishes or a decision is made. Schedule a solo retrospective. Take out a notebook and ask yourself:
· "What were the key technical decisions I made?"
· "What was my reasoning at the time?"
· "How did those decisions play out? Better or worse than expected?"
· "What did I miss? What would I do differently with the benefit of hindsight?"
This ritual of self-reflection is how you convert experience into wisdom.
Find a Yoda. Identify a senior engineer whose judgment you respect—someone who seems to have a preternatural ability to make the right call. Study them. When they make a decision that seems counterintuitive, ask them to explain their reasoning. Not just the technical reason, but the contextual, human, and business reasons. The nuances they share are the building blocks of judgment.
Practice Articulating the "Why." When you make a recommendation, force yourself to explain not just what you think should be done, but why. Lay out the trade-offs you considered. Explain the alternatives you rejected and why. The act of articulating your reasoning forces you to examine its validity and exposes flaws in your logic. It also invites others into your thought process, allowing them to challenge and refine your judgment.
Embrace the "Reversibility" Heuristic. When faced with a difficult decision, ask: "How reversible is this?" Adopting a new programming language is largely irreversible for a codebase. Adding a complex microservice architecture is hard to undo. Choosing a cloud provider creates lock-in. These are high-judgment decisions. On the other hand, refactoring a module, changing an API endpoint, or trying a new library are often easily reversible. Great developers apply more rigor and demand more certainty for irreversible decisions, and they move more quickly on reversible ones.
Develop a Sense of Proportion. This is perhaps the most subtle aspect of judgment. It’s knowing that spending two days optimizing a function that runs once a day is a waste, but spending two days optimizing a function called ten thousand times per second is critical. It’s knowing that a 10% performance degradation in the checkout flow is an emergency, while a 10% degradation in the "about us" page is not. This sense of proportion allows them to focus their energy where it truly matters.
The Compounding Effect of the Ten Habits
Individually, each of these habits will make you a better developer. But their true power is not additive; it's multiplicative. They compound.
Reading code widely (Habit 1) builds the mental library that informs your engineering judgment (Habit 10). Understanding the "why" (Habit 2) allows you to make the pragmatic trade-offs required by strategic laziness (Habit 5) and sound judgment (Habit 10). Cultivating deep focus (Habit 6) is what enables the deliberate learning (Habit 9) that prevents you from making naive decisions. Treating debugging as a science (Habit 3) and maintaining a feedback loop with production (Habit 8) provide the raw data that your judgment synthesizes into wisdom.
This is not a checklist to be completed. It is a system to be grown, an identity to be adopted. You will not master these in a week, or a year. This is the work of a career.
Start with one. Pick the habit that resonates most with you right now, the one that feels both necessary and just out of reach. Practice it deliberately for a month. Then add another.
The path from a good programmer to a great one is not a straight line. It's a spiral. You will circle back to these habits again and again throughout your career, each time understanding them more deeply, each time integrating them more fully into your practice.
The destination is not a job title or a salary. The destination is becoming the kind of developer who doesn't just write code, but who solves problems. The kind of developer who doesn't just build features, but who builds systems that are robust, maintainable, and a genuine pleasure to work with. The kind of developer who leaves every codebase, every team, and every organization better than they found it.
That is the work. That is the craft. And it begins with the decision, right now, to not just be good, but to begin the deliberate, lifelong practice of becoming great.
