<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Steve McDougall</title>
    <description>The latest articles on DEV Community by Steve McDougall (@juststevemcd).</description>
    <link>https://dev.to/juststevemcd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F95943%2F1189e345-5ada-4adb-ad56-9033d3ef454c.jpeg</url>
      <title>DEV Community: Steve McDougall</title>
      <link>https://dev.to/juststevemcd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/juststevemcd"/>
    <language>en</language>
    <item>
      <title>Spec Driven Development With LLMs</title>
      <dc:creator>Steve McDougall</dc:creator>
      <pubDate>Thu, 02 Apr 2026 09:54:59 +0000</pubDate>
      <link>https://dev.to/juststevemcd/spec-driven-development-with-llms-1dhk</link>
      <guid>https://dev.to/juststevemcd/spec-driven-development-with-llms-1dhk</guid>
      <description>&lt;p&gt;There is a pattern that almost every engineer who has worked seriously with LLMs eventually discovers, usually after a few frustrating experiences: the quality of what you get out is determined almost entirely by the quality of what you put in.&lt;/p&gt;

&lt;p&gt;This is not a new observation. Every tool reflects the clarity of the instruction given to it. But with LLMs the relationship is more direct and more consequential than most engineers initially expect, because the model is capable enough to produce something plausible regardless of how good your input is. A vague prompt produces confident, coherent, and subtly wrong output. A precise prompt produces something you can actually use. The difference between those two outcomes is the spec.&lt;/p&gt;

&lt;p&gt;Spec-driven development is not a new concept either. Writing a clear specification before implementation has been good engineering practice for as long as engineering has existed. What is new is the leverage. When a well-written spec is the input to an LLM, the implementation work that follows is faster, more accurate, and requires significantly less correction than when you start from a rough idea and iterate. The spec is now the highest-leverage thing an engineer writes, and most engineering teams are not treating it that way.&lt;/p&gt;

&lt;p&gt;This article is about how to write specifications that work well as LLM inputs - what they need to contain, how to structure them, where the common failure modes are, and how spec-driven development connects to the pitch-based planning approach we covered in the previous article.&lt;/p&gt;

&lt;p&gt;Let's start with what a spec is not in this context, because there are a few things it gets confused with.&lt;/p&gt;

&lt;p&gt;A spec is not a pitch. The pitch operates at the level of the problem and the shaped solution. It is strategic - it communicates direction and appetite to a team. A spec operates at the level of implementation. It is tactical - it tells you and your tools what to actually build in enough detail that the output can be evaluated against clear criteria. A pitch might describe a notification management feature and its general approach. A spec describes a specific component of that feature: its interface, its behaviour, its error states, its constraints.&lt;/p&gt;

&lt;p&gt;A spec is not a ticket. A ticket in most engineering systems is a unit of tracking, not a unit of thinking. "Add notification preferences screen" is a ticket. A spec describes what that screen does, how it behaves under different conditions, what data it works with, what the edge cases are, and what success looks like. The thinking that goes into a good spec is what makes the ticket meaningful rather than just a pointer to work that still needs to be figured out.&lt;/p&gt;

&lt;p&gt;A spec is not documentation after the fact. It is a thinking tool that exists before implementation begins, and its value is precisely that it forces the difficult questions to surface before they become expensive problems mid-build.&lt;/p&gt;

&lt;p&gt;What a good spec contains depends somewhat on the type of work - a spec for a UI component looks different from a spec for a backend service or a data migration - but there are properties that apply across all of them.&lt;/p&gt;

&lt;p&gt;Clarity about what is being built is the foundation. This sounds obvious but it is where most specs fail first. "A service that handles notifications" is not clarity. "A notification delivery service that accepts events from upstream producers via a message queue, applies user preference filters, and dispatches to one or more delivery channels with at-least-once delivery guarantees and idempotency handling on the consumer side" is clarity. The second version tells you what the thing does, how it connects to other things, and what its operational properties are. An LLM given the first version will make decisions about all of those things on your behalf, and some of those decisions will be wrong in ways that are not immediately visible.&lt;/p&gt;

&lt;p&gt;Explicit interface definitions matter more than almost anything else when you are using LLM assistance. If you are building a function, describe its signature, its inputs, its outputs, and its error behaviour. If you are building an API endpoint, describe its path, its method, its request shape, its response shape, and its failure modes. If you are building a UI component, describe its props, its states, and the events it emits. The more precisely you define the interface before the LLM generates the implementation, the less time you spend correcting an implementation that works internally but connects incorrectly to everything around it.&lt;/p&gt;
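&lt;p&gt;To make that concrete, here is a minimal sketch of what an interface section of a spec might pin down before any generation happens. The endpoint, field names, and channel values are invented for the example, not taken from any real system:&lt;/p&gt;

```typescript
// Illustrative interface section for a spec. The endpoint, field names,
// and channel values are invented for this example, not a real system.

type Channel = "email" | "push" | "sms";

// PUT /users/:id/notification-preferences - request body.
interface PreferencesRequest {
  // Channels the user wants enabled; an empty array mutes everything.
  enabledChannels: Channel[];
  // Optional per-category overrides, keyed by category id.
  categoryOverrides?: { [categoryId: string]: { muted: boolean } };
}

// 200 response body. Failure modes committed to up front:
// 404 when the user does not exist, 422 when validation fails.
interface PreferencesResponse {
  userId: string;
  enabledChannels: Channel[];
  updatedAt: string; // ISO 8601 timestamp
}

// Runtime check matching the spec's request shape; returns null when
// valid, otherwise a message describing the first violation found.
function validatePreferences(body: unknown): string | null {
  if (typeof body !== "object" || body === null) {
    return "body must be an object";
  }
  const channels = (body as PreferencesRequest).enabledChannels;
  if (!Array.isArray(channels)) {
    return "enabledChannels must be an array";
  }
  const allowed = ["email", "push", "sms"];
  for (const c of channels) {
    if (!allowed.includes(c)) {
      return "unknown channel: " + c;
    }
  }
  return null;
}
```

&lt;p&gt;With the shapes fixed like this, a generated implementation can be checked against them mechanically rather than against intuition about what looks right.&lt;/p&gt;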

&lt;p&gt;This matters specifically for LLM-assisted development because models are very good at implementing something that satisfies an internally consistent spec and very bad at inferring the correct interface from context. If you leave the interface underspecified, the model will make choices that are locally reasonable but that do not match the rest of your system. You will not always catch this immediately, and when you do catch it the fix often requires more rework than if you had specified the interface correctly upfront.&lt;/p&gt;

&lt;p&gt;Behaviour under edge cases and error conditions is the part of a spec that most engineers skip and most LLMs handle poorly when left to their own devices. A model given an underspecified prompt will often generate a happy-path implementation that handles the common case correctly and ignores everything else. If your spec does not explicitly describe what happens when the input is malformed, when the upstream dependency is unavailable, when the user does not have permission, or when the data is in an unexpected state, you will get an implementation that does not handle those cases - not because the model cannot handle them, but because you did not ask it to.&lt;/p&gt;

&lt;p&gt;Write out the edge cases explicitly. Not as an exhaustive list of every possible failure mode, but as a clear description of the categories of error and the expected behaviour for each. "Returns a 422 with a structured error body describing the validation failure" is a useful edge case description. "Handles errors appropriately" is not.&lt;/p&gt;
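&lt;p&gt;A minimal sketch of what that 422 description might translate to in a spec, assuming an invented error shape (status, code, errors) rather than any particular standard:&lt;/p&gt;

```typescript
// Sketch of a structured validation error body a spec might commit to.
// The shape (status, code, errors) is an assumption for illustration,
// not a standard format.

interface FieldError {
  field: string;
  message: string;
}

interface ValidationErrorBody {
  status: 422;
  code: "validation_failed";
  errors: FieldError[];
}

// Build the body the endpoint would return alongside an HTTP 422.
function validationError(errors: FieldError[]): ValidationErrorBody {
  return { status: 422, code: "validation_failed", errors: errors };
}
```

&lt;p&gt;A call like &lt;code&gt;validationError([{ field: "email", message: "must be a valid address" }])&lt;/code&gt; gives the implementation and its tests one unambiguous shape to agree on, which is exactly what "handles errors appropriately" fails to do.&lt;/p&gt;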

&lt;p&gt;Constraints and non-functional requirements belong in the spec too. Performance expectations, security requirements, dependency versions, coding conventions, testing expectations - these are things that an LLM will make default choices about if you do not specify them, and those default choices may not match your system's actual requirements. If you have a response time budget, say so. If you need the implementation to work with a specific version of a library, say so. If your team has a convention around error handling or logging, describe it. The model has no way to know these things from context unless you tell it.&lt;/p&gt;

&lt;p&gt;A practical structure that works well across most implementation specs looks something like this. Start with a brief context section - two or three sentences that place this component in the larger system and explain why it exists. Then the interface definition - inputs, outputs, dependencies. Then the behaviour description - what it does in the normal case, broken down into the meaningful sub-cases. Then the error handling - what happens when things go wrong. Then the constraints - performance, security, conventions. Then the testing expectations - what kinds of tests should exist and what they should verify.&lt;/p&gt;
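&lt;p&gt;Written out as a skeleton, that structure looks something like:&lt;/p&gt;

```
# Spec: [component name]

## Context
Two or three sentences placing this component in the larger system
and explaining why it exists.

## Interface
Inputs, outputs, and dependencies - exact shapes where they are known.

## Behaviour
What it does in the normal case, broken into meaningful sub-cases.

## Error handling
Categories of failure and the expected behaviour for each.

## Constraints
Performance budgets, security requirements, versions, conventions.

## Testing
What kinds of tests should exist and what they should verify.
```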

&lt;p&gt;That structure is not a rigid template. Adapt it to the work. A simple utility function does not need a full context section. A complex service with multiple integration points might need more detail in the interface section than the template suggests. The point is to have a consistent habit of thinking that ensures the important things are covered rather than accidentally omitted.&lt;/p&gt;

&lt;p&gt;The connection between spec-driven development and the Shape Up methodology from the previous articles is tighter than it might initially appear. In Shape Up, the pitch shapes the problem and the approach at a strategic level. The spec is what happens when a team member takes a piece of the shaped work and thinks through the implementation details before writing code. The two artifacts operate at different levels of abstraction and serve different purposes, but they are part of the same discipline of thinking before building.&lt;/p&gt;

&lt;p&gt;One of the things that Shape Up's autonomous team model enables is the kind of focused thinking that good spec writing requires. When a team has six weeks and genuine ownership of a scoped problem, they have the time and the context to write specs that are actually grounded in the real constraints of the work. When a team is running two-week sprints and picking up tickets from a shared backlog, the pressure is toward starting implementation quickly, and the spec is the thing that gets skipped.&lt;/p&gt;

&lt;p&gt;There is a version of LLM-assisted development that skips the spec entirely. You describe what you want conversationally, the model generates something, you correct it, it regenerates, and you iterate toward a solution. This works for small, simple, self-contained pieces of work. For anything significant it is slower and produces worse output than writing a clear spec upfront and generating against it. The conversational iteration loop is essentially doing the spec work implicitly, one correction at a time, but without the benefit of having the full picture in one place where you can review it before implementation begins.&lt;/p&gt;

&lt;p&gt;Write the spec first. Then generate. Then review the output against the spec rather than against your intuition about what looks right. That review step is important and we will go deeper on it in the next article. For now the key point is that the spec is what makes the review possible - without an explicit description of what the implementation should do, you are reviewing against a fuzzy mental model that is easy to satisfy superficially and hard to hold precisely.&lt;/p&gt;

&lt;p&gt;There is also a team benefit to spec writing that is separate from the LLM angle. A spec that exists as a written artifact before implementation begins is something other team members can read, critique, and contribute to. It surfaces disagreements and misunderstandings before they are embedded in code. It creates a shared understanding of what is being built that the implementation alone does not provide. And it is a useful reference during code review - rather than evaluating whether the implementation looks right, reviewers can evaluate whether it satisfies the spec, which is a more precise and more productive question.&lt;/p&gt;

&lt;p&gt;This is the aspect of spec-driven development that I think is most undervalued right now. The conversation about LLMs in engineering tends to focus on individual productivity - how much faster one engineer can move with AI assistance. The spec-driven approach creates team-level benefits that compound across the cycle, because it moves the alignment work to the beginning of the implementation rather than distributing it across dozens of review comments and conversations.&lt;/p&gt;

&lt;p&gt;Think of the spec as the design review that happens before the code exists rather than after. It is much cheaper to fix a misunderstanding at the spec stage than at the implementation stage, and dramatically cheaper than at the integration stage when the misunderstanding has propagated into multiple parts of the system.&lt;/p&gt;

&lt;p&gt;Write the spec. Then build. In that order, every time.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Next in the series: Reviewing AI-Generated Work - how to evaluate code, architecture, and quality when a meaningful portion of your codebase was not written by a human.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>projectmanagement</category>
      <category>ai</category>
      <category>modernsoftware</category>
    </item>
    <item>
      <title>Writing Pitches That Work</title>
      <dc:creator>Steve McDougall</dc:creator>
      <pubDate>Thu, 02 Apr 2026 09:53:24 +0000</pubDate>
      <link>https://dev.to/juststevemcd/writing-pitches-that-work-3p1a</link>
      <guid>https://dev.to/juststevemcd/writing-pitches-that-work-3p1a</guid>
      <description>&lt;p&gt;The pitch is the atomic unit of Shape Up. Get it right and the team has everything they need to do good work within the cycle. Get it wrong and you will feel it for six weeks - in misaligned expectations, scope debates that should have happened before the work started, and solutions that technically satisfy the brief but miss the actual problem.&lt;/p&gt;

&lt;p&gt;Most teams that adopt Shape Up underinvest in pitch writing. They treat it as a lighter version of a product requirements document, or they go the other direction and write something so vague it gives the team no useful direction at all. Both failure modes are common and both are avoidable once you understand what a pitch is actually trying to do.&lt;/p&gt;

&lt;p&gt;A pitch is not a specification. It is not a list of requirements. It is not a design document. It is a shaped proposal - a written artifact that has done enough thinking about a problem to give a team genuine clarity on what they are solving and why, without pre-solving it for them in a way that removes the creative and technical latitude they need to do the work well.&lt;/p&gt;

&lt;p&gt;That distinction between shaping and specifying is the most important thing to understand about pitch writing, so let us spend some time on it before getting into the mechanics.&lt;/p&gt;

&lt;p&gt;When you specify, you are deciding what the solution looks like before the team has had a chance to engage with the problem. You are drawing wireframes, writing detailed acceptance criteria, and describing behaviour at the level of individual interactions. This feels thorough. It feels like you are setting the team up for success by removing ambiguity. What it actually does is remove the team's ability to find a better solution than the one you already thought of, and it often embeds assumptions that only become visible as problems once implementation starts.&lt;/p&gt;

&lt;p&gt;When you shape, you are doing something different. You are thinking hard enough about the problem to understand its boundaries, its constraints, and the general direction of a solution - but stopping short of specifying the details. You are identifying the risks and unknowns that matter, the no-gos that would take the work in the wrong direction, and the properties a good solution needs to have without dictating exactly what form it takes.&lt;/p&gt;

&lt;p&gt;The test of whether you have shaped rather than specified is whether the team still has meaningful decisions to make. If you hand the pitch to the team and they essentially have a to-do list to execute, you have over-specified. If you hand them the pitch and they understand the problem, the appetite, the constraints, and the direction, but still need to figure out the best approach - that is a well-shaped pitch.&lt;/p&gt;

&lt;p&gt;Now let's get into what a pitch actually contains.&lt;/p&gt;

&lt;p&gt;Every pitch worth the name has five components: the problem, the appetite, the solution at the right level of abstraction, the rabbit holes, and the no-gos. These do not have to appear in a rigid template format - a well-written pitch reads more like a short essay than a form - but all five need to be present.&lt;/p&gt;

&lt;p&gt;The problem is where most pitches are weakest and where the most leverage is. A vague problem statement produces vague solutions. A precise problem statement - one that describes a specific situation, a specific friction, a specific gap between what exists and what should exist - gives the team something concrete to build toward and a standard to evaluate their work against throughout the cycle.&lt;/p&gt;

&lt;p&gt;A weak problem statement: "Users need a better way to manage their notifications."&lt;/p&gt;

&lt;p&gt;A stronger problem statement: "Users who receive more than twenty notifications per day are disabling notifications entirely rather than managing them, which means they are missing time-sensitive alerts that affect their workflow. We need a way to let users manage notification volume without losing access to the alerts that actually matter to them."&lt;/p&gt;

&lt;p&gt;The difference is not just length. The stronger version identifies who the problem affects, what they are doing as a result of the problem, and what the actual cost of that behaviour is. It gives the team a specific situation to solve for and a way to evaluate whether the solution they build actually addresses it.&lt;/p&gt;

&lt;p&gt;The appetite is the honest statement of how much time this work is worth. Not how long you expect it to take - how much time you are willing to spend on it given what you know about its value and strategic importance. Two weeks for a small improvement. Six weeks for something more significant. Be direct and be honest. If the appetite is two weeks, write two weeks and mean it. The appetite is not a soft target that expands when the work turns out to be more complex than expected. It is a constraint that shapes the scope of the solution.&lt;/p&gt;

&lt;p&gt;Stating the appetite explicitly also forces an important conversation before the cycle begins. If the team reads the pitch and immediately thinks the problem cannot be meaningfully addressed in the stated appetite, that is a conversation worth having before six weeks of work happens, not after.&lt;/p&gt;

&lt;p&gt;The solution section is where the shaping work lives, and it is the hardest part of a pitch to write well. You are trying to communicate enough about the direction and the approach that the team is not starting from zero, without designing the solution so completely that you have made all the meaningful decisions for them.&lt;/p&gt;

&lt;p&gt;A few techniques that help here. Breadboarding is the practice of sketching the key components and connections of a solution without specifying its visual design - think of it as a schematic rather than a mockup. You might sketch that there is a notification preferences screen with three distinct categories, that categories can be muted rather than all notifications being turned off, and that there is an activity log so users can see what they missed. That communicates direction without locking in the visual design, the exact interaction model, or the technical implementation.&lt;/p&gt;

&lt;p&gt;Fat marker sketches serve a similar purpose for more visual problems. Rough, low-fidelity drawings that capture the general layout and flow without the kind of detail that a real wireframe contains. The roughness is intentional - it signals to the team that the sketch is directional, not prescriptive, and it leaves room for them to find better solutions within the general approach.&lt;/p&gt;

&lt;p&gt;The key is to work at the level of components and relationships rather than the level of pixels and interactions. What are the meaningful parts of this solution? How do they connect? What does someone move through as they use it? Those are the questions a good solution section answers without over-answering.&lt;/p&gt;

&lt;p&gt;Rabbit holes deserve more attention than they usually get in discussions of Shape Up. A rabbit hole is a part of the problem that looks tractable but might turn out to be much more complex than it appears - the kind of thing that could consume most of a cycle if the team is not explicitly warned about it.&lt;/p&gt;

&lt;p&gt;Identifying rabbit holes in a pitch is one of the highest-value things a shaper can do, because it requires actually thinking through the implementation risks before the work starts rather than discovering them mid-cycle when there is less time to respond. It also signals to the team that you have thought about the problem seriously rather than just describing it from a high level.&lt;/p&gt;

&lt;p&gt;A rabbit hole might be a technical dependency that is less stable than it appears. It might be an edge case in the data model that looks simple on the surface but branches into complexity once you dig in. It might be an interaction with an existing feature that creates unexpected constraints. Whatever it is, naming it explicitly in the pitch gives the team permission to timebox their investigation of it rather than going deep in search of a perfect solution.&lt;/p&gt;

&lt;p&gt;The no-gos are the explicit scope boundaries - the things that are specifically out of scope for this cycle even if they seem related. No-gos are important because they prevent scope creep from a specific direction: the well-intentioned team member who sees adjacent work and pulls it into the cycle because it seems related and they have the context to do it.&lt;/p&gt;

&lt;p&gt;No-gos also do something less obvious: they signal that the shaper has thought about what this work is not, which is often as clarifying as knowing what it is. "This does not include email notification preferences, which will be addressed in a separate cycle" is a useful statement because it tells the team where the boundary is and why, and it heads off a conversation that would otherwise happen at an inconvenient moment during implementation.&lt;/p&gt;

&lt;p&gt;There is a question that comes up when teams start writing pitches with LLM assistance, which is increasingly common: how do you use a tool like this in the shaping process without ending up with something that is technically complete but fundamentally generic?&lt;/p&gt;

&lt;p&gt;The honest answer is that LLMs are useful for some parts of pitch writing and not others. They are useful for refining language, checking that a problem statement is clear to someone without context, generating a list of rabbit holes to consider, and structuring a rough draft that you have already thought through. They are not useful for the actual thinking work of shaping - understanding the specific problem in its specific context, identifying the constraints that matter, and making the judgment calls about where the scope boundary should be.&lt;/p&gt;

&lt;p&gt;The pitches that work are the ones where the thinking happened before the writing. If you use an LLM to generate a pitch from a one-line prompt, you will get something that looks like a pitch but lacks the depth of understanding that makes a pitch actually useful. The team will feel that absence during the cycle, even if they cannot always articulate where it is coming from.&lt;/p&gt;

&lt;p&gt;Write the hard parts yourself. Use the tools for everything else.&lt;/p&gt;

&lt;p&gt;One last thing about pitch writing that does not get said enough: a pitch that does not get selected at the betting table is not a failed pitch. It is a well-spent investment in understanding the problem. The thinking you did to write the pitch - the problem definition, the constraint identification, the risk mapping - is valuable regardless of whether this specific version of the work happens this cycle. If the pitch comes back at the next betting table, the thinking is still there and the pitch can be refined rather than rewritten. If the problem turns out to have changed or resolved itself, the thinking helped you understand that faster than you would have otherwise.&lt;/p&gt;

&lt;p&gt;Treat pitches as a practice, not a transaction. Write them carefully, get better at writing them over time, and use the betting table's response to them as feedback on whether you are shaping at the right level of abstraction. That feedback loop, over several cycles, is what turns pitch writing from an uncomfortable new process into one of the most clarifying things an engineering organisation can do.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Next in the series: Spec-Driven Development with LLMs - how to write specifications that produce useful output from AI tools, and why the quality of your spec is now the highest-leverage thing an engineer writes.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>projectmanagement</category>
      <category>ai</category>
      <category>modernsoftware</category>
    </item>
    <item>
      <title>Shape Up: A Practical Introduction</title>
      <dc:creator>Steve McDougall</dc:creator>
      <pubDate>Thu, 02 Apr 2026 09:53:23 +0000</pubDate>
      <link>https://dev.to/juststevemcd/shape-up-a-practical-introduction-fpa</link>
      <guid>https://dev.to/juststevemcd/shape-up-a-practical-introduction-fpa</guid>
      <description>&lt;p&gt;If you read the previous article and found yourself nodding along to the diagnosis, the natural next question is: so what do we do instead?&lt;/p&gt;

&lt;p&gt;Shape Up is the most coherent answer I have found. Not because it is perfect, and not because it solves every problem that sprint methodology creates, but because it starts from more honest assumptions about how complex software work actually happens. It was developed at Basecamp over many years of building their own products, written up by Ryan Singer, and released publicly in 2019. It has since been adopted by teams well beyond Basecamp, in contexts ranging from small startups to larger product organisations.&lt;/p&gt;

&lt;p&gt;This article is a practical introduction to how it works. Not a summary of the book - read the book, it is worth your time - but an explanation of the core ideas, why they matter, and what they look like in practice for an engineering team making the shift.&lt;/p&gt;

&lt;p&gt;The central problem Shape Up is trying to solve is different from the problem sprints were trying to solve. Sprints were designed to create accountability and feedback loops in a process that had too little of both. Shape Up is designed to solve a different failure mode: teams that are always busy, always shipping something, but never quite doing the work that actually moves the product forward in a meaningful way. The endless backlog. The feature that takes six months because it keeps getting interrupted. The important architectural work that never quite makes it into a sprint because there is always something more urgent.&lt;/p&gt;

&lt;p&gt;Shape Up addresses that by changing the fundamental unit of planning from the task to the bet.&lt;/p&gt;

&lt;p&gt;The appetite is the first concept worth understanding deeply, because it reframes the relationship between time and scope in a way that sounds simple but has significant practical implications.&lt;/p&gt;

&lt;p&gt;In sprint planning, the question is typically: how long will this take? You scope the work, estimate the effort, and try to fit it into available capacity. The problem is that for anything genuinely novel or complex, that question is almost impossible to answer accurately. You do not know how long it will take until you have done it, and by then it does not matter anymore.&lt;/p&gt;

&lt;p&gt;Shape Up inverts the question. Instead of asking how long the work will take, it asks: how much time are we willing to spend on this? That is the appetite. It is a deliberate, upfront decision about the value of the work relative to the time it would consume. If the answer is two weeks, you design a solution that fits two weeks. If the answer is six weeks, you design a solution that fits six weeks. The time is fixed. The scope is flexible.&lt;/p&gt;

&lt;p&gt;This inversion matters because it puts the design constraint where it belongs - on the people doing the work - rather than pretending that scope can be fixed and time will follow. Every experienced engineer knows that scope creep is real and that the work expands to fill the time available. Fixing the time and making scope the variable is a more honest model of how software development actually works.&lt;/p&gt;

&lt;p&gt;The pitch is the artifact that captures this thinking before work begins. A pitch is not a product requirements document and it is not a ticket. It is a written proposal, typically one to two pages, that describes a problem worth solving, proposes a shaped solution at the right level of abstraction, defines the appetite, and identifies the no-gos - the things explicitly out of scope for this cycle. The pitch is written by whoever is doing the shaping, which might be a product manager, a senior engineer, a designer, or some combination. It is not written by committee.&lt;/p&gt;

&lt;p&gt;What makes a good pitch is that it shapes the problem rather than specifying the solution. There is an important distinction there. Shaping means thinking through the problem carefully enough to identify the meaningful constraints and the rough approach, without pre-designing every detail in a way that removes creative latitude from the team doing the implementation. A good pitch gives the team a clear direction and clear boundaries, and then trusts them to figure out the best way to execute within those boundaries.&lt;/p&gt;

&lt;p&gt;We will go much deeper on pitch writing in the next article. For now the important thing is that the pitch is how appetite gets translated into something a team can work against.&lt;/p&gt;

&lt;p&gt;The betting table is where pitches go to be evaluated. In Shape Up, there is a fixed cadence of planning cycles - typically six weeks of building followed by two weeks of cooldown - and before each building cycle there is a betting table where the people with decision-making authority look at the available pitches and decide which ones to bet on for the upcoming cycle.&lt;/p&gt;

&lt;p&gt;The betting table is deliberately not a backlog grooming session. Nothing carries over automatically. Pitches that were not selected in the previous cycle do not automatically return to the queue - they have to be re-pitched if they are still worth doing, which forces a useful re-evaluation of whether the work is still the right work given what has been learned since the pitch was written. This sounds harsh but it serves an important function: it prevents the backlog from becoming a graveyard of old commitments that nobody has the courage to formally abandon.&lt;/p&gt;

&lt;p&gt;The betting table is also where the circuit breaker lives. In Shape Up, if a project is not done at the end of its cycle, the default is not to extend the cycle. The default is to stop, evaluate what happened, and decide whether to re-pitch a modified version in a future cycle. This is one of the more counterintuitive aspects of the methodology and one of the more powerful. The threat of a hard stop at the end of the cycle creates a forcing function for scope management during the cycle. Teams that know the deadline is real tend to make better decisions about what to cut when the work reveals itself to be larger than the pitch anticipated.&lt;/p&gt;

&lt;p&gt;The cooldown period is two weeks between building cycles where no new projects are assigned. Engineers use this time to fix bugs, explore ideas, do the small improvements that never make it into a shaped project, write documentation, and recover from the intensity of a six-week building cycle. This is not slack time in the pejorative sense - it is structured breathing room that serves several important functions.&lt;/p&gt;

&lt;p&gt;It prevents the accumulation of small technical debts that sprint teams often carry indefinitely because there is never a moment where it is clearly appropriate to address them. It gives engineers time to think without immediate delivery pressure, which is where a lot of good ideas actually come from. And it means the team arrives at the next building cycle genuinely ready rather than already depleted from the previous one.&lt;/p&gt;

&lt;p&gt;The cooldown is also where the betting table for the next cycle is prepared, which means the people doing the shaping have time to write and refine pitches without competing with active delivery work for their attention.&lt;/p&gt;

&lt;p&gt;Small autonomous teams are how the building work gets done. In Shape Up, a project is typically assigned to a small team - often two or three people, a designer and one or two engineers - who have full responsibility for the work within the cycle. They figure out how to implement the pitch, they make the day-to-day decisions, and they are trusted to manage their own time and approach within the fixed deadline.&lt;/p&gt;

&lt;p&gt;This autonomy is important and it is one of the bigger cultural shifts for teams coming from sprint methodology. Sprint teams often have a product manager or scrum master who is closely involved in day-to-day decisions about how the work is executed. In Shape Up, that involvement happens at the shaping stage, not the building stage. Once the pitch is accepted and the team is working, the expectation is that they manage themselves. That requires a level of trust and a level of engineering maturity that not every team has immediately, but it also tends to develop those qualities faster than a more managed approach does.&lt;/p&gt;

&lt;p&gt;How does this map onto teams working with LLMs? Better than sprint methodology does, for a specific reason. LLM-assisted development changes the implementation rhythm in ways that make fixed two-week cycles particularly awkward. A task that might have been a three-day implementation job can now be a three-hour implementation job, but the thinking work that surrounds it - understanding the problem, designing the approach, reviewing the output critically, integrating it into the larger system - does not compress in the same way.&lt;/p&gt;

&lt;p&gt;Shape Up's appetite model handles this more gracefully because it does not try to predict implementation time in the first place. The appetite is set based on the value and strategic importance of the work, not on an estimate of how long the implementation will take. When implementation time compresses because of LLM assistance, the team has more room within the fixed appetite to do the thinking and review work thoroughly rather than rushing it to hit a velocity target.&lt;/p&gt;

&lt;p&gt;The cooldown period also creates natural space for the kind of work that LLM-assisted development generates more of - reviewing generated code carefully, refactoring output that works but is not clean, writing the documentation and tests that bring generated code up to production standard. Sprint methodology has no structural home for that work. It either displaces feature work, gets skipped under delivery pressure, or accumulates as a different kind of technical debt.&lt;/p&gt;

&lt;p&gt;None of this means Shape Up is easy to introduce. Teams that have been running sprints for years have built habits, expectations, and stakeholder relationships around the sprint model. Changing those takes time and requires managing the transition carefully. The rest of this series covers the specific skills and practices that make Shape Up work in practice - how to write pitches, how to run a betting table, how to handle the stakeholder conversations that come with changing how you plan.&lt;/p&gt;

&lt;p&gt;But before any of that, the most important thing is to be honest about why you are making the change and what you expect it to accomplish. Shape Up is not a magic methodology that fixes broken teams. It is a planning model that gives good teams a better structure to work within. If your team has deeper issues - poor communication, unclear product direction, lack of technical skill - Shape Up will not solve those. It will just give them a different frame to appear in.&lt;/p&gt;

&lt;p&gt;Start with the diagnosis. Then build the model that fits the reality.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Next in the series: Writing Pitches That Work - how to shape a problem at the right level of abstraction and write a pitch that gives a team genuine clarity without over-specifying the solution.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>projectmanagement</category>
      <category>ai</category>
      <category>modernsoftware</category>
    </item>
    <item>
      <title>Why Sprints Are Broken</title>
      <dc:creator>Steve McDougall</dc:creator>
      <pubDate>Tue, 31 Mar 2026 12:15:15 +0000</pubDate>
      <link>https://dev.to/juststevemcd/why-sprints-are-broken-483f</link>
      <guid>https://dev.to/juststevemcd/why-sprints-are-broken-483f</guid>
      <description>&lt;p&gt;Let me say something that a lot of engineering teams are thinking but not saying out loud: &lt;strong&gt;sprints are not working anymore&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Not for everyone, and not in every context. There are still teams running two-week cycles who are genuinely productive and who find the structure useful. But for a growing number of engineering organisations - particularly those working with modern tooling, AI-assisted development, and complex product problems - the sprint model has become more of a performance than a practice. The rituals continue. The points get estimated. The velocity gets tracked. And underneath all of it, the actual work is happening on a completely different rhythm - one that the sprint structure fails to capture and actively obstructs.&lt;/p&gt;

&lt;p&gt;This article is not about bashing agile as a philosophy. The core ideas behind agile - iterative development, responding to change, close collaboration, shipping working software - are sound and remain relevant. This is about the specific implementation of those ideas that most teams landed on in the 2000s and have been running ever since, and why that implementation is increasingly misaligned with the way software actually gets built today.&lt;/p&gt;

&lt;p&gt;Understanding why requires going back to where the sprint model came from and what problem it was designed to solve.&lt;/p&gt;

&lt;p&gt;Scrum and the two-week sprint emerged in a specific context. Teams were building software in long waterfall cycles, requirements were being locked down months in advance, and by the time software shipped it was often solving a problem that had evolved or disappeared entirely. The sprint was a corrective mechanism. By forcing teams to ship something demonstrable every two weeks, it created a feedback loop that waterfall lacked. You could not hide in a six-month planning phase. You had to show your work regularly and respond to what you learned.&lt;/p&gt;

&lt;p&gt;That was genuinely valuable. In that context, the sprint was the right tool.&lt;/p&gt;

&lt;p&gt;The context has changed substantially. The feedback loops that sprints were designed to create now exist through other means. You can deploy multiple times a day. You can run experiments with feature flags. You can get user feedback through analytics, session recording, and direct research on a continuous basis. The forcing function that made two-week cycles useful - the need to create artificial checkpoints in a process that otherwise had none - is less necessary when the process itself has become more continuous.&lt;/p&gt;

&lt;p&gt;At the same time, the nature of the work has changed. The problems engineering teams are solving are more complex, more interconnected, and more ambiguous than the typical CRUD application work that sprint methodology was largely optimised for. A two-week sprint works reasonably well when the work is decomposable into discrete, estimable tasks. It works much less well when you are doing exploratory technical work, building systems with significant unknown unknowns, or working on problems where the right solution only becomes visible partway through the attempt.&lt;/p&gt;

&lt;p&gt;And then there is the AI dimension, which is changing the shape of engineering work faster than any methodology has adapted to.&lt;/p&gt;

&lt;p&gt;LLMs have not made engineering easier in a simple, linear sense. What they have done is collapse the time required for certain categories of work - boilerplate implementation, test generation, documentation, straightforward feature development - while leaving other categories largely unchanged or in some cases more complex. The cognitive work of understanding a problem deeply, designing the right system, making good architectural tradeoffs, reviewing generated output critically - that work has not gotten faster. In some ways it has gotten harder, because the volume of code being produced has increased while the time available to reason carefully about it has not.&lt;/p&gt;

&lt;p&gt;The result is a strange asymmetry. A task that might have taken three days of implementation work two years ago might now take three hours of implementation work but still requires the same two days of thinking, scoping, and review. The sprint model, which was built around implementation time as the primary unit of work, does not have a good way to account for this. Story points were always a flawed proxy for effort, but they were at least correlated with something real. That correlation is breaking down as implementation time becomes less representative of total work involved.&lt;/p&gt;

&lt;p&gt;There is also a rhythm problem. Two-week sprints create a specific cadence that assumes work fits neatly into two-week containers. Some work does. A lot of important work does not. A significant architectural investigation might need six weeks of focused effort from a small group. A genuinely novel feature might need a cycle of building, learning, and rebuilding that does not map onto fixed sprint boundaries. When teams try to force that kind of work into two-week containers, one of two things happens: either the work gets artificially scoped down to fit the container, which means the team is never tackling the full problem, or the work spills across sprint boundaries in ways that make the sprint structure meaningless as a planning tool.&lt;/p&gt;

&lt;p&gt;The ceremony overhead compounds this. A typical sprint includes planning, a daily standup, a mid-sprint check-in, a review, and a retrospective. For a team of eight engineers, that is easily four to six hours of synchronous meeting time per sprint, and that is before you count the async overhead of updating tickets, writing sprint reports, and maintaining the backlog. For some teams the ratio of ceremony to actual engineering work is genuinely alarming.&lt;/p&gt;

&lt;p&gt;I am not arguing that coordination and reflection have no value - they obviously do. I am arguing that the specific forms those things take in sprint methodology were designed for a world without the communication tools, deployment infrastructure, and development tooling that most teams now have. The overhead is not proportionate to the value it creates in the modern context.&lt;/p&gt;

&lt;p&gt;The estimation problem deserves its own moment because it is the place where the dysfunction is most visible and most demoralising. Story point estimation exists to give teams and stakeholders a sense of how much work fits into a sprint and to track velocity over time. In practice, it produces numbers that are unreliable enough to be misleading while being precise enough to feel meaningful.&lt;/p&gt;

&lt;p&gt;Engineers know this. They know that their estimates are often wrong, that the factors that make estimates wrong are largely outside their control, and that the velocity metrics derived from those estimates are being used by stakeholders to make decisions that the underlying data does not actually support. The result is a quiet cynicism about planning that spreads through engineering teams and makes genuine engagement with the process harder to sustain.&lt;/p&gt;

&lt;p&gt;The deeper problem with estimation is not that engineers are bad at it. It is that software estimation is genuinely hard in a way that no methodology fully resolves. The work that is easiest to estimate accurately is the work that is most similar to work you have done before. The work that matters most - the novel problems, the architectural decisions, the exploratory investigations - is hardest to estimate because it is by definition unlike what you have done before. Forcing that work through an estimation process optimised for familiar, decomposable tasks produces confident-looking numbers that do not mean very much.&lt;/p&gt;

&lt;p&gt;What does a better model look like? That is what the rest of this series is about. But the short version is this: instead of asking "how long will this take," ask "how much appetite do we have for this problem." Instead of filling a backlog with everything that might conceivably get done someday, make explicit bets on the things that matter most in the next cycle. Instead of running a continuous treadmill of two-week sprints with no structural breathing room, build in time for the team to actually think, clean up, and reset.&lt;/p&gt;

&lt;p&gt;Those ideas come from &lt;strong&gt;Shape Up&lt;/strong&gt;, the methodology developed at Basecamp and written up by Ryan Singer. It is not a perfect system and it is not right for every team. But it starts from a more honest set of assumptions about how complex software work actually happens, and it has a more realistic model of the relationship between time, scope, and quality than sprint methodology does.&lt;/p&gt;

&lt;p&gt;Before we get into the specifics of how Shape Up works and how to introduce it, it is worth sitting with the diagnosis a little longer. Because the failure mode I see most often is teams that recognise something is broken, adopt a new methodology as a fix, and then find that the new methodology is not working either - because they changed the process without changing the underlying assumptions about what software development is and how it should be managed.&lt;/p&gt;

&lt;p&gt;The assumption worth examining most carefully is the idea that engineering work is fundamentally a production process - that the job is to take a backlog of defined requirements and process them as efficiently as possible into shipped software. That model has its uses and its contexts. But it is a poor fit for the kind of work that matters most in most engineering organisations: the work of figuring out the right thing to build, the work of solving problems that do not have obvious solutions, the work of building systems that need to evolve over time rather than just be completed.&lt;/p&gt;

&lt;p&gt;Sprints were a significant improvement on what came before them. The question is not whether they were a good idea in their time - they were. The question is whether they are still the right tool for the work most teams are doing now. For a growing number of teams, the honest answer is no.&lt;/p&gt;

&lt;p&gt;Recognising that is the first step. Building a better model is the work.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Next in the series: Shape Up - a practical introduction to the planning methodology that starts from more honest assumptions about how complex software work actually gets done.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>projectmanagement</category>
      <category>ai</category>
      <category>modernsoftware</category>
    </item>
    <item>
      <title>Technical Debt: When to Fix, When to Ship</title>
      <dc:creator>Steve McDougall</dc:creator>
      <pubDate>Sat, 28 Mar 2026 19:11:30 +0000</pubDate>
      <link>https://dev.to/juststevemcd/technical-debt-when-to-fix-when-to-ship-20pn</link>
      <guid>https://dev.to/juststevemcd/technical-debt-when-to-fix-when-to-ship-20pn</guid>
      <description>&lt;p&gt;Every engineering team carries debt. The question is never whether you have it. The question is whether you understand it well enough to make deliberate decisions about it, or whether you are just hoping it does not become a crisis before you get around to dealing with it.&lt;/p&gt;

&lt;p&gt;Most teams are in the second camp. Not because the engineers do not care, and not because the managers are incompetent, but because technical debt is genuinely hard to reason about. It is invisible to most stakeholders. It compounds quietly. Its costs show up as friction and slowness rather than as clean line items on a budget. And the tradeoff between addressing it now versus shipping something now is almost always under time pressure, which means the default is almost always to ship.&lt;/p&gt;

&lt;p&gt;I want to give you a framework for thinking about debt more deliberately - one that helps you decide when fixing is the right call, when shipping is the right call, and how to communicate either decision to the people who care about outcomes rather than architecture.&lt;/p&gt;

&lt;p&gt;Before we get into the framework, it is worth being precise about what technical debt actually is, because the term gets used loosely in ways that muddle the decision-making.&lt;/p&gt;

&lt;p&gt;Ward Cunningham's original metaphor was specific: technical debt is the extra work created when you take a shortcut to ship faster, with the understanding that you will come back and do it properly later. The key word is deliberate. You knew it was a shortcut. You made a conscious tradeoff. That is very different from code that is just poorly written because someone did not know better, or a design that seemed correct at the time but was invalidated by requirements that could not have been anticipated.&lt;/p&gt;

&lt;p&gt;In practice, most engineering teams use "technical debt" to describe all three of those things, which is fine for casual conversation but creates confusion when you are trying to prioritise. Deliberate shortcuts have a specific repayment profile - you know what you did, you know roughly what fixing it would take, and you can reason about when the tradeoff tips toward fixing. Legacy code that was written under different assumptions, or architectural decisions that made sense at a previous scale, are harder to reason about because the original context is often lost and the cost of addressing them is harder to estimate.&lt;/p&gt;

&lt;p&gt;For the purposes of decision-making, the useful distinction is not between types of debt by origin but between types of debt by impact. Specifically: is this debt actively costing you velocity right now, or is it a latent risk that has not yet materially affected your ability to work?&lt;/p&gt;

&lt;p&gt;High-impact debt - the kind that is actively slowing the team down, generating frequent bugs, making changes in a certain area disproportionately risky, or creating cognitive overhead every time someone has to work near it - that is debt with a measurable present cost. You can point to it in sprint data: this area of the codebase takes three times as long to change as comparable areas, and it accounts for a disproportionate share of production incidents.&lt;/p&gt;

&lt;p&gt;Latent debt - the kind that is messy and uncomfortable but not yet materially impacting delivery - is real, but it has a different urgency profile. Addressing it might still be the right call for other reasons, but it is harder to justify against immediate delivery needs without a clear and specific articulation of the risk.&lt;/p&gt;

&lt;p&gt;The framework I use for debt prioritisation has three dimensions: velocity impact, risk profile, and strategic alignment.&lt;/p&gt;

&lt;p&gt;Velocity impact is the question of whether the debt is actually costing you delivery speed right now. If you can measure it - and often you can, in cycle time data, bug rates by subsystem, or engineer time estimates on adjacent work - use the numbers. "This service generates forty percent of our incidents but represents ten percent of our codebase" is a compelling velocity impact argument. "This code is messy and would be nicer if it were cleaner" is not.&lt;/p&gt;

&lt;p&gt;Risk profile is the question of what happens if the debt is not addressed. Some debt sits in a part of the system that is unlikely to change significantly - it is messy but it is also relatively stable and not under active development. That debt has a low risk profile even if it is aesthetically uncomfortable. Other debt sits in a critical path that is about to receive significant investment, or in a part of the system where a failure would be disproportionately damaging. That debt has a high risk profile even if it is not currently causing visible problems.&lt;/p&gt;

&lt;p&gt;Strategic alignment is the question of whether the work that would fix this debt is work that matters for where the product is going anyway. Sometimes the most efficient path is to address debt as part of a larger piece of work that is already planned - you are rebuilding the payments flow anyway, so cleaning up the debt in the payment service is low incremental cost. Sometimes the debt is in a part of the system that is likely to be deprecated or replaced entirely, in which case investing in it now is a waste.&lt;/p&gt;

&lt;p&gt;When you look at a piece of debt through all three of those lenses, the decision often becomes cleaner. High velocity impact, high risk profile, and in a strategically important area: address it, and address it soon. Low velocity impact, low risk profile, in an area the product is moving away from: leave it, and stop feeling guilty about it.&lt;/p&gt;
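&lt;p&gt;To make the three lenses concrete, here is a minimal sketch of how a team might encode them as a triage score. The 1-to-3 scales, the thresholds, and the item names are illustrative assumptions rather than a canonical formula - the point is only that scoring each lens separately makes the decision legible.&lt;/p&gt;

```python
from dataclasses import dataclass

# Hypothetical scoring sketch of the three-lens framework.
# Scales and thresholds below are illustrative assumptions.

@dataclass
class DebtItem:
    name: str
    velocity_impact: int      # 1 = negligible, 3 = actively slowing delivery
    risk_profile: int         # 1 = stable, low-stakes area, 3 = critical path
    strategic_alignment: int  # 1 = area being deprecated, 3 = planned investment

def triage(item: DebtItem) -> str:
    score = item.velocity_impact + item.risk_profile + item.strategic_alignment
    # High velocity impact plus high risk: address soon regardless of total score.
    if item.velocity_impact == 3 and item.risk_profile == 3:
        return "fix soon"
    if score >= 7:
        return "schedule"
    if score >= 5:
        return "fix in pieces"   # standing-allocation candidate
    return "leave it"

payments = DebtItem("payment service", 3, 3, 3)
legacy_admin = DebtItem("legacy admin UI", 1, 1, 1)
print(triage(payments))      # fix soon
print(triage(legacy_admin))  # leave it
```

&lt;p&gt;The value of writing the lenses down like this is not the numbers themselves; it is that a mixed-score item visibly lands in the "fix in pieces" band rather than being argued about from scratch each time.&lt;/p&gt;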

&lt;p&gt;The harder cases are the mixed ones - debt with moderate velocity impact and moderate risk, competing with genuine product priorities for engineering time. This is where the "fix it in pieces" approach often makes sense. Not a dedicated debt sprint that product stakeholders will resent and that rarely fully succeeds anyway, but a standing allocation of capacity toward high-priority debt items worked into every sprint. Ten to twenty percent is the range I see working in practice. Enough that meaningful progress gets made, not so much that it creates constant friction with delivery commitments.&lt;/p&gt;

&lt;p&gt;There is a case against the dedicated debt sprint that is worth making explicitly because the instinct to batch debt work into a single concentrated effort is very common and very understandable. The problem is that it creates a boom-bust cycle. You accumulate debt under delivery pressure, you hit a tipping point, you negotiate a debt sprint, you clean up the worst of it, and then you go back to accumulating. The underlying rate of accumulation does not change because the sprint did not change the culture or the incentives - it just cleared the queue.&lt;/p&gt;

&lt;p&gt;A standing allocation changes the culture more durably because it normalises debt management as a continuous practice rather than an emergency response. It also keeps engineers closer to the debt, which means they are better positioned to identify which parts of it are actually costing velocity and which are just aesthetically uncomfortable. That distinction matters a lot for prioritisation.&lt;/p&gt;

&lt;p&gt;Now let's talk about stakeholder communication, because this is where a lot of technically strong engineering leaders stumble.&lt;/p&gt;

&lt;p&gt;The engineers on your team understand why debt matters. They live with it. They feel it every time they work in a slow, fragile, or confusing part of the codebase. But the product managers, business stakeholders, and executives you need to align with do not feel that friction, and they are not going to be persuaded by architectural arguments. They are going to be persuaded by arguments about outcomes.&lt;/p&gt;

&lt;p&gt;That means translating the debt conversation into the language of risk and velocity. Not "we have a lot of legacy code in the payment service" but "our payment service currently takes our engineers three times as long to change as comparable services, and it generates more than a third of our production incidents. That is costing us roughly two sprint cycles per quarter in incident response and rework, and it is the main reason we keep missing our estimated delivery dates on anything that touches payments."&lt;/p&gt;

&lt;p&gt;That framing gives the stakeholder something to weigh. It turns a vague technical discomfort into a specific cost, and it makes the tradeoff legible: we can invest capacity here and expect these benefits over this timeframe, or we can continue deferring and expect to keep paying this ongoing cost.&lt;/p&gt;

&lt;p&gt;Be honest about uncertainty in those estimates. If you are saying it costs two sprint cycles per quarter, that should be a real estimate based on real data, not a number you made up to make the argument more compelling. Stakeholders who get burned by overconfident technical estimates stop trusting technical estimates, which makes every subsequent conversation harder.&lt;/p&gt;

&lt;p&gt;The inverse is also true for shipping decisions. When the right call is to ship with known debt rather than delay to fix it, say so explicitly and document it. "We are shipping this with a known shortcut in the session handling code. It is fine for our current traffic levels but will need to be addressed before we scale past X. Estimated cost to address: one engineer week. Suggested timeline: before Q3 scaling work." That kind of explicit acknowledgment does two things: it keeps the debt visible rather than letting it quietly become background noise, and it demonstrates deliberate reasoning, which builds stakeholders' trust that you are managing these decisions thoughtfully rather than just letting things slide.&lt;/p&gt;

&lt;p&gt;The most important habit you can build around technical debt is measuring it. Not in some comprehensive, difficult-to-maintain debt registry, but in the practical proxy metrics that tell you whether it is getting better or worse over time. Cycle time by area of the codebase. Incident frequency by service. Change failure rate. Time to onboard new engineers to different parts of the system. These are imperfect proxies but they are real data, and they give you something to point to when the debt conversation gets abstract.&lt;/p&gt;
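&lt;p&gt;As a rough illustration, two of those proxies can be computed from whatever records your incident tracker and PR log export. The record shapes and numbers below are hypothetical; the aggregation is the point.&lt;/p&gt;

```python
from collections import defaultdict
from statistics import mean

# Hypothetical exports: incidents tagged by service, merged changes
# tagged by codebase area with their cycle time in days.
incidents = [
    {"service": "payments"}, {"service": "payments"},
    {"service": "payments"}, {"service": "search"},
]
changes = [
    {"area": "payments", "cycle_days": 4.0},
    {"area": "payments", "cycle_days": 4.5},
    {"area": "search", "cycle_days": 1.5},
]

# Incident frequency by service.
incident_counts = defaultdict(int)
for incident in incidents:
    incident_counts[incident["service"]] += 1

# Average cycle time by area of the codebase.
cycle_times = defaultdict(list)
for change in changes:
    cycle_times[change["area"]].append(change["cycle_days"])
avg_cycle = {area: mean(days) for area, days in cycle_times.items()}

print(dict(incident_counts))  # {'payments': 3, 'search': 1}
print(avg_cycle)              # {'payments': 4.25, 'search': 1.5}
```

&lt;p&gt;Tracked over successive quarters, a table like that is exactly the artefact that lets you say "payments changes take three times as long" with data behind it rather than as an impression.&lt;/p&gt;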

&lt;p&gt;They also give you a way to demonstrate progress. "The work we did on the payment service over the last two quarters has reduced incident frequency in that area by sixty percent and cut the average cycle time for payment changes from four days to one and a half" is a compelling narrative. It makes the case for ongoing investment in debt reduction more credibly than any theoretical framework could, because it shows that the investment actually worked.&lt;/p&gt;

&lt;p&gt;Technical debt is not a failure of engineering discipline. Every team that has ever shipped software under real constraints has it. The teams that manage it well are not the ones that have less of it - they are the ones that are honest about it, deliberate about prioritising it, and fluent enough in the business language to communicate about it in terms that the people making resource decisions can actually use.&lt;/p&gt;

&lt;p&gt;The goal is not a debt-free codebase. The goal is a codebase where the debt you carry is debt you chose, debt you understand, and debt you are managing toward a specific outcome. That is a much more achievable and much more useful standard.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Next in the series: Leading Through Uncertainty - decision-making under pressure, communication cadence, and maintaining team morale when the path forward is not clear.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>management</category>
      <category>leadership</category>
      <category>culture</category>
      <category>techdebt</category>
    </item>
    <item>
      <title>The Engineering Manager as Coach, Not Boss</title>
      <dc:creator>Steve McDougall</dc:creator>
      <pubDate>Fri, 27 Mar 2026 12:38:28 +0000</pubDate>
      <link>https://dev.to/juststevemcd/the-engineering-manager-as-coach-not-boss-1p90</link>
      <guid>https://dev.to/juststevemcd/the-engineering-manager-as-coach-not-boss-1p90</guid>
      <description>&lt;p&gt;I want you to think about the best manager you have ever had. Not the most technically impressive one, not the one who shipped the most features, but the one who made you genuinely better at your job. I know who that person is for me, and I bet you know who that person is for you too.&lt;/p&gt;

&lt;p&gt;Odds are good that person did not micromanage your work. They probably did not hover over your PRs or tell you exactly how to solve every problem you brought to them. What they likely did was ask you questions that forced you to think more clearly. They gave you feedback that was specific enough to actually act on. They took your career seriously in a way that felt genuine rather than performative. They helped you see the gap between where you were and where you were capable of getting, and then they helped you close it.&lt;/p&gt;

&lt;p&gt;That is coaching. And it is the most underused and underdeveloped skill in engineering management.&lt;/p&gt;

&lt;p&gt;Most engineering managers are never taught how to coach. They come up through the IC track where performance is individual, then they get promoted and suddenly they are responsible for other people's growth without any real framework for how to do it. The instinct that fills the gap is usually managing - telling people what to do, reviewing their work, tracking their progress - rather than coaching - helping people develop the capacity to figure out what to do themselves.&lt;/p&gt;

&lt;p&gt;That distinction matters more than it might sound. A team of engineers who are managed well will execute against a clear plan competently. A team of engineers who are coached well will grow, take on more complex work, make better decisions independently, and stay longer because they feel like they are actually developing as professionals. The output difference between those two teams compounds significantly over time.&lt;/p&gt;

&lt;p&gt;So let's get into what coaching actually looks like in the day-to-day context of engineering management.&lt;/p&gt;

&lt;p&gt;The one-on-one is your primary coaching surface. Not a status meeting. Not a task review. The one-on-one is where you develop your understanding of what this person is working on, what is getting in their way, what they are learning, and what they are ready for next. Done well, it is where the most important growth conversations happen.&lt;/p&gt;

&lt;p&gt;Most one-on-ones fail because they do not have a clear purpose and the manager ends up driving the conversation around whatever is top of mind for them - team updates, upcoming deadlines, process concerns. Flip that. The one-on-one belongs to the engineer. Open with something like: "what is on your mind this week?" or "what do you most want to talk through today?" Then actually follow their lead.&lt;/p&gt;

&lt;p&gt;What you are listening for is not just the content but the subtext. An engineer who says "this sprint has been a bit slow" might be telling you they are bored. An engineer who says "I have just been heads down in the ticket queue" might be telling you they feel disconnected from meaningful work. An engineer who mentions a difficult interaction with a product manager three times across different conversations is probably telling you something important without saying it directly. Coaching requires learning to hear those signals and respond to them honestly rather than staying on the surface.&lt;/p&gt;

&lt;p&gt;Questions are your most important tool as a coach. The instinct for most technically strong managers is to give answers. Someone comes with a problem and you tell them how to solve it. That feels productive and it often is in the short term. But it creates dependency. Every time you give someone an answer you had the chance to help them develop the capability to generate that answer themselves, and you declined it.&lt;/p&gt;

&lt;p&gt;The shift is to respond to problems with questions. Not in a frustrating Socratic way where you refuse to ever be direct, but in a way that draws out the engineer's own reasoning before you add yours. "What have you already tried?" "What do you think is causing it?" "If you had to guess at the best path forward, what would you say?" These questions accomplish two things simultaneously: they give you information about how the engineer is thinking, and they often help the engineer arrive at the answer themselves, which is a much more durable outcome than being told the answer.&lt;/p&gt;

&lt;p&gt;When you do share your perspective, frame it as a perspective rather than a directive. "Here is how I would think about this" rather than "here is what you should do." That keeps the ownership with the engineer and signals that their judgment is part of the equation, not just an obstacle between them and the right answer.&lt;/p&gt;

&lt;p&gt;Performance conversations are where a lot of engineering managers lose their nerve, and that avoidance does real damage. When someone's performance is not where it needs to be, the kindest thing you can do is tell them clearly and specifically, with enough context that they can actually act on it. Delayed feedback is not compassion. It is conflict avoidance dressed up as patience, and it ends up being far harder on the engineer in the long run.&lt;/p&gt;

&lt;p&gt;The structure that works for performance conversations, whether they are corrective or developmental, follows a consistent pattern. Specific observation first - not "your code quality has been slipping" but "the last three PRs you submitted had significant issues that were caught in review and would have caused problems if they had gone to production." Then impact - why this matters for the team, the product, or the engineer's own growth. Then genuine curiosity - what is going on from their side, what might be contributing to this, what do they think they need. And then a clear shared understanding of what improvement looks like and a timeline for revisiting it.&lt;/p&gt;

&lt;p&gt;The specific observation part is what most managers skip or soften into meaninglessness. "I just want to flag that some people have mentioned concerns about collaboration" is not actionable feedback. "In the incident last week, two engineers told me they felt their concerns were dismissed when they raised them in the thread - I want to understand what happened and work through it with you" is actionable feedback. Specificity is the thing that makes feedback usable rather than just uncomfortable.&lt;/p&gt;

&lt;p&gt;Growth plans are one of the most underutilised management tools I know. Not the formal HR document kind that gets filled out once a year and then filed - the real kind, where you and an engineer have an honest conversation about where they want to go, what is in the way, and what you are both going to do about it.&lt;/p&gt;

&lt;p&gt;A useful growth plan starts with a genuine conversation about what the engineer actually wants. Not what you think they should want, not what the standard career ladder says they should be aiming for. What do they actually want? More technical depth? Leadership experience? Exposure to a different part of the stack? The answer to that question changes what kinds of opportunities are meaningful to them and what kinds of stretch assignments will actually develop them rather than just adding to their workload.&lt;/p&gt;

&lt;p&gt;From there, you build backward. If someone wants to move toward a staff-level role, what does the gap between where they are now and what that role requires actually look like? What specific skills, what visibility, what demonstrated impact would they need to show? Get specific enough that both of you could look at a piece of work and agree on whether it counts as progress toward the goal.&lt;/p&gt;

&lt;p&gt;Then identify the opportunities. Not abstract goals - specific opportunities in the actual work ahead. This project coming up would give them the technical leadership visibility they need. This initiative has a coordination component that would develop their cross-team communication. This piece of the codebase is complex enough that owning it would give them the depth that is currently missing from their profile. When you can connect growth goals to real work, growth planning stops being a separate activity and becomes part of how you think about the team's work anyway.&lt;/p&gt;

&lt;p&gt;Revisit the plan regularly. Not in the annual performance review - in one-on-ones, every few weeks. "How is the work on that service going - is it giving you what you were hoping for?" That regularity signals that you are actually invested in the plan rather than using it as a box-checking exercise.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                      Career development conversations are different from growth plans in a specific way: they require you to engage with the engineer's ambitions beyond their current role, and sometimes beyond your team. This is uncomfortable for some managers because it feels like you are helping someone leave. In reality, you are building the kind of trust and loyalty that makes people want to stay.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The engineers who leave most readily are the ones who feel invisible - whose manager has never asked where they want to go, who cannot see a path forward from where they are. The engineers who stay in companies that would otherwise lose them are often the ones who have a manager who takes their development seriously enough to have honest conversations about what they want, even when the answer is complicated.&lt;/p&gt;

&lt;p&gt;If an engineer tells you they want to move into management, help them explore that seriously rather than steering them back toward IC work because you need their technical contribution. If someone tells you they want to work on a different kind of problem than your team addresses, help them think through what that could look like internally before they start looking externally. That kind of engagement builds a relationship where the engineer feels like their interests are actually considered, not just tolerated.&lt;/p&gt;

&lt;p&gt;Feedback culture within a team starts with how the manager gives feedback, but it does not stop there. If you want engineers to give each other honest, constructive feedback - in code review, in design discussions, in retrospectives - you need to model it and you need to make it safe. That means appreciating directness when you see it even when it is directed at you, and explicitly naming feedback behaviours you want to see more of when they show up.&lt;/p&gt;

&lt;p&gt;One habit that is underrated: giving positive feedback with the same specificity you bring to corrective feedback. "Good job on that feature" lands with almost no impact because it tells the engineer nothing about what specifically was good. "The way you handled the rollback plan on that deploy was exactly the kind of thing I want to see more of - you anticipated the failure mode before it happened rather than waiting for it" - that lands. It tells the engineer something specific about their performance and it communicates something about your values. Both of those things are useful.&lt;/p&gt;

&lt;p&gt;The gap between a manager who manages and a manager who coaches is ultimately a gap in how they think about their job. A manager who manages thinks their job is to get work done through their team. A manager who coaches thinks their job is to develop people who can get increasingly complex and important work done without needing to rely on the manager as a decision-making crutch.&lt;/p&gt;

&lt;p&gt;The first version of that job has a ceiling. The second version compounds.&lt;/p&gt;

&lt;p&gt;Engineers who work for coaches become more capable over time. They take on bigger problems, mentor more junior colleagues, contribute to the team in ways that go beyond their individual output. They are also, in my experience, far more likely to speak honestly with you about what is working and what is not - because the coaching relationship is built on genuine engagement rather than authority, and that changes what people are willing to say.&lt;/p&gt;

&lt;p&gt;That honesty, in the end, is one of the more valuable things you can build. Because you can only fix the problems you know about.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Next in the series: Technical Debt - When to Fix, When to Ship. A framework for making trade-off decisions and communicating them to stakeholders who care about outcomes, not architecture.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>management</category>
      <category>leadership</category>
      <category>culture</category>
    </item>
    <item>
      <title>Strategy vs. Execution: How Leaders Set Technical Vision</title>
      <dc:creator>Steve McDougall</dc:creator>
      <pubDate>Fri, 27 Mar 2026 12:36:35 +0000</pubDate>
      <link>https://dev.to/juststevemcd/strategy-vs-execution-how-leaders-set-technical-vision-44k2</link>
      <guid>https://dev.to/juststevemcd/strategy-vs-execution-how-leaders-set-technical-vision-44k2</guid>
      <description>&lt;p&gt;There is a version of technical leadership that looks like this: you are deep in a sprint, your team is shipping, the product roadmap is clear, and everyone knows what they are building. Everything is humming. You feel productive. You feel useful.&lt;/p&gt;

&lt;p&gt;And then six months later you look up and realise you have been executing against a direction that no longer makes sense. The architectural decisions made eighteen months ago are now actively fighting the product requirements coming in today. The platform you are running on does not support the scale you need. The technical debt you kept deferring is now the reason your velocity has halved.&lt;/p&gt;

&lt;p&gt;This is what happens when execution runs ahead of strategy. And in my experience, it is one of the most common failure modes in technical leadership, because the short-term feedback loop for execution is so much tighter and more satisfying than the long-term feedback loop for strategy.&lt;/p&gt;

&lt;p&gt;Execution tells you immediately whether it worked. Strategy takes months or years to validate, and by then the leader who made the call may not even be around to learn from it. That asymmetry creates a natural gravitational pull toward doing over thinking, shipping over planning, tactical clarity over strategic ambiguity. Understanding that pull is the first step toward resisting it when you need to.&lt;/p&gt;

&lt;p&gt;This article is about the work that sits above execution: how senior engineering leaders set technical direction, how you align that direction with product and business goals, how you communicate it to the people building against it, and how you maintain it over time as circumstances change.&lt;/p&gt;

&lt;p&gt;Setting technical vision starts with a question that sounds simple but is actually quite hard to answer well: what problem is this engineering organisation trying to solve? Not "ship features quickly" -- that is a capability, not a purpose. Not "build a reliable platform" -- that is a property, not a direction. The real answer connects technical work to business outcomes and to user needs in a specific and falsifiable way.&lt;/p&gt;

&lt;p&gt;Something like: we are building a data infrastructure that lets our analysts answer any product question within four hours without engineering support. Or: we are rebuilding our checkout system to reduce payment failure rates below one percent so we can expand to markets where card infrastructure is unreliable. Or: we are investing in observability tooling so that the time between a production anomaly and a diagnosis is under fifteen minutes.&lt;/p&gt;

&lt;p&gt;These are statements that have a clear definition of success attached to them. They tell engineers not just what to build but why, and they connect technical decisions to outcomes that the business actually cares about. When your technical vision has that kind of specificity, it becomes a useful decision-making tool. Engineers can look at a proposed piece of work and ask: does this move us toward the goal? If the answer is no, it does not necessarily mean the work is wrong, but it raises a question that is worth answering.&lt;/p&gt;

&lt;p&gt;Without that specificity, technical vision tends to drift into vague aspiration. "We want a clean, scalable, well-tested codebase" is aspirational, but it does not tell anyone what to prioritise when they have to choose between three competing things. Vision that cannot guide a tradeoff is not really vision. It is decoration.&lt;/p&gt;

&lt;p&gt;Aligning technical direction with product and business goals is where most of the actual work of technical leadership happens, and it is messier than the frameworks make it sound. Product and engineering are not always pulling in the same direction. Business stakeholders have timeframes and objectives that do not always map cleanly onto technical realities. There are genuine tensions that have to be surfaced and worked through, not papered over.&lt;/p&gt;

&lt;p&gt;The most effective technical leaders I have observed spend a disproportionate amount of their time in conversations that are not with their engineering team. They are in product reviews. They are in quarterly planning sessions. They are talking to the finance team about what the infrastructure cost structure actually looks like. They are in customer calls listening to the friction points that end up becoming product requirements six months from now. They are building enough context about the business that their technical direction is grounded in something real rather than purely in engineering instinct.&lt;/p&gt;

&lt;p&gt;This matters because technical decisions made in isolation from business context tend to be technically elegant and practically wrong. A beautiful microservices architecture might be the right long-term call, but if the business needs to move fast in the next twelve months and your team has four engineers, it might be the wrong short-term one. The technical leader who understands the business well enough to make that call explicitly, rather than defaulting to the architecturally "correct" answer, is significantly more valuable than one who does not.&lt;/p&gt;

&lt;p&gt;Roadmap ownership is the operational expression of technical vision. A technical roadmap is not a feature list. It is a sequenced set of investments that tell a coherent story about how the system is evolving and why. The sequencing matters as much as the content, because it reflects your actual prioritisation under real constraints rather than an idealised list of everything you want to do.&lt;/p&gt;

&lt;p&gt;When I think about what makes a good technical roadmap, a few properties matter. It is honest about current state. It does not pretend that the existing system is further along than it is, because that dishonesty creates misleading expectations. It distinguishes between investments that enable future capability and investments that address current risk. Those are different categories with different urgency profiles. It has explicit tradeoffs documented. When you chose to sequence thing A before thing B, there was a reason. Write it down so that when circumstances change and someone asks "can we move B up?", you have the context to answer that question thoughtfully rather than from gut instinct.&lt;/p&gt;

&lt;p&gt;And crucially, the roadmap is a living document, not a contract. One of the most common problems I see with technical roadmaps is that they get published once and then decay. The codebase evolves, the product requirements shift, new information comes in -- and the roadmap that was accurate six months ago is now quietly misleading anyone who reads it. Treat your roadmap as something that needs regular maintenance, not something that needs a polished quarterly reveal.&lt;/p&gt;

&lt;p&gt;Communicating technical direction is a distinct skill from having it, and it is one that engineers who move into senior leadership roles often underestimate. The way you talk about technical strategy to your engineering team is different from how you talk about it to your product counterparts, which is different again from how you talk about it to executive stakeholders.&lt;/p&gt;

&lt;p&gt;To your engineering team, the goal is to give people enough context that they can make good local decisions without needing to escalate everything. You want them to understand the direction well enough that when they encounter an ambiguous situation, they can reason about what the right call is rather than waiting for guidance. That means going deep on the why: why is this the direction, what are the tradeoffs we considered, what would make us revisit this. Engineers who understand the reasoning behind a decision are better positioned to execute against it and to flag when something they are seeing in the work suggests the reasoning might be wrong.&lt;/p&gt;

&lt;p&gt;To product and business stakeholders, the translation layer is about outcomes and risk, not architecture. "We are building a service boundary between the payments system and the billing system" is not a compelling framing for a product conversation. "We are making a structural change that will let us run payment experiments independently of billing changes, which should cut our experiment cycle time by about half" is a compelling framing, because it connects the technical work to something product actually cares about.&lt;/p&gt;

&lt;p&gt;Getting good at this translation is not about dumbing things down. It is about understanding what the other person actually needs to know to make good decisions or to trust that you are making good decisions. That is a different question than "what do I know about this topic."&lt;/p&gt;

&lt;p&gt;One of the tensions that comes up consistently in technical leadership is the relationship between strategic investment and near-term delivery pressure. There is almost always more pressure to ship features than there is to invest in the platform, pay down debt, or make structural improvements that will pay off over time. That pressure is legitimate -- the business has real needs -- but if it never gets balanced against platform investment, you end up in a state where the platform is actively fighting your ability to deliver.&lt;/p&gt;

&lt;p&gt;The framing that I have found most useful for navigating this tension is to treat platform investment not as something that competes with delivery but as something that enables delivery over a longer time horizon. The argument is not "we need to stop shipping features to fix the platform." The argument is "here is what the platform investment unlocks, here is the timeline on which it pays off, and here is the cost we are currently paying by deferring it." That framing, with specific numbers attached when possible, tends to land better with business stakeholders than an abstract argument about technical health.&lt;/p&gt;

&lt;p&gt;It also requires you to do the work of actually knowing what the cost of the current state is. That means having data. How much engineering time per sprint is going to work that would not be necessary if the platform were in better shape? What is the approximate blast radius of the current debt in terms of velocity impact? How many incidents in the last quarter were attributable to the areas you are proposing to invest in? This is the kind of evidence that turns a technical argument into a business argument.&lt;/p&gt;

&lt;p&gt;There is a version of technical leadership that is mostly reactive: responding to product requests, triaging incidents, making tactical calls as they come up. That version can look like strong leadership from the outside because things are moving. But it does not create the conditions for the team to do its best work over time. The conditions for sustained, high-quality engineering output require a clear technical direction, explicit prioritisation, and the kind of structural investment that only happens when someone with enough context and enough authority is thinking more than one quarter ahead.&lt;/p&gt;

&lt;p&gt;That is what technical vision is actually for. Not the vision document, not the architecture diagram, not the quarterly roadmap review. The underlying habit of mind that keeps asking: given everything we know about where this product and this business are going, are we building the system that will get us there? And if the honest answer is no, being willing to say so and to do something about it.&lt;/p&gt;

&lt;p&gt;That willingness is what separates technical leaders from technical managers. And it is more about courage than it is about technical knowledge.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Next in the series: The Engineering Manager as Coach -- practical techniques for performance conversations, growth planning, and career development.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>management</category>
      <category>leadership</category>
      <category>culture</category>
    </item>
    <item>
      <title>Building Psychological Safety in Engineering</title>
      <dc:creator>Steve McDougall</dc:creator>
      <pubDate>Fri, 27 Mar 2026 12:34:32 +0000</pubDate>
      <link>https://dev.to/juststevemcd/building-psychological-safety-in-engineering-9fm</link>
      <guid>https://dev.to/juststevemcd/building-psychological-safety-in-engineering-9fm</guid>
      <description>&lt;p&gt;Let me ask you something uncomfortable: when was the last time someone on your team told you they had made a mistake before you found out about it yourself?&lt;/p&gt;

&lt;p&gt;If you have to think hard about that, or if the honest answer is "I am not sure that has ever happened," you probably have a psychological safety problem. Not a people problem. Not a talent problem. A culture problem, and one that sits squarely in your lap as the person responsible for setting the conditions under which your team operates.&lt;/p&gt;

&lt;p&gt;Psychological safety is one of those concepts that gets talked about in engineering leadership circles to the point where it starts to feel like a buzzword. But the research behind it is solid, and the practical reality is straightforward: engineers on teams where they feel safe to take risks, speak up, and make mistakes without fear of punishment consistently outperform engineers on teams where they do not. Google's Project Aristotle found it was the single biggest predictor of team effectiveness across hundreds of internal teams. It matters more than individual talent, more than technical skill, more than compensation.&lt;/p&gt;

&lt;p&gt;So why is it so rare? Partly because it is genuinely hard to build. Partly because the behaviours that build it are counterintuitive for technically minded people who have been rewarded their whole careers for being right. And partly because leaders often think they have it when they do not, because the absence of psychological safety is largely invisible from the top.&lt;/p&gt;

&lt;p&gt;Here is what I mean by that. When engineers do not feel safe, they do not tell you. They work around problems quietly. They do not raise concerns in meetings. They stay silent when they disagree. They fix bugs without mentioning them. They leave rather than confront a difficult situation. From your vantage point as a manager, everything might look fine. The problems are just invisible, which is exactly what makes the culture so hard to improve from the inside if you are not actively looking for the signals.&lt;/p&gt;

&lt;p&gt;So let's talk about what to look for, what to build, and what to stop doing.&lt;/p&gt;

&lt;p&gt;The first signal to watch is who speaks in group settings. In a psychologically safe team, contributions are distributed. Junior engineers raise concerns. Senior engineers acknowledge uncertainty. Different people push back on ideas in different meetings. If you notice that the same two or three people do all the talking, or that nobody ever disagrees with whoever has the most tenure, or that critical questions always come up in private after the meeting rather than in the meeting itself, those are meaningful signals. The information is moving, but it is moving through channels that feel safer, which means you are not actually getting it when and where it matters.&lt;/p&gt;

&lt;p&gt;The second signal is how the team handles incidents and failures. This is the clearest window into your culture. When something breaks in production, what happens? Is the post-mortem focused on understanding the failure and improving the system, or does it subtly or not so subtly focus on who was responsible? Do engineers run toward an incident or away from it? Do people escalate early when something is going wrong, or do they hold out hoping to fix it themselves rather than surface the problem?&lt;/p&gt;

&lt;p&gt;I have seen teams where engineers would rather work through the night on a production issue than escalate because they were afraid of how the escalation would be received. That is not dedication. That is fear. And it is a direct result of an environment where being the person who raised the alarm felt more dangerous than trying to quietly fix the problem.&lt;/p&gt;

&lt;p&gt;The blameless post-mortem is the standard recommendation for addressing this, and it is a good one - but only if it is actually blameless. A lot of post-mortems that are nominally blameless still subtly assign fault through the framing of their questions: "why did the engineer merge without a code review" rather than "what in our process allowed this change to go out without sufficient review." The difference in those two questions is the difference between a culture that learns and a culture that punishes while pretending not to.&lt;/p&gt;

&lt;p&gt;Write your post-mortem templates with this framing explicitly built in. Questions like: what in our system made this failure possible, what would have needed to be different for this to have been caught earlier, what can we change about our process to prevent this class of failure. Not: who approved this, why was this not caught, whose responsibility was this. If you run post-mortems this way consistently, over time it changes how the team thinks about failure - as information about the system, not as evidence of someone's incompetence.&lt;/p&gt;

&lt;p&gt;Feedback loops are the second major lever. A team without healthy feedback loops is a team where problems silently compound until they become crises. Building those loops requires two distinct things: creating channels for feedback to flow, and demonstrating through your own behaviour that feedback is welcome and acted on.&lt;/p&gt;

&lt;p&gt;The first part is structural. Regular one-on-ones where you ask specific questions rather than open-ended check-ins. Retrospectives that have genuine psychological safety built into them (anonymous input tools can help here if the team is not yet comfortable with open discussion). Skip-level conversations if your org is large enough. Anonymous pulse surveys for tracking sentiment over time. These are all mechanisms for surfacing information that would otherwise stay invisible.&lt;/p&gt;

&lt;p&gt;The second part is behavioural and it is more important. If engineers give you feedback and nothing happens, they stop giving feedback. If you react defensively when someone raises a concern, they stop raising concerns. If you say you want honesty but subtly reward people who tell you what you want to hear, you will get people who tell you what you want to hear.&lt;/p&gt;

&lt;p&gt;The most powerful thing you can do to build feedback culture is to model receiving feedback well. When someone tells you something uncomfortable, thank them specifically for the specificity of the feedback. Then act on it and tell them you acted on it. Do this consistently and over time it signals that feedback in this team is not just tolerated but genuinely valued and used. That signal compounds. People talk to each other about how you respond, and your reputation as someone who receives feedback well becomes one of the structural features of your team's culture.&lt;/p&gt;

&lt;p&gt;Ask for feedback on yourself directly and explicitly. Not "any feedback for me?" in a one-on-one where the power dynamic makes honest negative feedback almost impossible. More specific prompts: "I ran that planning session last week - was there anything about the format that was not working for you?" or "I made a call on the architecture last month and I have been wondering whether I handled that well - what was your read on it?" Specific questions lower the barrier enough that more honest answers become possible.&lt;/p&gt;

&lt;p&gt;The third lever is how you respond to mistakes, and this is the most visible signal your team gets about what the culture actually is regardless of what you say it is.&lt;/p&gt;

&lt;p&gt;When an engineer makes a significant mistake - pushes a bug to production, misestimates badly, handles a difficult customer situation poorly - your response in that moment is teaching the whole team what the rules are. Not just the person involved. Everyone who hears about it is watching what happens.&lt;/p&gt;

&lt;p&gt;The response that builds psychological safety has a few consistent properties. It is proportionate to the situation. It focuses on understanding what happened and learning from it rather than on assigning fault. It is private when the situation calls for privacy. It treats the person as an intelligent adult who does not need to be punished but who does need support in improving.&lt;/p&gt;

&lt;p&gt;None of that means being soft on genuinely unacceptable behaviour. If someone repeatedly ignores code review conventions, ships without testing, or behaves disrespectfully to a colleague, those are different situations that call for direct and specific feedback. The distinction is between mistakes (things that happen despite good intentions and reasonable effort) and patterns of behaviour that reflect poor judgment or disrespect. Psychological safety is not about protecting people from the consequences of repeated poor behaviour. It is about ensuring that honest effort and reasonable risk-taking are not punished.&lt;/p&gt;

&lt;p&gt;There is a specific failure mode worth naming here because it is common and because managers who fall into it often do not know they are doing it. It is what I think of as the chilling effect of public criticism. When a leader criticises an engineer's work in front of the team (in a code review, in a meeting, in a Slack thread where multiple people are watching) the impact is not contained to that one person. Every engineer on the team who sees it learns something about the risk of being visible and vulnerable. The criticised engineer may recover. But the observation travels further than you think, and it quietly teaches people to keep their heads down.&lt;/p&gt;

&lt;p&gt;Code review is where this plays out most often in engineering teams. Reviews where every comment is framed as a problem to be fixed, where reviewers never acknowledge what is working well, and where tone is dismissive or impatient -- that culture compounds over time and eventually starts to affect how openly engineers communicate and how willing they are to take on work that is outside their comfort zone.&lt;/p&gt;

&lt;p&gt;It is worth reviewing the norms around code review explicitly with your team. Not to make code review soft, but to make it genuinely useful. The goal of code review is to improve the code and grow the engineer. Both parts matter. A review that consistently improves the code but leaves the engineer feeling beaten up is a review that is failing at half its job.&lt;/p&gt;

&lt;p&gt;One last thing that does not get enough attention: the connection between psychological safety and retention. Engineers leave teams for a lot of reasons, but one of the more consistent ones is the slow erosion of feeling like they can do their best work. That erosion is usually not a single incident but an accumulation of small signals - a piece of feedback that was not heard, a mistake that was handled poorly, a concern that was dismissed, a pattern of who gets credit and who does not. It is invisible until someone puts in their notice, at which point the manager usually asks "what happened?" without realising the honest answer is "a lot of things, over a long time."&lt;/p&gt;

&lt;p&gt;Psychological safety is not a team-building exercise. It is not a workshop or a values statement. It is the cumulative result of thousands of small interactions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how you respond to a mistake&lt;/li&gt;
&lt;li&gt;how you run a post-mortem&lt;/li&gt;
&lt;li&gt;how you receive feedback&lt;/li&gt;
&lt;li&gt;how you handle a tense code review&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Interactions that either add up to a culture where people feel safe enough to do their best work, or do not.&lt;/p&gt;

&lt;p&gt;You build it slowly and you can damage it quickly. That asymmetry is worth keeping in mind every time you have one of those small interactions. Because you are always, whether you realise it or not, signalling to your team what the rules are.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Next in the series: Strategy vs. Execution -- how senior engineering leaders align product, technology, and business goals into a coherent technical direction.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>management</category>
      <category>culture</category>
      <category>leadership</category>
    </item>
    <item>
      <title>Scaling Engineering Teams Without Losing Velocity</title>
      <dc:creator>Steve McDougall</dc:creator>
      <pubDate>Fri, 27 Mar 2026 12:32:45 +0000</pubDate>
      <link>https://dev.to/juststevemcd/scaling-engineering-teams-without-losing-velocity-2ldg</link>
      <guid>https://dev.to/juststevemcd/scaling-engineering-teams-without-losing-velocity-2ldg</guid>
      <description>&lt;p&gt;Here is something that surprises almost every engineering manager the first time they live through it: growth slows you down before it speeds you up.&lt;/p&gt;

&lt;p&gt;You hire five more engineers. You expect output to increase proportionally. Instead, things get messier. PRs take longer to review. Standups run over. Someone built a feature that conflicts with something another team was working on. The planning process that worked for eight people completely breaks down at fifteen. Velocity, by most measures, actually drops.&lt;/p&gt;

&lt;p&gt;This is not a failure of hiring. It is a failure of org design. And it happens so reliably, at so many companies, that it has become one of the most predictable traps in engineering leadership.&lt;/p&gt;

&lt;p&gt;The good news is that it is largely avoidable if you think about the structural side of growth before you need it, rather than after things are already falling apart. That is what this article is about. Not headcount strategy in the abstract, but the practical mechanics of how teams grow: org structure, hiring cycles, cross-team coordination, and the specific bottlenecks that kill velocity as you scale.&lt;/p&gt;

&lt;p&gt;I want to start with the thing most scaling guides skip, which is the relationship between team size and communication overhead. It is not intuitive until you see the numbers.&lt;/p&gt;

&lt;p&gt;When you have a team of four engineers, there are six possible communication paths between them. Add a fifth person and you get ten. Add a sixth and you get fifteen. The formula is &lt;code&gt;n(n-1)/2&lt;/code&gt;, and the point is not to memorise it but to understand the curve. Every person you add increases the number of relationships, dependencies, and potential misalignments on the team. At some point, that overhead exceeds the capacity benefit of adding another person.&lt;/p&gt;

&lt;p&gt;This is what Brooks' Law is actually pointing at when it says adding manpower to a late software project makes it later. It is not that people are useless. It is that onboarding and coordination costs are real, and if you add people without managing the structural overhead, you pay more than you gain.&lt;/p&gt;

&lt;p&gt;The practical implication is that teams above a certain size need organisational structure to remain effective, not because structure is inherently good, but because it reduces the surface area of coordination that each person has to manage. Smaller, autonomous teams with clear ownership can each operate with low overhead while the aggregate output scales. This is the core logic behind the two-pizza team concept, and it holds up in practice even if the pizza metric itself is a bit silly.&lt;/p&gt;

&lt;p&gt;So when should you split a team? The signals I watch for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;when more than half your standup is irrelevant to most of the people in it&lt;/li&gt;
&lt;li&gt;when engineers are regularly blocked waiting for decisions from a single point of authority&lt;/li&gt;
&lt;li&gt;when the codebase has grown to the point where nobody fully understands the system end-to-end&lt;/li&gt;
&lt;li&gt;when you have grown past eight or nine people and the coordination overhead is visibly slowing things down&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Any one of those is a reasonable trigger. Two or more at once and you are already overdue.&lt;/p&gt;

&lt;p&gt;How you split matters as much as when. The worst splits are by technical layer: frontend team, backend team, infrastructure team. That structure creates handoff-heavy workflows where a single feature requires coordination across three teams and nobody owns the outcome end-to-end. Conway's Law tells you that your system architecture will come to mirror your org structure, so if you want loosely coupled systems, you need loosely coupled teams with bounded ownership. Organise around product domains or user-facing capabilities instead. A team that owns the payments experience end-to-end - from the frontend interaction to the backend logic to the database schema - can move fast without waiting on anyone else.&lt;/p&gt;

&lt;p&gt;The flip side of this is that you need to be deliberate about shared infrastructure. When teams own their domains end-to-end, they will inevitably have shared concerns: authentication, logging, deployment pipelines, internal libraries. Left unaddressed, every team solves these independently and you end up with seven slightly different implementations of the same thing. The solution is a platform or infrastructure function - a small team explicitly chartered to own the things that are shared - with a strong bias toward building self-service tooling rather than becoming a bottleneck that product teams have to request work from.&lt;/p&gt;

&lt;p&gt;Now let's talk about hiring cycles, because this is where a lot of scaling plans go wrong in a different way.&lt;/p&gt;

&lt;p&gt;Hiring has a lag. From the moment you open a role to the moment a new hire is meaningfully contributing is typically four to six months, sometimes more. That includes recruiting time, interview process, notice period, onboarding, and ramp-up. If you wait until your team is visibly overloaded to start hiring, you will be waiting for relief for a long time while the people already on your team burn out.&lt;/p&gt;

&lt;p&gt;The right time to start a hiring cycle is before you need the capacity, not when you need it. That requires a planning horizon most engineering managers are not used to operating on. You need to look three to six months out and ask: what does this team need to look like at that point, given what we have committed to delivering? Work backward from that and you will usually find that you needed to start hiring two months ago.&lt;/p&gt;

&lt;p&gt;This requires building a relationship with your recruiting function and being a genuine partner rather than just a consumer. Understand what is in the pipeline. Give fast, specific feedback on candidates. Keep your job descriptions current. Show up to sourcing conversations. Engineering managers who treat recruiting as someone else's job tend to have longer time-to-hire and weaker candidate pools than those who engage actively.&lt;/p&gt;

&lt;p&gt;One thing worth doing if you have the data: track your actual time-to-productivity for new hires by role and seniority. Not time-to-hire, but time-to-first-meaningful-contribution and time-to-full-productivity. That data tells you how far ahead you need to plan, and it often reveals onboarding gaps that are adding weeks of unnecessary ramp-up time.&lt;/p&gt;

&lt;p&gt;Cross-team handoffs are where velocity goes to die once you have multiple teams. I have seen this pattern so many times it is almost a cliche: two teams are working on adjacent parts of a system, the interface between them is not well defined, and what should be a clean integration turns into a weeks-long negotiation over API contracts, data ownership, and who is responsible for what.&lt;/p&gt;

&lt;p&gt;The solution is not more meetings between teams. It is clearer ownership and better written contracts upfront. Before work begins on anything that spans team boundaries, you want a written definition of: who owns what, what the interface looks like, what the expected behaviour is, and what the escalation path is when something goes wrong. This does not have to be elaborate. A short technical spec or an RFC that both teams have reviewed and agreed to is usually enough.&lt;/p&gt;

&lt;p&gt;Some teams formalise this with an internal RFC process, where any significant technical change that affects other teams requires a written proposal with a comment period before work begins. Done lightly, this is genuinely useful. Done heavily, it becomes bureaucratic overhead that slows everything down. The right level depends on your team size and the pace of change in your system. At ten engineers, a quick shared doc and a Slack thread is probably enough. At fifty, you probably want something more structured.&lt;/p&gt;

&lt;p&gt;Dependency management is a related problem that gets worse as teams scale. When team A is blocked on team B's work, that is a velocity killer. The standard advice is to minimise cross-team dependencies, which is correct but not always achievable. When dependencies are unavoidable, the goal is to surface them as early as possible, sequence work so that the dependent team can start on other things, and have a clear escalation path when a dependency is at risk of slipping.&lt;/p&gt;

&lt;p&gt;This is one of the things that makes engineering planning genuinely hard at scale. You are not just sequencing work within a team. You are managing a dependency graph across multiple teams, each with their own priorities and constraints. The teams that do this well treat inter-team dependencies as first-class items in planning - visible, tracked, and actively managed - rather than background assumptions that only surface when they blow up.&lt;/p&gt;

&lt;p&gt;Let me talk about a specific bottleneck that I see consistently kill velocity in growing teams: the single point of decision authority.&lt;/p&gt;

&lt;p&gt;It usually starts innocuously. A senior engineer or tech lead has strong opinions and good judgment, so decisions naturally flow through them. This works fine at small scale. At scale, it creates a queue. Every architectural decision, every significant PR, every cross-cutting concern sits waiting for one person's input. That person becomes increasingly overloaded and increasingly a blocker, not because they are doing anything wrong but because the structure has not kept up with the team's growth.&lt;/p&gt;

&lt;p&gt;The solution is to distribute decision-making authority deliberately. That means being explicit about who has authority over what class of decision, investing in documentation and principles that allow people to make good decisions independently, and creating a culture where "I made a call and here is my reasoning" is the norm rather than "I waited for someone to tell me what to do."&lt;/p&gt;

&lt;p&gt;This requires accepting that some decisions made without your input will be decisions you would have made differently. That is an uncomfortable tradeoff. But a slightly suboptimal decision made quickly and autonomously is almost always better for team velocity than the perfect decision that took two weeks to reach because everything had to go through one person.&lt;/p&gt;

&lt;p&gt;The other bottleneck worth naming explicitly is the release process. Teams that can deploy independently and safely move faster than teams that share a deployment pipeline or a release schedule. If your release process requires coordination across multiple teams or a human approval gate that creates a queue, that is a structural constraint on your velocity that hiring more people will not fix. Investing in a deployment pipeline that gives teams independent, safe, fast release capability is one of the highest-leverage infrastructure investments you can make as you scale.&lt;/p&gt;

&lt;p&gt;There is a meta-principle underlying most of what I have described: the goal of organisational design as you scale is to keep the blast radius of any individual team's decisions small while maximising their autonomy within that boundary. Small blast radius means that when a team makes a mistake, it does not cascade across the whole system. High autonomy means the team can move fast without waiting for permission or coordination.&lt;/p&gt;

&lt;p&gt;Those two things are in tension, and the job of engineering leadership is to find the right balance for your organisation at its current size and stage. The balance shifts as you grow. What worked at fifteen engineers will break at forty. What works at forty will need to change at a hundred. The managers who navigate this well are the ones who keep asking "is our structure still serving us, or are we serving our structure?" and are willing to change the answer when it needs to change.&lt;/p&gt;

&lt;p&gt;Growth is not just an execution problem. It is a design problem. The teams that scale well treat their org structure, hiring cadence, and cross-team coordination as things that need to be actively designed and maintained, not defaults that exist until they break.&lt;/p&gt;

&lt;p&gt;Start designing before you need to. You will be glad you did.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Next in the series: Building Psychological Safety in Engineering -- creating a culture where failure is a learning opportunity, not a liability.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>management</category>
      <category>leadership</category>
      <category>culture</category>
    </item>
    <item>
      <title>Hiring Engineers: A Manager's Playbook</title>
      <dc:creator>Steve McDougall</dc:creator>
      <pubDate>Fri, 27 Mar 2026 12:30:53 +0000</pubDate>
      <link>https://dev.to/juststevemcd/hiring-engineers-a-managers-playbook-3o8o</link>
      <guid>https://dev.to/juststevemcd/hiring-engineers-a-managers-playbook-3o8o</guid>
      <description>&lt;p&gt;Hiring is one of the highest-leverage things you will do as an engineering manager. A single great hire compounds over years. A single bad hire - or more precisely, a bad hiring process that lets the wrong person through while filtering out the right ones; costs you more than you think. Not just in time and salary, but in team morale, velocity, and the invisible tax of managing a poor fit.&lt;/p&gt;

&lt;p&gt;Most engineering hiring processes are broken in ways that managers do not fully recognise until they have been on the other side of them. I have sat in hiring debriefs where five engineers gave five different verdicts on the same candidate and nobody could articulate why they felt the way they did. I have seen candidates with brilliant portfolios fail whiteboard tests that had nothing to do with the actual job. I have watched great engineers get filtered out because they had a quiet interview style and the interviewer mistook silence for incompetence.&lt;/p&gt;

&lt;p&gt;The goal of this article is to help you build a process that is structured enough to reduce those failure modes, flexible enough to surface genuine talent, and honest enough to actually tell candidates what working on your team is like. That last one matters more than most people realise. Hiring is a two-way evaluation, and the best candidates have options.&lt;/p&gt;

&lt;p&gt;Let's walk through the full arc: structure, technical assessment, culture and values alignment, and onboarding. Not as separate checklists but as a coherent system that you design intentionally.&lt;/p&gt;

&lt;p&gt;Defining what you are actually hiring for is the step most hiring processes skip or do half-heartedly. A job description is not a definition of success. "Strong communication skills and a passion for technology" is not a definition of success. Before you post a role, sit down with a blank document and answer three questions: what does this person need to be able to do in the first ninety days to be considered a strong hire? What do they need to be able to do after a year? What specific gaps on the team are we trying to close?&lt;/p&gt;

&lt;p&gt;Those answers should drive everything else. The technical assessment should test for the actual skills the role requires. The interview questions should probe for the actual behaviours the role demands. The evaluation criteria should map directly to those definitions of success. When you have that clarity upfront, the rest of the process becomes much easier to design and much harder to game.&lt;/p&gt;

&lt;p&gt;It also forces an honest conversation about scope. Are you hiring a senior engineer to independently own complex technical problems, or a mid-level engineer to execute on well-defined work? Those are different roles, different assessments, and different interview conversations. Conflating them (which is extremely common) leads to either overhiring for the work or underhiring for the scope and setting someone up to fail.&lt;/p&gt;

&lt;p&gt;The interview structure itself should follow a consistent format for every candidate for the same role. Not because consistency is a bureaucratic virtue but because without it, you cannot make fair comparisons. If candidate A had a rigorous technical discussion and candidate B got a casual conversation about their career history, your debrief is comparing two different things. Structured interviews reduce that problem significantly.&lt;/p&gt;

&lt;p&gt;A reasonable interview structure for a senior engineering role looks something like this. A recruiter or hiring manager screen that covers career background, motivations, and basic role fit; thirty minutes tops, mostly conversational. A technical assessment that gives you signal on the actual skills the role requires; more on format in a moment. A values and working-style conversation with one or two people on the team. And a final conversation with the hiring manager that covers the role more deeply, answers the candidate's questions, and gives you a chance to assess how they think about their work at a higher level.&lt;/p&gt;

&lt;p&gt;The exact structure will vary by seniority and team. But the principle is the same: each stage should have&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a clear purpose&lt;/li&gt;
&lt;li&gt;a clear set of things it is trying to evaluate&lt;/li&gt;
&lt;li&gt;a consistent format, so that different candidates are measured against the same bar&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Technical assessment is where the most disagreement happens and where the most damage is done. Let me be direct about what I think: timed algorithm puzzles and whiteboard problems that bear no resemblance to the actual job are a poor way to assess most engineering roles. They test a specific kind of performance under artificial pressure that correlates poorly with day-to-day engineering work. They also systematically disadvantage experienced engineers who have been in industry roles and simply do not practice leetcode.&lt;/p&gt;

&lt;p&gt;That does not mean technical assessment is not important. It absolutely is. It means the assessment should reflect the actual work.&lt;/p&gt;

&lt;p&gt;For most backend engineering roles, a practical take-home that asks the candidate to do something roughly analogous to what they would do on the job is a better signal than a timed algorithm test. Something like: here is a small API, extend it with this feature, write tests, and leave notes on any tradeoffs you made. That tells you how someone actually writes code, how they think about testing, how they communicate their decisions; &lt;em&gt;all things that matter in the role&lt;/em&gt;. You can review it asynchronously, which is fairer to candidates in different timezones or with jobs that make synchronous sessions difficult.&lt;/p&gt;

&lt;p&gt;If you do a live technical session, make it collaborative rather than evaluative in the traditional sense. Work through a problem together. Let the candidate look things up. Ask them to walk you through their reasoning. That is a much closer simulation of actual engineering work than asking someone to produce a correct answer under observation with no resources.&lt;/p&gt;

&lt;p&gt;One thing worth doing regardless of format: review your technical assessment regularly to check whether it is actually predicting performance. If you can look back at candidates you hired and compare their assessment performance to their on-the-job performance, you will often find that the correlation is weaker than you assumed. That is useful information for calibrating the assessment over time.&lt;/p&gt;

&lt;p&gt;Culture fit is a phrase that gets misused constantly, so let's reframe it. What you are actually evaluating in a "culture fit" conversation is values alignment and working style compatibility. Not whether someone shares your hobbies or went to the same kind of school. The distinction matters because culture fit as commonly practiced is one of the most reliable vectors for homogeneity in hiring.&lt;/p&gt;

&lt;p&gt;Values alignment is about the things that actually affect how someone works: how they handle disagreement, how they respond to ambiguity, how they communicate when something is going wrong, whether they default to transparency or opacity under pressure, how they think about ownership. Those are things you can probe for directly with behavioural questions.&lt;/p&gt;

&lt;p&gt;Behavioural questions should be specific and past-oriented. "Tell me about a time when you had to push back on a technical decision you disagreed with" gives you real signal. "Are you someone who speaks up when you disagree" gives you the answer the candidate thinks you want. The difference in data quality between those two questions is significant.&lt;/p&gt;

&lt;p&gt;A few questions that reliably surface useful signal across seniority levels:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tell me about a time a project you were responsible for went significantly off track. What happened and what did you do?&lt;/li&gt;
&lt;li&gt;Tell me about a piece of feedback that genuinely changed how you work.&lt;/li&gt;
&lt;li&gt;How do you typically approach a technical decision when the right answer is not obvious?&lt;/li&gt;
&lt;li&gt;Tell me about a time you worked with someone whose working style was very different from yours.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What you are listening for is not the story but the self-awareness. Does this person understand their own patterns? Do they take ownership of things without being defensive? Do they demonstrate genuine curiosity and willingness to learn? Those qualities matter across almost every technical role and almost every team culture.&lt;/p&gt;

&lt;p&gt;The debrief structure matters as much as the interview itself. A common failure mode is the unstructured debrief where the first person to speak sets the frame and everyone else anchors to their opinion. This is how groupthink gets baked into hiring decisions.&lt;/p&gt;

&lt;p&gt;A better approach: everyone writes their evaluation independently before the debrief begins. Not a detailed report, just a hire or no-hire recommendation and a few bullet points on the key evidence for each dimension you were evaluating. Then in the debrief, go around the room before any open discussion. Once everyone has shared their read, then you discuss disagreements.&lt;/p&gt;

&lt;p&gt;This surfaces more honest signal and makes disagreements more productive. When two interviewers have very different reads on the same candidate, that disagreement is itself information worth understanding. Sometimes it means the candidate gave different answers to different people. Sometimes it means the interviewers were evaluating for different things. Either way, working through it leads to a better decision.&lt;/p&gt;

&lt;p&gt;Define your evaluation rubric before the process starts, not after. Dimensions like technical depth, communication clarity, problem-solving approach, and ownership mindset should have concrete descriptions of what strong, acceptable, and weak looks like for the role in question. Rubrics are not bureaucratic overhead. They are the thing that makes your process defensible and improvable.&lt;/p&gt;

&lt;p&gt;Onboarding is the part of the hiring process that most engineering managers treat as someone else's responsibility. It is not. The way you integrate a new engineer into the team in their first sixty days has a direct effect on how quickly they become productive, how connected they feel to the team, and whether they stay past the twelve-month mark.&lt;/p&gt;

&lt;p&gt;A good engineering onboarding plan has a few consistent elements. It starts before day one: send the new hire an onboarding doc before they start so they know what to expect in week one. Not a firehose of information, just an overview of what the first few weeks look like and who they will be meeting.&lt;/p&gt;

&lt;p&gt;In the first week, the goal is orientation, not productivity. Get their environment set up, walk them through the architecture at a high level, introduce them to the key people they will work with, and give them something small but real to ship. That first commit or merged PR is important for psychological reasons; it makes the new hire feel like a contributor rather than an observer.&lt;/p&gt;

&lt;p&gt;In weeks two through four, graduate the complexity. Give them increasingly meaningful work with explicit context about why it matters and what good looks like. Run a proper one-on-one at the end of week one, week two, and week four specifically to check in on how the onboarding is going and surface any confusion or friction early.&lt;/p&gt;

&lt;p&gt;Assign a buddy. Not a formal mentor relationship, just someone on the team who has been around for a while and is available to answer the questions the new hire is too self-conscious to ask their manager. Questions like "where does this documentation actually live" or "is it normal that this service takes fifteen minutes to build" are exactly the kind of thing a buddy handles well and that a new hire will not raise in a one-on-one for fear of seeming underprepared.&lt;/p&gt;

&lt;p&gt;At the sixty-day mark, have an explicit conversation about how the onboarding has gone. What worked? What was confusing? What would have helped to know earlier? This is partly for the new hire's benefit and partly for yours. The feedback you get from new engineers about your onboarding process is some of the most valuable signal you have for improving it, because they just experienced it with fresh eyes.&lt;/p&gt;

&lt;p&gt;One broader principle worth naming: the hiring process is a product. It has users (candidates and interviewers), it produces an output (a hiring decision), and it can be improved iteratively based on data. Treat it that way.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Track your offer acceptance rate.&lt;/li&gt;
&lt;li&gt;Track time-to-hire.&lt;/li&gt;
&lt;li&gt;Talk to candidates who declined your offers and find out why.&lt;/li&gt;
&lt;li&gt;Talk to new hires at the sixty-day mark and find out where the process was unclear or misleading.&lt;/li&gt;
&lt;li&gt;Run a retrospective on your process once a quarter.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most engineering teams never do any of this. They run the same hiring process for years without measuring whether it works, then wonder why their candidate pool is shallow or their offer acceptance rate is low. A small amount of deliberate iteration goes a long way.&lt;/p&gt;

&lt;p&gt;The last thing I want to say about hiring is about honesty. Be honest with candidates about the role, the team, and the company. Tell them what is genuinely hard about working there. If your codebase has serious technical debt, say so. If the team is going through a restructuring, say so. If the role requires a lot of on-call, say so clearly and early.&lt;/p&gt;

&lt;p&gt;This feels counterintuitive because you want to sell the role. But candidates who join with an accurate picture of what they are getting into are better retained and faster to trust you than candidates who discover the gap between the pitch and the reality in their first month. The best hiring conversations I have seen treat the candidate like an intelligent adult who deserves real information, not a polished brand narrative.&lt;/p&gt;

&lt;p&gt;That respect carries forward into the relationship if they join. And it saves everyone a painful offboarding conversation six months later.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Next in the series: Scaling Engineering Teams Without Losing Velocity - org design, hiring cycles, and avoiding the bottlenecks that slow teams down as they grow.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>management</category>
      <category>leadership</category>
      <category>culture</category>
    </item>
    <item>
      <title>From IC to Manager: First Steps for New Engineering Leads</title>
      <dc:creator>Steve McDougall</dc:creator>
      <pubDate>Fri, 27 Mar 2026 12:29:13 +0000</pubDate>
      <link>https://dev.to/juststevemcd/from-ic-to-manager-first-steps-for-new-engineering-leads-4h0g</link>
      <guid>https://dev.to/juststevemcd/from-ic-to-manager-first-steps-for-new-engineering-leads-4h0g</guid>
      <description>&lt;p&gt;The first thing I want to tell you is that the discomfort you are feeling is not a warning sign. It is the job working correctly.&lt;/p&gt;

&lt;p&gt;When you step into an engineering leadership role for the first time, almost everything that made you good at your previous job stops being directly useful. The skills that got you promoted - the ability to reason through a hard problem, write clean code, ship features reliably - those are no longer your primary tools. Your primary tool is now other people. And working with people is a fundamentally different craft from working with code.&lt;/p&gt;

&lt;p&gt;That transition is jarring for most engineers, and it catches a lot of first-time managers completely off guard. Not because they are not capable, but because nobody told them what to actually expect.&lt;/p&gt;

&lt;p&gt;So let's talk about what to actually expect.&lt;/p&gt;

&lt;h2&gt;What Just Changed (And What Didn't)&lt;/h2&gt;

&lt;p&gt;Here is the uncomfortable truth about moving into engineering management: your output is no longer measurable in the same way. When you were an IC, you could point to pull requests, shipped features, performance improvements, and bug fixes. There was a feedback loop. You could tell, more or less, whether you were doing well.&lt;/p&gt;

&lt;p&gt;As a manager, your output is the team's output. That is a much longer feedback loop, and it is far less legible. Did that one-on-one conversation you had three weeks ago lead to someone shipping better work today? Did the process change you introduced last month reduce friction on the team? You often will not know for weeks or months, and sometimes you will never know for certain.&lt;/p&gt;

&lt;p&gt;This drives a lot of new engineering managers straight back to coding. Not because they need to - they just miss the feedback loop. They miss feeling productive in a way they can measure.&lt;/p&gt;

&lt;p&gt;I am not going to tell you to never write code again. That is both unrealistic and unnecessary. But I will tell you that if you are writing code to avoid the uncomfortable parts of your new job, you are solving the wrong problem. The goal is to get comfortable with the longer feedback loop, not to escape it.&lt;/p&gt;

&lt;p&gt;What did not change: you still need to understand the technical work deeply. Not at a "I could have written that PR myself" level necessarily, but at a "I can have a real conversation about this tradeoff and push back when something feels wrong" level. Technical credibility matters enormously in engineering leadership. You earn it by staying engaged with the work, asking good questions in code review, understanding the architecture, and remembering what it actually feels like to ship something under pressure.&lt;/p&gt;

&lt;p&gt;The shift is not from technical to non-technical. It is from doing the technical work to enabling others to do it better than you could alone.&lt;/p&gt;

&lt;h2&gt;Letting Go of the Keyboard&lt;/h2&gt;

&lt;p&gt;This is the hardest part for most engineers, so let's spend some real time here.&lt;/p&gt;

&lt;p&gt;You are probably good at coding. Maybe very good. You have built up years of intuition about how to structure a problem, where the edge cases are, what a clean solution looks like. And now you are sitting in meetings watching your team write code that you could see the issues in immediately, and you have to... not fix it yourself.&lt;/p&gt;

&lt;p&gt;That feeling does not go away quickly. But here is a reframe that helped me when I was working through this with engineers making the management transition: your job is no longer to write the best code. Your job is to build a team that writes better code than you ever could alone.&lt;/p&gt;

&lt;p&gt;That reframe changes the question from "why am I not fixing this?" to "how do I help this person grow into someone who catches this themselves?" Those are very different questions, and they lead to very different actions.&lt;/p&gt;

&lt;p&gt;Concretely, letting go of the keyboard looks like this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reviewing instead of rewriting.&lt;/strong&gt; When you see a PR that you would have written differently, write a thoughtful review comment explaining your reasoning instead of just fixing it yourself. This is slower in the short term and faster in the long term. The engineer learns something, and next time they will write it better without needing your input.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Asking questions instead of giving answers.&lt;/strong&gt; When an engineer comes to you with a problem, resist the instinct to solve it. Ask questions instead. "What have you already tried? What do you think is causing it? What would you do if you had to take a guess?" This feels slower and sometimes frustrating (for both of you) but it builds problem-solving independence. Engineers who get answers handed to them stay dependent. Engineers who are coached through problems become more capable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Delegating things that make you nervous.&lt;/strong&gt; The tasks you are most reluctant to hand off are usually the tasks you most need to hand off. That is often because they are high-visibility or technically complex, and you do not fully trust someone else to handle them yet. That distrust is sometimes justified, but often it is just anxiety. Start with lower-stakes delegation and build up. Give people room to own something end-to-end, and resist the urge to hover.&lt;/p&gt;

&lt;h2&gt;Managing Your Time as a New Leader&lt;/h2&gt;

&lt;p&gt;Your calendar is about to become your main work surface. That is a real adjustment.&lt;/p&gt;

&lt;p&gt;As an IC, your calendar was mostly something that interrupted your work. Meetings were overhead. Focus time was the actual job. As a manager, your calendar is the actual job. One-on-ones, team syncs, cross-team coordination, recruiting conversations, performance check-ins - these are not the things getting in the way of your work. They are your work.&lt;/p&gt;

&lt;p&gt;That said, there are a few time management habits that separate effective new managers from ones who burn out quickly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Protect one or two focus blocks per week.&lt;/strong&gt; Even as a manager, you need uninterrupted time to think. Not to write code, but to read documents carefully, think through an organizational problem, draft a strategy, or just catch up on what is actually happening in the codebase. If you let your entire week become back-to-back meetings, you will always be reacting and never thinking ahead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Time-box your one-on-ones and run them consistently.&lt;/strong&gt; One-on-ones are your single most important management tool, and new managers often either skip them when things get busy or run them without a clear purpose. Do not do either. Hold them every week, keep them to thirty or forty-five minutes, and treat them as the engineer's time - not your status update meeting. Come with a light agenda, ask real questions, and actually listen. "How is everything going" is not a real question. "What is the most frustrating part of your work right now" is a real question.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Batch your administrative overhead.&lt;/strong&gt; Expense reports, tool approvals, interview scheduling, performance review paperwork - this stuff has to get done but it does not require your best brain hours. Block thirty minutes at the end of a Tuesday for admin. Do not let it colonize your mornings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learn to say no to your own instincts.&lt;/strong&gt; You will constantly want to jump in, add yourself to things, take on tasks you see falling through cracks. Some of that instinct is good leadership. A lot of it is just difficulty letting go of the doing. Before you take something on, ask: is this actually mine to handle, or am I adding myself here because it feels more comfortable than delegating?&lt;/p&gt;

&lt;h2&gt;The Expectation Gap&lt;/h2&gt;

&lt;p&gt;Here is something that does not get talked about enough in the IC-to-manager transition: the expectations of the people around you shift before you have had time to develop new competencies.&lt;/p&gt;

&lt;p&gt;Your team expects you to have answers about career growth, organizational direction, and team priorities. Your manager expects you to surface problems early, have a handle on your team's capacity, and represent your team in cross-functional conversations. Your peers across other teams expect you to be reliable and aligned.&lt;/p&gt;

&lt;p&gt;And you are three weeks in, still figuring out what one-on-ones are supposed to accomplish.&lt;/p&gt;

&lt;p&gt;The gap between those expectations and your current capabilities is real, and trying to close it by pretending you have everything figured out is a trap. The better approach is direct acknowledgment. "I am still finding my footing on this" is a perfectly acceptable thing to say in your first ninety days. It signals self-awareness and honesty, which are actually two of the more important traits in a new manager.&lt;/p&gt;

&lt;p&gt;What you should not do is fake competence in areas where you are genuinely unsure, because that creates expectations you will then have to maintain. Better to set honest expectations early and then exceed them than to promise things you cannot deliver.&lt;/p&gt;

&lt;p&gt;The areas where most new engineering managers genuinely struggle in the first six months:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance conversations.&lt;/strong&gt; Giving critical feedback to someone whose work is not meeting expectations is one of the hardest things managers have to do, and most ICs have almost no experience with it. The instinct is to soften, delay, or avoid. Push against that instinct. Clear, specific, compassionate feedback delivered early prevents the much harder conversation later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prioritization and saying no.&lt;/strong&gt; As an IC, someone else decided what was on the roadmap. Now you are involved in those decisions, and you have to be willing to push back when the team is being asked to do too much. That requires a different kind of conversation than most engineers have had to have before.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Navigating organizational ambiguity.&lt;/strong&gt; Companies are messier from the inside than they look from a distance. There are competing priorities, unclear ownership, processes that exist for reasons nobody remembers, and political dynamics that take time to understand. Your job is to make sense of enough of that context to give your team clarity, even when you do not have full clarity yourself.&lt;/p&gt;

&lt;h2&gt;Skills to Start Building Right Now&lt;/h2&gt;

&lt;p&gt;If you are a new engineering manager reading this, here are the things I would focus on in your first ninety days:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get really good at listening.&lt;/strong&gt; Not just hearing - actually listening. When an engineer tells you something in a one-on-one, are you thinking about your response while they are talking, or are you actually taking in what they are saying? Active listening is a learnable skill and it is foundational to everything else in this job.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start a manager journal.&lt;/strong&gt; Write down what you tried, what happened, and what you would do differently. Management has a long feedback loop, which means you need to be deliberate about capturing the signal. A weekly reflection of ten minutes is enough. Over six months it becomes an invaluable record of your own development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Find a peer group or mentor.&lt;/strong&gt; The transition into management is isolating in a specific way: the problems you are dealing with are not something you can fully discuss with your direct reports, and they are different from the technical problems your IC friends are working on. Finding other people at a similar stage - through communities, networks, or formal mentoring - gives you a sounding board that is genuinely hard to replace. Some of the most useful conversations I have had with engineers making the management transition were not about tactics at all. They were just about realizing that the disorientation they were feeling was normal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learn the business context.&lt;/strong&gt; As an IC, you needed to understand the technical context of your work. As a manager, you need to understand the business context too. Why does your company prioritize certain things? How does your team's work connect to revenue, users, or strategic goals? The better you understand that, the better equipped you are to make good prioritization decisions and advocate for your team effectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Be patient with yourself.&lt;/strong&gt; This is easier to say than to live, but it is true. Most engineers who become managers underestimate how long the real learning curve is. You will make mistakes in your first year. Some of them will be painful. That is not a sign that you made the wrong choice. It is a sign that you are actually doing the job.&lt;/p&gt;

&lt;h2&gt;The Part Nobody Tells You&lt;/h2&gt;

&lt;p&gt;There is something genuinely satisfying about watching someone you have been working with closely ship something significant, get promoted, or work through a hard problem they could not have solved six months ago. It is a different kind of satisfaction from shipping something yourself - less immediate, more durable.&lt;/p&gt;

&lt;p&gt;You will not feel it right away. In the first few months, it mostly just feels hard. But it shows up eventually, and when it does, it tends to make the tradeoffs feel worth it.&lt;/p&gt;

&lt;p&gt;The move from IC to manager is one of the more significant professional transitions you will make. It is not for everyone, and there is no shame in deciding the role is not the right fit. But if you go in with clear eyes about what is actually changing, build the right habits early, and resist the pull of the familiar, you have a real shot at being genuinely good at it.&lt;/p&gt;

&lt;p&gt;Start there.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Next up in this series: Hiring Engineers - a practical playbook for interview structure, technical assessment, and building an onboarding process that actually works.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>management</category>
      <category>leadership</category>
      <category>culture</category>
      <category>remote</category>
    </item>
    <item>
      <title>How to Lead a Remote Engineering Team</title>
      <dc:creator>Steve McDougall</dc:creator>
      <pubDate>Fri, 27 Mar 2026 12:26:48 +0000</pubDate>
      <link>https://dev.to/juststevemcd/how-to-lead-a-remote-engineering-team-2n5m</link>
      <guid>https://dev.to/juststevemcd/how-to-lead-a-remote-engineering-team-2n5m</guid>
      <description>&lt;p&gt;Remote engineering leadership is not a soft skill. It is a systems problem.&lt;/p&gt;

&lt;p&gt;I have watched a lot of engineering managers struggle with distributed teams, and the pattern is almost always the same. They take the management habits that worked in an office, transplant them into Slack and Zoom, and then wonder why everything feels slower, noisier, and harder to trust. The problem is not the tools. The problem is that remote work requires a fundamentally different operating model, and most managers never build one.&lt;/p&gt;

&lt;p&gt;This article is about building that model. We will cover async communication rhythms, documentation as infrastructure, how to build genuine trust without physical proximity, and how to protect your team's well-being when the office and home become the same place. These are not abstract principles. They are specific systems you can start implementing this week.&lt;/p&gt;

&lt;h2&gt;The Mental Model Shift That Changes Everything&lt;/h2&gt;

&lt;p&gt;Before we get into tactics, there is one mental model shift that underpins all of it.&lt;/p&gt;

&lt;p&gt;In a co-located team, the default medium is conversation. Information lives in people's heads and travels through spoken exchanges. When you need to know something, you walk over and ask. The cost of communication is low, so the system works even without much structure.&lt;/p&gt;

&lt;p&gt;In a remote team, the default medium has to be writing. Not because conversations are bad, but because you cannot afford to have knowledge trapped in a single timezone or dependent on someone being available at 2pm on a Tuesday. When you make writing the default, you shift from a pull model (someone asks, someone answers) to a push model (important information is documented and available). That shift is what makes remote teams actually work.&lt;/p&gt;

&lt;p&gt;Every tactic in this article flows from that principle. Async rhythms exist because writing enables async. Documentation-as-infrastructure exists because written knowledge compounds. Trust-building at a distance exists because when people cannot see each other working, shared written context is what replaces ambient presence.&lt;/p&gt;

&lt;p&gt;Keep that framing in mind as we go.&lt;/p&gt;

&lt;h2&gt;Async Rhythms: Designing Your Team's Cadence&lt;/h2&gt;

&lt;p&gt;The biggest mistake in remote engineering management is treating async as a lesser version of real-time communication. Async is not a fallback. Done well, it is actually more productive than most synchronous alternatives, because it respects deep work, reduces context switching, and creates a written record by default.&lt;/p&gt;

&lt;p&gt;But async does not mean "respond whenever you feel like it." It means designing intentional rhythms so that the team stays aligned without being constantly interrupted.&lt;/p&gt;

&lt;p&gt;Here is the framework I recommend:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Define response time expectations explicitly.&lt;/strong&gt; Not "reply promptly": that means different things to different people. Something like: Slack messages get a response within four hours during working hours. Anything urgent uses a dedicated channel or a direct ping with the word "urgent" in it. Anything that can wait for the next day goes into a thread or a document. When these norms are written down and agreed on, a lot of the ambient anxiety around "is this person ignoring me" disappears.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Protect focus blocks.&lt;/strong&gt; Remote engineers are often more interrupted than office engineers, not less, because the barrier to sending a message is so low. Push back on this. Establish team-wide focus blocks where Slack notifications are off and meetings do not get scheduled. Two hours in the morning, two hours in the afternoon - something like that. Treat these as seriously as you would treat a production incident.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run fewer, better meetings.&lt;/strong&gt; Every meeting on a remote team has a higher cost than it appears, because it requires people in different timezones and different focus states to synchronize at a specific moment. Before scheduling anything, ask: can this be an async update, a thread, or a document instead? If the answer is yes, do not schedule the meeting. When you do run meetings, have an agenda written in advance, assign a note-taker, and publish the notes publicly within an hour. Meetings that produce no written artifact are meetings that happened twice: once in real time, and again when people have to ask what was decided.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Have a weekly team update ritual.&lt;/strong&gt; One of the most useful habits I have seen remote engineering managers develop is a simple weekly async standup. Each engineer writes three or four sentences: what they shipped this week, what they are working on next week, any blockers. It takes five minutes to write and five minutes to read, and it gives the team a shared sense of momentum without requiring a meeting. You can do this in Notion, Confluence, a GitHub discussion, or even a dedicated Slack channel. The format matters less than the consistency.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Documentation as Infrastructure&lt;/h2&gt;

&lt;p&gt;Documentation in most engineering orgs is treated as a chore: something you write after the fact to satisfy a process requirement. That approach fails in remote teams, because documentation is the primary way context travels across the organization.&lt;/p&gt;

&lt;p&gt;Think of documentation as infrastructure. It is the road network that lets information move. When the roads are bad, everything slows down. When the roads are good, everything flows.&lt;/p&gt;

&lt;p&gt;There are three categories of documentation that matter most for remote engineering teams:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Decision records.&lt;/strong&gt; When your team makes a significant technical or product decision, write down the context, the options you considered, the tradeoffs, and what you chose. This does not have to be long; a few paragraphs is often enough. The value is that six months later, when someone asks "why did we do it this way," you have an answer that does not depend on someone's memory. Lightweight Architecture Decision Records (ADRs) are a good format for this. Keep them in the repo alongside the code they describe.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Runbooks and operational guides.&lt;/strong&gt; Every repeatable operational task - deploying, running migrations, handling a specific class of incident, onboarding a new service - should have a written guide. Not a perfect guide, but a good-enough guide that someone can follow without asking five questions. Remote teams that lack runbooks create invisible dependencies on specific individuals, and those dependencies become serious bottlenecks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Onboarding documentation.&lt;/strong&gt; The quality of your onboarding docs tells you a lot about the overall health of your remote team's documentation culture. When a new engineer joins, can they get their environment set up, understand the architecture, and ship something meaningful in their first two weeks, mostly by following written guides? If not, you have documentation gaps that are slowing down your entire team, not just new hires.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One practical tip: assign a rotating "documentation duty" to engineers on the team. Each week or sprint, one person is responsible for updating docs that are out of date, filling gaps they notice, and adding anything that was only in someone's head. This turns documentation from a solo chore into a team practice.&lt;/p&gt;

&lt;h2&gt;Building Trust Without Physical Proximity&lt;/h2&gt;

&lt;p&gt;Trust is harder to build at a distance, and it is worth understanding why. In an office, a lot of trust-building happens through ambient signals: you see someone arrive early, stay late, help a colleague, handle a difficult conversation calmly. Those signals accumulate over time without anyone consciously tracking them.&lt;/p&gt;

&lt;p&gt;Remote teams do not have ambient signals. Trust has to be built deliberately, through consistent behavior over time and through enough human contact that people actually know each other.&lt;/p&gt;

&lt;p&gt;A few things that work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Show up consistently to the few meetings you do have.&lt;/strong&gt; In a remote context, meetings are often the primary venue where people see each other as humans rather than text on a screen. Being reliably present, engaged, and prepared signals that you take the relationship seriously.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create low-stakes social touchpoints.&lt;/strong&gt; Virtual coffee chats, optional Friday hangouts, a random channel in Slack for non-work things - these feel frivolous but they matter. People work harder for and communicate more openly with people they actually know. You do not need to force this, but you do need to create the infrastructure for it to happen. In my experience mentoring engineers who have moved between fully remote and hybrid roles, the ones who invest in these informal relationships consistently have an easier time navigating difficult work conversations later. The social foundation carries over.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Give feedback frequently and specifically.&lt;/strong&gt; One of the most trust-eroding patterns in remote management is the feedback vacuum, where engineers go weeks without hearing whether their work is landing. Specific, timely feedback, even small amounts of it, communicates that you are paying attention. "That PR review you wrote last Thursday was really thorough, I appreciated it" takes ten seconds and has a disproportionate effect on someone's sense of being seen.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Be transparent about decisions and context.&lt;/strong&gt; Remote engineers who feel out of the loop tend to fill that vacuum with their own narratives, and those narratives are usually more negative than reality. When you make a decision that affects the team, explain why. When something is uncertain, say it is uncertain. When the company is going through something difficult, acknowledge it. Information hoarding in a remote context creates distrust far faster than it would in an office.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trust people's work, not their activity.&lt;/strong&gt; This is the one that trips up a lot of managers who are new to remote. The instinct to monitor activity - who is online, how quickly they respond, whether they log into the project management tool every morning - is natural but counterproductive. It signals distrust, and it measures the wrong thing. What matters is whether the work is getting done and getting done well. Build your feedback loops around outcomes and output quality. Evaluate work in PRs and demos and shipped features, not presence signals.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Cross-Timezone Collaboration&lt;/h2&gt;

&lt;p&gt;If your team spans timezones, you have an additional set of challenges worth addressing specifically.&lt;/p&gt;

&lt;p&gt;The most important thing is to identify and protect your overlap window. If your team has people in London, New York, and San Francisco, you probably have one or two hours where everyone is theoretically available. Do not fill that window with status meetings. Use it for the things that genuinely require synchronous interaction: difficult decisions, thorny technical discussions, anything emotionally sensitive. Reserve everything else for async.&lt;/p&gt;

&lt;p&gt;Rotate meeting times when necessary. If your standup is always at 9am Eastern, your engineers in Singapore are always staying up late or waking up early. Rotating the inconvenience is a small gesture that signals you take their time seriously.&lt;/p&gt;

&lt;p&gt;Be explicit about whose timezone is default and when. If your planning docs say "we will ship on Friday," which Friday? Which timezone? These details seem trivial until they are not. Building a habit of including timezone context in time-sensitive communications prevents a class of miscommunications that accumulate into real friction.&lt;/p&gt;

&lt;h2&gt;Well-Being and Sustainable Remote Work&lt;/h2&gt;

&lt;p&gt;Remote work is fantastic for autonomy and focus. It is genuinely difficult for isolation and boundary erosion. As an engineering manager, you have a responsibility to pay attention to both sides of that equation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Watch for isolation signals.&lt;/strong&gt; Remote engineers who are struggling often go quiet before they surface the issue explicitly. If someone who used to be active in threads and reviews goes noticeably quiet, check in. Not with "is everything okay" as a formality, but with a real one-on-one conversation where you ask how things are going and actually listen to the answer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Normalize the end of the workday.&lt;/strong&gt; One of the more insidious patterns in remote teams is the slow expansion of working hours. When the office and home are the same place, the natural stopping signals disappear. Engineers who routinely work evenings and weekends are not a sign of commitment. They are a sign of an unsustainable pattern that will eventually lead to burnout or attrition. As a manager, model the behavior you want: sign off at a reasonable hour, do not send non-urgent messages late in the evening, and acknowledge when someone is clearly overextended.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check in on workload regularly.&lt;/strong&gt; Capacity conversations should happen in one-on-ones, not just during planning ceremonies. Ask directly: is the workload sustainable right now? Where are you feeling stretched? This gives you information you need before a problem becomes a crisis, and it signals that you actually care about the answer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invest in the physical workspace.&lt;/strong&gt; If your budget allows, provide equipment stipends or home office reimbursements. A good chair, a good monitor, and reliable internet are not luxuries for a remote engineer - they are the tools of the job. Teams whose employers invest in their physical setup tend to be more productive and more loyal.&lt;/p&gt;

&lt;h2&gt;Putting It Together: The Remote Engineering Operating System&lt;/h2&gt;

&lt;p&gt;Remote engineering leadership works when you treat it as a system design problem. Here is the short version of what that system looks like:&lt;/p&gt;

&lt;p&gt;Communication has explicit norms: response time expectations, escalation paths, and async as the default. Focus time is protected. Meetings are scarce and well-run.&lt;/p&gt;

&lt;p&gt;Knowledge is written down. Decisions have records. Operational tasks have guides. Onboarding works without someone holding your hand through it.&lt;/p&gt;

&lt;p&gt;Trust is built through consistent behavior, genuine human contact, and outcome-focused evaluation rather than activity monitoring.&lt;/p&gt;

&lt;p&gt;Timezones are planned around. Overlap windows are used strategically. The burden of inconvenient meeting times is shared.&lt;/p&gt;

&lt;p&gt;Well-being is actively monitored. Workload is a real conversation, not a formality. Boundaries around working hours are modeled from the top.&lt;/p&gt;

&lt;p&gt;None of this is complicated. Most of it is just consistent application of basic leadership principles in a context where the defaults do not work and you have to build the structure yourself.&lt;/p&gt;

&lt;p&gt;That is what good remote engineering management looks like: not doing more, but building the right systems so that the team can do their best work without you holding everything together by force of personality.&lt;/p&gt;

&lt;p&gt;Start with the thing that is most obviously broken in your current setup, build one system around it, and then move to the next. You do not need to implement everything at once. You just need to keep building.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you found this useful, the next article in this series covers the transition from individual contributor to engineering manager: specifically, how to let go of the work you love doing and build the habits that actually make you effective in a leadership role.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>remote</category>
      <category>management</category>
      <category>leadership</category>
      <category>culture</category>
    </item>
  </channel>
</rss>
