<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Edvaldo Freitas</title>
    <description>The latest articles on DEV Community by Edvaldo Freitas (@ed_dfreitas).</description>
    <link>https://dev.to/ed_dfreitas</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F907253%2F0fbd3c4a-4e07-4e65-bf71-2c9c53362339.png</url>
      <title>DEV Community: Edvaldo Freitas</title>
      <link>https://dev.to/ed_dfreitas</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ed_dfreitas"/>
    <language>en</language>
    <item>
      <title>Open-source, model-agnostic alternative to Claude Code Review</title>
      <dc:creator>Edvaldo Freitas</dc:creator>
      <pubDate>Tue, 10 Mar 2026 14:33:56 +0000</pubDate>
      <link>https://dev.to/ed_dfreitas/open-source-model-agnostic-alternative-to-claude-code-review-4021</link>
      <guid>https://dev.to/ed_dfreitas/open-source-model-agnostic-alternative-to-claude-code-review-4021</guid>
      <description>&lt;p&gt;Claude just launched code review and the announced price is $15–$25 per pull request. 🤯&lt;/p&gt;

&lt;p&gt;I got curious and decided to run a simple calculation.&lt;/p&gt;

&lt;p&gt;There are teams processing ~1,000 PRs per week.&lt;/p&gt;

&lt;p&gt;Nothing unusual for larger companies.&lt;/p&gt;

&lt;p&gt;If each review costs $25, that’s close to $100k per month. 🫠&lt;/p&gt;

&lt;p&gt;Just to review PRs.&lt;/p&gt;

&lt;p&gt;Code review is a continuous activity. The more a team grows and the more code gets produced, the higher the volume of PRs.&lt;/p&gt;

&lt;p&gt;If the cost scales with the number of PRs, the financial impact can get pretty large pretty quickly.&lt;/p&gt;
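&lt;p&gt;For the curious, the back-of-envelope math is just the post’s own numbers multiplied out:&lt;/p&gt;

```python
# Back-of-envelope: weekly PR volume times per-review price,
# scaled to a month (52 weeks / 12 months). These are the
# post's assumed numbers, not measured data.
prs_per_week = 1_000
cost_per_review = 25  # USD, upper end of the announced range
monthly_cost = prs_per_week * cost_per_review * 52 / 12
print(f"${monthly_cost:,.0f} per month")  # roughly $108,333
```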

&lt;p&gt;You can use Kodus, which is open source, run any model you want, and pay less than 1/10 of that.&lt;/p&gt;

&lt;p&gt;If you want to try it, you can get started right away.&lt;/p&gt;

&lt;p&gt;No credit card, no PR limits, no user limits, and no need to configure an API key.&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://kodus.io" rel="noopener noreferrer"&gt;https://kodus.io&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/kodustech/kodus-ai" rel="noopener noreferrer"&gt;https://github.com/kodustech/kodus-ai&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>showdev</category>
      <category>tooling</category>
    </item>
    <item>
      <title>I gathered more than 10,000 skills for AI agents in one place</title>
      <dc:creator>Edvaldo Freitas</dc:creator>
      <pubDate>Wed, 28 Jan 2026 13:30:43 +0000</pubDate>
      <link>https://dev.to/ed_dfreitas/i-gathered-more-than-10000-skills-for-ai-agents-in-one-place-661</link>
      <guid>https://dev.to/ed_dfreitas/i-gathered-more-than-10000-skills-for-ai-agents-in-one-place-661</guid>
      <description>&lt;p&gt;Over the past few months, I’ve been spending a lot of time studying how AI agents are actually being built in practice.&lt;/p&gt;

&lt;p&gt;One thing became very clear: there’s a huge number of skills scattered all over the place, but everything is fragmented, hard to search, and even harder to reuse.&lt;/p&gt;

&lt;p&gt;So I decided to run an experiment:&lt;/p&gt;

&lt;p&gt;👉 a site with more than 10,000 agent skills, organized, searchable, and with direct links to each implementation.&lt;/p&gt;

&lt;p&gt;You can check it out here: &lt;a href="https://ai-skills.io/" rel="noopener noreferrer"&gt;https://ai-skills.io/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’m still refining a few things and adding more skills, so I’d really love your feedback :)&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>resources</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Dealing with legacy code in modern applications</title>
      <dc:creator>Edvaldo Freitas</dc:creator>
      <pubDate>Fri, 16 Jan 2026 12:01:00 +0000</pubDate>
      <link>https://dev.to/kodus/dealing-with-legacy-code-in-modern-applications-4052</link>
      <guid>https://dev.to/kodus/dealing-with-legacy-code-in-modern-applications-4052</guid>
      <description>&lt;p&gt;A new feature request lands, and you realize it has to touch the old permissions module. The project planning meeting suddenly gets very quiet because everyone knows any change in that part of the &lt;strong&gt;legacy code&lt;/strong&gt; means weeks of careful testing, unpredictable behavior, and a high-stakes deployment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkodus.io%2Fwp-content%2Fuploads%2F2026%2F01%2Fa-wise-senior-dev-once-said-dont-touch-the-legacy-code-base-v0-6tzcybbk052a1-1.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkodus.io%2Fwp-content%2Fuploads%2F2026%2F01%2Fa-wise-senior-dev-once-said-dont-touch-the-legacy-code-base-v0-6tzcybbk052a1-1.webp" alt="" width="500" height="589"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the friction point where building new things slows to a crawl, constrained by decisions made years ago by people who are no longer on the team.&lt;/p&gt;

&lt;h2&gt;Beyond the "Rewrite or Refactor" Dichotomy&lt;/h2&gt;

&lt;p&gt;In these moments, someone will almost always suggest a full rewrite. It’s an appealing idea, a clean slate where all past mistakes are erased. The reality is that a &lt;a href="https://kodus.io/en/refactor-or-rewrite/" rel="noopener noreferrer"&gt;full rewrite&lt;/a&gt; is rarely a practical solution. It freezes the delivery of new business value for months or even years, introduces a massive amount of risk, and discards years of battle-tested logic that, for all its faults, currently runs the business. The code might be hard to read, but it contains valuable, implicit knowledge about edge cases you haven't even thought of yet.&lt;/p&gt;

&lt;p&gt;The alternative, incremental refactoring, works much better when treated as a continuous process instead of a standalone project. The goal is to evolve the system, not to achieve a perfect, idealized state. The most valuable code is often the oldest, and learning to work with it is a core engineering skill.&lt;/p&gt;

&lt;h3&gt;When legacy code becomes a real bottleneck&lt;/h3&gt;

&lt;p&gt;The term &lt;a href="https://kodus.io/en/managing-technical-debt-rapid-growth/" rel="noopener noreferrer"&gt;"technical debt"&lt;/a&gt; can be a bit abstract, but a bottleneck is painfully concrete. It’s the module that slows down every new feature. It's the source of recurring performance incidents or the component with a known security vulnerability that’s too entangled to patch easily. You know you've found a bottleneck when multiple teams have to coordinate for even minor changes, or when the operational overhead of keeping a service running outweighs the value it delivers.&lt;/p&gt;

&lt;p&gt;These are the areas that actively impede progress. Identifying them is the first step, because you can't fix everything at once, and trying to do so just leads to paralysis.&lt;/p&gt;

&lt;h2&gt;An Approach for Evolving Systems&lt;/h2&gt;

&lt;p&gt;The most effective approach is to stop thinking about "fixing everything" and instead focus on "strategically evolving" the system. Your job is to create pathways for new development while safely containing the old. Architectural patterns like the Strangler Fig are built on this idea: you gradually intercept traffic to an old system, route it to new services, and eventually, the old system is "strangled" out of existence without a high-risk cutover.&lt;/p&gt;
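&lt;p&gt;As a toy sketch of that interception idea (all names here are invented, not from any real migration): a thin routing layer decides, path by path, whether a request hits the legacy system or a new service, and routes migrate one at a time.&lt;/p&gt;

```python
# Strangler Fig sketch: one routing decision per path. As paths
# move into MIGRATED, the legacy handler serves less and less
# traffic until nothing points at it. All names are hypothetical.

MIGRATED = {"/billing", "/invoices"}  # paths already strangled out

def handle_legacy(path: str) -> str:
    return f"legacy:{path}"

def handle_new(path: str) -> str:
    return f"new:{path}"

def route(path: str) -> str:
    handler = handle_new if path in MIGRATED else handle_legacy
    return handler(path)
```

&lt;p&gt;The point of the sketch is that cutover happens per route, so there is never a single high-risk switch for the whole system.&lt;/p&gt;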

&lt;p&gt;This requires &lt;a href="https://kodus.io/en/technical-debt-prioritizing-features/" rel="noopener noreferrer"&gt;prioritizing changes&lt;/a&gt; based on a combination of business impact and technical risk. A part of the codebase might be messy, but if it’s stable and rarely changes, it's probably not where you should spend your time. Focus on the parts of the system that are both critical and under constant pressure to change.&lt;/p&gt;

&lt;h3&gt;Techniques for Managing the Boundaries&lt;/h3&gt;

&lt;p&gt;To evolve a system safely, you need to manage the boundaries between the old and new parts of your codebase. Here are a few practical techniques that work well:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Identify Seams and Interfaces:&lt;/strong&gt; First, you have to find the natural joints in the system. These are the points where you can intercept calls between components without rewriting either one. A seam could be an API call, a method invocation, or a message being passed to a queue. Once you find them, you can start to redirect behavior.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Isolate Functionality:&lt;/strong&gt; When you have a particularly problematic area, the goal is to contain it. You can write an adapter or an anti-corruption layer that sits between the old code and the rest of the application. This new layer translates requests and responses, hiding the complexity of the legacy component and providing a clean, modern interface for new code to interact with.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Write Characterization Tests:&lt;/strong&gt; You can't safely change code if you don't know what it does. Before you touch anything, write a suite of tests that document the current behavior, including its bugs. These "&lt;a href="https://kodus.io/en/how-to-write-software-test-cases/" rel="noopener noreferrer"&gt;characterization tests&lt;/a&gt;" don't judge the code; they just capture its existing state. When you make a change, you can run these tests to ensure you haven't broken an implicit assumption somewhere else in the system.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Implement Observability:&lt;/strong&gt; You need to see what the old components are doing in production. Adding detailed logging, metrics, and distributed tracing gives you the insight needed to understand runtime behavior. Without this, you're flying blind every time you deploy a change that touches the older code.&lt;/li&gt;
&lt;/ul&gt;
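&lt;p&gt;A characterization test from the list above might look like this (a sketch; &lt;code&gt;legacy_discount&lt;/code&gt; is a stand-in for whatever legacy function you are pinning down):&lt;/p&gt;

```python
# Characterization test sketch: pin down what the legacy code
# currently does, quirks included, before changing anything.
# `legacy_discount` is an invented stand-in for real legacy code.

def legacy_discount(total: float) -> float:
    # Imagine old code with a quirk: it truncates to whole
    # currency units before applying the 10% discount.
    return int(total) * 0.9

def test_characterize_discount():
    # These assertions document behavior; they don't judge it.
    assert round(legacy_discount(100.0), 6) == 90.0
    assert round(legacy_discount(100.99), 6) == 90.0  # the quirk, captured on purpose
```

&lt;p&gt;The second assertion is the whole idea: the truncation is arguably a bug, but the test locks it in so a later refactor that silently changes it will fail loudly.&lt;/p&gt;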

&lt;h2&gt;The Incremental Evolution Framework: Decide, Delimit, Develop&lt;/h2&gt;

&lt;h3&gt;1. Decide: Where is the real pain?&lt;/h3&gt;

&lt;p&gt;Your resources are finite, so prioritization is everything. The place to start is at the intersection of business value and frequency of change. Which part of the system is most often a blocker for important projects? Evaluate the impact of modernizing a component versus the effort required. Often, the highest-leverage targets are areas with high coupling or poor testability, because improving them unlocks velocity for multiple teams.&lt;/p&gt;

&lt;h3&gt;2. Delimit: How can we contain the change?&lt;/h3&gt;

&lt;p&gt;Once you've decided where to focus, the next step is to draw a boundary around the problem area. This is where architectural decisions come in. You might create a new microservice to house the new logic, using an API gateway or a message queue to bridge the gap between the old and new systems. The key is to create a clear interface that abstracts away the legacy implementation. The rest of the application shouldn't need to know whether it's talking to a 10-year-old monolith or a brand-new service.&lt;/p&gt;
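&lt;p&gt;One way to picture that boundary is an anti-corruption layer in code (a sketch with entirely hypothetical class and field names):&lt;/p&gt;

```python
# Anti-corruption layer sketch: new code talks to a clean model
# and never sees the legacy payload shape. All names invented.

from dataclasses import dataclass

@dataclass
class User:
    id: str
    is_admin: bool

class LegacyUserGateway:
    """Translates the legacy system's quirky payloads into the
    clean model the rest of the application is written against."""

    def __init__(self, legacy_client):
        self._client = legacy_client

    def get_user(self, user_id: str) -> User:
        raw = self._client.fetch(user_id)  # e.g. {"USR_ID": "42", "ROLE": "ADM"}
        return User(id=raw["USR_ID"], is_admin=raw["ROLE"] == "ADM")
```

&lt;p&gt;If the legacy service is later replaced, only the gateway changes; callers depending on &lt;code&gt;User&lt;/code&gt; are untouched.&lt;/p&gt;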

&lt;h3&gt;3. Develop: Execute with small, well-tested iterations&lt;/h3&gt;

&lt;p&gt;With a clear boundary in place, you can start developing. New features that interact with the legacy code should be built using test-driven development to ensure the new logic and the integrations are correct. Small, incremental changes deployed via an automated pipeline give you the confidence to move quickly. Each pull request should be a small, verifiable step forward. Code reviews for this kind of work should be especially focused on the clarity and durability of the new interfaces you're creating.&lt;/p&gt;

&lt;h2&gt;Cultivating a Sustainable Relationship with Your Codebase&lt;/h2&gt;

&lt;p&gt;Working with legacy systems isn't a one-off project with a defined end. It's a continuous process of improvement. The most effective &lt;a href="https://kodus.io/en/scaling-engineering-culture-systems/" rel="noopener noreferrer"&gt;teams build a culture&lt;/a&gt; that values this incremental work, not just reactive fixes when something breaks. This means investing in developer tools and practices that support safe, small changes and make refactoring a low-ceremony activity.&lt;/p&gt;

&lt;p&gt;Ultimately, "modern" is a moving target. The shiny new service you build today will be somebody else's legacy code in five years. The real skill is learning to manage complexity as an ongoing practice, ensuring the codebase you have is one that allows you to keep building, testing, and delivering value.&lt;/p&gt;

</description>
      <category>legacy</category>
    </item>
    <item>
      <title>How to structure technical planning for engineering</title>
      <dc:creator>Edvaldo Freitas</dc:creator>
      <pubDate>Fri, 16 Jan 2026 10:57:00 +0000</pubDate>
      <link>https://dev.to/kodus/how-to-structure-technical-planning-for-engineering-1h1n</link>
      <guid>https://dev.to/kodus/how-to-structure-technical-planning-for-engineering-1h1n</guid>
      <description>&lt;p&gt;In a fast-growing company, the default state of engineering is reactive. The product roadmap is packed, deadlines are tight, and the team is constantly switching context to put out the fire of the moment. This environment makes any kind of intentional &lt;strong&gt;technical planning&lt;/strong&gt; feel like a luxury you can’t afford, so most teams don’t even try. You end up stuck shipping features, fixing bugs, and hoping core systems don’t fall apart, while &lt;a href="https://kodus.io/en/managing-technical-debt-rapid-growth/" rel="noopener noreferrer"&gt;technical debt&lt;/a&gt; quietly piles up in the background.&lt;/p&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkodus.io%2Fwp-content%2Fuploads%2F2026%2F01%2F0_3rGgPkCs0m3m-ZzJ-1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkodus.io%2Fwp-content%2Fuploads%2F2026%2F01%2F0_3rGgPkCs0m3m-ZzJ-1.jpg" alt="" width="800" height="870"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;This “just build” approach looks productive in the short term, but it comes with a real and cumulative cost. Before long, you notice engineers on different teams solving the same scaling problem in different ways. A simple feature request now requires changes across five services, each with its own quirks. The speed you were so proud of six months ago starts to drop, and no one can point to a single clear reason why. This is the slow accumulation of uncoordinated decisions.&lt;/p&gt;

&lt;h2&gt;Technical planning for unstable environments&lt;/h2&gt;

&lt;p&gt;Breaking out of this reactive cycle requires a shift in how we think about planning. Most frameworks are too rigid for a company where priorities can change in a week. A detailed two-year technical roadmap is useless if it becomes obsolete a month after it’s written.&lt;/p&gt;

&lt;p&gt;The goal is adaptability, not prediction. It’s about creating shared understanding around technical direction so that, when engineers make local decisions, they align with a broader direction. That forward-looking view is what allows a team to handle change without breaking everything. It turns planning from a bureaucratic obstacle into an accelerator, because you stop wasting time on duplicated work and architectural dead ends. Understanding the real cost of uncontrolled growth is the first step; it shows up as longer onboarding times, higher bug rates in critical modules, and a general sense of friction when trying to build anything new.&lt;/p&gt;

&lt;h2&gt;A practical and adaptable model for technical planning&lt;/h2&gt;

&lt;p&gt;A good technical plan doesn’t live in a dense document no one reads. It’s a living guide that connects engineering work directly to what the business is trying to achieve. It needs to be flexible enough to change quickly, but structured enough to provide real direction.&lt;/p&gt;

&lt;h3&gt;Define your North Star ⭐️&lt;/h3&gt;

&lt;p&gt;Every meaningful technical initiative should be traceable back to a business goal. This isn’t about pleasing stakeholders; it’s about making sure you’re solving the right problems. When the company decides to move upmarket and serve enterprise customers, that translates into very specific technical requirements, like more robust permissions, audit logs, and integrations with common identity providers.&lt;/p&gt;

&lt;p&gt;Framing the work this way pulls the conversation out of the abstract and into concrete outcomes. Discussions that used to sound like “we need to refactor the authentication service” become directly tied to clear business goals, like closing enterprise customers in Q3, which requires supporting SAML and role-based access control. The reason behind the work becomes clear, and the engineering team understands why it matters, not just what needs to be done.&lt;/p&gt;

&lt;h3&gt;Building the “What”: Scope and Outcomes&lt;/h3&gt;

&lt;p&gt;With a clear “why,” prioritization becomes much simpler. You can evaluate technical work side by side with product features based on impact to company goals. A large refactor might not deliver immediate, visible value to users, but if it unlocks the ability to iterate 50% faster in a core part of the product over the next year, the value is obvious.&lt;/p&gt;

&lt;p&gt;This is also where you make conscious trade-offs. You might decide to accept some technical debt in a non-critical area to free up resources and fix a serious scalability bottleneck that’s threatening user growth. The key point is to make these decisions explicitly, instead of letting them happen by accident. When defining scope, the goal is to design for what you know is coming, not every possible future. Build with extensibility where you anticipate change, but avoid overengineering based purely on speculation.&lt;/p&gt;

&lt;h3&gt;The “How”: Planning and Execution&lt;/h3&gt;

&lt;p&gt;Execution needs to happen within the workflow the team already has. When parallel processes or extra ceremonies appear, they tend to be ignored as soon as pressure ramps up.&lt;/p&gt;

&lt;p&gt;Here are a few practices that work well:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Architectural Decision Records (ADRs).&lt;/strong&gt; A simple markdown file in the right repository is enough. This is where decisions are recorded, along with the alternatives considered and the reasoning behind the final choice. That context ends up being valuable both for new team members and for your future self.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Involve senior engineers early in the design phase.&lt;/strong&gt; &lt;a href="https://kodus.io/en/how-small-pull-requests-improve-team-flow/" rel="noopener noreferrer"&gt;Don’t wait for a 5,000-line pull request&lt;/a&gt; to show up. Have conversations about the “how” before implementation starts. That might be a short design doc, a whiteboard discussion, or even a dedicated chat channel. This helps spread knowledge and catch architectural issues while they’re still cheap to fix.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Create regular review and adaptation cycles.&lt;/strong&gt; A technical plan isn’t static. Review it quarterly. What changed in the business? What did we learn from the last quarter’s work? Adjust priorities based on new information. This helps keep the plan relevant and useful.&lt;/li&gt;
&lt;/ul&gt;
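&lt;p&gt;Since an ADR is just a markdown file, a minimal one might look like this (the number, decision, and dates are entirely hypothetical):&lt;/p&gt;

```markdown
# ADR-0007: Use a message queue between checkout and billing

## Status
Accepted (2026-01-10)

## Context
Checkout called billing synchronously; timeouts cascaded during traffic peaks.

## Decision
Checkout publishes an order.placed event to a queue; billing consumes it asynchronously.

## Alternatives considered
- Keep synchronous calls with retries (rejected: still couples availability)
- Shared database table as a handoff (rejected: hidden coupling)

## Consequences
Billing becomes eventually consistent; the two services can deploy independently.
```

&lt;p&gt;The “Alternatives considered” section is the part future readers thank you for, because it records why the obvious options were rejected.&lt;/p&gt;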

&lt;h2&gt;Turning Technical Planning into an Advantage&lt;/h2&gt;

&lt;p&gt;Introducing this process doesn’t require a company-wide initiative. You can start small, with a single team or a critical system. Pick an area of the codebase that’s a known source of pain and build a clear plan to improve it, directly connecting the effort to a product or business outcome.&lt;/p&gt;

&lt;p&gt;At the end of the day, it’s about &lt;a href="https://kodus.io/en/scaling-engineering-culture-systems/" rel="noopener noreferrer"&gt;fostering an engineering culture&lt;/a&gt; where everyone feels responsible for the health of the system. When engineers understand the “why” behind the work and see a clear path to &lt;a href="https://kodus.io/en/evolving-code-standards-scaling-teams/" rel="noopener noreferrer"&gt;improving the codebase&lt;/a&gt;, they’re more engaged. You can measure the impact directly through improved development velocity, system stability, and, most importantly, the ability to respond to new business opportunities without having to rebuild everything from scratch.&lt;/p&gt;

</description>
      <category>management</category>
      <category>productivity</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>How Small Pull Requests Improve Team Flow</title>
      <dc:creator>Edvaldo Freitas</dc:creator>
      <pubDate>Thu, 15 Jan 2026 14:05:08 +0000</pubDate>
      <link>https://dev.to/kodus/how-small-pull-requests-improve-team-flow-3gml</link>
      <guid>https://dev.to/kodus/how-small-pull-requests-improve-team-flow-3gml</guid>
      <description>&lt;p&gt;A 2,000-line Pull Request lands in your &lt;a href="https://kodus.io/en/scaling-code-review-teams" rel="noopener noreferrer"&gt;review queue&lt;/a&gt; on a Thursday afternoon, touching a dozen files spread across three different parts of the application. Everyone knows what happens next. You either give it an LGTM while hoping for the best, or you block two hours of your day, completely breaking your own flow, just to understand the changes.&lt;/p&gt;

&lt;p&gt;This is not an exception; for many teams it is routine. Large PRs slow the team down, increase the risk of errors slipping through, and make the &lt;a href="https://kodus.io/en/how-to-improve-software-delivery-speed/" rel="noopener noreferrer"&gt;delivery cycle less predictable&lt;/a&gt;. The problem is not just the code inside the PR; it is the way it ties up people, decisions, and deploys throughout the entire process.&lt;/p&gt;

&lt;h2&gt;The weight of large changes&lt;/h2&gt;

&lt;p&gt;Large pull requests create bottlenecks that spill outward. The most immediate impact falls on the reviewer, who &lt;a href="https://kodus.io/en/context-switching-is-hurting-your-engineering-team/" rel="noopener noreferrer"&gt;now faces massive context switching&lt;/a&gt;. A change of this size requires not only reading the code, but reconstructing the entire mental model of the person who wrote it. This cognitive load makes it tempting to postpone the review, leaving the PR sitting idle for days while the author’s own context starts to fade. When the review finally happens, the feedback is usually less detailed, because the reviewer is just trying to understand the architecture at a high level instead of identifying logic flaws.&lt;/p&gt;

&lt;p&gt;This delay creates cascading effects. While the large PR waits for review, the &lt;code&gt;main&lt;/code&gt; branch keeps moving forward, making merge conflicts almost inevitable. The longer a branch stays open, the more it diverges and the harder final integration becomes. Dependent work gets blocked, and if a fundamental design issue is found during this late review, the rework cost is huge. A problem discovered days after the code was written is much more expensive to fix than one caught in minutes. The entire process slows down, and the team’s predictability suffers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkodus.io%2Fwp-content%2Fuploads%2F2025%2F01%2Fhpaef0nwm8d11-928x1024.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkodus.io%2Fwp-content%2Fuploads%2F2025%2F01%2Fhpaef0nwm8d11-928x1024.webp" alt="" width="800" height="882"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Why small PRs make a difference in the team’s process&lt;/h2&gt;

&lt;p&gt;Moving to smaller, more frequent pull requests tackles these bottlenecks directly. When a review is limited to something around a hundred lines, it can be done in minutes, often between other tasks, without a major context switch. The cognitive load is low, which encourages reviewers to engage quickly and provide &lt;a href="https://kodus.io/en/pull-request-feedback-high-performing-teams/" rel="noopener noreferrer"&gt;more focused and constructive feedback&lt;/a&gt;. The author gets this feedback while the code is still fresh in their mind, allowing for fast iterations. This creates a short cycle of coding, review, and integration that keeps work flowing continuously.&lt;/p&gt;

&lt;p&gt;The benefits compound over time. Small, incremental changes are easier to validate and, if needed, to roll back. The blast radius of any potential issue stays contained within a small, understandable unit of work. Instead of a monolithic, high-risk merge once a week, you end up with a steady flow of low-risk changes reaching production every day. This continuous progress is great for team morale and makes it easier to course-correct if a feature is heading in the wrong direction. The team moves from a batch-processing model to a real-time flow.&lt;/p&gt;

&lt;h2&gt;Main reasons to do small PRs&lt;/h2&gt;

&lt;h3&gt;1. &lt;strong&gt;Faster and more accurate reviews&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;When the scope of a Pull Request is small, the reviewer can focus on the details that really matter. This not only speeds up the review but also increases the quality of the feedback. Issues are detected earlier and with greater clarity.&lt;/p&gt;

&lt;h3&gt;2. &lt;strong&gt;Faster feedback&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;Smaller PRs make life easier for everyone. Reviewers can respond more quickly, and those waiting for feedback can move forward with less idle time. This speed is essential to keeping productivity high.&lt;/p&gt;

&lt;h3&gt;3. &lt;strong&gt;Reduced conflicts&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;Each small PR is processed quickly, which reduces the chances of conflicts with other code changes. That means less rework and more focus on what really matters: creating value for the product.&lt;/p&gt;

&lt;h3&gt;4. &lt;strong&gt;Clearer intent&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;When you limit the scope of a PR, it becomes much easier for everyone to understand what is being solved. This not only improves communication within the team, but also makes the project history more organized and traceable.&lt;/p&gt;

&lt;h3&gt;5. &lt;strong&gt;Higher deploy frequency&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;With small Pull Requests, it becomes possible to do smaller, more frequent deploys. That means features and fixes reach users faster, while the risk of something breaking in production goes down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Read also:&lt;/strong&gt; &lt;a href="https://kodus.io/melhorando-qualidade-pull-requests/" rel="noopener noreferrer"&gt;How to improve PR quality&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Building a culture of small Pull Requests&lt;/h2&gt;

&lt;h3&gt;&lt;strong&gt;Breaking work into smaller parts&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;It is not enough to tell people “make smaller PRs”. You need to offer techniques for slicing work into independent, mergeable pieces.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement feature toggles for incomplete functionality. This is the most powerful tool for decoupling deploy from release. It allows incremental merging of the code for a larger feature, even when it is not yet ready for users. Each part can be reviewed and tested in isolation behind a flag.&lt;/li&gt;



&lt;li&gt;Prioritize refactoring as a distinct, isolated change. Avoid mixing &lt;a href="https://kodus.io/en/reduce-technical-debt" rel="noopener noreferrer"&gt;refactoring&lt;/a&gt; with feature delivery in the same PR. When a change only touches structure, it is much easier to review. When it mixes code reorganization with new behavior, the review becomes confusing and risky. Clean up first, merge it, and only then build the feature on top of a clearer base.&lt;/li&gt;



&lt;li&gt;Slice user stories vertically into minimal viable increments. Instead of building an entire feature horizontally (first the full data layer, then the API, then the frontend), find the smallest possible vertical slice that delivers a bit of value to the user. Maybe the first PR only adds the API endpoint with a hardcoded response. The next adds the database schema. The following one connects the two. Each step is a small, verifiable improvement.&lt;/li&gt;
&lt;/ul&gt;
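&lt;p&gt;The feature-toggle idea from the list above, in miniature (the flag name and flag store are invented; in practice the flags would live in a config service, not a dict):&lt;/p&gt;

```python
# Feature toggle sketch: merged code ships "dark" until the flag
# flips, so incomplete work can be integrated continuously.
# "new_checkout" is a hypothetical flag name.

FLAGS = {"new_checkout": False}

def checkout(cart_total: float) -> str:
    if FLAGS.get("new_checkout", False):
        return f"new flow: {cart_total:.2f}"  # incomplete feature, merged early
    return f"old flow: {cart_total:.2f}"      # current behavior stays the default
```

&lt;p&gt;Each PR behind the flag can be reviewed and deployed on its own, and release becomes a configuration change instead of a merge event.&lt;/p&gt;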

&lt;h3&gt;Team agreements and practices for Pull Requests&lt;/h3&gt;

&lt;p&gt;Consistency comes from shared expectations. The team needs to have explicit conversations about what a “good” PR looks like.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set guidelines for acceptable size and complexity. Some teams use lines of code as a rough proxy (for example, under 250 lines), while others look at the number of files touched or the conceptual weight of the change. The exact number matters less than having a shared understanding.&lt;/li&gt;



&lt;li&gt;Define clear expectations for review turnaround times. Agree on a target response time for reviews, such as a few business hours. This prevents PRs from being forgotten in the queue and signals that reviewing is a priority responsibility for everyone on the team.&lt;/li&gt;



&lt;li&gt;Use tools to identify and flag exceptionally large Pull Requests. Many CI/CD platforms and Git tools can be configured to automatically mark PRs that exceed a certain size threshold. This works as a gentle, automatic reminder of team agreements, without making the conversation confrontational.&lt;/li&gt;
&lt;/ul&gt;
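&lt;p&gt;A size check like the one described above can be very small. This sketch assumes you feed it the output of &lt;code&gt;git diff --numstat&lt;/code&gt;; the 250-line threshold mirrors the example earlier in the post:&lt;/p&gt;

```python
# PR size guard sketch: flag a PR whose total changed lines exceed
# a team-agreed threshold. In CI you'd pass in the lines printed by
# `git diff --numstat base...head`; here it's a pure function.

THRESHOLD = 250  # lines, per the team agreement

def total_changed(numstat_lines: list[str]) -> int:
    """Sum added + deleted lines from `git diff --numstat` output."""
    total = 0
    for line in numstat_lines:
        added, deleted, _path = line.split("\t")
        if added != "-":  # binary files show "-" for both counts
            total += int(added) + int(deleted)
    return total

def is_oversized(numstat_lines: list[str]) -> bool:
    return total_changed(numstat_lines) > THRESHOLD
```

&lt;p&gt;Wired into CI as a warning label rather than a hard failure, it nudges authors without turning the agreement into a gate.&lt;/p&gt;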

&lt;h2&gt;Measuring the impact on delivery flow&lt;/h2&gt;

&lt;p&gt;To know whether the change is working, you need to measure. &lt;a href="https://kodus.io/en/engineering-metrics-data-driven-improvement/" rel="noopener noreferrer"&gt;Adding flow metrics&lt;/a&gt; can provide clear evidence of improvement and help justify the investment in this practice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Track Lead Time for Changes.&lt;/strong&gt; This metric measures the time between the first commit on a branch and the code running in production. Smaller PRs tend to reduce this number directly by shortening review and integration stages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitor Code Review Cycle Time.&lt;/strong&gt; How long does a PR sit waiting for review or iteration? This metric helps surface process bottlenecks and should drop as PRs get smaller and more focused.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Watch for a reduction in merge conflict frequency and defect rates.&lt;/strong&gt; With shorter branches and smaller changes, conflicts tend to decrease. The faster feedback cycle also helps catch bugs earlier, reducing the rate of defects that get reintroduced or make it to production.&lt;/p&gt;
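&lt;p&gt;Lead Time for Changes, as defined above, is just the gap between first commit and production. A sketch with invented timestamps (in practice they would come from your Git host and deploy pipeline):&lt;/p&gt;

```python
# Lead time sketch: median hours from first commit to deploy.
# All timestamps below are made-up sample data.

from datetime import datetime
from statistics import median

def lead_time_hours(first_commit: datetime, deployed: datetime) -> float:
    return (deployed - first_commit).total_seconds() / 3600

samples = [
    lead_time_hours(datetime(2026, 1, 5, 9), datetime(2026, 1, 6, 9)),   # 24h
    lead_time_hours(datetime(2026, 1, 7, 9), datetime(2026, 1, 7, 15)),  # 6h
    lead_time_hours(datetime(2026, 1, 8, 9), datetime(2026, 1, 9, 21)),  # 36h
]
print(f"median lead time: {median(samples):.1f}h")  # 24.0h
```

&lt;p&gt;The median is usually more honest than the mean here, since one stuck PR can drag the average far from what the team actually experiences.&lt;/p&gt;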

&lt;h2&gt;At the end of the day&lt;/h2&gt;

&lt;p&gt;Creating smaller PRs is about building a more efficient and sustainable workflow. It is not something that happens overnight, but over time these practices become part of the team’s culture and make all the difference.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkodus.io%2Fwp-content%2Fuploads%2F2025%2F01%2Fahfu67.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkodus.io%2Fwp-content%2Fuploads%2F2025%2F01%2Fahfu67.jpg" alt="" width="577" height="433"&gt;&lt;/a&gt;&lt;/p&gt;



</description>
      <category>codequality</category>
      <category>productivity</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Code Standards and Best Practices for Growing Teams</title>
      <dc:creator>Edvaldo Freitas</dc:creator>
      <pubDate>Thu, 15 Jan 2026 10:55:00 +0000</pubDate>
      <link>https://dev.to/kodus/code-standards-and-best-practices-for-growing-teams-l4g</link>
      <guid>https://dev.to/kodus/code-standards-and-best-practices-for-growing-teams-l4g</guid>
      <description>&lt;p&gt;When an engineering team is small, informal agreements tend to work just fine. There’s a shared understanding of how things should be built, because any disagreement can be quickly resolved in a Slack thread or a conversation. But as the team grows from five to fifty developers, these unwritten rules start to cause problems. Suddenly, you find yourself in pull request comments debating brace placement, naming conventions, and which async pattern to use, for the third time in the same week. That’s when the need for explicit &lt;strong&gt;code standards&lt;/strong&gt; becomes painfully obvious.&lt;/p&gt;

&lt;p&gt;The friction isn’t limited to PRs. It also shows up in the cost of context switching, when each microservice has its own rules for errors and configuration, for example. A developer from the checkout team trying to fix a bug in the inventory service first has to spend an hour just understanding the local conventions, before even starting to debug. This fragmentation slows down collaboration and makes the entire system harder to understand. If teams can’t even agree on something basic, like a branching strategy or commit message format, you lose the ability to automate release notes or reliably track changes across services.&lt;/p&gt;

&lt;h2&gt;Code standards as a foundation for shared context&lt;/h2&gt;

&lt;p&gt;The most common reaction to this chaos is to impose a top-down set of rules. An architecture group might write a very comprehensive document dictating everything from file structure to design patterns. This approach almost always creates more problems. It feels bureaucratic, and developers, who are paid to solve problems, will naturally work around any rule that gets in the way without delivering clear value. The goal isn’t to impose dogma, but to build a shared understanding that reduces cognitive load.&lt;/p&gt;

&lt;p&gt;Code standards exist to get trivial decisions out of the way, allowing the team to focus its energy on the real business problem. When they work well, they provide default answers to common questions and clear the path for what actually matters. This is the balance between &lt;a href="https://kodus.io/en/speed-code-quality-startups/" rel="noopener noreferrer"&gt;moving fast now and building a system that remains sustainable for years&lt;/a&gt;. The decision to skip writing tests might speed up the release of a feature by a day, but the long-term cost of that missing context and safety net will be paid with interest during the next production incident or refactor.&lt;/p&gt;

&lt;h3&gt;The real purpose is to reduce cognitive load&lt;/h3&gt;

&lt;p&gt;Think about the decisions an engineer makes every day. Many of them are repetitive: how should I format this file? What should I name this component? How should I structure API error responses? Good standards provide a single answer, automated or well documented, to these questions. For example, an agreed-upon JSON structure for logs, like &lt;code&gt;{"level": "error", "timestamp": "...", "service": "auth-api", "message": "..."}&lt;/code&gt;, means everyone can query and filter logs across the platform using the same tools and techniques, without having to learn the logging quirks of each service.&lt;/p&gt;
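&lt;p&gt;A minimal helper emitting that shared log structure might look like the sketch below. The field names follow the example in the text; the function itself is illustrative, not a prescribed library:&lt;/p&gt;

```python
# Minimal logger emitting the shared JSON structure from the text.
# Field names follow the agreed example; the helper is illustrative.
import json
from datetime import datetime, timezone

def log_event(level: str, service: str, message: str) -> str:
    """Serialize one log line so every service can be queried the same way."""
    record = {
        "level": level,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "service": service,
        "message": message,
    }
    return json.dumps(record)
```

&lt;p&gt;The payoff is that one query, dashboard, or alert rule works across every service, instead of one per team.&lt;/p&gt;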

&lt;h2&gt;How to keep code standards adaptable over time&lt;/h2&gt;

&lt;p&gt;Good code standards are the ones that can evolve over time. They need to be grounded in principles, not rigid rules, and make sense to the people writing code every day. The focus is on making collaboration easier and maintaining &lt;a href="https://kodus.io/en/code-quality-standards-and-best-practices/" rel="noopener noreferrer"&gt;quality&lt;/a&gt;, without getting in the way of flow.&lt;/p&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkodus.io%2Fwp-content%2Fuploads%2F2026%2F01%2Fcoding_standard-1-1024x576.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkodus.io%2Fwp-content%2Fuploads%2F2026%2F01%2Fcoding_standard-1-1024x576.jpg" alt="code standards" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Defining core principles, not rigid rules&lt;/h3&gt;

&lt;p&gt;Instead of a hundred-page document, start with a small set of principles that are easy to remember and apply. Some good examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Focus on clarity and intent.&lt;/strong&gt; Code should be written, first and foremost, to be understood by other people. A variable name like &lt;code&gt;customerData&lt;/code&gt; is too vague, but &lt;code&gt;activePayingCustomers&lt;/code&gt; clearly communicates intent. This isn’t something a linter can always catch, which makes it a great topic for discussion during code review.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Promote consistency where it matters most.&lt;/strong&gt; Be opinionated about what truly affects collaboration across teams, such as API design, security standards, and the use of core libraries. The style of a private function inside a single module matters much less.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Automate processes that sustain standards over time, &lt;a href="https://kodus.io/en/ai-code-review-tools/" rel="noopener noreferrer"&gt;like code review&lt;/a&gt;.&lt;/strong&gt; People’s time is too valuable to be spent debating formatting or standards everyone should already be following. Automating this makes life easier for everyone, especially when new people join the team.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;The “balance zone”&lt;/h3&gt;

&lt;p&gt;There’s always a point of balance. Every new standard adds a bit of friction, so it’s worth asking whether it really improves day-to-day clarity and consistency.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Too little:&lt;/strong&gt; Leads to a chaotic codebase, where every file feels like it was written by a different team. This is where &lt;a href="https://kodus.io/en/managing-technical-debt-rapid-growth/" rel="noopener noreferrer"&gt;technical debt&lt;/a&gt; piles up through duplicated logic and inconsistent patterns.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Too much:&lt;/strong&gt; Creates an overly rigid environment that stifles the team and innovation. If engineers have to fight the tools or fill out a form just to try a new library, they’ll stop experimenting. Standards become a source of frustration for the people building.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Just right:&lt;/strong&gt; Defines a default path for most cases, while still leaving room for exceptions when they make sense. Consistency where it matters helps collaboration, and flexibility everywhere else gives the team space to innovate.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Some strategies you can try&lt;/h3&gt;

&lt;p&gt;Putting standards into practice requires more than just writing them down. You need to integrate them into the daily workflow and create a mechanism for them to evolve.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Define a default path for common tasks.&lt;/strong&gt; Build tools that make doing the right thing the easiest path. A CLI command like &lt;code&gt;platform-cli create-service&lt;/code&gt;, which scaffolds a new project with the correct CI/CD pipelines, logging libraries, and lint configs already set up, is far more effective than a wiki page.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Make standards a team responsibility.&lt;/strong&gt; Create a space to discuss and update standards, such as an engineering guild or a dedicated Slack channel. Changes to standards should be proposed and discussed via pull requests in a shared repository, just like any other code change. This increases team buy-in and helps keep standards up to date.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Implement a feedback loop for refinement.&lt;/strong&gt; Standards aren’t immutable. The team should regularly ask itself, “Is this rule still helping us, or is it getting in the way?” If a specific lint rule is constantly being ignored, it might be a sign that the problem is with the rule, not the code.&lt;/li&gt;
&lt;/ul&gt;
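&lt;p&gt;To make the “default path” idea concrete, here is a sketch of what a scaffolding command like the hypothetical &lt;code&gt;platform-cli create-service&lt;/code&gt; might do internally. File names and contents are illustrative placeholders, not a real internal tool:&lt;/p&gt;

```python
# Sketch of a scaffolding command like the hypothetical
# `platform-cli create-service`. File names and contents are
# illustrative placeholders, not a real internal tool.
from pathlib import Path

DEFAULT_FILES = {
    ".editorconfig": "root = true\n",
    ".github/workflows/ci.yml": "# team-standard CI pipeline\n",
    "src/main.py": 'print("hello from the new service")\n',
}

def create_service(name: str, base: Path) -> Path:
    """Scaffold a new service with the team's default configs in place."""
    root = base / name
    for rel_path, content in DEFAULT_FILES.items():
        target = root / rel_path
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content)
    return root
```

&lt;p&gt;A new service that starts with CI, lint, and logging already wired up follows the standard by default, with no wiki page required.&lt;/p&gt;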

&lt;h3&gt;Integrating best practices into the workflow&lt;/h3&gt;

&lt;p&gt;In the end, the best standards become part of the team’s daily routine. They’re simply part of how things are done, without adding another checklist.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kodus.io/en/scaling-code-review-teams/" rel="noopener noreferrer"&gt;Code reviews&lt;/a&gt; become learning opportunities, where a senior engineer can point to documentation that explains the rationale behind a particular standard. Documenting the &lt;em&gt;why&lt;/em&gt; behind a decision in an ADR gives future engineers the context they need to make better decisions. This creates a virtuous cycle in which standards help simplify onboarding for new team members, who in turn learn the conventions and help maintain them, making the entire engineering organization more cohesive and effective as it grows.&lt;/p&gt;

</description>
      <category>codequality</category>
      <category>productivity</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>The challenge of managing multiple projects as a Tech Lead</title>
      <dc:creator>Edvaldo Freitas</dc:creator>
      <pubDate>Wed, 14 Jan 2026 13:52:36 +0000</pubDate>
      <link>https://dev.to/kodus/the-challenge-of-managing-multiple-projects-as-a-tech-lead-2i5p</link>
      <guid>https://dev.to/kodus/the-challenge-of-managing-multiple-projects-as-a-tech-lead-2i5p</guid>
      <description>&lt;p&gt;Your scope as a Tech Lead almost never stays confined to a single, clean workstream. As a product grows, you end up responsible for a new feature initiative, a critical infrastructure migration, and a lingering performance issue, all at the same time. This isn't a promotion; it's an expansion of responsibility that quietly creeps in until you find yourself spread across three different stand-ups, answering questions about domains you haven't had time to think deeply about in weeks.&lt;/p&gt;

&lt;p&gt;The immediate result is a huge spike in cognitive load. You’re holding multiple complex system diagrams in your head, trying to recall the specifics of a data model for one project while a stakeholder from another is pinging you about their timeline. The pressure to keep all the plates spinning forces a reactive mode of operation, where multitasking feels like the only option, even though we all know the &lt;a href="https://kodus.io/en/context-switching-is-hurting-your-engineering-team/" rel="noopener noreferrer"&gt;heavy penalty of constant context switching&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;The Systemic Challenges of Capacity and Focus&lt;/h2&gt;

&lt;p&gt;When everything feels slow despite everyone being busy, it’s tempting to look for individual productivity hacks or better time management techniques. But the root cause is usually systemic. The problem isn't how you manage your calendar; it's that the system is overloaded, and the assumptions about team capacity are fundamentally broken.&lt;/p&gt;

&lt;h3&gt;Resource Allocation&lt;/h3&gt;

&lt;p&gt;Most planning starts with an optimistic view of team bandwidth. We map out projects as if they will be worked on in isolation, with developers dedicating 100% of their focus. In reality, that focus is fragmented. A developer assigned to Project A still gets pulled into on-call rotations, bug fixes from their previous work on Project B, and code reviews for Project C. The actual available bandwidth is much lower than what the roadmap assumes, and this mismatch creates a cycle of missed deadlines and rushed work. Output doesn't just degrade linearly under these conditions; it falls off a cliff once the team hits a certain threshold of cognitive overload and fragmentation.&lt;/p&gt;

&lt;h3&gt;The Productivity Drain of Constant Context Switching&lt;/h3&gt;

&lt;p&gt;Every time an engineer has to switch from thinking about a Kubernetes operator configuration to a complex SQL query for a different project, there's a significant mental cost. The context from the first task gets flushed, and it takes time to load the new one. When a team is responsible for multiple unrelated projects, these switches happen all day long. Interruptions from different Slack channels, competing stakeholder requests, and unexpected bugs from various domains create a constant churn that destroys any chance for deep work. As a leader, one of your most important jobs becomes creating an environment that actively protects focused time, which is nearly impossible when the team's mandate is too broad.&lt;/p&gt;

&lt;h3&gt;Dealing with competing priorities&lt;/h3&gt;

&lt;p&gt;Without a clear framework for prioritization, the "annoying" stakeholder often wins. This leads to a reactive cycle where the team bounces between strategic initiatives, urgent customer requests, and critical maintenance. The most difficult part is advocating for the invisible work, like &lt;a href="https://kodus.io/en/managing-technical-debt-rapid-growth/" rel="noopener noreferrer"&gt;paying down technical debt&lt;/a&gt; or refactoring a brittle module. These tasks have no external stakeholder championing them, but neglecting them guarantees that all future projects will be slower and more painful. Your role shifts from technical guidance to making a business case for technical health, &lt;a href="https://kodus.io/en/technical-debt-prioritizing-features/" rel="noopener noreferrer"&gt;connecting a refactor today with the ability to deliver a key feature next quarter&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;The Compounding Effects on Quality and Team Health&lt;/h2&gt;

&lt;p&gt;A perpetually overloaded system doesn't just move slower; it starts to break down in predictable ways. The small compromises made to keep things moving accumulate, affecting the codebase, the architecture, and the team itself.&lt;/p&gt;

&lt;h3&gt;Cross-Project Dependencies and Unforeseen Bottlenecks&lt;/h3&gt;

&lt;p&gt;When you're managing separate projects, it’s easy to miss the subtle ways they connect. A delay in one team's API development can completely block another team's frontend work. These dependencies are often unmapped, living only in people's heads until something goes wrong. A single person who is the sole expert on a legacy service becomes a bottleneck for three different initiatives. The only way to manage this is to stop viewing projects as independent &lt;a href="https://kodus.io/en/silo-busting-knowledge-flow-engineering-teams/" rel="noopener noreferrer"&gt;silos&lt;/a&gt; and start mapping them as an interconnected system, proactively identifying and tracking the dependencies between them.&lt;/p&gt;

&lt;h3&gt;The weight of accumulating technical debt&lt;/h3&gt;

&lt;p&gt;In a multi-project environment, "cutting corners" becomes a rational survival strategy. When faced with a tight deadline for one project and an urgent request from another, it’s almost always easier to write the quick hack than to do the "right" thing. Each of these small decisions adds up, and the accumulated technical debt acts as a drag on all future development. Simple changes start requiring complex workarounds, and the team's velocity slows down. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkodus.io%2Fwp-content%2Fuploads%2F2026%2F01%2Fimage1-1.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkodus.io%2Fwp-content%2Fuploads%2F2026%2F01%2Fimage1-1.gif" alt="" width="900" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Preserving Team Morale and Preventing Burnout&lt;/h3&gt;

&lt;p&gt;The human cost of chronic overload is severe. Engineers become demoralized when they can't take pride in their work because they're always rushing. Unclear priorities and constantly shifting goals create a sense of chaos, leading to frustration and burnout. Adding more pressure never accelerates delivery in these situations; it only accelerates attrition. As a leader, your most critical responsibility is to model &lt;a href="https://kodus.io/en/scaling-engineering-culture-systems/" rel="noopener noreferrer"&gt;sustainable work practices&lt;/a&gt; and be the person who protects the team from unrealistic expectations. This often means saying no, which is one of the hardest but most necessary parts of the job.&lt;/p&gt;

&lt;h2&gt;Some frameworks Tech Leads can use to manage multiple projects&lt;/h2&gt;

&lt;p&gt;Getting out of this reactive trap requires moving from simply managing projects to actively shaping the environment. This involves establishing a few frameworks that bring clarity, distribute responsibility, and create more sustainable workflows in both the short and long term.&lt;/p&gt;

&lt;h3&gt;Shifting from Project Management to Portfolio Optimization&lt;/h3&gt;

&lt;p&gt;Instead of treating projects as a checklist to be completed, view them as a portfolio of investments. This means thinking about how they relate to each other.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cross-project prioritization:&lt;/strong&gt; Is it possible to organize projects so that the learnings or capabilities built in the first one directly benefit the second? For example, building a new authentication library for one project that can then be used by two others.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Resource constraints:&lt;/strong&gt; Identify cross-project constraints early. If you only have one database expert and three projects need database work, that needs to be factored into the timeline for all three. Don't pretend you can do them in parallel.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Holistic view:&lt;/strong&gt; Use tools like a multi-project Kanban board to visualize all work in one place. This makes the competing demands visible to everyone, including stakeholders, and forces a more realistic conversation about what's possible.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Who decides what (and why)&lt;/h3&gt;

&lt;p&gt;A Tech Lead who is the bottleneck for every decision is a system failure. The goal is to push decision-making down to the people with the most context. Define who owns what. For example, a sub-team might own the architectural decisions for their set of microservices, while a chapter lead owns the standards for frontend development. This moves the team from endless debate to clear accountability. Empowering team members with defined boundaries of ownership not only speeds things up but also develops their skills and sense of responsibility.&lt;/p&gt;

&lt;h3&gt;Distributing decisions and responsibility&lt;/h3&gt;

&lt;p&gt;Delegation isn't just about offloading tasks; it's a core strategy for managing workload and growing your team. When you delegate a piece of a project, you're also delegating the responsibility and authority that comes with it. This means providing clear expectations and the necessary support, but then trusting your team to execute without micromanagement. Doing this effectively frees you up to focus on the higher-level concerns that only you can handle, like cross-team alignment, architectural strategy, and stakeholder negotiations.&lt;/p&gt;

&lt;h3&gt;Building a culture of communication&lt;/h3&gt;

&lt;p&gt;When multiple workstreams are in flight, communication needs to be solid to avoid turning into chaos. That’s why you need to design clear protocols to manage the flow of information.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Consistent rhythms:&lt;/strong&gt; Create regular, lightweight mechanisms, like a shared weekly update document or a short demo. This keeps everyone aligned without adding meeting overload.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Shared sources of truth:&lt;/strong&gt; Use dashboards or shared project boards that pull information directly from your ticketing system. This provides real-time visibility and prevents stakeholders from having to ask you for status updates constantly.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Minimize interruptions:&lt;/strong&gt; Establish clear channels for communication. For example, use a specific Slack channel for urgent operational issues and direct all feature requests and planning questions to your ticketing system. This helps protect the team's focus.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These frameworks are a good starting point because they force conversations that usually stay implicit: what’s at stake, what depends on what, who decides what, and what actually fits into the same quarter. That takes weight off your shoulders and reduces the need to be involved in everything.&lt;/p&gt;

</description>
      <category>career</category>
      <category>leadership</category>
      <category>management</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Refactor or Rewrite? Dealing With Code That’s Grown Too Large</title>
      <dc:creator>Edvaldo Freitas</dc:creator>
      <pubDate>Sat, 10 Jan 2026 09:11:00 +0000</pubDate>
      <link>https://dev.to/kodus/refactor-or-rewrite-dealing-with-code-thats-grown-too-large-2cm</link>
      <guid>https://dev.to/kodus/refactor-or-rewrite-dealing-with-code-thats-grown-too-large-2cm</guid>
      <description>&lt;p&gt;The decision to &lt;strong&gt;refactor or rewrite&lt;/strong&gt; a large codebase usually starts with a feeling of friction. Small changes that should take a day suddenly take a week. Every new feature seems to break an old one, and the team’s bug backlog grows faster than it shrinks.&lt;/p&gt;

&lt;p&gt;This happens because systems don’t just age, they accumulate history. Every feature request, urgent fix, and change in direction adds another layer of code. Over time, what was once a clean architecture turns into a web of dependencies and workarounds. The pressure to ship new features means there’s rarely time to go back and clean things up, so &lt;a href="https://kodus.io/en/managing-technical-debt-rapid-growth/" rel="noopener noreferrer"&gt;technical debt&lt;/a&gt; piles up. When you reach this point, the system starts actively resisting change, and every pull request becomes a painful negotiation with the past.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Signs the system is at its limit&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;It’s easy to complain about a codebase, but there are clear signs that a system is close to breaking. The most obvious one is a steady drop in development speed. You can measure this with &lt;a href="https://kodus.io/en/optimizing-pr-cycle-time/" rel="noopener noreferrer"&gt;cycle time&lt;/a&gt; or, more simply, by comparing how long it takes to ship a basic feature today versus a year ago.&lt;/p&gt;

&lt;p&gt;Another clear sign is an increase in regressions in specific parts of the system. When fixing one bug almost always creates another, it usually points to an architecture that’s too tightly coupled, where small changes have side effects that are hard to predict.&lt;/p&gt;

&lt;p&gt;There are also human costs. When engineers spend more time fighting the system’s limitations than building with it, motivation drops fast. Onboarding new people becomes hard because the cognitive load required to understand the system is huge. And when your most experienced engineers start asking to work on anything else, or you struggle to hire because no one wants to touch the “legacy” stack, that’s usually a sign the system has reached a critical point.&lt;/p&gt;

&lt;h2&gt;Accidental complexity vs. essential complexity&lt;/h2&gt;

&lt;p&gt;Before making a good decision, you need to understand what kind of complexity you’re actually dealing with. Software complexity usually falls into two groups: essential and accidental.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Essential complexity&lt;/strong&gt; is inherent to the business problem you’re solving. If you’re building a payment processing system, you have to deal with regulations, fraud detection, and multiple payment gateways. No matter how clean the code is, that complexity doesn’t go away. The best you can do is keep it under control.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accidental complexity&lt;/strong&gt;, on the other hand, comes from the choices we’ve made along the way. It’s the result of outdated libraries, &lt;a href="https://kodus.io/en/code-smells/" rel="noopener noreferrer"&gt;poorly designed abstractions, inconsistent patterns, or quick fixes&lt;/a&gt; that were never revisited. This is the complexity that makes code hard to read, test, and change, even when the business logic itself is simple.&lt;/p&gt;

&lt;h3&gt;Why this distinction is everything&lt;/h3&gt;

&lt;p&gt;This distinction is the most important factor in the refactor versus rewrite debate.&lt;/p&gt;

&lt;p&gt;Refactoring is an excellent tool for attacking accidental complexity. It helps improve abstractions, simplify parts of the system that no longer make sense, and make the codebase more consistent, which makes day-to-day work easier. If your problem is mostly accidental complexity, a series of targeted refactorings is almost always the right answer.&lt;/p&gt;

&lt;p&gt;A full rewrite, however, is often proposed as a solution to all complexity. The problem is that a rewrite doesn’t remove essential complexity.&lt;/p&gt;

&lt;p&gt;If the team doesn’t deeply understand the business domain and its inherent challenges, they’ll simply recreate the same essential complexity in a new language or framework, only now without years of bug fixes and edge-case handling baked in.&lt;/p&gt;

&lt;p&gt;That’s why so many rewrites fail. They confuse essential complexity with accidental problems and end up producing a new system with the same issues as before, plus several new ones.&lt;/p&gt;

&lt;h2&gt;What are the costs of refactoring or rewriting?&lt;/h2&gt;

&lt;p&gt;Both refactoring and rewriting come with costs that go far beyond engineering hours, and they’re often underestimated.&lt;/p&gt;

&lt;h3&gt;The price of refactoring&lt;/h3&gt;

&lt;p&gt;The most significant cost of refactoring is opportunity cost. Every hour your team spends on internal improvements is an hour not spent on customer-facing features. This can be a hard sell to product and business leaders who don’t see immediate value. On top of that, a large refactoring effort sometimes fails to deliver the promised benefits if it doesn’t address the real architectural problems.&lt;/p&gt;

&lt;p&gt;You can spend months cleaning up modules only to realize that the real issue is how the database is structured or how services communicate.&lt;/p&gt;

&lt;h3&gt;The problem with a full rewrite&lt;/h3&gt;

&lt;p&gt;A full rewrite is one of the riskiest projects a software team can take on. Timelines are almost always wildly optimistic. While the new system is being built, the old one still needs to be maintained, which means running two systems in parallel and splitting the team’s focus. All the unwritten rules and implicit knowledge about why the old system works the way it does get lost, leading to a new wave of bugs and regressions.&lt;/p&gt;

&lt;p&gt;This often leads to what’s known as the “second system effect.” Free from the constraints of the old system, architects try to build a perfect, overengineered solution that solves every problem they can think of. Scope grows out of control, the project drags on for years, and by the time it’s finally ready, the business needs have changed again.&lt;/p&gt;

&lt;h2&gt;When should you refactor or rewrite?&lt;/h2&gt;

&lt;p&gt;Instead of relying on gut instinct, you need a structured way to evaluate your options. The decision should be based on a clear analysis of the trade-offs.&lt;/p&gt;

&lt;h3&gt;Criteria to evaluate your options&lt;/h3&gt;

&lt;p&gt;Here are a few key areas to assess with the team:&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;Technical viability&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Being realistic, can the current system be maintained for the next two or three years? Or are there architectural decisions, like a &lt;a href="https://kodus.io/en/monolith-to-microservices-guide/" rel="noopener noreferrer"&gt;monolith&lt;/a&gt; that blocks teams, that refactoring won’t fix?&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;Risk to business continuity&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;What can go wrong with each option? Refactoring too aggressively can create production instability. Rewriting everything introduces a different kind of risk, with many unknowns and a long, delicate migration.&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;Team capacity and knowledge&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Does the team have the skills to execute a rewrite in a new technology? Just as important: is there enough knowledge about the quirks of the old system to avoid repeating past mistakes?&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;Total cost of ownership&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Look beyond the initial project cost. Consider long-term maintenance costs, the cost of running systems in parallel during a rewrite, and the impact on hiring and retention.&lt;/p&gt;

&lt;h3&gt;A simple heuristic&lt;/h3&gt;

&lt;p&gt;If you need a simpler way to look at the decision, compare the estimated cost of incrementally refactoring the existing system to a state that meets future requirements with the total cost of starting from scratch.&lt;/p&gt;

&lt;p&gt;Be honest and comprehensive in your estimates, including parallel maintenance, migration, and the operational overhead of a new system.&lt;/p&gt;

&lt;p&gt;If the cost of a rewrite is even close to the cost of refactoring, the incremental path is almost always the safer and better choice, because it allows you to deliver value more continuously and manage risk along the way.&lt;/p&gt;
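&lt;p&gt;The heuristic above can be written as simple arithmetic. The cost units and the safety margin below are assumptions for illustration; the point is that a rewrite must be dramatically cheaper before it beats the incremental path:&lt;/p&gt;

```python
# The heuristic as arithmetic. Cost units and the safety margin are
# assumptions; a rewrite must be dramatically cheaper to win.

def prefer_incremental(refactor_cost: float,
                       build_cost: float,
                       parallel_run_cost: float,
                       migration_cost: float,
                       margin: float = 2.0) -> bool:
    """True when incremental refactoring is the safer choice."""
    total_rewrite = build_cost + parallel_run_cost + migration_cost
    # Only a rewrite far below the refactor estimate justifies the risk.
    return total_rewrite * margin > refactor_cost
```

&lt;p&gt;With a margin of 2, a rewrite whose honest total is even half the refactoring estimate still loses, which matches how often rewrite budgets are underestimated in practice.&lt;/p&gt;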

&lt;h2&gt;How to execute&lt;/h2&gt;

&lt;p&gt;Once the decision is made, everything depends on execution. Both paths can work, as long as there’s discipline, clear ownership, and a direct connection to business goals.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Incremental refactoring with Strangler Fig&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;If the decision is to stick with the existing system, improvement needs to be continuous, not a large, one-off project. Make room in every sprint to &lt;a href="https://kodus.io/en/technical-debt-prioritizing-features/" rel="noopener noreferrer"&gt;reduce technical debt&lt;/a&gt; and improve what’s already in production.&lt;/p&gt;

&lt;p&gt;When larger architectural changes come into play, the &lt;em&gt;Strangler Fig&lt;/em&gt; pattern often works well. The idea is simple: pick a specific piece of functionality, implement it as a separate service, and start routing traffic to this new component through a proxy or routing layer. Gradually, parts of the old monolith stop being used and can be removed.&lt;/p&gt;

&lt;p&gt;Over time, the legacy system gets replaced in a controlled way, without a big-bang migration. This lets you modernize gradually, keep delivering value, and significantly reduce the risk of breaking something critical along the way.&lt;/p&gt;
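&lt;p&gt;The routing layer at the heart of the Strangler Fig pattern can be sketched in a few lines. Backend names and path prefixes below are illustrative:&lt;/p&gt;

```python
# Sketch of the routing layer in a Strangler Fig migration.
# Backend names and path prefixes are illustrative.

LEGACY = "legacy-monolith"
NEW_SERVICE = "new-billing-service"

# Functionality already carved out of the monolith; everything
# else keeps hitting the old system untouched.
MIGRATED_PREFIXES = ("/billing", "/invoices")

def route(path: str) -> str:
    """Decide which backend serves a request during the migration."""
    if path.startswith(MIGRATED_PREFIXES):
        return NEW_SERVICE
    return LEGACY
```

&lt;p&gt;Migrating one more piece of functionality is then just one more prefix in the routing table, and rolling back is deleting it.&lt;/p&gt;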

&lt;h3&gt;Phased rewrites with a minimum viable scope&lt;/h3&gt;

&lt;p&gt;If a rewrite is truly unavoidable, the key is to aggressively limit scope. Define a &lt;strong&gt;Minimum Viable Rewrite (MVR)&lt;/strong&gt; that focuses on a small, well-understood vertical slice of the system.&lt;/p&gt;

&lt;p&gt;The goal is to get part of the new system into production as quickly as possible, even if it runs alongside the old one. Use feature flags and canary releases to roll the new system out gradually to users, giving yourself time to find and fix issues before a full cutover. This phased approach turns a huge, high-risk project into a series of smaller, manageable steps.&lt;/p&gt;
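
&lt;p&gt;One common way to implement the gradual rollout is a stable hash bucket per user, so each user consistently sees either the old or the new system. A minimal sketch, assuming a hypothetical flag name and a percentage-based ramp-up:&lt;/p&gt;

```python
# Sketch of a percentage-based canary rollout: each user gets a stable
# bucket from 0 to 99, and the new system serves users whose bucket falls
# under the current rollout percentage. The flag name is illustrative.
import hashlib

def bucket_for(user_id: str, flag: str = "new-checkout") -> int:
    """Stable 0-99 bucket, so a user always sees the same variant."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).digest()
    return digest[0] % 100

def serves_new_system(user_id: str, rollout_pct: int) -> bool:
    # range(rollout_pct) covers buckets 0 .. rollout_pct - 1, so
    # rollout_pct=5 sends roughly 5% of users down the new path.
    return bucket_for(user_id) in range(rollout_pct)
```

&lt;p&gt;Ramping up is then just a config change (1% -&gt; 5% -&gt; 25% -&gt; 100%), and because the bucket is stable, users already on the new path stay there as the percentage grows.&lt;/p&gt;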

&lt;h3&gt;Governance and long-term alignment&lt;/h3&gt;

&lt;p&gt;No modernization effort will succeed without clear governance. This might mean setting up an Architecture Review Board to guide technical decisions or implementing automated tools to monitor code quality and technical debt.&lt;/p&gt;

&lt;p&gt;More importantly, any refactoring or rewrite initiative needs to be tied to clear product and business goals. If you can’t connect the technical work to real improvements, like faster delivery, greater stability, or a better user experience, support tends to fade over time.&lt;/p&gt;

</description>
      <category>refactoring</category>
      <category>code</category>
    </item>
    <item>
      <title>Engineering metrics: using data (DORA and others) to improve the team</title>
      <dc:creator>Edvaldo Freitas</dc:creator>
      <pubDate>Thu, 08 Jan 2026 11:10:00 +0000</pubDate>
      <link>https://dev.to/kodus/engineering-metrics-using-data-dora-and-others-to-improve-the-team-4ad4</link>
      <guid>https://dev.to/kodus/engineering-metrics-using-data-dora-and-others-to-improve-the-team-4ad4</guid>
      <description>&lt;p&gt;The conversation around &lt;strong&gt;engineering metrics&lt;/strong&gt; often gets stuck on the wrong things. We end up tracking activities like lines of code or number of commits per week, which say almost nothing about the health of our system or the effectiveness of the team. In practice, these metrics are easy to game and create incentives for the wrong behaviors, like splitting a single logical change into ten tiny commits.&lt;/p&gt;

&lt;p&gt;This gets even more complicated with AI-based coding assistants. The &lt;a href="https://cloud.google.com/resources/content/2025-dora-ai-assisted-software-development-report" rel="noopener noreferrer"&gt;2025 DORA report&lt;/a&gt; highlights how &lt;a href="https://kodus.io/en/whats-the-real-impact-of-ai-in-software-development/" rel="noopener noreferrer"&gt;AI acts as a problem amplifier&lt;/a&gt;. Teams using AI are shipping code faster, but they’re also seeing stability get worse. What happens is that AI makes it easy to generate a lot of code quickly, but if your &lt;a href="https://kodus.io/en/guide-to-code-review/" rel="noopener noreferrer"&gt;review processes&lt;/a&gt;, testing culture, and deployment pipelines are weak, you’re just pushing broken code to production faster than before.&lt;/p&gt;

&lt;p&gt;The core problem is still the same: we need a way to measure the health of the system as a whole, not just the speed of individual developers.&lt;/p&gt;

&lt;h2&gt;DORA: A framework to understand system health&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://kodus.io/en/essential-devops-metrics/" rel="noopener noreferrer"&gt;DORA metrics&lt;/a&gt; (Deployment Frequency, Lead Time for Changes, Change Failure Rate, and Mean Time to Recovery) offer a way to do that. They focus on system-level outcomes rather than individual output. Looking at these four metrics together gives you a clearer picture of speed and stability, which often complement each other. Improving one without considering the others usually leads to problems.&lt;/p&gt;

&lt;h3&gt;Deployment Frequency&lt;/h3&gt;

&lt;p&gt;This &lt;a href="https://kodus.io/en/how-to-measure-deployment-frequency/" rel="noopener noreferrer"&gt;metric measures how often you can successfully deploy to production&lt;/a&gt;. A higher frequency generally points to a healthier, more automated CI/CD pipeline and a workflow based on small, manageable changes. When deployment frequency is low, it’s usually a sign of large and risky batches, manual deployment steps, or fear of breaking things. For a tech lead, tracking this can help justify investments in better CI or more robust testing.&lt;/p&gt;

&lt;h3&gt;Lead Time for Changes&lt;/h3&gt;

&lt;p&gt;This is the &lt;a href="https://kodus.io/en/lead-time-6-tips-to-optimize-your-projects-efficiency/" rel="noopener noreferrer"&gt;time it takes for a commit to reach production&lt;/a&gt;. It’s one of the most useful diagnostic metrics for a team. A long lead time can point to several different issues: PRs that are too large, a slow code review process, flaky tests in the CI pipeline, or manual QA gates. By breaking lead time into stages (time to first review, time in review, time to merge, time to deploy), you can pinpoint exactly where work is getting stuck. If PRs sit idle for days waiting for review, that’s a team process conversation, not an individual developer speed issue.&lt;/p&gt;

&lt;h3&gt;Change Failure Rate&lt;/h3&gt;

&lt;p&gt;This metric tracks how often a production deployment causes a failure, such as an outage or a rollback. It’s a direct measure of quality and stability. A high failure rate suggests that your testing and review processes aren’t catching issues before they reach users. This often correlates with large batches, since bigger changes are harder to understand and test thoroughly.&lt;/p&gt;

&lt;h3&gt;Mean Time to Recovery (MTTR)&lt;/h3&gt;

&lt;p&gt;When a failure happens, how long does it take to restore service? &lt;a href="https://kodus.io/en/what-is-mean-time-to-recover/" rel="noopener noreferrer"&gt;That’s MTTR&lt;/a&gt;. A low MTTR is a sign of a resilient system and a solid incident response process. It shows that you can detect problems quickly, diagnose them effectively, and roll back or fix issues without causing new ones.&lt;/p&gt;
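
&lt;p&gt;To make the baseline concrete, here is a minimal sketch of how the four metrics could be computed from a simple deployment log. The data model is illustrative, not a real tool’s schema:&lt;/p&gt;

```python
# Sketch: establishing a DORA baseline from a simple deployment log.
# Each record holds the lead time of its changes, whether the deploy
# caused a failure, and the time to restore if it did.
from datetime import timedelta

deploys = [
    {"lead_time": timedelta(days=2), "failed": False, "restore": None},
    {"lead_time": timedelta(days=5), "failed": True,  "restore": timedelta(hours=3)},
    {"lead_time": timedelta(days=1), "failed": False, "restore": None},
    {"lead_time": timedelta(days=3), "failed": False, "restore": None},
]
window_weeks = 2  # observation window covered by the log above

deploy_frequency = len(deploys) / window_weeks  # deploys per week
lead_time = sum((d["lead_time"] for d in deploys), timedelta()) / len(deploys)
failures = [d for d in deploys if d["failed"]]
change_failure_rate = len(failures) / len(deploys)
mttr = sum((d["restore"] for d in failures), timedelta()) / len(failures)

print(f"Deployment Frequency: {deploy_frequency} per week")
print(f"Lead Time for Changes: {lead_time}")
print(f"Change Failure Rate: {change_failure_rate:.0%}")
print(f"MTTR: {mttr}")
```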

&lt;h2&gt;AI and engineering metrics&lt;/h2&gt;

&lt;p&gt;AI tools for code don’t change the basic principles of how software is delivered. What they do is amplify what’s already there. AI works as a multiplier of the systems and practices a team already has. If you have strong technical fundamentals, a habit of working in small batches, and a healthy review process, AI tends to accelerate all of that and reduce friction.&lt;/p&gt;

&lt;p&gt;The real risk shows up when those practices don’t exist. If your team already tends to create huge pull requests, AI will help generate even bigger PRs, just faster. If there’s already a lot of &lt;a href="https://kodus.io/en/reduce-technical-debt/" rel="noopener noreferrer"&gt;technical debt&lt;/a&gt;, AI-generated code without context can easily make it worse.&lt;/p&gt;

&lt;p&gt;That’s why good practices matter even more with AI, and working in small batches becomes especially critical. A small, well-understood change is always safer to ship to production, whether it’s written by a person or with the help of AI.&lt;/p&gt;

&lt;h2&gt;What needs to be in place before using AI, according to DORA&lt;/h2&gt;

&lt;p&gt;To get value from AI without amplifying existing problems, teams need a solid foundation. DORA’s research summarizes this into seven essential capabilities for adopting AI the right way:&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;1. Clear guidance on AI usage&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;Everyone on the team needs to know the rules of the game: which tools can be used, what kind of data can be shared, and how to handle AI-generated code in day-to-day work.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;2. Healthy data ecosystems&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;The quality of the help AI provides depends directly on the quality of the data behind it. Garbage in, garbage out.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;3. Internal data accessible to AI&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;To be truly useful, AI needs context. That includes internal libraries, APIs, company standards, and relevant documentation.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;4. Working in small batches&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;Working with small changes reduces risk and keeps the feedback loop short. This becomes even more important when code can be generated very quickly.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;5. Focus on the end user&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;AI is a means, not an end. It should help solve real user problems, not just increase the amount of code produced.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;6. Well-maintained internal platforms&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;A good internal platform removes repetitive work and provides clear paths to test and deploy. That makes it much safer to integrate AI-generated code.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;7. Strong technical fundamentals&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;Loosely coupled architecture, comprehensive test automation, and solid engineering standards are not optional.&lt;/p&gt;

&lt;h2&gt;Other metrics that can help&lt;/h2&gt;

&lt;p&gt;DORA metrics go a long way toward understanding the health of the delivery pipeline, but on their own they don’t explain everything. Other frameworks complement this view by connecting delivery performance to developer experience and overall value flow.&lt;/p&gt;

&lt;h3&gt;SPACE&lt;/h3&gt;

&lt;p&gt;The SPACE framework broadens the view of what actually makes developers productive and satisfied. It argues that you can’t measure output alone. To understand real productivity, you need to look at multiple signals at the same time:&lt;/p&gt;

&lt;ol&gt;
    &lt;li&gt;
&lt;strong&gt;Satisfaction and well-being:&lt;/strong&gt; How fulfilled and healthy are your engineers? Burnout is a major productivity killer.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Performance:&lt;/strong&gt; How do individuals and teams perceive their own performance?&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Activity:&lt;/strong&gt; Output metrics like commits or PRs. They’re useful, but dangerous when analyzed in isolation.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Communication and collaboration:&lt;/strong&gt; How well do people and teams work together? Think about how easy it is to find information and the quality of reviews.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Efficiency and flow:&lt;/strong&gt; How effectively can developers work without interruptions or friction? This ties directly to DORA’s Lead Time for Changes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The goal of SPACE is to create a balanced set of metrics so you don’t accidentally optimize one area at the expense of another, like increasing activity at the cost of burnout.&lt;/p&gt;

&lt;h3&gt;Value Stream Management (VSM)&lt;/h3&gt;

&lt;p&gt;Value Stream Management is a way to visualize, measure, and improve the entire process, from idea conception to customer delivery.&lt;/p&gt;

&lt;p&gt;While DORA gives you key outcomes (like lead time), VSM helps map all the intermediate steps to understand &lt;em&gt;why&lt;/em&gt; your lead time is what it is. It focuses on flow metrics such as:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;
&lt;strong&gt;Flow Velocity:&lt;/strong&gt; How many work items are completed per unit of time?&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Flow Time:&lt;/strong&gt; How long does an item take from start to finish? (Similar to Lead Time)&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Flow Load:&lt;/strong&gt; How many items are currently in progress? (A proxy for WIP)&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Flow Efficiency:&lt;/strong&gt; What percentage of the total flow time is spent on active work versus waiting? This is often the most revealing metric. It’s common to discover that a ticket spends 90% of its time just waiting for a review, a build, or a handoff.&lt;/li&gt;
&lt;/ul&gt;
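
&lt;p&gt;Flow Efficiency in particular is easy to compute once you have state-change events for a work item. A small sketch with made-up numbers:&lt;/p&gt;

```python
# Sketch: flow efficiency for a single work item, i.e. active work time
# as a fraction of total elapsed time. The event log is illustrative.
from datetime import timedelta

# (state, time spent in that state) for one ticket
events = [
    ("in_progress",    timedelta(hours=6)),
    ("waiting_review", timedelta(hours=40)),
    ("in_review",      timedelta(hours=2)),
    ("waiting_deploy", timedelta(hours=16)),
]
ACTIVE_STATES = {"in_progress", "in_review"}

total = sum((t for _, t in events), timedelta())
active = sum((t for state, t in events if state in ACTIVE_STATES), timedelta())
flow_efficiency = active / total  # dividing timedeltas yields a float

print(f"Flow time: {total}, active: {active}")
print(f"Flow efficiency: {flow_efficiency:.0%}")
```

&lt;p&gt;An efficiency this low (8 active hours out of 64 total) is common in practice, and it shifts the improvement conversation from “work faster” to “reduce waiting.”&lt;/p&gt;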

&lt;p&gt;VSM adds context to DORA metrics. For example, your Change Failure Rate might be high, and a value stream map might reveal that it’s high because there’s no dedicated time for QA, forcing developers to rush tests at the last minute.&lt;/p&gt;

&lt;h2&gt;Using metrics to improve the team&lt;/h2&gt;

&lt;p&gt;Collecting metrics doesn’t help much if you don’t act on them. The idea is to use the data to spark good conversations and improve the team, not to create yet another dashboard that no one opens. The DORA improvement loop helps close that gap.&lt;/p&gt;

&lt;h3&gt;The DORA improvement loop&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Establish a baseline:&lt;/strong&gt; First, simply measure your four core metrics to understand where you are today. You can’t improve what you don’t measure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Have a conversation:&lt;/strong&gt; Metrics tell you &lt;strong&gt;what&lt;/strong&gt; is happening, but not &lt;strong&gt;why&lt;/strong&gt;. The next step is to talk with the team. A value stream mapping exercise can be extremely useful to visualize the entire process, from idea to production, and identify where the real friction is.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Commit to improving the biggest constraint:&lt;/strong&gt; Don’t try to fix everything at once. Identify the biggest bottleneck slowing the team down or causing failures and focus on that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Turn the commitment into a plan:&lt;/strong&gt; Create a concrete plan with leading indicators. For example, if the bottleneck is code review time, an indicator might be “average PR size” or “time from PR open to first comment.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do the work:&lt;/strong&gt; This involves systemic changes, not quick fixes. It might mean &lt;a href="https://kodus.io/en/best-developer-productivity-tools/" rel="noopener noreferrer"&gt;investing in better tools&lt;/a&gt;, changing a team process, or paying down a specific chunk of technical debt.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check progress and iterate:&lt;/strong&gt; After a few weeks or a sprint, review your DORA metrics and indicators to see if the changes had the expected effect. Then choose the next biggest constraint and repeat the cycle.&lt;/p&gt;

&lt;p&gt;It’s also useful to remember that DORA isn’t the only framework. The SPACE framework is a great complement, as it brings in developer satisfaction, well-being, and collaboration.&lt;/p&gt;

&lt;h2&gt;The most common mistakes when using engineering metrics&lt;/h2&gt;

&lt;p&gt;When you start using these metrics, it’s easy to fall into a few common traps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using metrics to evaluate individual performance:&lt;/strong&gt; This is the fastest way to destroy trust and encourage metric gaming. DORA metrics measure team and system performance, period. They should &lt;strong&gt;never&lt;/strong&gt; be used in individual performance reviews.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The “gaming” metrics trap:&lt;/strong&gt; If you incentivize a specific metric, people will find a way to optimize it, often at the expense of what actually matters. For example, focusing only on Deployment Frequency can lead a team to ship tiny, meaningless changes just to inflate the number.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Metrics overload:&lt;/strong&gt; Don’t try to measure everything. Start with the four core DORA metrics. Once you have a handle on them, you can add others, but keep the focus on a small set of indicators directly tied to your improvement goals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not acting on the data:&lt;/strong&gt; The worst outcome is spending time and effort collecting data and then doing nothing with it. Metrics should always be a catalyst for conversation and action. If they aren’t, it’s worth asking why you’re collecting them in the first place.&lt;/p&gt;

&lt;h2&gt;Some recommendations&lt;/h2&gt;

&lt;p&gt;If the goal is to use data more deliberately to improve the team, here’s a path you can follow:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;
&lt;strong&gt;Establish baseline DORA metrics:&lt;/strong&gt; Use a tool or script to get an initial reading of the four core metrics. This gives you a starting point.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Focus improvement efforts on the team’s biggest constraint:&lt;/strong&gt; Work with the team to identify the most painful bottleneck right now and make a clear agreement to focus on improving that first.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Treat AI adoption as a systemic change:&lt;/strong&gt; It’s not just about handing out Copilot licenses and hoping for the best. Set clear guidelines and reinforce good habits so AI doesn’t simply accelerate existing problems.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Complement DORA with other frameworks:&lt;/strong&gt; Consider using elements of SPACE to get a more complete view that includes developer experience.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Embrace continuous improvement as a cultural practice:&lt;/strong&gt; The goal isn’t to achieve a “perfect” score on metrics. It’s to build a culture where the team is constantly working to improve its workflow and the health of its systems.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>engineeringmetrics</category>
      <category>dorametrics</category>
    </item>
    <item>
      <title>Tech Lead vs. Engineering Manager: understanding the differences in team roles</title>
      <dc:creator>Edvaldo Freitas</dc:creator>
      <pubDate>Wed, 07 Jan 2026 11:07:00 +0000</pubDate>
      <link>https://dev.to/kodus/tech-lead-vs-engineering-manager-understanding-the-differences-in-team-roles-2k5b</link>
      <guid>https://dev.to/kodus/tech-lead-vs-engineering-manager-understanding-the-differences-in-team-roles-2k5b</guid>
      <description>&lt;p&gt;What happens in many engineering teams, especially as they grow, is that the line between technical leadership and people management becomes incredibly blurry. The most senior engineer often ends up taking on both roles: being the final word on system architecture while also trying to handle performance reviews and career conversations. This creates a confusing situation where the distinction between tech lead vs manager isn’t clear, because a single person is trying to be both.&lt;/p&gt;

&lt;p&gt;At first, this combination can even seem efficient. But as the team grows from five to fifteen, then to fifty people, the problems start to show.&lt;/p&gt;

&lt;p&gt;The person in this hybrid role becomes overloaded. Either the technical vision becomes inconsistent because they’re buried in management tasks, or the team feels unsupported because their manager is too focused on code to properly handle people issues.&lt;/p&gt;

&lt;p&gt;This ambiguity isn’t just an individual problem; it turns into a company bottleneck. It leads to burnout for the person stuck in the middle and makes it very hard for other senior engineers to see a clear career path for themselves. Do they need to become managers in order to lead technically, or can they keep focusing on code? Without clear roles, no one knows.&lt;/p&gt;

&lt;h2&gt;Main Differences Between Tech Lead and Engineering Manager&lt;/h2&gt;

&lt;p&gt;To solve this, you need to be specific about what each role is actually responsible for. While both are leaders, they operate on different planes and use different tools to get the job done.&lt;/p&gt;

&lt;h3&gt;The Tech Lead’s Main Focus: The System&lt;/h3&gt;

&lt;p&gt;The Tech Lead’s world revolves around the health and direction of the technology. Their primary responsibility is the codebase, the architecture, and the technical quality of what the team builds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical vision and system architecture:&lt;/strong&gt; They are responsible for defining the technical direction of a project or system. This means making hard decisions about architecture, patterns, and technologies to ensure the system is scalable, &lt;a title="How to manage technical debt in a fast growing environment" href="https://kodus.io/en/managing-technical-debt-rapid-growth/" rel="noopener noreferrer"&gt;maintainable&lt;/a&gt;, and reliable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementation guidance:&lt;/strong&gt; They are in the code every day. They guide the team through complex implementation details, &lt;a title="Optimizing PR Cycle Time for Developer Teams" href="https://kodus.io/en/optimizing-pr-cycle-time/" rel="noopener noreferrer"&gt;review critical pull requests&lt;/a&gt;, and establish &lt;a title="Code Quality Best Practices" href="https://kodus.io/en/code-quality-standards-and-best-practices/" rel="noopener noreferrer"&gt;good coding practices&lt;/a&gt;, testing, and deployment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical mentorship:&lt;/strong&gt; They grow other engineers by coaching them on technical skills. This happens through pairing, code reviews, and design discussions. The goal is to raise the technical bar of the entire team.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code quality and reliability:&lt;/strong&gt; At the end of the day, the Tech Lead is accountable for the quality of what the team ships. They ensure the code meets standards and that production systems are stable.&lt;/p&gt;

&lt;h3&gt;The Engineering Manager’s Main Focus: People and Process&lt;/h3&gt;

&lt;p&gt;The Engineering Manager, on the other hand, is focused on creating an environment where the team can do their best work. Their primary responsibility is the people on the team and the organizational systems that support them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;People management and career development:&lt;/strong&gt; They run one-on-ones, performance reviews, promotions, and compensation conversations. Their job is to help each engineer build a satisfying career path.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Team health:&lt;/strong&gt; They are responsible for team dynamics. This includes resolving conflicts, ensuring psychological safety, and &lt;a title="Building a strong engineering culture at scale" href="https://kodus.io/en/scaling-engineering-culture-systems/" rel="noopener noreferrer"&gt;building a culture&lt;/a&gt; where people can collaborate effectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project delivery and stakeholder communication:&lt;/strong&gt; While the Tech Lead handles the “how,” the EM usually handles the “when” and the “why.” They manage timelines, remove blockers, and act as the main communication link between the team and stakeholders like product managers or other departments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource allocation and process improvement:&lt;/strong&gt; They work on hiring, team structure, and making sure the group has what it needs to succeed.&lt;/p&gt;

&lt;h3&gt;Where They Overlap and Where They Differ&lt;/h3&gt;

&lt;p&gt;Both, obviously, care about delivering great software. The main difference lies in the lever each one pulls to make that happen. A Tech Lead pulls the “&lt;strong&gt;technical excellence&lt;/strong&gt;” lever, &lt;a title="The importance of soft skills for Tech Lead" href="https://kodus.io/en/tech-lead-soft-skills-beyond-code/" rel="noopener noreferrer"&gt;influencing the team&lt;/a&gt; through architectural decisions and code quality. An Engineering Manager pulls the “&lt;strong&gt;team effectiveness&lt;/strong&gt;” lever, influencing through career growth, clear processes, and a healthy culture.&lt;/p&gt;

&lt;p&gt;Both contribute to technical strategy, but from different angles. The TL might propose a new database technology based on performance needs, while the EM will raise questions about the hiring and training costs associated with that choice. Success depends entirely on these two roles having a strong partnership and good communication.&lt;/p&gt;

&lt;h2&gt;A Simple Model to Understand the Roles&lt;/h2&gt;

&lt;p&gt;For senior engineers at a decision point, or for organizations trying to define these roles, having a clear mental model helps.&lt;/p&gt;

&lt;h3&gt;The “Impact Vector” Model&lt;/h3&gt;

&lt;p&gt;One useful way to think about this is the “Impact Vector” model. Imagine your contribution to the team as a vector, with direction and magnitude.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Tech Lead’s impact vector&lt;/strong&gt; points directly at the &lt;strong&gt;technical domain&lt;/strong&gt;. They create value by making the right architectural choices, improving code quality, and solving the hardest technical problems. Their influence scales through the systems they design and the technical excellence they encourage in others.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Engineering Manager’s impact vector&lt;/strong&gt; points directly at the &lt;strong&gt;organizational domain&lt;/strong&gt;. They create value by hiring the right people, developing their careers, and tuning team processes. Their influence scales by building a high-performing team that can operate efficiently and autonomously.&lt;/p&gt;

&lt;p&gt;The goal is to have both vectors working together, not to force a single person to point in two directions at the same time.&lt;/p&gt;

&lt;h3&gt;How to choose your path&lt;/h3&gt;

&lt;p&gt;If you’re a senior engineer trying to decide which path to follow, the polished job description won’t help you. Instead, ask yourself where you get more energy and where you feel you can have the greatest impact.&lt;/p&gt;

&lt;ol&gt;
    &lt;li&gt;Do you feel more satisfaction refactoring a complex module to make it cleaner and faster, or guiding a mid-level engineer through a tough feedback conversation that helps them grow?&lt;/li&gt;
    &lt;li&gt;When a project is going off the rails, is your first instinct to dive into the code to find the bottleneck, or to reorganize the team’s work, align priorities with product, and shield the team from distractions?&lt;/li&gt;
    &lt;li&gt;Where do you feel your biggest impact lies: in the long-term health of the codebase, or in the long-term health and growth of the people who build it?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There’s no right answer, but being honest about your main motivation is the most important step.&lt;/p&gt;

&lt;h3&gt;How the two can work together&lt;/h3&gt;

&lt;p&gt;When a company separates these roles clearly, the partnership between the Tech Lead and the Engineering Manager becomes the most important relationship on the team. But for that to work, a few basic rules are needed.&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;Set clear boundaries and regular syncs&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;TLs and EMs should meet frequently to align on technical and team topics. They need to agree on who is primarily responsible for each type of decision. For example, the TL decides on technical architecture, while the EM decides on performance reviews.&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;They need to trust each other&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;The EM needs to trust the TL’s technical judgment, even if they don’t agree with every detail. The TL needs to trust the EM’s judgment on people and processes. Neither should try to do the other’s job.&lt;/p&gt;

&lt;p&gt;By treating these roles as distinct but complementary forms of leadership, the company ensures that both the technology and the people get the attention they need to grow.&lt;/p&gt;

</description>
      <category>techlead</category>
      <category>engineeringmanager</category>
    </item>
    <item>
      <title>The importance of soft skills for Tech Lead</title>
      <dc:creator>Edvaldo Freitas</dc:creator>
      <pubDate>Tue, 06 Jan 2026 11:05:00 +0000</pubDate>
      <link>https://dev.to/kodus/the-importance-of-soft-skills-for-tech-lead-5fj6</link>
      <guid>https://dev.to/kodus/the-importance-of-soft-skills-for-tech-lead-5fj6</guid>
      <description>&lt;p&gt;The standard career path for an engineer usually leads to the role of &lt;strong&gt;tech lead&lt;/strong&gt;, but this promotion can create problems that often go unnoticed. The skills that made you a great programmer, like focus and delivering features independently, do not translate directly into leading a team. Suddenly, your performance is measured by the team’s results, not just your own contributions. That means your new job is to manage outcomes, which involves dealing with people, competing priorities, and a lot of ambiguity. Learning how to set priorities becomes very important in this transition to leadership.&lt;/p&gt;

&lt;p&gt;When these people-centered skills are neglected, the costs show up in concrete ways. Projects get delayed because of simple misunderstandings that drag on for weeks.&lt;/p&gt;

&lt;p&gt;You might see two engineers arguing in circles during a pull request review, when the real issue is a lack of clear ownership or unresolved tension from a previous project. As a tech lead, learning how to &lt;a href="https://kodus.io/en/scaling-code-review-teams/" rel="noopener noreferrer"&gt;scale code review&lt;/a&gt; becomes essential to mitigate this kind of situation and improve team efficiency. Over time, this friction wears people down. Team morale drops, good developers start looking for other opportunities, and you end up spending more time dealing with turnover than building software.&lt;/p&gt;

&lt;h2&gt;Soft Skills Are Not Optional for a Tech Lead&lt;/h2&gt;

&lt;p&gt;We often treat interpersonal skills as something secondary, but in a leadership role, they are fundamental to technical execution. A communication failure is not just uncomfortable; it directly impacts the codebase and the delivery timeline.&lt;/p&gt;

&lt;h3&gt;Why Communication Is an Architectural Decision&lt;/h3&gt;

&lt;p&gt;Think about the last time a large feature had to be rebuilt. The root cause was probably a communication failure. Unclear specifications or a rushed kickoff meeting leads engineers to build on assumptions. When those assumptions turn out to be wrong, the result is a pull request that gets stuck on fundamental issues that should have been resolved weeks earlier. The refactor that follows is not just a technical task; it is the concrete manifestation of a team that was not aligned. Understanding how to apply principles to &lt;a href="https://kodus.io/en/enhancing-code-maintainability-guide/" rel="noopener noreferrer"&gt;improve code maintainability&lt;/a&gt; often starts with better communication and alignment.&lt;/p&gt;

&lt;p&gt;In practice, clear communication works as an architectural prerequisite.&lt;/p&gt;

&lt;p&gt;It establishes the shared context and understanding on which the entire system is built. When priorities become misaligned because one person heard one thing in a meeting and another read something different in a document, the team ends up pulling in different directions. The technical outcome is often duplicated work or components that do not integrate properly.&lt;/p&gt;

&lt;h3&gt;Empathy as a Debugging Process&lt;/h3&gt;

&lt;p&gt;When team velocity drops or an engineer starts shipping buggy code more frequently, the first instinct is to look for technical causes. But there is almost always a human issue behind it.&lt;/p&gt;

&lt;p&gt;Approaching these situations as a debugging process can help uncover the root cause. This debugging process also extends to understanding and applying &lt;a href="https://kodus.io/en/code-quality-standards-and-best-practices/" rel="noopener noreferrer"&gt;code quality best practices&lt;/a&gt;, since poor quality often stems from gaps in communication or empathy.&lt;/p&gt;

&lt;p&gt;You need to look beyond surface-level errors.&lt;/p&gt;

&lt;p&gt;For example, a usually reliable engineer who starts delivering late and low-quality code may be dealing with burnout, not a drop in technical ability. A junior developer struggling to complete tasks may need more explicit mentoring, not just another link to the documentation. Empathy, in this context, means understanding stakeholder needs that were not captured in the JIRA ticket or realizing that a vague requirement is creating frustration and wear across the entire team. When you can diagnose these underlying issues, you can resolve interpersonal conflicts before they escalate and affect a sprint.&lt;/p&gt;

&lt;h2&gt;Developing Skills to Become a Tech Lead&lt;/h2&gt;

&lt;p&gt;Developing these competencies requires the same intentional practice as learning a new programming language or a system design pattern. The goal is to build a toolkit for handling the human aspects of software development with the same rigor applied to technical aspects.&lt;/p&gt;

&lt;h3&gt;Intentional Communication&lt;/h3&gt;

&lt;p&gt;Good communication goes beyond simply speaking. It is about making sure understanding is actually transferred between people. That requires care in how you listen, speak, and write.&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;Active listening&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;In a technical discussion, it is important to listen to both what is being said and what is not.&lt;/p&gt;

&lt;p&gt;Is a developer hesitant about an approach because of a technical risk they have not yet been able to articulate clearly?&lt;/p&gt;

&lt;p&gt;Is the product manager pushing for a deadline because of an external demand you are not aware of?&lt;/p&gt;

&lt;p&gt;Pay &lt;strong&gt;attention to implicit signals&lt;/strong&gt;.&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;Structuring feedback&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Giving constructive feedback is one of the hardest parts of the job. It needs to be specific, actionable, and focused on the work, not the person. Instead of saying “this code is confusing,” try explaining: “I had trouble following the logic in this function. Could we add some comments or break it into smaller parts to make the intent clearer?”&lt;/p&gt;

&lt;p&gt;This shifts the focus from judgment to a collaborative effort to improve the code.&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;Choosing the right medium&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;The channel used for communication matters. A complex architectural decision should not be made in a fragmented Slack thread. That calls for a design doc and maybe a meeting to discuss trade-offs. On the other hand, a quick clarification does not need a 30-minute meeting. Learning when to use asynchronous versus real-time communication, or written versus verbal, prevents misunderstandings and respects everyone’s time. This strategic approach to communication also plays an important role in &lt;a href="https://kodus.io/en/optimizing-pr-cycle-time/" rel="noopener noreferrer"&gt;optimizing PR cycle time&lt;/a&gt; and overall project delivery.&lt;/p&gt;

&lt;h3&gt;Developing Empathy&lt;/h3&gt;

&lt;p&gt;Empathy, in the engineering context, is about understanding other people’s perspectives to build a more resilient and effective team. It is the foundation of psychological safety, where people feel safe to ask questions, admit mistakes, and challenge ideas without fear of blame.&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;Adopting different perspectives&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;A senior engineer, a junior engineer, and a product manager see the same project through different lenses. The senior may be focused on long-term maintainability, the junior on learning the codebase, and the PM on hitting a deadline. Your role is to understand these different views and find a solution that balances them.&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;Building psychological safety&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Create an environment where making mistakes is acceptable. When someone points out a problem in a design you proposed, thank them. When a production incident happens, focus the post-mortem on “what can we learn?” instead of “who was at fault?” This encourages open dialogue and allows problems to surface early, when they are still small and easy to fix.&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;Navigating disagreements&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Technical debates are healthy, but they can become unproductive if they turn personal. When mediating a disagreement, your role is to bring the focus back to the shared goal. Acknowledge both viewpoints and guide the conversation toward common ground. Frame the decision in terms of technical trade-offs and what is best for the project, not who “wins” the argument.&lt;/p&gt;

&lt;h3&gt;Leading Through Influence, Not Authority&lt;/h3&gt;

&lt;p&gt;Your title may give you authority, but your effectiveness as a leader comes from your ability to influence the team and guide the work toward a good outcome. This is even clearer in the tech lead role, which often involves leading people at the same hierarchical level, without formal management power. In these cases, leadership is sustained by trust, clarity, and consistency in day-to-day decisions.&lt;/p&gt;

&lt;h4&gt;Mentorship and delegation&lt;/h4&gt;

&lt;p&gt;Scaling your impact means enabling the team. Delegating is not just handing off smaller tasks; it is giving people real responsibility, even when you know you could do the work faster yourself.&lt;/p&gt;

&lt;p&gt;Good delegation comes with mentorship. Give a more junior engineer a challenging but well-defined task. Explain the context, align expectations, follow up on critical points, and make it clear that they are responsible for the outcome. The goal is not to control every step, but to create space for learning and autonomy. Over time, this reduces dependencies and frees you up to focus on higher-level problems.&lt;/p&gt;

&lt;h4&gt;Conflict resolution&lt;/h4&gt;

&lt;p&gt;Conflicts will happen, especially in technical teams with strong opinions. When they arise, address them directly and in private. Avoid calling people out in public or letting tensions build up.&lt;/p&gt;

&lt;p&gt;Listen carefully to all sides to understand the real issue. Often it is not technical, but a misalignment of expectations, priorities, or communication. Your role is to help the team unblock the situation and move forward, not to decide who “wins” the discussion.&lt;/p&gt;

&lt;h4&gt;Building consensus&lt;/h4&gt;

&lt;p&gt;Whenever possible, avoid imposing technical decisions. Instead, guide the team toward consensus. Present the problem clearly, lay out the viable options, make the trade-offs explicit, and facilitate the discussion.&lt;/p&gt;

&lt;p&gt;When the team participates in the decision, commitment to execution increases significantly. Implementation flows more smoothly, and future debates tend to be more objective. This process may feel slower at first, but in the medium and long term it leads to more consistent decisions and a more engaged team.&lt;/p&gt;

</description>
      <category>techlead</category>
      <category>softskills</category>
    </item>
    <item>
      <title>Why Relying Only on Claude for Code Security Review Fails Growing Teams</title>
      <dc:creator>Edvaldo Freitas</dc:creator>
      <pubDate>Mon, 05 Jan 2026 21:03:45 +0000</pubDate>
      <link>https://dev.to/kodus/why-relying-only-on-claude-for-code-security-review-fails-growing-teams-3i3o</link>
      <guid>https://dev.to/kodus/why-relying-only-on-claude-for-code-security-review-fails-growing-teams-3i3o</guid>
      <description>&lt;p&gt;The first time you see an AI comment on a pull request, the feedback loop stands out. A full review appears in seconds, pointing out potential issues before a human reviewer has even opened the file. The appeal of using a tool like Claude for code security review, a critical part of security in the SDLC, is clear: catch problems early and reduce the team’s manual workload.&lt;/p&gt;

&lt;p&gt;In practice, however, this speed often creates a false sense of security. It works well at first, but starts to break down as the team grows and systems become more complex.&lt;/p&gt;

&lt;p&gt;The problem is that these tools operate with a critical blind spot. They see the code, but they do not see the system. They can analyze syntax, but they do not understand intent, history, or the architectural contracts that keep a complex application working.&lt;/p&gt;

&lt;h2&gt;The Critical Blind Spot&lt;/h2&gt;

&lt;p&gt;A good security review depends on context that is not in the diff. That context lives outside the isolated code. By nature, an LLM does not have access to it. It analyzes only a slice of the code, in isolation, and misses the broader view where, in practice, the most serious vulnerabilities usually live.&lt;/p&gt;

&lt;h3&gt;Architectural and data flow risks that go unnoticed&lt;/h3&gt;

&lt;p&gt;Many critical security flaws are not in the code itself, but in how data flows between components. An LLM does not know the system’s trust boundaries. It does not know, for example, that &lt;strong&gt;UserService&lt;/strong&gt; is internal only, or that any data coming from a publicly exposed &lt;strong&gt;APIGateway&lt;/strong&gt; must be revalidated, regardless of prior validations.&lt;/p&gt;

&lt;p&gt;Consider an authorization flaw that slips through. A developer adds a new endpoint that correctly checks whether the user has the admin role. Looking only at the diff, the code looks correct.&lt;/p&gt;

&lt;p&gt;But a senior engineer knows an implicit system rule: an admin from Tenant A should never access data from Tenant B. The code does not check the tenant ID.&lt;/p&gt;

&lt;p&gt;Claude will not flag this because it does not understand your multi-tenancy model or the internal rules around data isolation and sensitivity. It sees a valid role check and moves on, letting a potential cross-tenant data leak slip through.&lt;/p&gt;
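&lt;p&gt;As a rough sketch of what the missing check looks like, assuming a hypothetical multi-tenant service (the field names here are illustrative, not from any real codebase): the role check alone is what a diff-level review sees as correct, while the tenant comparison encodes the implicit isolation rule that only system context demands.&lt;/p&gt;

```python
# Hypothetical authorization check for a multi-tenant system.
# The role check alone is what a diff-only review sees as "correct";
# the tenant_id comparison encodes the implicit isolation rule that
# an LLM reviewing the diff in isolation cannot know about.
def can_access(user, resource):
    is_admin = user.get("role") == "admin"
    same_tenant = user.get("tenant_id") == resource.get("tenant_id")
    return is_admin and same_tenant
```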

&lt;h3&gt;Ignoring repository history and the evolution of threats&lt;/h3&gt;

&lt;p&gt;A codebase is a living document. The history of commits, pull requests, and incident reports contains valuable security context. A human reviewer may remember a past incident involving incomplete input validation on a specific data model and will be extra alert to similar changes. An LLM has no memory of this.&lt;/p&gt;

&lt;p&gt;For example, a team may have fixed a denial of service vulnerability by adding a hard size limit to a free text field. Six months later, a new developer, working on another feature, adds a similar field but forgets the size validation. The code is syntactically correct, but it reintroduces a known vulnerability pattern. An experienced reviewer spots this immediately. An LLM sees only the new code, with no access to lessons learned in the past.&lt;/p&gt;
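&lt;p&gt;A sketch of the kind of guard such a fix usually introduces, with an assumed limit and function name: centralizing the size check in one validator makes the lesson from the incident harder to forget than repeating ad-hoc checks at every new field.&lt;/p&gt;

```python
# Hypothetical guard from a past DoS fix: free-text fields must be size-limited.
# The limit below is illustrative, not a value from the article.
MAX_TEXT_LEN = 2000

def validate_free_text(value):
    # Reject oversized input so the known DoS pattern cannot be reintroduced
    # silently by a new field that skips validation.
    if len(value) not in range(MAX_TEXT_LEN + 1):
        raise ValueError("free-text field exceeds the size limit")
    return value
```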

&lt;h3&gt;Inability to Learn Team-Specific Security Policies&lt;/h3&gt;

&lt;p&gt;Every engineering team develops its own set of security conventions and policies. They are often domain-specific and not always explicit in the code.&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Your company policy might prohibit storing any form of PII in Redis.&lt;/li&gt;
    &lt;li&gt;Or you may have a rule to use a specific internal library for all cryptographic operations, because standard libraries were misused in the past.&lt;/li&gt;
    &lt;li&gt;Your team may have decided to use UUIDv7 for all new primary keys for performance reasons.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An LLM has no knowledge of these internal standards.&lt;/p&gt;

&lt;p&gt;It may even suggest a solution that directly violates these rules, creating more work for the reviewer, who now has to fix both the code and the AI’s suggestion. The confident and authoritative tone of an LLM can lead more junior developers to assume its suggestions represent &lt;a href="https://kodus.io/en/code-quality-standards-and-best-practices/" rel="noopener noreferrer"&gt;code quality best practices&lt;/a&gt;, even when they contradict standards already established by the team.&lt;/p&gt;

&lt;h2&gt;Scaling Traps: When LLM Limitations Add Up&lt;/h2&gt;

&lt;p&gt;For a small team working on a monolith, some of these gaps may be manageable. But as the organization tries to deal with the challenge of &lt;a href="https://kodus.io/en/scaling-code-review-teams/" rel="noopener noreferrer"&gt;scaling code review in a growing team&lt;/a&gt;, with more engineers, more teams, and more microservices, these limitations create systemic problems that automation cannot solve.&lt;/p&gt;

&lt;h3&gt;The Human Verification Bottleneck&lt;/h3&gt;

&lt;p&gt;reviewing the AI’s own output. With a constant stream of low impact or irrelevant suggestions, engineers quickly develop alert fatigue and start treating AI comments like linter noise, something easy to ignore.&lt;/p&gt;

&lt;p&gt;In practice, every AI-generated comment still requires someone to assess its validity, impact, and context. This slows reviews down and pulls attention away from what actually matters. The cognitive load of filtering AI noise can easily outweigh the benefit of catching a few obvious issues.&lt;/p&gt;

&lt;h3&gt;Architectural understanding gaps in LLM-based code security reviews&lt;/h3&gt;

&lt;p&gt;In distributed systems, the most dangerous bugs usually live in the interactions between services. An LLM reviewing a change in a single repository has no visibility into how that change might break an implicit contract with a downstream consumer. It does not notice, for example, that removing a field from a JSON response can cause silent failures in another team’s service that depends on that field.&lt;/p&gt;
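&lt;p&gt;One way teams make such implicit contracts visible is a lightweight consumer-contract check on the producer side. A minimal sketch, with the field names assumed purely for illustration: a test like this turns the downstream dependency into an explicit, reviewable failure instead of a silent one.&lt;/p&gt;

```python
# Minimal consumer-contract check; the field names are illustrative.
# A producer-side test like this fails loudly when a field a downstream
# consumer depends on is removed from the response.
REQUIRED_FIELDS = {"id", "email"}

def check_contract(response):
    missing = REQUIRED_FIELDS - set(response)
    if missing:
        raise ValueError("response breaks consumer contract, missing: %s" % sorted(missing))
    return response
```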

&lt;p&gt;The same applies to cryptography errors. An LLM can flag obvious problems, like the use of an obsolete algorithm such as DES. But it tends to miss harder to detect flaws, like reusing an initialization vector (IV) in a block cipher. Identifying this type of issue requires understanding application state and data flow across multiple requests, which goes far beyond static analysis of a code snippet.&lt;/p&gt;

&lt;h3&gt;Hallucinations&lt;/h3&gt;

&lt;p&gt;LLMs can be wrong with a lot of confidence. It is not uncommon to see recommendations for security libraries that do not exist, incorrect interpretations of details from a real CVE, or broken code snippets presented as a “fix.”&lt;/p&gt;

&lt;p&gt;In security, this is especially dangerous. A developer may accept an explanation that sounds plausible but is wrong, and end up introducing a new vulnerability while trying to fix another one. This false sense of confidence undermines learning and can lead to a worse security outcome than the original issue.&lt;/p&gt;

&lt;h2&gt;Why human expertise still matters&lt;/h2&gt;

&lt;p&gt;This does not mean AI tools have no place. The problem is treating them as replacements for human judgment rather than as a complement. Human reviewers provide essential context that machines cannot.&lt;/p&gt;

&lt;h3&gt;Beyond Syntax: Business Logic and Intent&lt;/h3&gt;

&lt;p&gt;A senior engineer understands the &lt;strong&gt;why&lt;/strong&gt; behind the code. They connect the proposed change to its business goal and can ask critical questions that an LLM would never ask.&lt;/p&gt;

&lt;p&gt;“What happens if a user uploads a file with more than 255 characters in the name?” or “Is this new user permission aligned with the company’s GDPR compliance requirements?”&lt;/p&gt;

&lt;p&gt;This kind of reasoning about real world impact is the foundation of a good security review.&lt;/p&gt;

&lt;h3&gt;Mentorship and Building a Security Culture&lt;/h3&gt;

&lt;p&gt;Code reviews are one of the main mechanisms for knowledge transfer within a team. When a senior engineer points out a security flaw, they do not just say “this is wrong.” They explain the risk, reference a past decision or an internal document, and use the review as a learning moment.&lt;/p&gt;

&lt;p&gt;This process raises security awareness across the entire team and strengthens a culture of shared responsibility. An automated bot comment offers none of that. It just feels like another checklist item to clear.&lt;/p&gt;

&lt;h2&gt;A Hybrid Review Model&lt;/h2&gt;

&lt;p&gt;The goal is not to reject new tools, but to be intentional about how they are used. A healthy security posture uses automation to augment human judgment, not to replace it.&lt;/p&gt;

&lt;h3&gt;Augment, Not Replace: Where LLMs Make Sense&lt;/h3&gt;

&lt;p&gt;The best use of LLMs in code review is as a first automated pass for a very specific class of problems. For example:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;
&lt;p&gt;Hardcoded secrets and API keys&lt;/p&gt;
&lt;/li&gt;
    &lt;li&gt;
&lt;p&gt;Use of known insecure libraries or functions (such as &lt;code&gt;strcpy&lt;/code&gt; in C or &lt;code&gt;pickle&lt;/code&gt; in Python)&lt;/p&gt;
&lt;/li&gt;
    &lt;li&gt;
&lt;p&gt;Common patterns indicating SQL injection or XSS&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
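&lt;p&gt;Much of this first pass does not even require an LLM. A minimal sketch of a pattern-based secret scan, where the AWS access key prefix is a real convention but the other pattern and the function names are assumptions for illustration:&lt;/p&gt;

```python
import re

# First-pass secret scan; patterns are illustrative, not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key id format
    re.compile(r"(?i)(api[_-]?key|secret)\s*=\s*['\"].{8,}['\"]"),  # hardcoded-looking secrets
]

def scan_line(line):
    # Return the patterns that match; an empty list means no finding.
    return [p.pattern for p in SECRET_PATTERNS if p.search(line)]
```

&lt;p&gt;Like the LLM pass itself, a match here is a prompt for a human to look closer, not a verdict.&lt;/p&gt;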

&lt;p&gt;The output should be treated as a suggestion, not a verdict. Final authority still rests with the human reviewer.&lt;/p&gt;

&lt;h3&gt;Invest in Context&lt;/h3&gt;

&lt;p&gt;Getting consistently useful results from an LLM requires significant investment in providing the right context. This includes architectural diagrams, data flow information, and internal team policies, often guided by advanced prompt engineering practices.&lt;/p&gt;

&lt;p&gt;That context also needs to be kept up to date, which creates an ongoing maintenance burden. Before making an LLM a mandatory step in CI/CD, it is necessary to understand that cost and those limits.&lt;/p&gt;

&lt;h3&gt;Cultivate a Strong Security Posture to Scale&lt;/h3&gt;

&lt;p&gt;In the end, a strong security culture depends on human judgment. Automation works well for simple, repetitive, and context-free tasks. This frees more experienced engineers to focus on complex, dependency-heavy risks, where experience really matters. Balancing the efficiency of automation with the judgment of those who know the system is the only way to build a security practice that truly scales.&lt;/p&gt;

</description>
      <category>claude</category>
      <category>code</category>
      <category>security</category>
    </item>
  </channel>
</rss>
