<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Cameron Pavey</title>
    <description>The latest articles on DEV Community by Cameron Pavey (@cpave3).</description>
    <link>https://dev.to/cpave3</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F448009%2Fa114b4b6-e1c7-4dc5-8128-f10d4a45d888.jpeg</url>
      <title>DEV Community: Cameron Pavey</title>
      <link>https://dev.to/cpave3</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/cpave3"/>
    <language>en</language>
    <item>
      <title>Don't Abandon the Discipline</title>
      <dc:creator>Cameron Pavey</dc:creator>
      <pubDate>Thu, 12 Mar 2026 11:56:32 +0000</pubDate>
      <link>https://dev.to/cpave3/dont-abandon-the-discipline-560k</link>
      <guid>https://dev.to/cpave3/dont-abandon-the-discipline-560k</guid>
      <description>&lt;p&gt;You've adopted agentic coding tools, and you can now output a complete feature in an afternoon. The speed is unlike anything you've experienced before, and it's addictive. You just want to keep building, keep pushing new features out into your customers' hands.&lt;/p&gt;

&lt;p&gt;This is the reality for a lot of us right now. But as adoption of AI coding tools continues, I've noticed a concerning trend. Day by day, teams are abandoning established, known-good practices that have helped us deliver software well for years. Practices like Agile, code reviews, customer validation, and automated testing are being left by the wayside in favour of a "speed at all costs" approach. I've seen sentiments including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Human code reviews are irrelevant for AI code. As long as it works, reviews just slow things down.&lt;/li&gt;
&lt;li&gt;There's no point doing Agile development. The overhead takes away from the speed of AI development.&lt;/li&gt;
&lt;li&gt;Automated testing is unnecessary. If the feature works, why bother?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The common theme is that best practices are being dismissed because they are perceived to slow down the otherwise fast pace of agentic coding. This misses the point entirely. Yes, writing code used to be the bottleneck. AI has compressed or removed it. But writing code was also where a lot of understanding happened. Developers discovered edge cases during implementation, challenged assumptions while debugging, and built mental models line by line. AI didn't just speed up the bottleneck, it moved it. The constraint is now understanding: knowing what to build, knowing that what you built is right, and having the confidence to ship it. None of that got faster.&lt;/p&gt;

&lt;h2&gt;Agile Was About Validation, Not Velocity&lt;/h2&gt;

&lt;p&gt;Agile has been the industry's best answer to building quality software for decades. It put a focus on sustainable development, working software, quality practices, and crucially, early iteration and continuous delivery.&lt;/p&gt;

&lt;p&gt;But here's what people forget: Agile was never about building faster. Speed was a side effect of breaking work into smaller chunks and validating as you built. The real value was in compressing learning loops: frequent contact with reality, so you could course-correct before investing too much in a wrong direction.&lt;/p&gt;

&lt;p&gt;Small sprints, small PRs, daily standups - these were all mechanisms to force feedback. The chunk size was small because &lt;em&gt;building&lt;/em&gt; was slow. Small chunks were the only way to learn fast enough to avoid wasting months on the wrong thing.&lt;/p&gt;

&lt;p&gt;Now, people are applying AI speed to skip the learning, not accelerate it. You can build a feature in an afternoon, but you still don't know if users want it until they touch it. That hasn't changed. The &lt;a href="https://www.faros.ai/blog/key-takeaways-from-the-dora-report-2025" rel="noopener noreferrer"&gt;2025 DORA Report&lt;/a&gt; makes this tangible: AI adoption actually correlates with a 7.2% &lt;em&gt;reduction&lt;/em&gt; in delivery stability. DORA's core finding is that AI is an "amplifier." It magnifies the strengths of high-performing organisations and the dysfunctions of struggling ones in equal measure.&lt;/p&gt;

&lt;p&gt;By regressing from Agile back to waterfall, and then applying the speed gains from AI, you're now building your misconceptions perfectly, efficiently, and potentially irreversibly.&lt;/p&gt;

&lt;h2&gt;The Double Abdication&lt;/h2&gt;

&lt;p&gt;Something special happens when you combine the resurgence of waterfall-like practices with the adoption of AI coding tools. You lose understanding from two directions at once.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The first abdication is validation.&lt;/strong&gt; Moving from Agile back to waterfall removes continuous validation. You plan, then you build, but you don't &lt;em&gt;understand&lt;/em&gt; what you're building the way you do with iterative delivery. In Agile, you continuously challenge and validate assumptions through iteration. In waterfall, you assume your assumptions are correct and find out whether they were at the end.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The second abdication is authorship.&lt;/strong&gt; AI coding has a subtle but significant side effect: you are no longer writing code, you are reviewing someone else's. Your understanding of the code is fundamentally more superficial than when you write it yourself, think through it as you go, and build a mental model line by line.&lt;/p&gt;

&lt;p&gt;When you combine these, the worst case scenario looks like this: assume how the feature should work, generate it, review it superficially, ship it. You end up with code that no one on your team &lt;em&gt;actually&lt;/em&gt; understands at an ownership level, based on assumptions that nobody properly validated.&lt;/p&gt;

&lt;p&gt;And the code you generate is still your code. It has your name on it. You own the consequences. When the system goes down at 3am on a Saturday and you have to wake up and fix it, there is no one more suitable to delegate to. If you have no clue how it works, you are an owner in name only. If you own the planning and know how the feature &lt;em&gt;should&lt;/em&gt; work, you might be able to figure it out at the eleventh hour. But if you double-abdicated and can't explain the feature end to end, let alone the code, you're flying blind.&lt;/p&gt;

&lt;p&gt;The data backs this up. &lt;a href="https://www.helpnetsecurity.com/2025/12/23/coderabbit-ai-assisted-pull-requests-report/" rel="noopener noreferrer"&gt;CodeRabbit's study of 470 pull requests&lt;/a&gt; found that AI-authored code contains 1.7x more issues than human-written code, with 75% more logic errors and 3x more readability issues. And &lt;a href="https://graphite.com/blog/how-i-got-claude-to-write-better-code" rel="noopener noreferrer"&gt;Graphite's data&lt;/a&gt; shows that only 24% of PRs exceeding 1,000 lines of code receive any review comments at all. Reviewers don't carefully evaluate large AI-generated changesets. They rubber-stamp them.&lt;/p&gt;

&lt;p&gt;You've neither designed nor learned. You've orchestrated. Maybe this gets you the outcome you want. But you're trading a competitive edge that was scarce (genuine validation and deep understanding) for one that everyone can buy for $200 per month (rapid build time). When everyone has access to something, that's not a competitive advantage - that's table stakes.&lt;/p&gt;

&lt;h2&gt;Was Agile a Stepping Stone?&lt;/h2&gt;

&lt;p&gt;Let's consider the devil's advocate position. Perhaps Agile was the best we could do &lt;em&gt;given slow build times&lt;/em&gt;, but it was never the ideal end state. Maybe waterfall was always conceptually right. Plan comprehensively, build once, ship. If building is now near-instant, doesn't comprehensive upfront planning make sense again?&lt;/p&gt;

&lt;p&gt;No. Because the argument rests on a false premise: that when something is wrong, you can "just regenerate."&lt;/p&gt;

&lt;p&gt;You can't. You have two options, and both are bad. Option one, you do a partial regeneration, patching the parts that were wrong while keeping the parts that weren't. This leaves echoes. Maybe it fixes the immediate problem, but it will never be truly clean. Remnants of invalid architecture accumulate as tech debt, and over time your architecture reflects every wrong assumption you ever made, partially corrected but never resolved. Option two, you do a full regeneration. Throw it all away and start again. Assuming &lt;em&gt;some&lt;/em&gt; of it was fine, you've now discarded working code and spun the wheel of chance again, hoping the AI produces something better this time. This isn't engineering. This is gambling.&lt;/p&gt;

&lt;p&gt;As Eisenhower put it, "Plans are worthless, but planning is everything." Understanding doesn't come from the plan itself. It comes from the process of planning and then confronting reality. The plan is a hypothesis. Iteration is how you test it. AI-accelerated waterfall risks skipping the thinking entirely, because doing is so cheap that thinking feels like a waste of time.&lt;/p&gt;

&lt;p&gt;An &lt;a href="https://www.infoq.com/articles/PDCA-AI-code-generation/" rel="noopener noreferrer"&gt;InfoQ experimental study&lt;/a&gt; found that in unstructured AI coding sessions, 80% of tokens were spent &lt;em&gt;after&lt;/em&gt; the agent declared the task complete. All of that effort went into debugging, resolving incomplete implementation, and correcting assumptions. The "build fast, fix later" approach doesn't even save time. The thinking you skip upfront just comes back as rework downstream.&lt;/p&gt;

&lt;h2&gt;Theory, Meet Practice&lt;/h2&gt;

&lt;p&gt;Even with those problems, you might still feel like the logic holds. If we just plan &lt;em&gt;well enough&lt;/em&gt;, AI-accelerated waterfall should work. It makes intuitive sense.&lt;/p&gt;

&lt;p&gt;But in practice, reality continues to surprise us. We're just surprised faster, and more expensively, because we built more before discovering we were wrong.&lt;/p&gt;

&lt;p&gt;When building is cheap, it &lt;em&gt;feels&lt;/em&gt; wasteful to spend time planning and validating. Why spec it out when you can just build it and see? Why write acceptance criteria when the AI can generate the feature in twenty minutes? Because the cost was never in the building. The cost is in being wrong at scale. Wrong assumptions baked into architecture. Wrong UX shipped to customers. Wrong patterns replicated across the codebase by an AI that doesn't know they're wrong, and will happily propagate them into every file it touches.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.augmentcode.com/guides/ai-coding-agents-for-spec-driven-development-automation" rel="noopener noreferrer"&gt;Augment Code found&lt;/a&gt; that multi-file AI coding tasks succeed at only 19.36% accuracy, while single-function tasks achieve 87.2%. The difference is specification quality. The less you think before generating, the worse the output, and this scales non-linearly with complexity. A vague prompt for a small function might produce something close enough. A vague prompt for a multi-service feature produces something that looks correct, passes a cursory review, and fails in production in ways you never anticipated because you never thought through the edge cases.&lt;/p&gt;

&lt;p&gt;There's a deeper problem here too. When you tell an AI "give me all the users born after a certain date who like cheese," you get working code. But you have no idea how it works. You didn't think about what data structure to use, how to filter efficiently, or what happens when the dataset is large. You skipped all of that, and the code runs, so it feels like it doesn't matter.&lt;/p&gt;

&lt;p&gt;But it does matter, because that thinking is where understanding comes from. When a developer solves a problem by hand, they spend far more time thinking about it than writing code. They need to understand the API they're calling, the concept of the operations they're performing, and they need to sequence the solution in their head. This compounds over time and becomes knowledge. AI lets you skip straight to a working result without building that mental model. You can &lt;em&gt;use&lt;/em&gt; the output without &lt;em&gt;understanding&lt;/em&gt; the output. Which is fine until something breaks, or until you need to make a judgment call the AI can't make for you.&lt;/p&gt;
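
&lt;p&gt;To make that concrete, here is roughly what solving the cheese query by hand looks like. The user shape, data, and cutoff date are invented for illustration, but even this small amount of manual work forces the decisions the prompt skips: what the records look like, how dates compare, and that this is a linear scan.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Hypothetical user records; shape and data are illustrative only.
const users = [
  { name: 'Ada', born: '1995-04-12', likes: ['cheese', 'tea'] },
  { name: 'Ben', born: '1978-09-30', likes: ['cheese'] },
  { name: 'Cam', born: '2001-01-05', likes: ['coffee'] },
];

function usersBornAfterWhoLikeCheese(list, cutoff) {
  return list
    // Writing this yourself forces the question of how dates compare:
    // ISO-8601 date strings order correctly as plain strings.
    .filter((u) => u.born > cutoff)
    // A linear scan over the whole array: fine here, a deliberate
    // trade-off you would need to revisit at scale.
    .filter((u) => u.likes.includes('cheese'));
}

console.log(usersBornAfterWhoLikeCheese(users, '1990-01-01').map((u) => u.name));
// [ 'Ada' ]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;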

&lt;h2&gt;Agile With Larger Brushstrokes&lt;/h2&gt;

&lt;p&gt;So what actually works? Not abandoning discipline for speed. But not clinging to pre-AI chunk sizes either. The old model of sub-200 line PRs and two-week sprints was optimised for a world where building was slow. That constraint has genuinely changed.&lt;/p&gt;

&lt;p&gt;Think of it this way. In waterfall, you might plan six months of work, build it, and find out at the end whether your assumptions were right. Agile compressed that. You plan a task that might be sixty minutes of work, build it, validate, and iterate. What AI enables is somewhere in between: you plan six hours of work, generate it, and validate through the stack. The planning horizon grew because AI made implementation faster, but it didn't grow to six months. The feedback loops are still short. You're still validating continuously. The chunks got bigger, but the discipline didn't disappear.&lt;/p&gt;

&lt;p&gt;AI changed the cost of building, but it didn't change the cost of being wrong. So the chunk size can grow, but only if the feedback loops are preserved at the right points. Concretely, this looks like three shifts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Specs as collaborative hypotheses, not contracts.&lt;/strong&gt; Product, design, and engineering converge on a shared specification &lt;em&gt;before&lt;/em&gt; AI generates anything. Not Big Design Up Front. A focused conversation, fifteen to sixty minutes, that surfaces assumptions, defines edge cases, and gives the AI coherent direction. Pre-AI, developers accumulated this understanding incidentally during slow implementation. AI removes that incidental learning, so you need to replace it with something deliberate. The spec is that replacement. It's a hypothesis about what to build and how, not a guarantee that you've thought of everything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stacked PRs as iterative delivery.&lt;/strong&gt; AI can generate a large changeset in hours, but it shouldn't land as a single monolithic pull request. Instead, it lands as three to five sequential PRs, each validated independently. Preview deployments per PR mean design and product see real output continuously, not at a staging gate at the end. Each PR in the stack is a learning checkpoint where assumptions meet reality. A monolithic AI-generated PR is waterfall at the code level, while a stack of validated PRs is Agile at the code level.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layered review that matches the new reality.&lt;/strong&gt; AI review tools handle the mechanical checks: style, common bugs, security patterns. Humans focus on what AI genuinely can't judge. Does this architecture make sense for where we're headed? Does this match what the customer actually needs? The review bottleneck isn't removed. It's redirected to where human judgment is irreplaceable.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.faros.ai/blog/key-takeaways-from-the-dora-report-2025" rel="noopener noreferrer"&gt;DORA Report&lt;/a&gt; is explicit about this: "working in small batches amplifies AI's positive effects." Teams with strong engineering discipline see dramatically better outcomes from AI adoption. Teams without it see negative ROI, producing technical debt faster than they ever could by hand. The discipline isn't the obstacle. It's the multiplier.&lt;/p&gt;

&lt;h2&gt;The Bottleneck Moved. Fix It, Don't Remove It.&lt;/h2&gt;

&lt;p&gt;The teams struggling with AI adoption are the ones treating every source of friction as an obstacle to remove. Code review is slow? Skip it. Sprint planning takes time? Drop it. Writing tests feels redundant when the AI code works? Don't bother.&lt;/p&gt;

&lt;p&gt;The teams succeeding are the ones recognising that the bottleneck has shifted from building to understanding, and investing accordingly.&lt;/p&gt;

&lt;p&gt;Pre-AI, understanding came partly for free through the act of writing code. AI removed that incidental learning, and nothing automatically replaced it. The practices that work in an AI-native world (spec-driven development, stacked validation, layered review) aren't new overhead bolted onto a fast process. They're the deliberate replacement of understanding that used to come for free during slow implementation. Without them, you're building fast but building blind.&lt;/p&gt;

&lt;p&gt;You can have the speed. You can have the quality. But only if you invest in the thinking that the speed demands. The competitive advantage was never building fast. Everyone can do that now. The competitive advantage is understanding what to build, and knowing that what you built is right. That was always the hard part. It still is.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>codequality</category>
      <category>productivity</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Automated Code Review: Benefits, Tools &amp; Implementation (2026 Guide)</title>
      <dc:creator>Cameron Pavey</dc:creator>
      <pubDate>Thu, 05 Mar 2026 11:24:29 +0000</pubDate>
      <link>https://dev.to/cpave3/automated-code-review-benefits-tools-implementation-2026-guide-5dgd</link>
      <guid>https://dev.to/cpave3/automated-code-review-benefits-tools-implementation-2026-guide-5dgd</guid>
      <description>&lt;p&gt;Code review has become the single biggest bottleneck in modern software development. As AI coding tools accelerate generation, with &lt;a href="https://www.elitebrains.com/blog/aI-generated-code-statistics-2025" rel="noopener noreferrer"&gt;41% of all code now AI-assisted&lt;/a&gt;, review queues have ballooned, creating a paradox where individual developer speed rises while organizational throughput stalls or declines. The &lt;a href="https://cloud.google.com/blog/products/devops-sre/announcing-the-2024-dora-report" rel="noopener noreferrer"&gt;DORA 2024 report&lt;/a&gt; found that a 25% increase in AI tool adoption correlated with a 7.2% decrease in delivery stability, largely because AI enables larger changesets that overwhelm review capacity.&lt;/p&gt;

&lt;p&gt;This guide walks you through the three levels of automated code review, from basic linting through static analysis to AI-powered semantic analysis, and shows how to implement a system that turns review from a bottleneck into a competitive advantage.&lt;/p&gt;

&lt;p&gt;The stakes are real. Research &lt;a href="https://www.it-cisq.org/cisq-files/pdf/CPSQ-2020-report.pdf" rel="noopener noreferrer"&gt;consistently shows&lt;/a&gt; that a bug caught in production costs 10x more than one found during design, with &lt;a href="https://www.theregister.com/2021/07/22/bugs_expense_bs/" rel="noopener noreferrer"&gt;some estimates&lt;/a&gt; putting that multiplier as high as 100x. The &lt;a href="https://www.it-cisq.org/wp-content/uploads/sites/6/2022/11/CPSQ-Report-Nov-22-2.pdf" rel="noopener noreferrer"&gt;Consortium for IT Software Quality&lt;/a&gt; pegs the total US cost of poor software quality at $2.41 trillion annually. Yet &lt;a href="https://www.linkedin.com/pulse/why-your-pull-requests-stuck-how-unstick-them-daksh-guard-09gyc/" rel="noopener noreferrer"&gt;analysis of 730,000+ pull requests across 26,000 developers&lt;/a&gt; reveals that PRs sit idle for 5 out of every 7 days of cycle time. Automated code review directly attacks this gap by catching defects earlier, accelerating merge velocity, and freeing human reviewers to focus on architecture and business logic.&lt;/p&gt;

&lt;h2&gt;The AI code explosion has made review the new constraint&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://www.faros.ai/blog/ai-software-engineering" rel="noopener noreferrer"&gt;2025 Faros AI study&lt;/a&gt; of 10,000+ developers found that engineers using AI tools complete 21% more tasks and merge 98% more PRs, but PR review time increased by 91%. Teams that once handled 10 to 15 PRs per week now face 50 to 100. Features that take 2 hours to generate can require 4 hours to review. &lt;a href="https://linearb.io/resources/engineering-benchmarks" rel="noopener noreferrer"&gt;LinearB's 2025 benchmark&lt;/a&gt; of 8.1 million PRs confirmed the pattern: AI-generated PRs wait 4.6x longer before a reviewer picks them up.&lt;/p&gt;

&lt;p&gt;More code is entering pipelines than human reviewers can properly validate. A &lt;a href="https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report" rel="noopener noreferrer"&gt;CodeRabbit analysis&lt;/a&gt; of 470 GitHub PRs found AI-generated code produces 1.7x more issues than human-written code, logic errors up 75%, security vulnerabilities up 1.5 to 2x, and performance inefficiencies appearing 8x more frequently. The &lt;a href="https://www.sonarsource.com/state-of-code-developer-survey-report.pdf" rel="noopener noreferrer"&gt;Sonar 2026 State of Code&lt;/a&gt; survey confirmed that 96% of developers don't fully trust AI-generated code's functional accuracy, yet only 48% always verify it before committing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F389ssjpay2me8urq1ir9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F389ssjpay2me8urq1ir9.png" alt="The cycle of increasing presssure" width="759" height="1660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DORA's 2024 research identified the root cause: AI tools violate &lt;a href="https://graphite.com/guides/best-practices-managing-pr-size" rel="noopener noreferrer"&gt;small-batch principles&lt;/a&gt; by enabling larger changesets that increase risk. &lt;a href="https://graphite.com/guides/common-developer-kpis-graphite-insights" rel="noopener noreferrer"&gt;Elite-performing teams&lt;/a&gt; deploy multiple times daily with sub-5% change failure rates. However, AI adoption without review automation pushes teams toward larger batches, eroding the very practices that make elite performance possible. The path forward is automating the review process itself, not just code generation.&lt;/p&gt;

&lt;h2&gt;Level 1: linting and formatting eliminate the noise&lt;/h2&gt;

&lt;p&gt;The foundation of any automated review system is deterministic tooling that enforces consistency and catches syntax-level issues before they reach human reviewers. This layer eliminates style debates entirely and ensures every PR starts from a clean baseline.&lt;/p&gt;

&lt;p&gt;Linters analyse your code for logical errors, anti-patterns, and style violations. Rather than checking whether code runs, they encode your team's standards as rules applied automatically on every change. Formatters handle a narrower but equally important job: they take any valid code and rewrite it into a single canonical style, making diffs cleaner and reviews faster. The two tools work in tandem, with the linter catching what you mean, and the formatter controlling how it looks.&lt;/p&gt;

&lt;p&gt;In the JavaScript ecosystem, &lt;a href="https://eslint.org/" rel="noopener noreferrer"&gt;ESLint&lt;/a&gt; and &lt;a href="https://prettier.io/" rel="noopener noreferrer"&gt;Prettier&lt;/a&gt; are the dominant tools for these roles respectively, and both saw significant releases in early 2026. ESLint's v10 completed a multi-year architectural overhaul, added multithreading for large codebases, and expanded beyond JavaScript to cover CSS, HTML, JSON, and Markdown. Prettier's v3.8 introduced a Rust-powered CLI with meaningful speed improvements. Together they cover virtually every file type in a modern web project.&lt;/p&gt;
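
&lt;p&gt;As a minimal sketch, assuming ESLint v9+ flat config and the &lt;code&gt;@eslint/js&lt;/code&gt; package (the rules chosen here are illustrative, not a recommended set), an &lt;code&gt;eslint.config.js&lt;/code&gt; might look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// eslint.config.js (flat config, the default since ESLint v9)
import js from '@eslint/js';

export default [
  js.configs.recommended, // baseline recommended rules
  {
    files: ['**/*.js'],
    rules: {
      'no-unused-vars': 'error', // catch dead code a formatter won't touch
      eqeqeq: 'error',           // require strict equality
    },
  },
];
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;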

&lt;p&gt;Implementing both via GitHub Actions is straightforward and should be the first automation any team deploys:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Code Quality&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;lint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-node@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;node-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;20'&lt;/span&gt;
          &lt;span class="na"&gt;cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;npm'&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm ci&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npx eslint . --cache --max-warnings &lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npx prettier --check .&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In CI, run formatters in &lt;code&gt;--check&lt;/code&gt; mode (developers should fix issues locally) and enforce passing checks via branch protection rules. Adding ESLint caching and parallel jobs per language keeps feedback under 30 seconds, which is critical for developer adoption. Pre-commit hooks using tools like Husky and lint-staged catch issues before they even reach CI.&lt;/p&gt;
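
&lt;p&gt;A typical wiring, assuming Husky v9 and lint-staged (the globs and commands are illustrative), keeps the configuration in &lt;code&gt;package.json&lt;/code&gt;, with a &lt;code&gt;.husky/pre-commit&lt;/code&gt; file that simply runs &lt;code&gt;npx lint-staged&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "scripts": {
    "prepare": "husky"
  },
  "lint-staged": {
    "*.{js,jsx,ts,tsx}": ["eslint --fix --max-warnings 0", "prettier --write"],
    "*.{json,md,yml}": "prettier --write"
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Because lint-staged only touches staged files, the hook stays fast even on large repositories.&lt;/p&gt;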

&lt;h2&gt;Level 2: SAST and security scanning catch what linters miss&lt;/h2&gt;

&lt;p&gt;Static Application Security Testing tools analyse code for vulnerabilities, complexity, and deeper quality issues that pattern-based linters cannot detect. &lt;a href="https://www.sonarsource.com/products/sonarqube/whats-new/2026-1/" rel="noopener noreferrer"&gt;SonarQube Server 2026.1 LTA&lt;/a&gt; leads this category with support for 30+ languages, advanced taint analysis tracking data flow across functions and files, and detection of &lt;a href="https://graphite.com/guides/owasp-code-review-guidelines" rel="noopener noreferrer"&gt;OWASP Top 10 vulnerabilities&lt;/a&gt; including SQL injection, XSS, SSRF, command injection, and path traversal. SonarQube's AI CodeFix feature uses LLMs to generate remediation suggestions for detected issues, while its AI Code Assurance capability automatically identifies and applies stricter quality gates to AI-generated code.&lt;/p&gt;

&lt;p&gt;SAST tools commonly detect injection flaws (SQL injection, XSS, command injection, LDAP injection, SSRF, and XXE), data exposure issues (hardcoded secrets and credentials, sensitive data in logs, missing encryption), memory and buffer issues (buffer overflows, use-after-free, integer overflows), and input validation failures (path traversal, insecure deserialization, unvalidated redirects).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fti59ftgtbuoqzue9komn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fti59ftgtbuoqzue9komn.png" alt="The different levels of automated review" width="800" height="1062"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Detection rates vary significantly. On the OWASP Benchmark, modern AI-enhanced SAST tools like &lt;a href="https://qwiet.ai/beating-the-owasp-benchmark/" rel="noopener noreferrer"&gt;Qwiet AI have achieved&lt;/a&gt; 100% true positive rates with 25% false positive rates, while traditional tools historically scored around 33%. SonarQube achieves false positive rates as low as 1% on mature codebases. The key advance in 2025 to 2026 has been combining SAST with LLM-based post-processing. &lt;a href="https://aisecurity-portal.org/en/literature-database/llm-driven-sast-genius-a-hybrid-static-analysis-framework-for-comprehensive-and-actionable-security/" rel="noopener noreferrer"&gt;One study showed this combination reduced false positives by 91%&lt;/a&gt; compared to standalone Semgrep scanning.&lt;/p&gt;

&lt;p&gt;SonarQube's &lt;a href="https://docs.sonarsource.com/sonarqube-server/10.3/user-guide/clean-as-you-code/" rel="noopener noreferrer"&gt;Clean as You Code&lt;/a&gt; philosophy, where quality gates apply only to new code rather than the entire codebase, makes adoption practical for legacy projects. Configure gates to fail on any new blocker or critical vulnerability, while incrementally addressing existing technical debt. This approach follows a zero-noise principle: only flag issues developers can act on right now.&lt;/p&gt;
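
&lt;p&gt;As a sketch, a minimal &lt;code&gt;sonar-project.properties&lt;/code&gt; that makes the scanner wait for the quality gate result and fail the build when new code doesn't pass (the project key and paths are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;# sonar-project.properties (project key and paths are placeholders)
sonar.projectKey=my-app
sonar.sources=src
sonar.tests=tests
# Block the pipeline until the server evaluates the quality gate,
# and fail if new code doesn't meet it
sonar.qualitygate.wait=true
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;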

&lt;h2&gt;Level 3: AI-powered review and workflow platforms change everything&lt;/h2&gt;

&lt;p&gt;The most significant shift in 2025 to 2026 has been the emergence of &lt;a href="https://graphite.com/guides/ai-code-reviewers-how-they-work" rel="noopener noreferrer"&gt;AI-powered code review&lt;/a&gt; that understands code semantics, developer intent, and project context, moving well beyond pattern matching into genuine comprehension. This is where platforms like &lt;strong&gt;Graphite&lt;/strong&gt; operate, combining AI review intelligence with workflow automation to address the full "outer loop" of development.&lt;/p&gt;

&lt;p&gt;The AI foundation is now proven. Anthropic's Claude model family powers multiple code review tools across the Claude Sonnet, Haiku, and Opus tiers, balancing capability, speed, and cost for different review workloads. Claude Code includes a built-in &lt;code&gt;/code-review&lt;/code&gt; command that launches four parallel review agents, scores issues by confidence, and surfaces only findings above an 80% confidence threshold — important for &lt;a href="https://graphite.com/guides/ai-code-review-false-positives" rel="noopener noreferrer"&gt;managing false positives&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://graphite.com/" rel="noopener noreferrer"&gt;Graphite&lt;/a&gt; exemplifies the Level 3 platform approach. Following its acquisition by Cursor in December 2025 (at a valuation exceeding its previous $290M), Graphite serves 100,000+ developers across 500+ companies including &lt;a href="https://graphite.com/customer/shopify" rel="noopener noreferrer"&gt;Shopify&lt;/a&gt;, Snowflake, Figma, and Notion. Its thesis: AI tools have dramatically accelerated the "inner loop" of writing code, making the "outer loop" of review, merge, and deploy the new constraint. Graphite addresses this with four integrated capabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://graphite.com/features/agent" rel="noopener noreferrer"&gt;Graphite Agent&lt;/a&gt;&lt;/strong&gt; provides &lt;a href="https://graphite.com/features/ai-reviews" rel="noopener noreferrer"&gt;AI-powered PR review&lt;/a&gt; built on Anthropic's Claude. Unlike general-purpose AI reviewers with a 5-15% false positive rate, it achieves a &lt;a href="https://www.graphite.com/guides/ai-code-review-false-positives" rel="noopener noreferrer"&gt;5-8%&lt;/a&gt; false positive rate through multi-step validation including voting, chain-of-reasoning, and self-critique. The results are compelling: &lt;a href="https://graphite.com/guides/best-ai-pull-request-reviewers-2025" rel="noopener noreferrer"&gt;67% of AI suggestions lead to actual code changes&lt;/a&gt;, and the tool maintains a 96% positive feedback rate from developers. You can &lt;a href="https://graphite.com/guides/ai-code-review-implementation-best-practices" rel="noopener noreferrer"&gt;define custom review rules&lt;/a&gt; in plain language, something like "ensure auth-service never makes direct database calls", and Graphite Agent enforces them on every PR.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://graphite.com/guides/benefits-of-stacked-diffs-in-code-review" rel="noopener noreferrer"&gt;Stacked PRs&lt;/a&gt;&lt;/strong&gt; directly address the batch-size problem identified by DORA. &lt;a href="https://www.propelcode.ai/blog/pr-size-impact-code-review-quality-data-study" rel="noopener noreferrer"&gt;Analysis of 50,000+ PRs&lt;/a&gt; shows defect detection rates drop from 87% for PRs under 100 lines to just 28% for PRs over 1,000 lines. Stacking breaks large features into small, dependent PRs that build on each other. &lt;a href="https://graphite.com/docs/graphite-cli" rel="noopener noreferrer"&gt;Graphite's CLI&lt;/a&gt; (&lt;code&gt;gt stack submit&lt;/code&gt;) manages the entire stack lifecycle including automatic recursive rebasing. The impact is measurable: &lt;a href="https://graphite.com/customer/semgrep" rel="noopener noreferrer"&gt;Semgrep&lt;/a&gt; saw a 65% increase in code shipped per engineer after adopting stacking, while &lt;a href="https://graphite.com/customer/shopify" rel="noopener noreferrer"&gt;Shopify&lt;/a&gt; reports 33% more PRs shipped per developer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://graphite.com/features/merge-queue" rel="noopener noreferrer"&gt;Merge Queue&lt;/a&gt;&lt;/strong&gt; is the only stack-aware merge queue available, processing dependent PRs in parallel while ensuring the main branch stays green. It supports batching multiple PRs to &lt;a href="https://graphite.com/use-cases/reducing-ci-costs" rel="noopener noreferrer"&gt;reduce CI costs&lt;/a&gt; and hot-fix prioritization for critical changes.&lt;/p&gt;

&lt;p&gt;Customer metrics demonstrate the platform effect. &lt;a href="https://graphite.com/customer/ramp" rel="noopener noreferrer"&gt;Ramp&lt;/a&gt; achieved a 74% decrease in median time between merged PRs (from 10 hours to 3). &lt;a href="https://graphite.com/customer/asana" rel="noopener noreferrer"&gt;Asana&lt;/a&gt; engineers shipped 21% more code and saved 7 hours per week per engineer within 30 days. Across all customers, the average Graphite user merges 26% more PRs while reducing median PR size by 8 to 11%.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rolling out automation without overwhelming your team
&lt;/h2&gt;

&lt;p&gt;The most common failure mode is deploying too many blocking checks at once, triggering alert fatigue that erodes developer trust. Research shows false positives are the &lt;a href="https://graphite.com/guides/benefits-of-automated-code-review-tools" rel="noopener noreferrer"&gt;number-one adoption killer&lt;/a&gt; for automated review tools. The solution is a progressive, trust-building rollout.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 1 (Weeks 1 to 4): Foundation.&lt;/strong&gt; Deploy ESLint and Prettier as non-blocking CI checks. Add PR size warnings for changes exceeding 400 lines. Establish baseline metrics: current cycle time, defect escape rate, and PR merge frequency. This phase should be completely frictionless — developers see suggestions but are never blocked.&lt;/p&gt;
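&lt;p&gt;As a sketch of what the non-blocking size check in this phase might look like: the function name and the idea that a CI step supplies the changed-line count are assumptions for illustration; only the 400-line threshold comes from the rollout plan above.&lt;/p&gt;

```javascript
// Hypothetical non-blocking PR size check: warn, never fail the build.
// Assumes the CI step has already computed `linesChanged`
// (for example, from `git diff --shortstat`).
function prSizeWarning(linesChanged, threshold = 400) {
  if (linesChanged <= threshold) {
    return null; // small PR: stay silent (zero-noise principle)
  }
  return `This PR changes ${linesChanged} lines (threshold: ${threshold}). ` +
    `Consider splitting it into smaller, stacked PRs for more effective review.`;
}

console.log(prSizeWarning(120)); // null
console.log(prSizeWarning(950)); // warning message
```

&lt;p&gt;Because the check only ever emits a comment and never blocks the merge, it builds awareness of batch size without adding friction.&lt;/p&gt;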

&lt;p&gt;&lt;strong&gt;Phase 2 (Weeks 5 to 10): Security gates.&lt;/strong&gt; Introduce SonarQube or equivalent SAST scanning in advisory mode. Configure severity thresholds so only critical security findings (SQL injection, hardcoded secrets) become blocking. All other findings appear as PR comments. Begin tracking false positive rates and tune rules aggressively — a finding that never gets fixed is noise, not signal.&lt;/p&gt;
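&lt;p&gt;The advisory-mode split amounts to a small triage filter. This is only an illustration: the severity names and the finding shape here are hypothetical, since each SAST tool defines its own taxonomy.&lt;/p&gt;

```javascript
// Hypothetical severity gate: only critical findings fail the build;
// everything else is surfaced as an advisory PR comment.
const BLOCKING_SEVERITIES = new Set(["critical"]);

function triageFindings(findings) {
  const blocking = findings.filter((f) => BLOCKING_SEVERITIES.has(f.severity));
  const advisory = findings.filter((f) => !BLOCKING_SEVERITIES.has(f.severity));
  return { blocking, advisory, shouldFailBuild: blocking.length > 0 };
}

const result = triageFindings([
  { rule: "sql-injection", severity: "critical" },
  { rule: "unused-variable", severity: "minor" },
]);
// result.shouldFailBuild === true; one blocking finding, one advisory comment
```

&lt;p&gt;Tuning then becomes a matter of moving rules in and out of the blocking set as false positive rates stabilise.&lt;/p&gt;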

&lt;p&gt;&lt;strong&gt;Phase 3 (Weeks 11 to 16): AI-powered review.&lt;/strong&gt; Enable &lt;a href="https://graphite.com/features/agent" rel="noopener noreferrer"&gt;Graphite Agent&lt;/a&gt; or equivalent AI review as a non-blocking reviewer. Start with 1 to 3 volunteer teams who provide feedback on suggestion quality. Use this phase to configure custom team rules and &lt;a href="https://graphite.com/guides/calibrate-ai-code-review-feedback" rel="noopener noreferrer"&gt;calibrate the AI&lt;/a&gt; to your codebase's conventions. The key metric to track is acceptance rate — the percentage of AI comments that result in code changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 4 (Week 17+): Full platform.&lt;/strong&gt; Introduce &lt;a href="https://graphite.com/guides/stacked-diffs" rel="noopener noreferrer"&gt;stacked PR workflows&lt;/a&gt;, &lt;a href="https://graphite.com/docs/graphite-merge-queue" rel="noopener noreferrer"&gt;merge queue automation&lt;/a&gt;, and promote AI review to soft-gate status (require acknowledgment of critical findings). Implement &lt;a href="https://graphite.com/docs/insights" rel="noopener noreferrer"&gt;productivity insights&lt;/a&gt; to measure before/after impact.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5r4ggrz8mzbrczvwh40f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5r4ggrz8mzbrczvwh40f.png" alt="The stages of AI code review adoption" width="552" height="1724"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Three principles govern successful rollouts. First, start non-blocking and graduate to blocking only after false positive rates stabilize below 5%. Second, integrate into existing workflows. Review feedback should appear as inline PR comments, not in separate dashboards. Third, measure and share wins: when developers see that automated review caught a real bug or saved them 30 minutes, adoption becomes self-reinforcing. &lt;/p&gt;

&lt;h2&gt;
  
  
  The cost equation favors aggressive automation
&lt;/h2&gt;

&lt;p&gt;The financial case for automated code review is straightforward to model. A team processing 200 PRs monthly that saves 20 minutes of reviewer time per PR at an $80 loaded hourly rate generates roughly $64,000 in annual savings from review efficiency alone (2,400 PRs × 20 minutes × $80/hour). Blocking even 10 high-severity bugs per quarter that would have cost $5,000 each in production adds another $200,000 in avoided remediation costs. Against typical platform costs of $20,000 to $40,000 annually for a 25-person team, the total benefit of roughly $264,000 delivers an ROI of between 6.5:1 and 13:1 in the first year, depending on platform tier.&lt;/p&gt;
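&lt;p&gt;The model is simple enough to sketch directly. The inputs below are the same illustrative assumptions as in the paragraph above, not measured data:&lt;/p&gt;

```javascript
// Back-of-envelope ROI model for automated code review (illustrative inputs).
function reviewRoi({ prsPerMonth, minutesSavedPerPr, hourlyRate,
                     bugsBlockedPerQuarter, costPerBug, platformCost }) {
  // Annual reviewer-time savings: PRs per year * hours saved per PR * rate
  const reviewSavings = (prsPerMonth * 12 * minutesSavedPerPr / 60) * hourlyRate;
  // Annual avoided remediation cost from blocked production bugs
  const defectSavings = bugsBlockedPerQuarter * 4 * costPerBug;
  const totalBenefit = reviewSavings + defectSavings;
  return { reviewSavings, defectSavings, totalBenefit,
           roi: totalBenefit / platformCost };
}

const model = reviewRoi({
  prsPerMonth: 200, minutesSavedPerPr: 20, hourlyRate: 80,
  bugsBlockedPerQuarter: 10, costPerBug: 5000, platformCost: 40000,
});
// model.reviewSavings === 64000, model.defectSavings === 200000,
// model.totalBenefit === 264000, model.roi === 6.6
```

&lt;p&gt;Swapping in your own PR volume, loaded rate, and platform pricing makes the business case concrete for your team.&lt;/p&gt;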

&lt;p&gt;The deeper value is strategic, though. DORA research consistently shows that elite teams combine fast delivery with high stability, and they achieve this through small batches, automated testing, and rapid feedback loops. Automated code review is the mechanism that makes this possible at scale, especially as AI-generated code volumes continue to grow. Teams that treat review as an afterthought will face compounding technical debt: &lt;a href="https://byteiota.com/ai-tech-debt-crisis-75-hit-by-2026-studies-warn/" rel="noopener noreferrer"&gt;75% of technology decision-makers&lt;/a&gt; are projected to face moderate-to-severe technical debt from AI-speed practices by end of 2026.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The automated code review landscape in 2026 has matured into a clear three-level stack. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Level 1:&lt;/strong&gt; Linting with ESLint and Prettier. This is table stakes that every team should have deployed.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Level 2:&lt;/strong&gt; SAST with tools like SonarQube. This catches security vulnerabilities and code smells that linters miss.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Level 3:&lt;/strong&gt; &lt;a href="https://graphite.com/guides/ai-code-review-agents" rel="noopener noreferrer"&gt;AI-powered semantic review&lt;/a&gt; combined with workflow automation. This represents the frontier, and it's where the highest-impact gains live.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Platforms like &lt;a href="https://graphite.com/" rel="noopener noreferrer"&gt;Graphite&lt;/a&gt; that integrate AI review, stacked PRs, and merge automation into a unified system address the full outer-loop bottleneck rather than just one piece of it. The data is clear: defect detection on small PRs is roughly three times higher than on large ones, and teams using integrated automation platforms ship 20 to 65% more code while maintaining or improving quality. For engineering leaders, the question is no longer whether to automate code review, but how quickly you can reach Level 3.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwareengineering</category>
      <category>programming</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Understanding Types; Static vs Dynamic, &amp; Strong vs Weak.</title>
      <dc:creator>Cameron Pavey</dc:creator>
      <pubDate>Thu, 10 Dec 2020 11:27:18 +0000</pubDate>
      <link>https://dev.to/cpave3/understanding-types-static-vs-dynamic-strong-vs-weak-4eh4</link>
      <guid>https://dev.to/cpave3/understanding-types-static-vs-dynamic-strong-vs-weak-4eh4</guid>
      <description>&lt;p&gt;Generally speaking, when talking about programming languages, most — if not all — languages can be classified by where they fall in a quadrant, with one axis containing Static and Dynamic typing, and the other containing Strong and Weak typing. But what do these terms mean?&lt;/p&gt;

&lt;h2&gt;
  
  
  Static Typing
&lt;/h2&gt;

&lt;p&gt;Essentially, static typing means that variable types are checked at “compile-time”, or before the code is executed.&lt;/p&gt;

&lt;p&gt;Let’s look at an example in TypeScript:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function foo(a: number) {
    if (typeof(a) === 'number') {
        return 'number';
    } else {
        return 'not number';
    }
}

foo(1);
foo('1'); // this will throw a compiler error, because it is not a number
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Attempting to pass a string into the function &lt;code&gt;foo&lt;/code&gt; after explicitly stating that it accepts numbers causes the compiler to emit an error such as&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Argument of type '"1"' is not assignable to parameter of type 'number'.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Dynamic Typing
&lt;/h2&gt;

&lt;p&gt;On the other hand, dynamic typing means that the variables’ types are checked on the fly, as the code is executed. Consider the following PHP code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function foo($a) {
    if (gettype($a) === 'integer') {
        return 'integer';
    } else {
        // This will never be evaluated
        return 'not integer';
    }
}

echo foo(1); // integer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Depending on what value is passed into the function &lt;code&gt;foo&lt;/code&gt;, the variable &lt;code&gt;$a&lt;/code&gt; could technically be any type, and its actual type is only known when the code is executed.&lt;/p&gt;

&lt;p&gt;On the other axis of our Types quadrant, we have Strong and Weak typing. This is a bit more confusing because there is no universal consensus on what these terms mean, even though they get thrown around a lot. That being said, let’s try to understand them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Strongly Typed
&lt;/h2&gt;

&lt;p&gt;A lot of — but not all — developers agree that the essence of a strongly typed language is that it does &lt;em&gt;not&lt;/em&gt; automatically convert a variable or value’s type to suit the current situation. This means that &lt;code&gt;"123"&lt;/code&gt; is always treated as a string and is never used as a number without manual conversion or intervention. Consider the following Python code, where &lt;code&gt;x&lt;/code&gt; is an int and &lt;code&gt;y&lt;/code&gt; is a string:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;z = x + y; // This will fail
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This fails and returns the following error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Traceback (most recent call last):
  File "main.py", line 3, in &amp;lt;module&amp;gt;
    z = x + y;
TypeError: unsupported operand type(s) for +: 'int' and 'str'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is because the two variables we tried to add together and assign to &lt;code&gt;z&lt;/code&gt; were of different types, and as the error says, adding an int and a string is not supported.&lt;/p&gt;

&lt;h2&gt;
  
  
  Weakly Typed
&lt;/h2&gt;

&lt;p&gt;A weakly typed language is — as you might expect — the opposite of what is described above. The interpreter or compiler attempts to make the best of what it is given by using variables in ways that might seem confusing at first, but make sense once we understand what they are doing.&lt;/p&gt;

&lt;p&gt;This means that in some situations — for example — an integer might be treated as if it were a string to suit the context of the situation.&lt;/p&gt;

&lt;p&gt;To demonstrate, let’s try the above example again in a weakly typed language, like JavaScript:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const z = x + y; // 12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that &lt;code&gt;z&lt;/code&gt; is the string &lt;code&gt;"12"&lt;/code&gt; instead of the number &lt;code&gt;3&lt;/code&gt;. Rather than performing the math operation we requested, JavaScript coerced the number to a string and concatenated the two values together.&lt;/p&gt;
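&lt;p&gt;If numeric addition is what you actually want, you have to convert explicitly, for example with JavaScript's built-in &lt;code&gt;Number&lt;/code&gt; function:&lt;/p&gt;

```javascript
const x = 1;
const y = "2";

// Explicit conversion restores the math operation we intended.
const sum = x + Number(y); // 3
const concat = x + y;      // "12"
```

&lt;p&gt;This manual step is exactly the "conversion or intervention" that strongly typed languages force on you up front.&lt;/p&gt;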




&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;Hopefully, this sheds some light on the minutiae of language typing systems for those who do not already know them. This is just a fundamental summary, and there is, of course, a lot more to it, but it should be a good starting point for understanding the finer details of your favourite programming languages. If you found this article helpful, please &lt;a href="https://twitter.com/cpave3"&gt;&lt;strong&gt;follow me on Twitter&lt;/strong&gt;&lt;/a&gt; for more content like this!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>100daysofcode</category>
      <category>codenewbie</category>
    </item>
    <item>
      <title>How I doubled my productivity while working from home</title>
      <dc:creator>Cameron Pavey</dc:creator>
      <pubDate>Sat, 05 Dec 2020 23:38:11 +0000</pubDate>
      <link>https://dev.to/cpave3/how-i-doubled-my-productivity-while-working-from-home-3537</link>
      <guid>https://dev.to/cpave3/how-i-doubled-my-productivity-while-working-from-home-3537</guid>
      <description>&lt;p&gt;This year has been an interesting one, not least of all because many people around the world - especially in the tech sector - ended up working from home for an extended period of time. &lt;/p&gt;

&lt;p&gt;Working from home used to have something of a workplace stigma around it. Employers were seemingly sceptical of how you could be productive in such a distracting environment. Needless to say, it's been proven it can be done, but what I wasn't expecting is how it would be the catalyst for an actual &lt;em&gt;increase&lt;/em&gt; in productivity. Let's take a look at some of the factors which contributed to this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Track your productivity
&lt;/h2&gt;

&lt;p&gt;The first step on any great transformative journey is to gather data: where you started from, what you do each step of the way, and ultimately where you end up. With something as subjective as productivity, this is hard to do. I've had good results using &lt;a href="https://www.rescuetime.com" rel="noopener noreferrer"&gt;RescueTime&lt;/a&gt;, a tool which shows me a summary of where I spent my time on any given day.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1607207285187%2FaqiKB7RpL.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1607207285187%2FaqiKB7RpL.png" alt="Screen Shot 2020-12-06 at 9.27.24 am.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As they say, hindsight is 20/20. Having this kind of information available makes it easy to reflect and see where you wasted time, and where you can win some extra productivity. &lt;/p&gt;

&lt;p&gt;(Whenever I see my stats, seeing how long I spend on Slack per day is a bit jarring at first, but then I remember that on average, office workers spend about 28% of their day on email.)&lt;/p&gt;

&lt;p&gt;Tracking where your time is spent isn't a step you should skip. It's like training your body. If you are trying to lose 10kg but never set foot on the scales, how will you know if it is working?&lt;/p&gt;

&lt;p&gt;If you are trying to boost your productivity, but have no way of measuring it, then good luck to you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Avoid multitasking
&lt;/h2&gt;

&lt;p&gt;Humans are good at a lot of things, but generally not multi-tasking, or remembering lots of small things. This leads to a well-known problem called "context switching", where your productivity will effectively plummet when you try to switch between multiple tasks.&lt;/p&gt;

&lt;p&gt;This happens for a few reasons, but essentially, the "context" that you create around a task consists of lots of small pieces of information which you need to keep in your head to complete the task. If at any point, someone or something interrupts your tasks, all of these small bits of context are at risk of being lost. Even if the distraction is only a minute long, the cost of switching between tasks will make the impact much greater than just 1 minute.&lt;/p&gt;

&lt;p&gt;According to &lt;a href="https://insights.sei.cmu.edu/devops/2015/03/addressing-the-detrimental-effects-of-context-switching-with-devops.html" rel="noopener noreferrer"&gt;Todd Waits&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;[...] Switching between projects requires an operational overhead for the team member to figure out where he or she left off, what needs to be done, how that work fits in the project, etc. Once a team member is assigned five projects, his or her ability to contribute to any given project drops below 10 percent, with 80 percent effort being lost to switching between project contexts.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The effects of context switching are well known and hard to escape. No matter how good you think you are at multitasking, chances are you are giving yourself far too much credit. The only way to properly combat context switching is to remove it from the equation as much as possible.&lt;/p&gt;

&lt;p&gt;Cal Newport's &lt;a href="https://www.amazon.com.au/Deep-Work-Focused-Success-Distracted/dp/0349411905/" rel="noopener noreferrer"&gt;excellent book "Deep Work"&lt;/a&gt; takes a deep dive into this topic and expands on systems which you can use to get the most out of the tasks that matter. If you're going to take anything away from this article, let it be this. &lt;/p&gt;

&lt;p&gt;Don't multitask. It doesn't work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Externalise everything
&lt;/h2&gt;

&lt;p&gt;This was the big one for me.&lt;/p&gt;

&lt;p&gt;It's easy to say "don't multitask", I know. In practice, it is a lot harder, because even if you don't plan to do it, all it takes is an errant Slack message and then, boom, you've context switched. (It's worth noting that RescueTime has an awesome Slack integration that mutes notifications during focus time. Killer feature.)&lt;/p&gt;

&lt;p&gt;How, then, should we solve this? After listening to &lt;a href="https://alifeofproductivity.com/the-productivity-project/" rel="noopener noreferrer"&gt;Chris Bailey's "The Productivity Project"&lt;/a&gt;, where he touches on the benefits of externalising your todo list, I became a little bit obsessed with lists. At first, I did it exactly how Chris suggested: if you have something you need to do, write it down so you don't have to keep thinking about it.&lt;/p&gt;

&lt;p&gt;Eventually, this evolved into something of a rolling "work journal". I tried a few tools and ultimately settled on  &lt;a href="https://www.notion.so" rel="noopener noreferrer"&gt;Notion&lt;/a&gt; (but you could use anything really). &lt;/p&gt;

&lt;p&gt;The idea is simple. We &lt;em&gt;know&lt;/em&gt; we will be interrupted, and we &lt;em&gt;know&lt;/em&gt; that when we are, it will tank our productivity because we will lose all the little bits of context in our head.&lt;/p&gt;

&lt;p&gt;What if we could externalise that context though?&lt;/p&gt;

&lt;p&gt;When working through complex issues with a lot of moving parts, any time there was a new noteworthy development, such as a discovery, problem, or solution, I would make a note of it. It doesn't need to be too detailed; it is just a note to your future self so that when you are inevitably distracted, your notes should contain all the information you need to pick up right where you left off.&lt;/p&gt;

&lt;p&gt;This information could be the challenges you are currently trying to solve, todo items you still need to complete, or even just small anecdotal quirks about the system you are working with that you don't want to dedicate mental space to remembering, but can't afford to forget.&lt;/p&gt;

&lt;p&gt;It doesn't sound like it should work half as well as it does.&lt;/p&gt;

&lt;p&gt;Employing this system, I've been able to largely mitigate the burden I've typically suffered when it comes to context switching. I don't have to worry about taking a lunch break and losing my place. I don't have to dread every incoming Slack message, and I am confident that when we return to the office in force, I won't have to worry as much about that casual tap on the shoulder causing my delicate mental house of cards to come crashing down around me.&lt;/p&gt;

&lt;p&gt;If this has given you some ideas to boost your productivity, or if you have some tricks of your own, I'd love to hear from you. Leave a comment below, or &lt;a href="https://twitter.com/cpave3" rel="noopener noreferrer"&gt;follow me on Twitter&lt;/a&gt; for more content like this.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>programming</category>
      <category>webdev</category>
      <category>100daysofcode</category>
    </item>
    <item>
      <title>Get Developer Super Powers with Vim</title>
      <dc:creator>Cameron Pavey</dc:creator>
      <pubDate>Sun, 18 Oct 2020 22:15:26 +0000</pubDate>
      <link>https://dev.to/cpave3/get-developer-super-powers-with-vim-5f6p</link>
      <guid>https://dev.to/cpave3/get-developer-super-powers-with-vim-5f6p</guid>
<description>&lt;p&gt;You've probably heard of Vim if you've been working with code for a while. Vim is a command-line text editor available for a lot of operating systems. Initially released in 1991, Vim is one of the most popular text editors in the computing community.&lt;/p&gt;

&lt;p&gt;So why should you care about this old text editor? Because it is pretty much ubiquitous, and worth learning unless your heart already belongs to an alternative like Emacs or even nano (if you really want to).&lt;/p&gt;

&lt;p&gt;Vim is scary at first. It does so much, and its shortcuts are unlikely to be ones you have used before. Even quitting out of the software is notoriously tricky for newcomers. Once you have learnt the basics, though, they will be the gift that keeps on giving, and I'll tell you why.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ubiquity
&lt;/h2&gt;

&lt;p&gt;Vim is everywhere, even when it isn't. For instance, pretty much any Linux or Mac operating system you touch will have Vim installed and ready to go. With the advent of the Windows Subsystem for Linux on Windows 10, Vim is now readily available on Windows as well (and could be downloaded there anyway before this).&lt;/p&gt;

&lt;p&gt;Most code editors also have Vim emulation plugins available, so even if you don't want to use Vim directly, you can still get all of the benefits it brings from the comfort of the code editor or IDE you know and love.&lt;/p&gt;

&lt;h2&gt;
  
  
  Portability
&lt;/h2&gt;

&lt;p&gt;With Vim, you learn the shortcuts and commands once and take them with you everywhere. As mentioned above, most code editors have Vim bindings available, and many coding websites like Codewars also support a Vim mode. This means that all of your devices, editors, and environments can share the same keybindings, so you don't have to carry the mental burden of memorising different shortcuts for different setups.&lt;/p&gt;

&lt;h2&gt;
  
  
  Productivity
&lt;/h2&gt;

&lt;p&gt;The real secret sauce that makes learning Vim worthwhile is the productivity benefit it offers. A key principle of Vim is to deliver maximum productivity and functionality from the keyboard, without needing to move your hand to the mouse, and without needing to leave the home row of the keyboard so much. This is the main factor behind some of the weird shortcuts and keybindings, like &lt;code&gt;:w&lt;/code&gt; to save instead of the more commonly used Ctrl + S.&lt;/p&gt;

&lt;p&gt;The main factor which drove me to become familiar with Vim was its portability, so that I would be more productive while working in remote SSH environments. Once I started incorporating it into my daily workflow through the JetBrains IdeaVim plugin and the corresponding VS Code plugin, I realised that the shortcuts and keybindings themselves were actually a huge time saver even when not working with remote systems. I quickly found myself using &lt;code&gt;v&lt;/code&gt; and &lt;code&gt;hjkl&lt;/code&gt; to select chunks of text instead of lifting my hand to use the mouse. I often use Vim commands to save, search, replace, copy, paste, and delete text, even though VS Code and PhpStorm have all these features natively. The reason is simple: it's just faster and more natural to do it all in Vim.&lt;/p&gt;

&lt;p&gt;Learning Vim is daunting at first, but give it some time and it will almost certainly be an investment which pays itself off in spades and serves you well for years to come.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>codenewbie</category>
    </item>
  </channel>
</rss>
