<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Gjøran Voldengen</title>
    <description>The latest articles on DEV Community by Gjøran Voldengen (@gjoranv).</description>
    <link>https://dev.to/gjoranv</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3812181%2Fa7a5414c-ecca-4d37-a79d-397e16ae88ad.jpg</url>
      <title>DEV Community: Gjøran Voldengen</title>
      <link>https://dev.to/gjoranv</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gjoranv"/>
    <language>en</language>
    <item>
      <title>I Stopped Reading Diffs. Here's What I Review Instead.</title>
      <dc:creator>Gjøran Voldengen</dc:creator>
      <pubDate>Mon, 30 Mar 2026 06:47:00 +0000</pubDate>
      <link>https://dev.to/gjoranv/i-stopped-reading-diffs-heres-what-i-review-instead-1khh</link>
      <guid>https://dev.to/gjoranv/i-stopped-reading-diffs-heres-what-i-review-instead-1khh</guid>
      <description>&lt;p&gt;AI coding tools made me 3x faster at writing code. But I'm not shipping 3x faster. I spend more time than ever waiting for reviews and handling review feedback. The bottleneck moved.&lt;/p&gt;

&lt;p&gt;This is the second article in a series about how I use Claude Code. The &lt;a href="https://dev.to/gjoranv/claude-code-doesnt-remember-heres-how-i-fixed-that-3ah6"&gt;first&lt;/a&gt; covered persistent execution plans and external memory.&lt;/p&gt;

&lt;p&gt;Code review tools handle the automated side well. Claude Code's built-in review dispatches multiple agents to catch bugs and flag style issues when a PR opens. That's useful, but it's not where the time goes. The time goes to the human review cycle: giving thoughtful feedback on a colleague's design decisions, responding to comments on your own PR, the back-and-forth that turns a draft into something mergeable. Some of that is pure wait time: a PR sits idle while reviewers are busy. The skills in this article speed up the other part, what happens when you actually sit down to review or respond.&lt;/p&gt;

&lt;p&gt;The answer isn't to review faster. It's to change what you're reviewing. Instead of reading diffs line by line, I review Claude's assessment of the changes in context of the full codebase. I focus on whether the analysis is right, whether it missed something, and whether the design makes sense. Two custom Claude Code skills handle the mechanics, one for each side of the review cycle.&lt;/p&gt;

&lt;h2&gt;Reviewing others' PRs&lt;/h2&gt;

&lt;p&gt;I run &lt;code&gt;/gh-review-pr owner/repo#123&lt;/code&gt;. Claude checks out the PR branch, reads the diff with full codebase access (it can follow imports, check how changed functions are used, and understand how the PR fits into the larger architecture), and presents findings grouped by severity:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Must fix
- src/api/handlers.go:47: Missing auth check on the new endpoint.
  All existing endpoints use requireAuth middleware.

Should fix
- src/api/handlers.go:62: Similar pagination logic exists in listUsers.
  Consider extracting a shared helper.

Nit
- Test name on line 15 could be more descriptive.

Praise
- Clean error handling throughout. The custom error types are well done.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The most useful thing the skill does is search the repo and the broader GitHub org for prior art before forming its review. If someone adds a caching layer, it checks whether there's already an established pattern elsewhere in the org. This catches the "we already have a way to do this" issues that are invisible from a diff alone, and it's the biggest gap in existing review tools, including the built-in ones.&lt;/p&gt;
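As a rough manual analogue of that prior-art search, this sketch greps a directory of org checkouts for an existing caching pattern before approving a new one. The paths, file contents, and the "Cache" pattern are all hypothetical; the skill itself does this with Claude's own code search, not this script:

```shell
# Illustrative only: a manual stand-in for the skill's prior-art search.
set -e
base=$(mktemp -d)                 # stand-in for a directory of org checkouts
mkdir -p "$base/service-a" "$base/service-b"
printf 'type RedisCache struct{}\n' | tee "$base/service-a/cache.go"
printf 'func main() {}\n' | tee "$base/service-b/main.go"

# Before accepting a new caching layer, check whether one already exists:
grep -rl 'Cache' "$base"
```

If the grep turns up a hit in a sibling repo, that is exactly the "we already have a way to do this" finding that never shows up in the diff alone.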

&lt;p&gt;That cross-repo context is what makes reviewing at a higher level possible. Instead of scanning the diff for problems myself, I read the structured assessment and decide which findings hold up and which are worth posting.&lt;/p&gt;

&lt;p&gt;A few other details compound over time. If I'm the PR author (self-review before requesting others), the skill detects that and keeps findings in the conversation instead of posting comments on the PR. If I've already reviewed and the author pushed fixes, it focuses on what changed since my last review rather than re-reviewing everything. I confirm which comments to post and what review action to take. Claude submits them as a single batched review.&lt;/p&gt;

&lt;h2&gt;Responding to review feedback&lt;/h2&gt;

&lt;p&gt;The other side of the bottleneck. Someone reviewed my PR and left comments. Before this skill, handling that meant reading each comment, context-switching to the code, making the fix, writing a reply, going back to GitHub to resolve the thread, then doing it again for the next comment.&lt;/p&gt;

&lt;p&gt;Now I run &lt;code&gt;/gh-review-respond&lt;/code&gt;. Claude reads all unresolved threads and presents them in three categories:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Fix needed (2)
- @sarah: "Missing null check on line 34" → add guard clause
- @sarah: "This query needs an index" → add database index

Discussion (1)
- @sarah: "Why not use the existing UserService here?"

Disagree (1)
- @sarah: "This should be a separate microservice"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The categorization is the key step. I review Claude's assessment: yes, fix those two, they're real issues. The UserService question needs context, so I explain my reasoning and Claude drafts the reply. For the microservice suggestion, I disagree. The current approach is simpler and we can extract later if the need becomes clear. Claude writes a response that acknowledges the concern and explains the tradeoff.&lt;/p&gt;

&lt;p&gt;Getting the tone right on disagreements matters. A dismissive response can derail a productive review. The skill handles the diplomacy; I focus on the substance.&lt;/p&gt;

&lt;p&gt;After I confirm, Claude applies the code fixes, replies to each comment on the PR, and resolves all addressed threads in a single batch. Then it commits. One detail that matters more than you'd expect: for human reviewers, it always creates a new commit so the reviewer can see exactly what changed. For bot reviewers (like Copilot), it squashes into existing commits since nobody needs those diffs.&lt;/p&gt;
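The two commit strategies can be sketched in a throwaway repo. The file name and commit messages below are hypothetical, and this is not the skill's actual implementation, just the underlying git mechanics: a plain new commit for a human reviewer, and a fixup commit plus autosquash rebase for a bot:

```shell
# Sketch of the two commit strategies, in a throwaway repo.
set -e
cd "$(mktemp -d)"
git init -q -b main
git config user.email demo@example.com
git config user.name demo

printf 'handler v1\n' | tee handlers.txt
git add handlers.txt
git commit -q -m 'feat: add endpoint'

# Human reviewer: push the fix as a NEW commit so the follow-up diff is visible.
printf 'handler v2, auth check added\n' | tee handlers.txt
git add handlers.txt
git commit -q -m 'review: add missing auth check'

# Bot reviewer: fold the fix into the commit it amends instead.
printf 'handler v3, nit fixed\n' | tee handlers.txt
git add handlers.txt
git commit -q --fixup HEAD
GIT_SEQUENCE_EDITOR=true git rebase -q -i --autosquash HEAD~2
git log --format=%s
```

After the autosquash rebase, the fixup commit is gone from history; the human-facing review commit remains as its own entry.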

&lt;h2&gt;The mechanics change. The judgment doesn't.&lt;/h2&gt;

&lt;p&gt;These skills automate the mechanical parts of the review cycle: reading diffs in context, posting comments, resolving threads, choosing commit strategies. What they don't automate is the judgment. Deciding that a missing auth check is a blocker but a naming suggestion isn't. Recognizing when feedback improves the design vs. when it's a preference. Knowing when to push back and how to frame it. The time I used to spend on mechanics, I now spend on those decisions.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/gjoranv/claude-code-doesnt-remember-heres-how-i-fixed-that-3ah6"&gt;first article&lt;/a&gt; in this series, I described the core pattern as "the edge is judgment, not config." The same applies here. The skills handle the cycle. The decisions are still yours.&lt;/p&gt;

&lt;p&gt;I've published both skills at &lt;a href="https://github.com/gjoranv/claude-review-skills" rel="noopener noreferrer"&gt;claude-review-skills&lt;/a&gt;. If your review workflow is different, or you review in a domain with specific requirements (security, API compatibility, accessibility), Claude Code's &lt;code&gt;/skill-creator&lt;/code&gt; can build custom review skills from a description of your process.&lt;/p&gt;

&lt;p&gt;I'll cover more patterns in future posts.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>github</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Claude Code Doesn't Remember. Here's How I Fixed That.</title>
      <dc:creator>Gjøran Voldengen</dc:creator>
      <pubDate>Mon, 09 Mar 2026 09:29:28 +0000</pubDate>
      <link>https://dev.to/gjoranv/claude-code-doesnt-remember-heres-how-i-fixed-that-3ah6</link>
      <guid>https://dev.to/gjoranv/claude-code-doesnt-remember-heres-how-i-fixed-that-3ah6</guid>
      <description>&lt;p&gt;Claude Code is remarkably capable. It can read codebases, write code, run tests, create pull requests. But it struggles with continuity. You can resume a conversation with &lt;code&gt;claude -r&lt;/code&gt;, but as conversations grow longer, auto-compaction drops earlier context, token costs rise, and response quality degrades. Finding an old conversation is surprisingly hard, too, since Claude Code's conversation management is minimal. The result: complex tasks that span multiple sessions tend to lose their thread.&lt;/p&gt;

&lt;p&gt;Over the past several months, I've built two patterns that changed how I work with Claude Code. The first turns multi-step tasks into structured execution plans that survive across sessions. The second gives Claude persistent knowledge about any codebase, even repos you don't own. They're simple to set up, and they compound over time.&lt;/p&gt;

&lt;h2&gt;Plans that survive across sessions&lt;/h2&gt;

&lt;p&gt;Real work rarely fits in a single conversation. A feature might span several files, require multiple commits, and take days. Claude Code starts fresh every session, so you lose the thread. You end up re-explaining what you've already done, what's left, and what decisions were made along the way.&lt;/p&gt;

&lt;p&gt;I solved this with GitHub issues as execution plans, managed by a set of custom Claude Code skills (I've published these skills at &lt;a href="https://github.com/gjoranv/claude-plan-skills" rel="noopener noreferrer"&gt;claude-plan-skills&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;The workflow has four stages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Create a plan&lt;/strong&gt; (&lt;code&gt;/gh-create-plan&lt;/code&gt;): I describe the work to Claude, and it creates a structured GitHub issue with a description, steps as checkboxes, and relevant links. The issue becomes the single source of truth for the task.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Implement the plan&lt;/strong&gt; (&lt;code&gt;/gh-implement-plan&lt;/code&gt;): Claude reads the issue, presents the steps, and works through them in order. After each step, it commits the changes and checks off the checkbox on the issue. If something is unclear, it stops and asks rather than guessing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Update the plan&lt;/strong&gt; (&lt;code&gt;/gh-update-plan&lt;/code&gt;): As work progresses, decisions change, or new information surfaces, the plan issue is updated with session notes. A new conversation can pick up exactly where the last one left off.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Close the plan&lt;/strong&gt; (&lt;code&gt;/gh-close-plan&lt;/code&gt;): When the work is done, the issue gets a summary and is closed. It then serves as documentation: what was built, why, and what decisions were made along the way.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The plan isn't a one-shot document. I typically run &lt;code&gt;/gh-update-plan&lt;/code&gt; several times before implementation starts, refining the scope, adding decisions, and updating steps as the approach becomes clearer. By the time I run &lt;code&gt;/gh-implement-plan&lt;/code&gt;, the hard thinking is already done.&lt;/p&gt;

&lt;p&gt;Here's a simplified view of what a plan issue looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Description&lt;/span&gt;
&lt;span class="gu"&gt;### What&lt;/span&gt;
Add a new &lt;span class="sb"&gt;`validate`&lt;/span&gt; subcommand that checks config files for syntax errors before deployment.

&lt;span class="gu"&gt;### Why&lt;/span&gt;
Users currently discover config errors only at deploy time, which wastes CI minutes and blocks releases.

&lt;span class="gu"&gt;## Steps&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; [x] Add CLI argument parsing for &lt;span class="sb"&gt;`validate`&lt;/span&gt; subcommand
&lt;span class="p"&gt;-&lt;/span&gt; [x] Implement config file parser with error reporting
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Add unit tests for valid and invalid configs
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Update help text and README
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The checked boxes show where a previous session left off. When I start a new conversation and run &lt;code&gt;/gh-implement-plan&lt;/code&gt;, Claude reads the issue, sees that two steps are done, and picks up at "Add unit tests." No re-explaining needed.&lt;/p&gt;
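Resuming works because checkbox state is machine-readable. As a sketch of the idea (the body below is the hypothetical example above; the skill reads the real issue via GitHub, not a shell variable), the first unchecked box marks where a new session picks up:

```shell
# Hypothetical issue body; the skill fetches the real one from GitHub.
body='- [x] Add CLI argument parsing for validate subcommand
- [x] Implement config file parser with error reporting
- [ ] Add unit tests for valid and invalid configs
- [ ] Update help text and README'

# The first unchecked box is where a new session resumes:
printf '%s\n' "$body" | grep -m 1 -F -- '- [ ]'
```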

&lt;p&gt;The plan issues also include diagrams, generated as Mermaid blocks. When I describe the work, Claude produces both the issue structure and a diagram to visualize the flow. Because Mermaid is plain text, the diagrams live inside the issue body, are version-controlled like everything else, and render natively on GitHub:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmth83y0gu2ugb3hy60di.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmth83y0gu2ugb3hy60di.jpeg" alt="Mermaid diagram showing a validate subcommand flow" width="800" height="161"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is text in the issue, not an uploaded image. Anyone reading the issue sees the rendered diagram. Anyone editing the issue sees the source. Claude can update it as the plan evolves.&lt;/p&gt;
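In source form, such a diagram is just a fenced Mermaid block in the issue body. A hypothetical example for the validate-subcommand flow (illustrative, not the generated diagram in the screenshot) might look like:

```mermaid
flowchart LR
    A[validate subcommand] --&gt; B[parse config file]
    B --&gt; C{valid?}
    C --&gt;|yes| D[exit 0]
    C --&gt;|no| E[report errors, exit 1]
```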

&lt;h2&gt;Personal context for every project&lt;/h2&gt;

&lt;p&gt;The plan cycle solves the "losing the thread" problem. But there's a second kind of context loss that's more subtle.&lt;/p&gt;

&lt;p&gt;Claude Code reads &lt;code&gt;CLAUDE.md&lt;/code&gt; files for project context, and you can generate one with &lt;code&gt;/init&lt;/code&gt;. But &lt;code&gt;CLAUDE.md&lt;/code&gt; is a shared, committed file. It's the team's context. I often need personal context on top of that: my custom test commands, the modules I specialize in, details about my expert areas that are too specific for the shared file. And for open-source projects where I'm a contributor but not a maintainer, I can't commit a &lt;code&gt;CLAUDE.md&lt;/code&gt; at all.&lt;/p&gt;

&lt;p&gt;Claude Code also reads a file called &lt;code&gt;CLAUDE.local.md&lt;/code&gt; from the project directory. It works just like &lt;code&gt;CLAUDE.md&lt;/code&gt;, but it's meant for personal context that you don't commit. This is the key to the pattern.&lt;/p&gt;

&lt;p&gt;I keep a private repo called &lt;code&gt;ai-memory&lt;/code&gt; that holds &lt;code&gt;CLAUDE.local.md&lt;/code&gt; files for every project I work with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ai-memory/
  claude/
    some-oss-project/
      CLAUDE.local.md
    my-teams-service/
      CLAUDE.local.md
    infra-tools/
      CLAUDE.local.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each file contains what I need Claude to know beyond what's in the shared &lt;code&gt;CLAUDE.md&lt;/code&gt;: how I run tests, which modules I own, architecture details relevant to my work, conventions I've learned from code review. Here's a simplified example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# CLAUDE.local.md&lt;/span&gt;

&lt;span class="gu"&gt;## My test commands&lt;/span&gt;
Run the integration tests I usually work with:
&lt;span class="sb"&gt;`mvn test -Dtest=ClassName#methodName -Pintegration`&lt;/span&gt;

&lt;span class="gu"&gt;## Areas I work on&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; The config parser module (src/main/java/config/)
&lt;span class="p"&gt;-&lt;/span&gt; The validation pipeline (src/main/java/validate/)

&lt;span class="gu"&gt;## Conventions I've learned&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Commit messages: conventional commits (feat:, fix:, chore:)
&lt;span class="p"&gt;-&lt;/span&gt; PRs must reference an issue number
&lt;span class="p"&gt;-&lt;/span&gt; No wildcard imports
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I symlink these files into each project's checkout:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;~/git/some-oss-project/CLAUDE.local.md
  → ~/git/ai-memory/claude/some-oss-project/CLAUDE.local.md
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
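Setting the links up can be scripted. This is a minimal sketch, using a temp directory as a stand-in for the real ~/git layout (swap in your own base path); it links every CLAUDE.local.md from ai-memory into the matching checkout:

```shell
# Sketch: link each CLAUDE.local.md into its project checkout.
# Temp directory stands in for ~/git; adjust base for real use.
set -e
base=$(mktemp -d)
mkdir -p "$base/ai-memory/claude/some-oss-project"
touch "$base/ai-memory/claude/some-oss-project/CLAUDE.local.md"
mkdir -p "$base/some-oss-project"          # the project checkout

for f in "$base"/ai-memory/claude/*/CLAUDE.local.md; do
  proj=$(basename "$(dirname "$f")")
  if [ -d "$base/$proj" ]; then
    ln -sfn "$f" "$base/$proj/CLAUDE.local.md"
  fi
done
```

Rerunning it is harmless: ln -sfn replaces an existing link, and directories without a matching checkout are skipped.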



&lt;p&gt;Now when I open Claude Code in any project, it picks up my personal context automatically alongside the project's shared &lt;code&gt;CLAUDE.md&lt;/code&gt;. No re-explaining. The ai-memory repo is private and version-controlled, so the knowledge accumulates. When I learn something new about a project (a test quirk, a build flag, a convention picked up in code review), I add it to the file. Every future session benefits.&lt;/p&gt;

&lt;h2&gt;Beyond these two patterns&lt;/h2&gt;

&lt;p&gt;These two patterns are the foundation, but they're not the whole setup. I also use a granular permission model that controls what Claude can do without asking, an instruction that prevents it from speculating about code it hasn't read, and skills that handle the back-and-forth of GitHub PR reviews, some of which I'll cover in a future post.&lt;/p&gt;

&lt;h2&gt;Scribe coding: the edge is judgment, not config&lt;/h2&gt;

&lt;p&gt;I've come to call this approach &lt;strong&gt;scribe coding&lt;/strong&gt;, the deliberate counterpart to vibe coding. Where vibe coders feel their way through, scribe coders write their way through. You do the thinking in markdown; the AI does the typing in code.&lt;/p&gt;

&lt;p&gt;Everything I've described here is simple to set up. A private repo with markdown files. GitHub issues with checkboxes. A few custom skills. None of it is technically impressive.&lt;/p&gt;

&lt;p&gt;The value isn't in the mechanism. It's in knowing what to put in a CLAUDE.md file, how to break a task into steps that Claude can execute reliably, and when to let Claude work autonomously vs. when to stop and redirect. That judgment comes from using these patterns daily and refining them based on what actually works.&lt;/p&gt;

&lt;p&gt;If you try scribe coding, you'll develop your own version of that judgment. The config files are the starting point, not the destination.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>github</category>
      <category>tooling</category>
    </item>
  </channel>
</rss>
