<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Seb</title>
    <description>The latest articles on DEV Community by Seb (@gurkiu).</description>
    <link>https://dev.to/gurkiu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3870540%2Fd4473d62-74d3-4bf5-be33-ea9af9c657af.png</url>
      <title>DEV Community: Seb</title>
      <link>https://dev.to/gurkiu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gurkiu"/>
    <language>en</language>
    <item>
      <title>The Claude Prompt Pattern That Kills Debugging Spirals</title>
      <dc:creator>Seb</dc:creator>
      <pubDate>Thu, 09 Apr 2026 21:16:47 +0000</pubDate>
      <link>https://dev.to/gurkiu/the-claude-prompt-pattern-that-kills-debugging-spirals-260p</link>
      <guid>https://dev.to/gurkiu/the-claude-prompt-pattern-that-kills-debugging-spirals-260p</guid>
      <description>&lt;p&gt;You know the feeling. You've been staring at a bug for 45 minutes. You've console.log'd everything. You've Googled the error message. You're starting to question reality.&lt;/p&gt;

&lt;p&gt;Then you paste the error into Claude and get: &lt;em&gt;"It looks like there might be an issue with your variable scope. Try checking if the variable is defined before use."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Useless.&lt;/p&gt;

&lt;p&gt;Here's why that happens — and a prompt pattern that fixes it.&lt;/p&gt;

&lt;h2&gt;The problem: Claude defaults to solution mode&lt;/h2&gt;

&lt;p&gt;When you drop a raw error into Claude, it pattern-matches to the most common fix for that error shape. It's not thinking about &lt;em&gt;your&lt;/em&gt; system. It's retrieving a generic answer.&lt;/p&gt;

&lt;p&gt;What you actually need is a structured diagnostic — not a guess. You need Claude to reason through the problem the way a senior engineer would: gathering information, forming hypotheses, ranking them by likelihood, and telling you &lt;em&gt;how to verify&lt;/em&gt; each one before touching any code.&lt;/p&gt;

&lt;p&gt;The fix is shifting Claude from &lt;em&gt;answer mode&lt;/em&gt; into &lt;em&gt;hypothesis mode&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;The prompt&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;debugging_session&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;error&amp;gt;&lt;/span&gt;
    [Paste the full error message and stack trace]
  &lt;span class="nt"&gt;&amp;lt;/error&amp;gt;&lt;/span&gt;

  &lt;span class="nt"&gt;&amp;lt;context&amp;gt;&lt;/span&gt;
    [What were you doing when this broke? Recent changes?]
    [What's the component/system involved?]
    [What have you already tried?]
  &lt;span class="nt"&gt;&amp;lt;/context&amp;gt;&lt;/span&gt;

  &lt;span class="nt"&gt;&amp;lt;code&amp;gt;&lt;/span&gt;
    [Paste the relevant code — function, component, or module]
  &lt;span class="nt"&gt;&amp;lt;/code&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/debugging_session&amp;gt;&lt;/span&gt;

Generate 3-5 hypotheses for what's causing this bug. For each:
- State the hypothesis clearly
- Explain the reasoning (why this would produce this error)
- Give me a specific verification step BEFORE any code change
- Rate likelihood: High / Medium / Low

Do not suggest fixes yet. I want to understand the problem first.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last line is the key instruction. It forces Claude to slow down and &lt;em&gt;think&lt;/em&gt; instead of reaching for a solution.&lt;/p&gt;

&lt;h2&gt;What you get back&lt;/h2&gt;

&lt;p&gt;Instead of a generic fix, you get something like:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hypothesis 1 (High):&lt;/strong&gt; The &lt;code&gt;userId&lt;/code&gt; claim is being read from the JWT before the token has been verified. If middleware order changed recently, the route handler might be executing before auth validation completes...&lt;br&gt;
&lt;em&gt;Verify: Log &lt;code&gt;req.user&lt;/code&gt; at the top of the handler and check if it's undefined on first request.&lt;/em&gt;&lt;/p&gt;
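&lt;p&gt;That first check is cheap to sanity-test even outside a real app. Here's a minimal TypeScript sketch of the middleware-order failure mode, with plain objects standing in for Express-style &lt;code&gt;req&lt;/code&gt; and middleware (every name here is hypothetical, not from a real project):&lt;/p&gt;

```typescript
// Hypothetical sketch of the middleware-order bug; no framework involved.
type Req = { headers: { authorization?: string }; user?: { id: string } };

function authMiddleware(req: Req): void {
  // Stand-in for token verification: attach the user claim on success.
  if (req.headers.authorization) {
    req.user = { id: "user-123" };
  }
}

function handler(req: Req): string | undefined {
  // The verification step from the hypothesis: log req.user on entry.
  console.log("req.user at top of handler:", req.user);
  return req.user ? req.user.id : undefined;
}

// Correct order: auth runs first, so the handler sees the user.
const reqA: Req = { headers: { authorization: "Bearer abc" } };
authMiddleware(reqA);
const okId = handler(reqA);

// Broken order: the handler runs before auth, so req.user is undefined.
// That undefined in the log is exactly what confirms the hypothesis.
const reqB: Req = { headers: { authorization: "Bearer abc" } };
const missingId = handler(reqB);
```

&lt;p&gt;If the log shows &lt;code&gt;undefined&lt;/code&gt;, you've confirmed the hypothesis without changing a line of code.&lt;/p&gt;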

&lt;p&gt;&lt;strong&gt;Hypothesis 2 (Medium):&lt;/strong&gt; Race condition between session creation and the database write. The session object exists in memory but the DB commit hasn't resolved...&lt;br&gt;
&lt;em&gt;Verify: Add a 200ms artificial delay after login and see if the error disappears.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now you're debugging with a map instead of a metal detector.&lt;/p&gt;
&lt;h2&gt;Why XML structure matters here&lt;/h2&gt;

&lt;p&gt;Claude handles XML-tagged input differently than plain text. When you wrap context in &lt;code&gt;&amp;lt;error&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;context&amp;gt;&lt;/code&gt;, and &lt;code&gt;&amp;lt;code&amp;gt;&lt;/code&gt; tags, Claude treats each section as a distinct input — it doesn't conflate your error message with your code or your code with your description of what you tried.&lt;/p&gt;

&lt;p&gt;With plain prose, Claude often fixates on whichever detail appears most prominent. With tagged input, it reasons across all of them.&lt;/p&gt;
&lt;h2&gt;Adding your CLAUDE.md context&lt;/h2&gt;

&lt;p&gt;If you've set up a &lt;code&gt;CLAUDE.md&lt;/code&gt; file at your project root (the persistent context file Claude Code reads at the start of every session), your system context is already loaded. That means Claude knows your stack, your auth setup, your conventions — and the hypotheses it generates will be specific to &lt;em&gt;your&lt;/em&gt; codebase, not a generic Node.js app.&lt;/p&gt;

&lt;p&gt;When your CLAUDE.md includes something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Auth&lt;/span&gt;
JWT via jose library. Tokens validated in middleware/auth.ts before routes.
Session stored in Redis with 24h TTL.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;...Claude's first hypothesis won't be "maybe you forgot to validate the token." It already knows your auth layer. It'll reason at a higher level.&lt;/p&gt;

&lt;h2&gt;The discipline: verify before you fix&lt;/h2&gt;

&lt;p&gt;The other thing this prompt enforces is the one habit that separates senior engineers from junior ones: &lt;strong&gt;verify your hypothesis before touching code&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Every time you add the instruction &lt;em&gt;"do not suggest fixes yet"&lt;/em&gt;, you're forcing yourself into that discipline too. You run the verification step. You confirm which hypothesis is correct. Then you ask Claude to help you fix &lt;em&gt;that specific thing&lt;/em&gt; — with full context — instead of shotgunning random solutions.&lt;/p&gt;

&lt;p&gt;It's slower for the first five minutes. It's faster for the next hour.&lt;/p&gt;




&lt;p&gt;I put this prompt (with a few more variations — the flaky test fixer, the log analyzer, the incident responder) into a kit I built for my own workflow. If you want the full set: &lt;a href="https://gurki.gumroad.com/l/50-claude-prompts" rel="noopener noreferrer"&gt;50 Claude Prompts for Developers&lt;/a&gt; — 50 XML-structured prompts + a 20-page guide on using Claude as a daily engineering partner.&lt;/p&gt;

&lt;p&gt;But the prompt above works on its own. Try it on the next bug you hit.&lt;/p&gt;

</description>
      <category>claude</category>
      <category>debugging</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>How I Actually Use Claude as a Senior Dev Partner (Not Just a Code Generator)</title>
      <dc:creator>Seb</dc:creator>
      <pubDate>Thu, 09 Apr 2026 20:48:09 +0000</pubDate>
      <link>https://dev.to/gurkiu/how-i-actually-use-claude-as-a-senior-dev-partner-not-just-a-code-generator-3ii6</link>
      <guid>https://dev.to/gurkiu/how-i-actually-use-claude-as-a-senior-dev-partner-not-just-a-code-generator-3ii6</guid>
      <description>&lt;p&gt;Most developers I know use Claude the same way: paste some code, ask a question, get an answer, move on. It works. But it's leaving most of the value on the table.&lt;/p&gt;

&lt;p&gt;After a few months of treating Claude as a genuine engineering partner — not a smarter autocomplete — here's the actual workflow that changed how I ship.&lt;/p&gt;




&lt;h2&gt;The setup that makes everything else work: CLAUDE.md&lt;/h2&gt;

&lt;p&gt;The single highest-leverage thing I did was create a &lt;code&gt;CLAUDE.md&lt;/code&gt; file at my project root.&lt;/p&gt;

&lt;p&gt;Claude Code reads it automatically at the start of every session. That means I never re-explain my stack, conventions, or current focus. Every session starts with Claude already knowing what I'm working on.&lt;/p&gt;

&lt;p&gt;Mine looks roughly like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Project: [My App]&lt;/span&gt;

&lt;span class="gu"&gt;## Stack&lt;/span&gt;
TypeScript · Next.js · Prisma · PostgreSQL · Vercel

&lt;span class="gu"&gt;## Conventions&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Naming: camelCase functions, PascalCase classes, kebab-case files
&lt;span class="p"&gt;-&lt;/span&gt; Error handling: throw typed errors, never swallow
&lt;span class="p"&gt;-&lt;/span&gt; Testing: Vitest + RTL, co-located __tests__ folders
&lt;span class="p"&gt;-&lt;/span&gt; Imports: absolute paths from src/

&lt;span class="gu"&gt;## Always&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Add JSDoc to public functions
&lt;span class="p"&gt;-&lt;/span&gt; Handle null/undefined explicitly
&lt;span class="p"&gt;-&lt;/span&gt; Separate route handlers from business logic

&lt;span class="gu"&gt;## Never&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; No any types
&lt;span class="p"&gt;-&lt;/span&gt; No console.log in production code

&lt;span class="gu"&gt;## Current focus&lt;/span&gt;
Auth refactor — migrating from JWT to session-based auth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Twenty-odd lines that save five minutes of context-setting every session, compounded across every working day.&lt;/p&gt;
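&lt;p&gt;Conventions like "separate route handlers from business logic" pay off because Claude follows them reliably once they're written down. As a sketch of what that separation means in practice (names and shapes here are illustrative, not from a real project):&lt;/p&gt;

```typescript
// Illustrative only: the layering convention from the CLAUDE.md above.
type User = { id: string; email: string };
type ApiResponse = { status: number; error?: string; user?: User };

// Business logic: pure and testable, no HTTP concerns.
function normalizeEmail(raw: string): string {
  return raw.trim().toLowerCase();
}

// Route handler: validates input, delegates, shapes the response.
function createUserHandler(body: { email?: string }): ApiResponse {
  if (!body.email) {
    return { status: 400, error: "email required" };
  }
  // In a real app the id would come from the database layer.
  const user: User = { id: "u1", email: normalizeEmail(body.email) };
  return { status: 201, user };
}
```

&lt;p&gt;The handler never touches normalization logic directly, so the logic stays unit-testable and the handler stays thin — which is also what makes Claude's generated code easier to review.&lt;/p&gt;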




&lt;h2&gt;XML structure: why it matters and why most people skip it&lt;/h2&gt;

&lt;p&gt;Here's the thing about Claude that most generic prompt packs miss: Claude is trained to treat XML-tagged input as structured data. It's in Anthropic's own prompting documentation. The difference in output quality is real.&lt;/p&gt;

&lt;p&gt;Compare these two prompts for a security review:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generic:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Review this code for security issues&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;XML-structured:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;task&amp;gt;&lt;/span&gt;
Security review this code. Check for:
- Injection: SQL, command, XSS
- Authentication and authorization flaws
- Sensitive data exposure

Rate each finding: Critical / High / Medium / Low.
Suggest exact fixes. Rank by severity.
&lt;span class="nt"&gt;&amp;lt;/task&amp;gt;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;{{PASTE CODE HERE}}&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first gets a paragraph of vague suggestions. The second gets a structured report with severity ratings, affected lines, and copy-paste fixes. Same model, different output — because of how the input is structured.&lt;/p&gt;

&lt;p&gt;This is the pattern I use for basically every non-trivial prompt now.&lt;/p&gt;




&lt;h2&gt;The feature development loop&lt;/h2&gt;

&lt;p&gt;My actual workflow for building a feature:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Orient Claude to the relevant code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before writing anything, I run the codebase orientation prompt against the files I'll be touching:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;task&amp;gt;&lt;/span&gt;
You are a senior engineer onboarding me to this codebase.
Analyze the structure and produce:
1. High-level architecture summary (2-3 sentences)
2. Key entry points and primary data flow
3. Conventions and patterns used throughout
4. Non-obvious decisions or gotchas I should know
&lt;span class="nt"&gt;&amp;lt;/task&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;codebase&amp;gt;&lt;/span&gt;
{{PASTE FILE TREE OR KEY FILES HERE}}
&lt;span class="nt"&gt;&amp;lt;/codebase&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This takes 30 seconds and means Claude's first suggestion is grounded in how the code actually works — not how it guesses it works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Generate with constraints, not just descriptions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When I ask for code, I include: language, framework, validation library, auth approach, layer separation, and TypeScript mode. "Explain your design decisions before writing code" is always in there — it stops Claude from jumping to implementation before understanding the problem.&lt;/p&gt;
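&lt;p&gt;A constraints block in that spirit might look like this (the specific values are illustrative, loosely matching the stack above):&lt;/p&gt;

```markdown
Language: TypeScript, strict mode
Framework: Next.js App Router
Validation: zod at every API boundary
Auth: session-based, enforced in middleware
Layering: route handlers call service functions; no DB access in handlers

Explain your design decisions before writing code.
```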

&lt;p&gt;&lt;strong&gt;3. Review before I trust&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I don't just ship Claude's output. I run a structured code review prompt against it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;task&amp;gt;&lt;/span&gt;
Review this code as a senior engineer.
For each issue found:
- File and line number
- Issue type: bug / security / performance / style / maintainability
- Severity: Critical / High / Medium / Low
- Specific fix with code

Only flag real issues. Do not pad the review.
&lt;span class="nt"&gt;&amp;lt;/task&amp;gt;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;{{PASTE CODE}}&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The "only flag real issues, do not pad" instruction is important. Without it, you get a mix of legitimate concerns and nitpicky style suggestions with no way to tell them apart.&lt;/p&gt;




&lt;h2&gt;Debugging: hypotheses, not guesses&lt;/h2&gt;

&lt;p&gt;When I'm stuck on a bug, the most useful prompt I have is the debugging hypothesis generator:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;task&amp;gt;&lt;/span&gt;
I have a bug. Generate 5 hypotheses about the root cause.
For each:
1. Hypothesis (what you think is happening)
2. Confidence: High / Medium / Low
3. Test plan (exactly how to verify or rule it out)
4. Fix if confirmed

Rank by confidence. Most likely first.
&lt;span class="nt"&gt;&amp;lt;/task&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;bug_description&amp;gt;&lt;/span&gt;
{{DESCRIBE THE BUG AND SYMPTOMS}}
&lt;span class="nt"&gt;&amp;lt;/bug_description&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;relevant_code&amp;gt;&lt;/span&gt;
{{PASTE RELEVANT CODE}}
&lt;span class="nt"&gt;&amp;lt;/relevant_code&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This changed how I debug. Instead of going with my first instinct, I get 5 ranked hypotheses with test plans. The right answer is usually in there, and it's often not the one I would have checked first.&lt;/p&gt;




&lt;h2&gt;Architecture decisions: getting real recommendations&lt;/h2&gt;

&lt;p&gt;Most AI tools hedge on architecture questions. You ask "should I use a message queue or direct API calls?" and you get "it depends" followed by a balanced list of both options.&lt;/p&gt;

&lt;p&gt;The fix is to tell Claude explicitly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;task&amp;gt;&lt;/span&gt;
I need a recommendation, not a balanced comparison.
Analyze the options and recommend one.
Be direct. If the answer is obvious given my constraints, say so.
&lt;span class="nt"&gt;&amp;lt;/task&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;context&amp;gt;&lt;/span&gt;
{{YOUR ARCHITECTURE QUESTION AND CONSTRAINTS}}
&lt;span class="nt"&gt;&amp;lt;/context&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Adding "be direct" and "if the answer is obvious, say so" consistently produces an actual recommendation instead of a diplomatic non-answer.&lt;/p&gt;




&lt;h2&gt;The part most developers skip: PRs and docs&lt;/h2&gt;

&lt;p&gt;I generate PR descriptions from git diff now. It takes 30 seconds:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&amp;lt;task&amp;gt;
&lt;span class="p"&gt;Write a PR description for this diff.
Include:
&lt;/span&gt;&lt;span class="gd"&gt;- What changed (1-2 sentences)
- Why (the problem being solved)
- How (approach taken)
- Testing done
- Any gotchas for reviewers
&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="p"&gt;Tone: direct, no marketing language.
&lt;/span&gt;&amp;lt;/task&amp;gt;
&amp;lt;diff&amp;gt;
{{PASTE GIT DIFF}}
&amp;lt;/diff&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The "no marketing language" instruction is doing a lot of work there. Without it, you get PR descriptions full of "this robust solution elegantly addresses the underlying concern."&lt;/p&gt;




&lt;h2&gt;What I actually ship vs. what I used to ship&lt;/h2&gt;

&lt;p&gt;The workflow above didn't come from one insight. It came from noticing that Claude's output quality varied enormously based on how I asked — and slowly building a set of prompts that consistently produced good output.&lt;/p&gt;

&lt;p&gt;If you want the full set — 50 prompts across codebase onboarding, code generation, review, debugging, architecture, and docs — I packaged everything into a prompt kit with a companion workflow guide. It's at &lt;a href="https://gurki.gumroad.com/l/50-claude-prompts" rel="noopener noreferrer"&gt;gumroad.com/l/50-claude-prompts&lt;/a&gt; if you want to skip the experimentation and start with what works.&lt;/p&gt;

&lt;p&gt;Otherwise, start with CLAUDE.md and XML structure. Those two changes alone will noticeably improve every session.&lt;/p&gt;

</description>
      <category>claude</category>
    </item>
  </channel>
</rss>
