<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tamas</title>
    <description>The latest articles on DEV Community by Tamas (@tamaspiros).</description>
    <link>https://dev.to/tamaspiros</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3753981%2Ff27a03c8-a0de-4d90-a44d-84265b37eb1b.png</url>
      <title>DEV Community: Tamas</title>
      <link>https://dev.to/tamaspiros</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tamaspiros"/>
    <language>en</language>
    <item>
      <title>10 Claude Code Tips You Didn't Know</title>
      <dc:creator>Tamas</dc:creator>
      <pubDate>Fri, 20 Mar 2026 16:08:21 +0000</pubDate>
      <link>https://dev.to/triggerdotdev/10-claude-code-tips-you-didnt-know-38b0</link>
      <guid>https://dev.to/triggerdotdev/10-claude-code-tips-you-didnt-know-38b0</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zxnl8zfmyqrg1ek1sds.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zxnl8zfmyqrg1ek1sds.jpeg" alt=" " width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Claude Code started as a terminal assistant. It's now an agentic system that reads entire codebases, executes commands, manages git workflows, and spawns subagents. If you're still using it as a chat interface with a shell wrapper bolted on, you're barely touching it.&lt;/p&gt;

&lt;p&gt;Features like &lt;code&gt;CLAUDE.md&lt;/code&gt; and MCP servers dominate the conversation. The CLI itself, though, has a deep set of power-user capabilities that mostly go ignored. These are features built for parallelised, production-grade engineering workflows.&lt;/p&gt;

&lt;p&gt;Here are 10 patterns worth knowing.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Context Pre-Warming via Session Forking
&lt;/h2&gt;

&lt;p&gt;Resuming a session across multiple terminals interleaves the history. That corrupts the context window in ways you won't notice until the model starts hallucinating about files that don't exist.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--fork-session&lt;/code&gt; solves this. It duplicates the full session lineage at that exact moment and produces a clean, completely independent branch. Think of it as &lt;code&gt;git branch&lt;/code&gt; for your LLM context window.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The workflow is "pre-warming." Load a master session with 40k+ tokens of architectural context, API documentation, and coding standards, then fork it for each new feature rather than rebuilding from scratch every time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Build the heavy context session once&lt;/span&gt;
claude
&lt;span class="s2"&gt;"Read the architecture docs and prepare for feature work"&lt;/span&gt;
/rename master-context

&lt;span class="c"&gt;# Fork it for specific tasks without polluting the original&lt;/span&gt;
claude &lt;span class="nt"&gt;--resume&lt;/span&gt; master-context &lt;span class="nt"&gt;--fork-session&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nxkz0pnimahwkema1fv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nxkz0pnimahwkema1fv.png" alt=" " width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is also the right way to A/B test implementation strategies. Fork the same master session twice, let both branches diverge, and any differences in output are down to the approach, not context drift.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Seamless Code Review Loops
&lt;/h2&gt;

&lt;p&gt;Context switching during code review is one of the most underrated productivity killers in engineering. You wrote the code 3 days ago, the reviewer left comments this morning, and now you need to mentally reconstruct the entire decision space before you can say anything useful.&lt;/p&gt;

&lt;p&gt;If you created a pull request during a Claude session using &lt;code&gt;gh pr create&lt;/code&gt;, the tool links the session ID to that PR automatically. When changes are requested, you rehydrate the exact state of the agent that wrote the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude &lt;span class="nt"&gt;--from-pr&lt;/span&gt; 447
&lt;span class="c"&gt;# or&lt;/span&gt;
claude &lt;span class="nt"&gt;--from-pr&lt;/span&gt; https://github.com/org/repo/pull/447
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The agent resumes with the full conversation history of the original session: the files it read, the trade-offs it considered, the constraints it was working within.&lt;/p&gt;

&lt;p&gt;For teams with multi-reviewer sign-off, this compresses the feedback loop from "context-switch, re-read, re-understand, respond" to "resume, address, push."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08n4hztg1innbloaxxac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08n4hztg1innbloaxxac.png" alt=" " width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Compose Prompts in Your Editor
&lt;/h2&gt;

&lt;p&gt;The single-line REPL is hostile to complex prompt engineering. Pasting a 50-line stack trace, wrapping it in XML tags, and appending a multi-paragraph constraint inline is a fight against your own terminal.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Ctrl+G&lt;/code&gt; intercepts the input stream and opens your system's default &lt;code&gt;$EDITOR&lt;/code&gt; (Vim, Neovim, VS Code, whatever you've configured). You get macros, syntax highlighting, multiple cursors, snippet expansion, and proper multi-line editing. Compose the prompt, save and quit, and the full buffer flushes directly into Claude's execution loop.&lt;/p&gt;

&lt;p&gt;Prompt quality goes up noticeably when you can actually see and edit what you're writing. Small feature, outsized effect.&lt;/p&gt;
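&lt;p&gt;If &lt;code&gt;Ctrl+G&lt;/code&gt; opens the wrong editor, the fix is one environment variable. A minimal sketch; &lt;code&gt;code --wait&lt;/code&gt; is an assumption for VS Code users (GUI editors must block until the buffer closes), while terminal editors like Vim or Neovim need no extra flag:&lt;/p&gt;

```shell
# Ctrl+G opens whatever $EDITOR points at. A GUI editor needs a flag that
# makes it block until the buffer is closed; "code --wait" does that for
# VS Code. Terminal editors (vim, nvim) block by default.
export EDITOR="code --wait"
```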



&lt;h2&gt;
  
  
  4. Execution via Inline Shell
&lt;/h2&gt;

&lt;p&gt;Prefix any input with &lt;code&gt;!&lt;/code&gt; and it bypasses the LLM entirely, executing the command directly in your shell. Useful on its own. What makes it powerful is what happens to the output: &lt;code&gt;stdout&lt;/code&gt; and &lt;code&gt;stderr&lt;/code&gt; are automatically appended to the LLM's context window.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt; npm run &lt;span class="nb"&gt;test&lt;/span&gt;:e2e
&lt;span class="o"&gt;!&lt;/span&gt; git log &lt;span class="nt"&gt;--oneline&lt;/span&gt; &lt;span class="nt"&gt;-10&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the command, output lands in context, then ask Claude to reason about it. No copy-pasting, no "here's the error I'm seeing" preamble. The model already has it.&lt;/p&gt;



&lt;h2&gt;
  
  
  5. Opus 4.6 Effort Levels
&lt;/h2&gt;

&lt;p&gt;Not every task warrants deep reasoning. Burning heavy compute on boilerplate generation is wasteful; using lightweight inference on a complex architectural decision produces poor results.&lt;/p&gt;

&lt;p&gt;Opus 4.6's Adaptive Thinking exposes a compute-scaling mechanism via the &lt;code&gt;/model&lt;/code&gt; command: an effort slider across 4 tiers (Low, Medium, High, Max).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Low&lt;/strong&gt; is fast, cheap, and essentially deterministic. Boilerplate generation, variable renaming, JSDoc comments. &lt;strong&gt;Max&lt;/strong&gt; is high latency, high cost, deep reasoning chains: debugging race conditions, designing schemas for complex domains, resolving gnarly merge conflicts.&lt;/p&gt;

&lt;p&gt;For headless scripts, you can enforce this programmatically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CLAUDE_CODE_EFFORT_LEVEL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;low
claude &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"Add JSDoc comments to src/utils.ts"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Being intentional about compute allocation across hundreds of automated invocations adds up fast, both in cost and pipeline speed. This is the kind of feature that doesn't sound useful until you check your API bill.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uoap5dmrrsbznsj8o3i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uoap5dmrrsbznsj8o3i.png" alt=" " width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Parallel Worktrees
&lt;/h2&gt;

&lt;p&gt;Running multiple Claude sessions against the same repository without isolation produces race conditions: agents trampling each other's file edits, creating impossible merge states.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--worktree&lt;/code&gt; uses native &lt;code&gt;git worktree&lt;/code&gt; under the hood. It carves out a completely isolated physical directory (defaulting to &lt;code&gt;.claude/worktrees/&amp;lt;branch-name&amp;gt;&lt;/code&gt;) that shares the same git history but maintains an independent working tree. Each agent gets its own sandbox.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Terminal 1&lt;/span&gt;
claude &lt;span class="nt"&gt;--worktree&lt;/span&gt; feature/auth-refactor

&lt;span class="c"&gt;# Terminal 2&lt;/span&gt;
claude &lt;span class="nt"&gt;--worktree&lt;/span&gt; feature/dashboard-ui
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both agents work the same repo, share the same commit history, and cannot interfere with each other's file changes. When they're done, you merge through normal git workflows.&lt;/p&gt;
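&lt;p&gt;Because this is plain &lt;code&gt;git worktree&lt;/code&gt; underneath, you can sketch the whole lifecycle without Claude at all. The paths below are illustrative, mirroring the &lt;code&gt;.claude/worktrees/&lt;/code&gt; default:&lt;/p&gt;

```shell
# Standalone sketch of the isolation --worktree provides, using plain git.
# A throwaway repo stands in for your project.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"
git commit -q --allow-empty -m "initial"

# Two isolated working trees sharing one history, one per agent:
git worktree add -q .claude/worktrees/auth -b feature/auth-refactor
git worktree add -q .claude/worktrees/dash -b feature/dashboard-ui
git worktree list

# Work done in a sandbox merges back through normal git workflows:
git -C .claude/worktrees/auth commit -q --allow-empty -m "auth work"
git merge -q feature/auth-refactor
```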



&lt;h2&gt;
  
  
  7. Structured JSON Output
&lt;/h2&gt;

&lt;p&gt;Conversational output is useless in automation pipelines. You need guaranteed, parseable, machine-readable output every time. Combining &lt;code&gt;-p&lt;/code&gt;, &lt;code&gt;--output-format json&lt;/code&gt;, and &lt;code&gt;--json-schema&lt;/code&gt; transforms the LLM from a conversational agent into a strictly typed function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--output-format&lt;/span&gt; json &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--json-schema&lt;/span&gt; ./schemas/security-audit.schema.json &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"Audit src/ for vulnerabilities"&lt;/span&gt; | jq &lt;span class="s1"&gt;'.high_severity[]'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You define the output shape; the model is constrained to produce exactly that. Chain it with &lt;code&gt;jq&lt;/code&gt;, pipe it into downstream services, feed it into dashboards. The output is predictable and machine-consumable.&lt;/p&gt;
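&lt;p&gt;The downstream half of that pipeline is worth seeing in isolation. Here a hard-coded payload stands in for a schema-constrained run; the &lt;code&gt;high_severity&lt;/code&gt; key matches the &lt;code&gt;jq&lt;/code&gt; filter above, but the inner field names are illustrative assumptions, not Claude's actual envelope:&lt;/p&gt;

```shell
# Downstream-processing sketch: a stand-in payload shaped like the
# schema-constrained output, piped through the same kind of jq filter.
payload='{
  "high_severity": [
    {"file": "src/auth.ts", "issue": "hardcoded JWT secret"},
    {"file": "src/db.ts", "issue": "string-concatenated SQL"}
  ],
  "low_severity": []
}'

# One "file: issue" line per high-severity finding:
findings=$(printf '%s' "$payload" | jq -r '.high_severity[] | "\(.file): \(.issue)"')
printf '%s\n' "$findings"
```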

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff98h1nm60smgxjngv57n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff98h1nm60smgxjngv57n.png" alt=" " width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Surgical Context Compaction
&lt;/h2&gt;

&lt;p&gt;Long debugging sessions fill context windows with dead weight. Every "try this, nope, try that, also broken" cycle adds tokens that actively degrade the model's performance. Context gets noisy, the model loses track of earlier decisions, and costs climb.&lt;/p&gt;

&lt;p&gt;Double-tapping &lt;code&gt;Esc&lt;/code&gt; opens the rewind menu. Most people use it to revert code changes. The real value is "Summarise from here."&lt;/p&gt;

&lt;p&gt;Select a message midway through your session. Claude preserves everything before that point perfectly (initial system prompts, architectural rules, early context), then compresses all the messy trial-and-error &lt;em&gt;after&lt;/em&gt; it into a dense summary. The dead ends get distilled into their key lessons without consuming context real estate.&lt;/p&gt;

&lt;p&gt;You reclaim your token budget without losing the narrative thread. The model retains awareness of what was tried and why it failed, at a fraction of the token cost.&lt;/p&gt;



&lt;h2&gt;
  
  
  9. Dynamic Multi-Agent Orchestration
&lt;/h2&gt;

&lt;p&gt;Hardcoding subagents into &lt;code&gt;.claude/agents/&lt;/code&gt; markdown files works for stable, long-lived agent definitions. For ad-hoc workflows, you can define and inject session-scoped subagents on the fly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude &lt;span class="nt"&gt;--agents&lt;/span&gt; &lt;span class="s1"&gt;'{
  "test-engineer": {
    "description": "Writes unit tests for modified files.",
    "prompt": "You are a strict SDET. Write tests using Vitest. Cover edge cases.",
    "model": "haiku",
    "tools": ["Read", "Write", "Glob"]
  }
}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The real unlock is model routing. Your main session runs on Opus for complex reasoning. Repetitive tasks get delegated to Haiku, which handles them perfectly well at a fraction of the cost. When Claude detects a modified file, it spawns the &lt;code&gt;test-engineer&lt;/code&gt; subagent to backfill tests while the main agent continues uninterrupted.&lt;/p&gt;

&lt;p&gt;Add &lt;code&gt;isolation: worktree&lt;/code&gt; to a subagent's definition and it spins up its own git worktree automatically, combining with the pattern from section 6. Genuinely concurrent multi-agent work, each agent fully isolated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovbuk166y2sbsrkd511p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovbuk166y2sbsrkd511p.png" alt=" " width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  10. Headless CI/CD with Hard Budget Caps
&lt;/h2&gt;

&lt;p&gt;Putting an autonomous agent into a CI/CD pipeline without strict boundaries is, frankly, terrifying. An agent that loops endlessly or starts "fixing" things it shouldn't touch can cause real damage, and it'll happily drain your API credits while doing so.&lt;/p&gt;

&lt;p&gt;Three flags together make this safe: &lt;code&gt;-p&lt;/code&gt; (non-interactive print mode), &lt;code&gt;--max-turns&lt;/code&gt; (prevents infinite agentic loops), &lt;code&gt;--max-budget-usd&lt;/code&gt; (hard financial ceiling).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh &lt;span class="nb"&gt;pr &lt;/span&gt;diff &lt;span class="nv"&gt;$PR_NUMBER&lt;/span&gt; | claude &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--max-turns&lt;/span&gt; 3 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--max-budget-usd&lt;/span&gt; 1.50 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"Review this diff for security flaws. Output only actionable feedback."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--max-turns&lt;/code&gt; catches runaway logic. &lt;code&gt;--max-budget-usd&lt;/code&gt; acts as a circuit breaker on everything else, killing the process before it burns through your Anthropic credits. You need both. Either one alone has gaps.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Scaling this across multiple repositories or running it on every PR also forces prompt discipline. You learn quickly which prompts produce useful output within the budget and which waste tokens on preamble. That discipline feeds back into better prompts everywhere.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flefd6sepxvbujybo2m1u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flefd6sepxvbujybo2m1u.png" alt=" " width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These aren't novelty features. Session forking, parallel worktrees, dynamic subagents, and budget-capped CI/CD represent a genuine shift from "AI as a chat partner" to "AI as a managed fleet of specialised workers." The gap between "I use Claude Code" and "I orchestrate Claude Code" is wide, and it's widening.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
