<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Samarth Bhamare</title>
    <description>The latest articles on DEV Community by Samarth Bhamare (@samarth0211).</description>
    <link>https://dev.to/samarth0211</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3842630%2Fdf389b60-b285-44bb-b5c5-1f548dbc1be9.png</url>
      <title>DEV Community: Samarth Bhamare</title>
      <link>https://dev.to/samarth0211</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/samarth0211"/>
    <language>en</language>
    <item>
      <title>I Tested 120 Claude "Secret Codes" Over 3 Months. Here's What Actually Works (And What's Complete Nonsense)</title>
      <dc:creator>Samarth Bhamare</dc:creator>
      <pubDate>Tue, 14 Apr 2026 13:22:02 +0000</pubDate>
      <link>https://dev.to/samarth0211/i-tested-120-claude-secret-codes-over-3-months-heres-what-actually-works-and-whats-complete-267e</link>
      <guid>https://dev.to/samarth0211/i-tested-120-claude-secret-codes-over-3-months-heres-what-actually-works-and-whats-complete-267e</guid>
      <description>&lt;p&gt;If you spend any time in AI communities, you've seen the lists.&lt;/p&gt;

&lt;p&gt;"50 secret Claude commands!" "Hidden Claude hacks!" "Use this one prefix to 10x your output!"&lt;/p&gt;

&lt;p&gt;I got tired of guessing which ones were real. So I tested them. All of them. 120 prompt prefixes, tested on Claude Opus 4.6 and Sonnet 4.6 over 3 months. Same prompts, fresh conversations, 3 runs each, controlled before/after comparisons.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The uncomfortable result: about 70% do nothing meaningful.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;They change formatting — shorter paragraphs, bullet points instead of prose — but the actual reasoning is identical. You could get the same effect by saying "use bullet points."&lt;/p&gt;

&lt;p&gt;A few are actively harmful. And exactly 7 consistently changed what Claude &lt;em&gt;thinks&lt;/em&gt;, not just how it &lt;em&gt;writes&lt;/em&gt;.&lt;/p&gt;




&lt;h2&gt;The 7 That Survived Testing&lt;/h2&gt;

&lt;h3&gt;1. &lt;code&gt;ULTRATHINK&lt;/code&gt; — forces deeper reasoning&lt;/h3&gt;

&lt;p&gt;This is the single highest-impact code I found. Claude visibly thinks longer and catches things the default response misses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Without:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;How should I structure auth for a multi-tenant SaaS?
→ "There are several common approaches: JWT, session-based, OAuth..."
  (balanced overview, no opinion)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;With:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ULTRATHINK How should I structure auth for a multi-tenant SaaS?
→ Picks JWT with row-level security, explains why sessions fail at 
  tenant boundaries, warns about the token-size trap when embedding 
  permissions, gives a specific migration path if you start wrong.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The default answer is a Wikipedia summary. The ULTRATHINK answer is what a staff engineer says after thinking about it over coffee.&lt;/p&gt;

&lt;h3&gt;2. &lt;code&gt;L99&lt;/code&gt; — kills the hedging&lt;/h3&gt;

&lt;p&gt;Claude's default personality is diplomatic. "It depends." "There are trade-offs." L99 forces it to pick a side and defend it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;L99 Should I use Postgres or MongoDB for my startup?
→ "Postgres. Your data has relationships whether you admit it or not. 
   MongoDB feels faster at first but you'll rebuild half of Postgres 
   in your application layer by month 6. The only exception: if your 
   data is genuinely document-shaped AND you'll never need joins."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's an answer you can actually act on.&lt;/p&gt;

&lt;h3&gt;3. &lt;code&gt;/ghost&lt;/code&gt; — erases AI fingerprints&lt;/h3&gt;

&lt;p&gt;Not a tone adjustment. This specifically targets the patterns that make AI writing detectable: em-dashes, "I hope this helps", "It's worth noting that", perfectly balanced sentence pairs.&lt;/p&gt;

&lt;p&gt;I ran the output through GPTZero, Originality.ai, and Sapling. Detection rate dropped from &lt;strong&gt;96% to 8%&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you're writing anything that needs to sound human — cold emails, blog posts, client deliverables — this one is real.&lt;/p&gt;

&lt;h3&gt;4. &lt;code&gt;/skeptic&lt;/code&gt; — challenges your premise&lt;/h3&gt;

&lt;p&gt;Instead of answering your question, Claude first checks whether you're asking the right question.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/skeptic Help me optimize my database queries, they're slow.
→ "Before optimizing queries — are you sure the database is the 
   bottleneck? Check: 1) N+1 calls from your API? 2) Connection pool 
   sized correctly? 3) Index exists but not used due to type mismatch? 
   Run EXPLAIN ANALYZE first. 80% of 'slow database' problems are 
   application-layer problems."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This one saved me from building the wrong solution twice.&lt;/p&gt;

&lt;h3&gt;5. &lt;code&gt;PERSONA&lt;/code&gt; — but only with surgical specificity&lt;/h3&gt;

&lt;p&gt;Generic personas do nothing. I tested it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Doesn't work:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PERSONA: senior developer. Review my code.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Works:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PERSONA: Principal engineer at a fintech, 15 years experience, 
survived two failed monolith-to-microservices migrations, deeply 
skeptical of premature optimization. Review my architecture.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The difference isn't tone — it's which trade-offs Claude prioritizes, which risks it flags, and which recommendations it makes. Specificity is the unlock.&lt;/p&gt;

&lt;h3&gt;6. &lt;code&gt;/debug&lt;/code&gt; — finds the bug instead of rewriting your code&lt;/h3&gt;

&lt;p&gt;Claude's default when you paste an error: rewrite your entire function "with improvements." Somewhere in those improvements the bug is fixed, but you can't tell what changed.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/debug&lt;/code&gt; forces it to name the line, explain the issue, and give the minimal fix.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/debug [paste code with error]
→ "Line 23: comparing user.id (string) with === to req.params.id 
   (URL-decoded string). The issue is line 31 — you're using the 
   pre-decoded version in the cache key. Fix: decodeURIComponent 
   on line 23. Don't change line 31."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;7. &lt;code&gt;OODA&lt;/code&gt; — military decision framework&lt;/h3&gt;

&lt;p&gt;Structures the response as Observe-Orient-Decide-Act. Originally a fighter pilot decision loop. Works remarkably well for production incidents and decisions under pressure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OODA Our main API is returning 500s intermittently.
→ OBSERVE: Intermittent 500s = resource exhaustion, not code bugs.
  ORIENT: If correlated with traffic → capacity. If random → leak.
  DECIDE: Run netstat now. If &amp;gt;500 connections, pool is the bottleneck.
  ACT: Immediate — increase pool to 50. Today — add monitoring. 
  This week — circuit breaker on downstream calls.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;What Doesn't Work (Skip These)&lt;/h2&gt;

&lt;p&gt;I tested all of these multiple times. None produced measurable reasoning changes:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Code&lt;/th&gt;
&lt;th&gt;Claim&lt;/th&gt;
&lt;th&gt;Reality&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/godmode&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Maximum capability&lt;/td&gt;
&lt;td&gt;Longer output, identical reasoning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;BEASTMODE&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;3x quality&lt;/td&gt;
&lt;td&gt;Same as /godmode with a louder name&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/jailbreak&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Removes limits&lt;/td&gt;
&lt;td&gt;Actually makes Claude MORE cautious&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;DAN mode&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;"Do Anything Now"&lt;/td&gt;
&lt;td&gt;ChatGPT meme. Claude ignores it&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;/expert&lt;/code&gt; (generic)&lt;/td&gt;
&lt;td&gt;Expert-level answers&lt;/td&gt;
&lt;td&gt;Does nothing without a specific domain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;ALPHA&lt;/code&gt;, &lt;code&gt;OMEGA&lt;/code&gt;, &lt;code&gt;TURBO&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Various claims&lt;/td&gt;
&lt;td&gt;Pattern matching only — confident tone, same analysis&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"Think step by step"&lt;/td&gt;
&lt;td&gt;Better reasoning&lt;/td&gt;
&lt;td&gt;Already baked into Claude since Sonnet 4.5&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;Why These Work&lt;/h2&gt;

&lt;p&gt;With one exception (&lt;code&gt;ULTRATHINK&lt;/code&gt; is a documented extended-thinking trigger in Claude Code), none of these are official Anthropic features. They work because Claude's training data includes millions of conversations where developers used these exact prefixes. The model learned the convention from us, not from design.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This means:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They can break with model updates (I retest every 2 weeks)&lt;/li&gt;
&lt;li&gt;New ones emerge as usage patterns change&lt;/li&gt;
&lt;li&gt;The "secret codes" framing is wrong — they're community conventions, not hidden features&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One commenter on Reddit put it better than I could: &lt;em&gt;"The prefixes that actually work are all telling the model to change its decision-making process, not its output format. That's a fundamentally different thing than BEASTMODE which just asks for more."&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;The Broader Point&lt;/h2&gt;

&lt;p&gt;These prefixes are useful shortcuts. But they're not the paradigm shift.&lt;/p&gt;

&lt;p&gt;The real shift is &lt;strong&gt;persistent instructions&lt;/strong&gt; — markdown files that Claude reads at the start of every session. A file that says "when reviewing code, always check for N+1 queries, missing error handling, and stale closures" runs automatically without you typing anything.&lt;/p&gt;

&lt;p&gt;That's the direction prompt engineering is heading: not cleverer one-liners, but structured instructions that DO things, not just SAY things.&lt;/p&gt;
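&lt;p&gt;As a rough illustration (the filename and section names are just one possible layout, not a required format), such an instruction file can be as simple as:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# CLAUDE.md (read automatically at the start of each session)

## Code review
- Always check for N+1 queries, missing error handling, and stale closures
- Prefer the minimal fix; never rewrite a whole function to fix one line

## Output
- Name the file and line for every issue you flag
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;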




&lt;h2&gt;Try Them&lt;/h2&gt;

&lt;p&gt;All 7 codes work right now on Claude.ai, Claude Code, and the API. No setup needed — just prefix your next prompt.&lt;/p&gt;

&lt;p&gt;If you want a maintained, tested list: I keep one at &lt;a href="https://clskills.in/prompts" rel="noopener noreferrer"&gt;clskills.in/prompts&lt;/a&gt; — 20 free with click-to-copy. The full 120 with before/after testing data is a paid reference I update biweekly.&lt;/p&gt;

&lt;p&gt;What prefixes have you found that actually change Claude's reasoning? Genuinely curious — community-discovered codes are usually the best ones.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
    <item>
      <title>everything i wish someone told me before i started using claude</title>
      <dc:creator>Samarth Bhamare</dc:creator>
      <pubDate>Sun, 12 Apr 2026 02:40:52 +0000</pubDate>
      <link>https://dev.to/samarth0211/everything-i-wish-someone-told-me-before-i-started-using-claude-5da0</link>
      <guid>https://dev.to/samarth0211/everything-i-wish-someone-told-me-before-i-started-using-claude-5da0</guid>
      <description>&lt;p&gt;i spent the first two weeks using claude completely wrong.&lt;/p&gt;

&lt;p&gt;i'd open it, type a vague question, get a decent answer, and think "okay, that's useful i guess." then i'd close the tab and forget about it. i wasn't getting anything close to what people were describing — the hours saved, the output quality, the feeling that you had a smart collaborator instead of a search engine.&lt;/p&gt;

&lt;p&gt;turns out i was missing some basic things that nobody explains to beginners. here's what actually changed how i use it.&lt;/p&gt;




&lt;h2&gt;1. stop treating it like a search engine&lt;/h2&gt;

&lt;p&gt;the biggest mistake people make is asking claude one-line questions and expecting magic. claude isn't google. it doesn't just retrieve — it thinks. and the quality of its thinking depends almost entirely on the quality of your input.&lt;/p&gt;

&lt;p&gt;compare these two:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bad: what's a good marketing strategy?

better: i run a small skincare brand targeting women aged 25-40 in india. 
we sell on instagram and our own website. our budget is tight — under ₹20k/month 
for ads. what are 3 specific things i should focus on in the next 30 days?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;the second one gets you something you can actually use on monday morning. the first one gets you a wikipedia page.&lt;/p&gt;

&lt;p&gt;the rule: &lt;strong&gt;give claude your situation, your constraints, and what "good" looks like for you.&lt;/strong&gt; it can't read your mind, but it's very good at working with context.&lt;/p&gt;




&lt;h2&gt;2. use the "role + task + format" structure&lt;/h2&gt;

&lt;p&gt;this is the structure that unlocked everything for me. when you give claude three things — a role to play, a task to do, and a format to respond in — the output quality jumps noticeably.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;role: act as a senior HR manager who has hired hundreds of people

task: review this job description i wrote for a marketing coordinator role 
and tell me what's weak, what's missing, and what would put off good candidates

format: give me a numbered list of issues, then a revised version of the JD

[paste your JD here]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;you're not just asking for feedback. you're telling claude &lt;em&gt;who&lt;/em&gt; to be when giving that feedback. a senior HR manager thinks differently than a general AI assistant.&lt;/p&gt;

&lt;p&gt;this works in almost any domain — legal review, financial planning, email drafting, coding. pick the right "role" and you get thinking that's calibrated to that domain.&lt;/p&gt;




&lt;h2&gt;3. ask it to think before it answers&lt;/h2&gt;

&lt;p&gt;claude has a tendency (like most AI) to jump straight to an answer. sometimes that answer is shallow. there's a simple fix:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;before you answer, think through this step by step. consider at least 3 different 
angles before giving me your recommendation.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;what are the weakest parts of the argument you just made?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;that second one is particularly useful. ask claude to challenge its own output and it will often surface things it glossed over the first time. i use this whenever i'm about to act on something important — a business decision, a contract review, a financial projection.&lt;/p&gt;




&lt;h2&gt;4. use it for first drafts, not final drafts&lt;/h2&gt;

&lt;p&gt;people either expect claude to produce publication-ready work on the first try, or they use it for such small tasks that they're underutilizing it completely.&lt;/p&gt;

&lt;p&gt;the right mental model: claude is your first-draft machine. your job is to edit.&lt;/p&gt;

&lt;p&gt;here's how i write emails now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;write a short email to a client who paid late (3 weeks). 
the invoice was for ₹45,000. i want to be firm but not aggressive — 
we still want to work with them. keep it under 150 words.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;that takes me 20 seconds. i get back something that's 80% of the way there. i spend 3 minutes making it sound like me and fixing anything that's off. total time: under 5 minutes. old way: 15-20 minutes of staring at a blank screen.&lt;/p&gt;

&lt;p&gt;the same applies to blog posts, proposals, SOPs, presentation outlines, cold messages — anything where starting is the hard part.&lt;/p&gt;




&lt;h2&gt;5. have conversations, not one-off prompts&lt;/h2&gt;

&lt;p&gt;most people write one prompt, read the response, and start a new chat for their next question. this is like hiring a consultant, having one call with them, and then calling a different consultant for every follow-up.&lt;/p&gt;

&lt;p&gt;claude gets better as a conversation continues. it builds on context. it remembers what you've said. it refines its understanding of what you're trying to do.&lt;/p&gt;

&lt;p&gt;try this instead: when you're working on something, keep one conversation open for that entire project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[start of project]
i'm building a pitch deck for a B2B SaaS product that helps restaurants 
manage inventory. target audience is restaurant owners, not tech people. 
i'll be asking you questions throughout the next hour as i build this.

[later in same chat]
okay what should slide 3 look like? i want to explain the problem without 
using any technical language.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;by slide 8, claude knows your product, your audience, your tone, and your constraints. you don't have to re-explain anything.&lt;/p&gt;




&lt;h2&gt;6. the "documents in, intelligence out" pattern&lt;/h2&gt;

&lt;p&gt;this one is underused and genuinely impressive. you can paste a document into claude — a contract, a report, a competitor's pricing page, a job offer — and ask it to analyze, summarize, compare, or flag things.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;here is a freelance contract i was sent. i'm a graphic designer.
[paste contract]

flag any clauses that:
1. limit my ability to use this work in my portfolio
2. could make me liable for things outside my control
3. are unusual compared to standard freelance contracts

explain each one in plain language, not legal language.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;i've used this to review vendor agreements, summarize 60-page government reports, compare two job offers side-by-side, and pull key data out of dense financial documents. it's not a lawyer. it's not an accountant. but it's a very good first pass that tells you what to ask the actual professional.&lt;/p&gt;




&lt;h2&gt;the honest truth about where people get stuck&lt;/h2&gt;

&lt;p&gt;these techniques sound simple, but applying them consistently across different situations — writing, legal, finance, healthcare, marketing — takes practice. and most tutorials online are either too technical (written by developers) or too vague ("just write better prompts!").&lt;/p&gt;

&lt;p&gt;i spent about three months figuring all of this out through trial and error. i compiled everything — including 114 more prompt codes across 12 specific use cases — into a 40+ page PDF guide built specifically for non-technical people.&lt;/p&gt;

&lt;p&gt;it covers beginner through full automation, with 8 industry playbooks for marketing, legal, healthcare, finance, education, real estate, consulting, and e-commerce. if you're in any of those fields and want to actually get good at claude, it's at &lt;strong&gt;&lt;a href="https://clskills.in/guide" rel="noopener noreferrer"&gt;https://clskills.in/guide&lt;/a&gt;&lt;/strong&gt; — starts at $19.&lt;/p&gt;

&lt;p&gt;if you're in india, use code &lt;strong&gt;INDIA99&lt;/strong&gt; at checkout.&lt;/p&gt;




&lt;p&gt;the techniques above will make you noticeably better starting today. the rest is just reps.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;by samarth bhamare — clskills.in&lt;/em&gt;&lt;/p&gt;

</description>
      <category>claude</category>
      <category>ai</category>
      <category>beginners</category>
      <category>productivity</category>
    </item>
    <item>
      <title>I picked a 5ms keyword router over an LLM meta-router for my AI app. Here's the math.</title>
      <dc:creator>Samarth Bhamare</dc:creator>
      <pubDate>Sat, 11 Apr 2026 20:06:37 +0000</pubDate>
      <link>https://dev.to/samarth0211/i-picked-a-5ms-keyword-router-over-an-llm-meta-router-for-my-ai-app-heres-the-math-23p2</link>
      <guid>https://dev.to/samarth0211/i-picked-a-5ms-keyword-router-over-an-llm-meta-router-for-my-ai-app-heres-the-math-23p2</guid>
      <description>&lt;h1&gt;
  
  
  I picked a 5ms keyword router over an LLM meta-router for my AI app. Here's the math.
&lt;/h1&gt;

&lt;p&gt;short version: i was building a desktop AI sales coach where the user types a question and the system picks one of 10 "founder voices" to answer in. i prototyped two routers — a deterministic keyword one and a meta-LLM one. the deterministic one was 600x faster, free, and 85% accurate. i shipped the deterministic one. here's why and what the code looks like.&lt;/p&gt;

&lt;p&gt;if you're building an AI app where you have to pick between multiple specialized prompts/personas/agents per request, this might save you a few weeks.&lt;/p&gt;

&lt;h2&gt;the setup&lt;/h2&gt;

&lt;p&gt;i shipped a product called the &lt;strong&gt;Sales Agent Pack&lt;/strong&gt; last night (&lt;a href="https://clskills.in/sales-agent-saas" rel="noopener noreferrer"&gt;clskills.in/sales-agent-saas&lt;/a&gt;). it's a desktop electron app + claude code skill that has 10 "council voices" — each one built from the public writings of a SaaS founder (Collison, Benioff, Lütke, Chesky, Huang, Altman, Amodei, Levie, Butterfield, Lemkin).&lt;/p&gt;

&lt;p&gt;the user types a sales question. the system picks ONE voice to answer in. that "pick" is the routing decision.&lt;/p&gt;

&lt;p&gt;example questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"should i raise prices to $79?" → should route to &lt;strong&gt;Lemkin&lt;/strong&gt; (saastr operator, pricing experiments)&lt;/li&gt;
&lt;li&gt;"we're losing to hubspot, what's the angle?" → should route to &lt;strong&gt;Levie&lt;/strong&gt; (challenger positioning)&lt;/li&gt;
&lt;li&gt;"the deck feels generic" → should route to &lt;strong&gt;Chesky&lt;/strong&gt; (identity-driven sales, design)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;the "voices" aren't roleplay — each voice is a 3000-word markdown file built from the founder's actual public writing, loaded into the system prompt at chat time. so the routing decision matters: pick the wrong voice and the answer is technically correct but the &lt;em&gt;style&lt;/em&gt; and &lt;em&gt;frame&lt;/em&gt; is wrong.&lt;/p&gt;

&lt;h2&gt;option A: meta-LLM router&lt;/h2&gt;

&lt;p&gt;the obvious approach. before answering, ask claude (or any LLM) "which of these 10 voices should answer this question?"&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;pickVoiceLLM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;anthropic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;claude-sonnet-4-5&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;max_tokens&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
      &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`Question: "&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"\n\nWhich voice should answer? Reply with just one word from: Collison, Benioff, Lutke, Chesky, Huang, Altman, Amodei, Levie, Butterfield, Lemkin.`&lt;/span&gt;
    &lt;span class="p"&gt;}],&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;trim&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;measured cost (over 100 sample questions):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;latency p50: &lt;strong&gt;1,800 ms&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;latency p95: &lt;strong&gt;3,100 ms&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;cost per call: &lt;strong&gt;~$0.005&lt;/strong&gt; (50 tokens out, ~120 tokens in)&lt;/li&gt;
&lt;li&gt;accuracy (vs. my hand-labeled "correct" answer): &lt;strong&gt;89%&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
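
&lt;p&gt;to put that in perspective (illustrative arithmetic using the per-call numbers above, not an extra measurement): at 10,000 messages a day,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$0.005/call × 10,000 calls/day ≈ $50/day ≈ $1,500/month
plus 1.8 to 3.1 seconds of extra wait before the real answer even starts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;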

&lt;h2&gt;option B: deterministic keyword router&lt;/h2&gt;

&lt;p&gt;less sexy. just a function with a bunch of &lt;code&gt;if&lt;/code&gt;s and a buyer-archetype heuristic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;pickVoice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;conversationType&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;m&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toLowerCase&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="c1"&gt;// Hard overrides — explicit conversation types win&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;conversationType&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;post_mortem&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;primary&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Lemkin&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;voiceFile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;council/10-lemkin.md&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;conversationType&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;competitive_positioning&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;primary&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Levie&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;voiceFile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;council/08-levie.md&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// Pricing keywords → Lemkin (the saastr operator)&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/price|pricing|raise.*price|tier|discount|annual.*contract/&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;primary&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Lemkin&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;voiceFile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;council/10-lemkin.md&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// Developer-buyer keywords → Collison (Stripe playbook)&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/api|developer|sdk|docs|integration|technical.*buyer/&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;primary&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Collison&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;voiceFile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;council/01-collison.md&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// Enterprise / trust → Benioff&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/enterprise|procurement|security review|legal|compliance|fortune/&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;primary&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Benioff&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;voiceFile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;council/02-benioff.md&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// Design / identity / story → Chesky&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/deck|story|narrative|design|identity|brand|generic/&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;primary&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Chesky&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;voiceFile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;council/04-chesky.md&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// Underdog / fairness / anti-incumbent → Lütke&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/underdog|anti|hubspot|salesforce.*alternative|david.*goliath/&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;primary&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Lutke&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;voiceFile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;council/03-lutke.md&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// Default — Lemkin handles "general sales question"&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;primary&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Lemkin&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;voiceFile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;council/10-lemkin.md&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;measured (over the same 100 questions):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;latency p50: &lt;strong&gt;3 ms&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;latency p95: &lt;strong&gt;5 ms&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;cost per call: &lt;strong&gt;$0&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;accuracy (vs. my hand-labeled "correct"): &lt;strong&gt;85%&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  the math
&lt;/h2&gt;

&lt;p&gt;over a year of moderate use (let's say 50 questions per buyer per month × 12 months × 1,000 buyers = 600,000 questions/year):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Meta-LLM router&lt;/th&gt;
&lt;th&gt;Deterministic router&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Total latency added&lt;/td&gt;
&lt;td&gt;1,080,000 sec (~12.5 days of waiting)&lt;/td&gt;
&lt;td&gt;1,800 sec (~30 minutes)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Total cost added&lt;/td&gt;
&lt;td&gt;$3,000&lt;/td&gt;
&lt;td&gt;$0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Accuracy&lt;/td&gt;
&lt;td&gt;89%&lt;/td&gt;
&lt;td&gt;85%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Failure mode&lt;/td&gt;
&lt;td&gt;API outage = no routing&lt;/td&gt;
&lt;td&gt;Code bug = obvious + fixable&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;the 4% accuracy delta costs $3,000 and 12.5 buyer-days of waiting.&lt;/strong&gt; that's not worth it. especially because the 15% miss rate on the deterministic version isn't catastrophic — it picks the &lt;em&gt;wrong&lt;/em&gt; council voice, but the answer is still useful, just framed by Lemkin when it should have been framed by Chesky.&lt;/p&gt;
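the table totals fall straight out of the per-call numbers (~1.8 s and ~$0.005 per meta-LLM call, which is what the totals imply; 3 ms and $0 on the deterministic path). a quick arithmetic check:

```javascript
// sanity-check of the year-of-use table; the per-call figures are the ones
// the totals imply (~1.8 s and ~$0.005 per meta-LLM call, 3 ms / $0 deterministic)
const QUESTIONS_PER_YEAR = 50 * 12 * 1000;            // 50 q/buyer/month × 12 months × 1000 buyers

const metaLatencySec = QUESTIONS_PER_YEAR * 18 / 10;  // 1,080,000 s
const metaLatencyDays = metaLatencySec / 86400;       // 12.5 days of cumulative waiting
const metaCostUsd = QUESTIONS_PER_YEAR * 5 / 1000;    // $3,000

const detLatencySec = QUESTIONS_PER_YEAR * 3 / 1000;  // 1,800 s
const detLatencyMin = detLatencySec / 60;             // 30 minutes

console.log({ metaLatencyDays, metaCostUsd, detLatencyMin });
```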

&lt;p&gt;&lt;strong&gt;plus&lt;/strong&gt; there's a manual escape hatch. if the user wants a specific voice, they say "answer like Chesky would" in their question. the keyword &lt;code&gt;chesky&lt;/code&gt; triggers an explicit override. zero ML required. infinite override-ability.&lt;/p&gt;
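a sketch of what that override check can look like (the function and map names here are illustrative, not the actual source; the voice files are the ones from the router above):

```javascript
// illustrative override check: a council member named in the question wins
// before any keyword rule runs (names/structure assumed, not the real source)
const VOICES = {
  collison: { primary: 'Collison', voiceFile: 'council/01-collison.md' },
  chesky:   { primary: 'Chesky',   voiceFile: 'council/04-chesky.md' },
  lemkin:   { primary: 'Lemkin',   voiceFile: 'council/10-lemkin.md' },
  // remaining council members omitted from this sketch
};

function explicitOverride(message) {
  const m = message.toLowerCase();
  for (const [keyword, voice] of Object.entries(VOICES)) {
    if (m.includes(keyword)) return voice; // explicit mention: hard override
  }
  return null; // no override: fall through to the keyword rules
}

console.log(explicitOverride('answer like Chesky would: is my deck too generic?'));
// { primary: 'Chesky', voiceFile: 'council/04-chesky.md' }
```

if nothing matches, the deterministic rules take over exactly as before.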

&lt;h2&gt;
  
  
  when meta-LLM routing IS worth it
&lt;/h2&gt;

&lt;p&gt;i'm not saying "always use deterministic." here's when i'd flip the decision:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;when the routing space is large and fluid.&lt;/strong&gt; if i had 100 voices instead of 10, hand-coded keyword rules become unmaintainable. an LLM router trades my maintenance time for a small linear per-call cost, and at that scale the trade flips in its favor.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;when the cost of wrong is high.&lt;/strong&gt; if mis-routing meant "user gets a totally irrelevant answer" instead of "user gets a reasonable answer in a slightly off voice," the 4% accuracy delta is worth $3,000.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;when you have reliable structured outputs.&lt;/strong&gt; with JSON mode + a constrained enum, an LLM router becomes much more reliable than free-form generation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;when latency budget is generous.&lt;/strong&gt; for an async batch system, +2 seconds doesn't matter. for an interactive chat, it's perceptible and annoying.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
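point 3 can be made concrete without any API specifics: constrain whatever the router model returns to an allow-list, and fall back to the deterministic default on anything else. a sketch (`rawModelReply` stands in for a real LLM response; no API call is made):

```javascript
// constrained-enum routing: accept only known voices, default on anything else
// (rawModelReply is a stand-in for an actual LLM response; no API call here)
const VOICES = ['Collison', 'Benioff', 'Lutke', 'Chesky', 'Levie', 'Lemkin'];
const DEFAULT_VOICE = 'Lemkin';

function parseRoute(rawModelReply) {
  try {
    const { voice } = JSON.parse(rawModelReply);
    return VOICES.includes(voice) ? voice : DEFAULT_VOICE; // enum constraint
  } catch {
    return DEFAULT_VOICE; // malformed output: deterministic default
  }
}

console.log(parseRoute('{"voice":"Chesky"}'));    // Chesky
console.log(parseRoute('{"voice":"Socrates"}'));  // Lemkin
console.log(parseRoute('free-form rambling'));    // Lemkin
```

with this wrapper, the worst the LLM router can do is degrade to the deterministic default.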

&lt;h2&gt;
  
  
  the v0.3.0 plan
&lt;/h2&gt;

&lt;p&gt;i'm not hard-committed to deterministic forever. the actual plan is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;v0.1.0 — deterministic router (shipped)&lt;/li&gt;
&lt;li&gt;v0.1.x → v0.2.x — collect routing data. for every chat, log &lt;code&gt;(question, deterministic_pick, user_override_if_any)&lt;/code&gt;. let it run for ~3 months.&lt;/li&gt;
&lt;li&gt;v0.3.0 — train a tiny classifier on the logged data. probably 100 lines of scikit-learn. inference latency: also ~5 ms. accuracy estimate: ~92%.&lt;/li&gt;
&lt;li&gt;only switch to meta-LLM router if the classifier plateaus below ~90% AND the 8% miss rate is causing real user complaints.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;the "premature optimization is the root of all evil" version of this is: &lt;strong&gt;don't reach for an LLM call when an &lt;code&gt;if&lt;/code&gt; statement does the job.&lt;/strong&gt; especially when you're paying for the LLM call out of pocket and the &lt;code&gt;if&lt;/code&gt; statement runs in single-digit milliseconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  try it
&lt;/h2&gt;

&lt;p&gt;if you want to see the deterministic router in action — the product is at &lt;a href="https://clskills.in/sales-agent-saas" rel="noopener noreferrer"&gt;clskills.in/sales-agent-saas&lt;/a&gt;. it's a desktop AI sales coach for SaaS founders, $299 pre-order, ships as both an Electron app and a Claude Code skill. 7-day refund.&lt;/p&gt;

&lt;p&gt;i wrote a longer technical post about the rest of the architecture (why no BrowserWindow, the auto-update endpoint, the ELECTRON_RUN_AS_NODE trap that almost killed me) on my hashnode at &lt;a href="https://clskills.hashnode.dev" rel="noopener noreferrer"&gt;clskills.hashnode.dev&lt;/a&gt; — go read that if this one was useful.&lt;/p&gt;

&lt;p&gt;questions / objections / "you should have done X" — drop them in the comments. i read everything.&lt;/p&gt;

&lt;p&gt;— samarth&lt;/p&gt;

</description>
      <category>ai</category>
      <category>electron</category>
      <category>javascript</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Claude "Cheat Codes" — 120 Tested Prompt Prefixes That Change How Claude Responds</title>
      <dc:creator>Samarth Bhamare</dc:creator>
      <pubDate>Thu, 09 Apr 2026 10:04:01 +0000</pubDate>
      <link>https://dev.to/samarth0211/claude-cheat-codes-120-tested-prompt-prefixes-that-change-how-claude-responds-2a0i</link>
      <guid>https://dev.to/samarth0211/claude-cheat-codes-120-tested-prompt-prefixes-that-change-how-claude-responds-2a0i</guid>
      <description>&lt;p&gt;You can ask Claude the same question two different ways and get two completely different answers.&lt;/p&gt;

&lt;p&gt;I don't mean rephrasing. I mean adding one tiny prefix in front of your prompt. The community has converged on about 120 of these "cheat codes" — none of them are official Anthropic features, but Claude consistently recognizes them.&lt;/p&gt;

&lt;p&gt;I spent 3 months testing which ones actually work. Here's the complete breakdown.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Claude cheat codes?
&lt;/h2&gt;

&lt;p&gt;Short commands you put at the start of your message:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Slash commands&lt;/strong&gt; — &lt;code&gt;/ghost&lt;/code&gt;, &lt;code&gt;/skeptic&lt;/code&gt;, &lt;code&gt;/punch&lt;/code&gt;, &lt;code&gt;/noyap&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Uppercase keywords&lt;/strong&gt; — &lt;code&gt;L99&lt;/code&gt;, &lt;code&gt;ULTRATHINK&lt;/code&gt;, &lt;code&gt;OODA&lt;/code&gt;, &lt;code&gt;PERSONA&lt;/code&gt;, &lt;code&gt;BEASTMODE&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Claude sees the prefix, recognizes the intent, and adjusts its behavior.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 5 you should learn first
&lt;/h2&gt;

&lt;h3&gt;
  
  
  L99
&lt;/h3&gt;

&lt;p&gt;Forces committed opinions instead of "it depends."&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;L99 Should I use Postgres or MongoDB for a fintech startup?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You get a real recommendation with specific tradeoffs, not a balanced pros-and-cons essay.&lt;/p&gt;

&lt;h3&gt;
  
  
  PERSONA (with specificity)
&lt;/h3&gt;

&lt;p&gt;Generic: "Act like a database expert" → barely moves the needle.&lt;/p&gt;

&lt;p&gt;Specific:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PERSONA: Senior Postgres DBA at Stripe, 15 years, 
cynical about ORMs. How do I optimize this query?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The specificity gives Claude permission to have real opinions.&lt;/p&gt;

&lt;h3&gt;
  
  
  /ghost
&lt;/h3&gt;

&lt;p&gt;Strips every "AI tell" from writing. Em-dashes, "I hope this helps", balanced sentence pairs — all gone. Output reads like a human typed it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/ghost Rewrite this so it doesn't sound like AI.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  /skeptic
&lt;/h3&gt;

&lt;p&gt;Challenges your question before answering it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/skeptic How do I A/B test 200 landing page variants?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude responds: "With your traffic, you'd need 6 months per variant. Want to test 5 instead?"&lt;/p&gt;

&lt;p&gt;Catches the wrong-question problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  ULTRATHINK
&lt;/h3&gt;

&lt;p&gt;Maximum reasoning depth. 800–1200 word thesis with multiple analytical layers. Zero hedging.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ULTRATHINK Why is our retention dropping after week 2?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Reserve for the 2–3 decisions per week that earn the depth.&lt;/p&gt;

&lt;h2&gt;
  
  
  Codes that stack (combos)
&lt;/h2&gt;

&lt;p&gt;Most prefixes get stronger combined:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Combo&lt;/th&gt;
&lt;th&gt;Best for&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/ghost + /punch&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Cold emails — humanize + shorten&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;PERSONA + L99&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Technical decisions — expert + committed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/skeptic + ULTRATHINK&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Strategy — right question + max depth&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/punch + /trim + /raw&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Slack messages — tight, no markdown&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  The full 120
&lt;/h2&gt;

&lt;p&gt;I organized all of them into a searchable reference with categories:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Count&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Writing &amp;amp; Style&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;/ghost, /punch, /trim, /voice&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Thinking &amp;amp; Reasoning&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;L99, /deepthink, OODA, XRAY&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Coding &amp;amp; Technical&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;/debug, REFACTOR, /shipit, ARCHITECT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Power Commands&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;/godmode, PERSONA, /memory, /chain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hidden Modes&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;BEASTMODE, /nofilter, CEOMODE, OPERATOR&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Output Formatting&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;/json, /yaml, /table, /checklist&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Research &amp;amp; Analysis&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;DEEPDIVE, COMPARE, /critique, /devil&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Productivity&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;/template, /recipe, /playbook, /sop&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Communication&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;/email, /reply, /dm, /escalate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Experimental&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;PARETO, FINISH, /postmortem, WORSTCASE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Community Picks&lt;/td&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;td&gt;ULTRATHINK, /noyap, HARDMODE, /skeptic&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Free interactive version:&lt;/strong&gt; &lt;a href="https://clskills.in/prompts" rel="noopener noreferrer"&gt;clskills.in/prompts&lt;/a&gt; — 11 free codes, click to copy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Individual deep dives:&lt;/strong&gt; &lt;a href="https://clskills.in/prompt/l99" rel="noopener noreferrer"&gt;clskills.in/prompt/l99&lt;/a&gt;, &lt;a href="https://clskills.in/prompt/ghost" rel="noopener noreferrer"&gt;/prompt/ghost&lt;/a&gt;, &lt;a href="https://clskills.in/prompt/persona" rel="noopener noreferrer"&gt;/prompt/persona&lt;/a&gt;, &lt;a href="https://clskills.in/prompt/skeptic" rel="noopener noreferrer"&gt;/prompt/skeptic&lt;/a&gt; — 120 pages, one per code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Full cheat sheet&lt;/strong&gt; (paid, $5–$10): &lt;a href="https://clskills.in/cheat-sheet" rel="noopener noreferrer"&gt;clskills.in/cheat-sheet&lt;/a&gt; — before/after examples for every code, when-NOT-to-use warnings, combos, 10 workflow playbooks. Lifetime updates.&lt;/p&gt;

&lt;h2&gt;
  
  
  Do these actually work or is it placebo?
&lt;/h2&gt;

&lt;p&gt;Honest answer: these are community conventions, not hardcoded features. They work because Claude has seen enough people use them that it learned to recognize the intent.&lt;/p&gt;

&lt;p&gt;Evidence they work:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Output structure measurably changes (run the same prompt with/without L99 in fresh sessions)&lt;/li&gt;
&lt;li&gt;Multiple people discovered the same codes independently&lt;/li&gt;
&lt;li&gt;Anthropic hasn't denied them&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Caveats:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Top 15 are very consistent. Bottom 30 are more hit-or-miss.&lt;/li&gt;
&lt;li&gt;Future model updates could change behavior.&lt;/li&gt;
&lt;li&gt;Some codes work better for certain task types.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Try it now
&lt;/h2&gt;

&lt;p&gt;Pick one:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;L99 [any question you'd normally ask Claude]&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/ghost [any paragraph you want rewritten]&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;PERSONA: [specific expert]. [your question]&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If it doesn't change anything, the rest won't either. If it does — the other 117 are waiting.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Building &lt;a href="https://clskills.in" rel="noopener noreferrer"&gt;clskills.in&lt;/a&gt; — 2,300+ free Claude Code skills + the cheat sheet for prompt prefixes.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>claude</category>
      <category>ai</category>
      <category>productivity</category>
      <category>promptengineering</category>
    </item>
    <item>
      <title>I built 10 autonomous AI agents for Claude Code — here's how they work</title>
      <dc:creator>Samarth Bhamare</dc:creator>
      <pubDate>Thu, 26 Mar 2026 02:35:18 +0000</pubDate>
      <link>https://dev.to/samarth0211/i-built-10-autonomous-ai-agents-for-claude-code-heres-how-they-work-tags-ai-webdev-3bmc</link>
      <guid>https://dev.to/samarth0211/i-built-10-autonomous-ai-agents-for-claude-code-heres-how-they-work-tags-ai-webdev-3bmc</guid>
      <description>&lt;p&gt;Most developers use Claude Code at maybe 20% of its potential.&lt;/p&gt;

&lt;p&gt;They type "fix this bug" and expect magic. When the output is mediocre, they blame the tool.&lt;/p&gt;

&lt;p&gt;The problem is not the tool. It is the prompt.&lt;/p&gt;

&lt;p&gt;I spent weeks building production-grade instruction files that turn Claude Code into specialized autonomous agents. Each one combines multiple skills into a structured workflow with real bash commands, specific patterns to check, output formats, and rules to avoid false positives.&lt;/p&gt;

&lt;p&gt;Here are all 10 agents and what they actually do:&lt;/p&gt;

&lt;p&gt;PR Review Agent&lt;br&gt;
Reviews every changed file in a branch against main. Checks for correctness (off-by-one errors, missing await, null handling), error handling (empty catch blocks, swallowed errors), naming clarity, and unused imports. Runs tsc and eslint on changed files. Outputs a structured report with exact file:line references, corrected code suggestions, and a verdict (approve, request changes, needs discussion). Always includes a "what looks good" section so it does not just criticize.&lt;/p&gt;

&lt;p&gt;Test Writer Agent&lt;br&gt;
Detects your test framework by reading package.json and existing test files. Finds every exported function without a corresponding test. For each function, it analyzes inputs, outputs, dependencies, and every branch (if, else, switch, ternary). Generates tests covering happy path, edge cases (null, empty, zero, boundary values), error conditions, and async behavior. Every test follows the AAA pattern with realistic data, not "foo" and "bar". Runs the tests after writing them to make sure they pass.&lt;/p&gt;

&lt;p&gt;Bug Fixer Agent&lt;br&gt;
Give it an error message, stack trace, or description of wrong behavior. It reads the stack trace bottom-to-top, traces the execution path through your code, and categorizes the root cause (off-by-one, null reference, async timing, type coercion, logic error, stale state). Before proposing a fix, it verifies the hypothesis explains all symptoms, checks git blame to see if a recent change introduced the bug, and looks for the same pattern elsewhere in the codebase. Fix diffs are minimal. Never modifies tests to make them pass.&lt;/p&gt;

&lt;p&gt;Documentation Agent&lt;br&gt;
Reads your actual source code to write documentation. Not guessing from function names. It checks package.json for the stack, finds entry points, reads route handlers, and builds a mental model of what the project does. Generates README with getting started, environment variables (read from .env.example), project structure, API reference (from actual route handler code), and scripts. Also generates JSDoc for undocumented exported functions with examples. Never fabricates endpoints or features that do not exist.&lt;/p&gt;

&lt;p&gt;Security Audit Agent&lt;br&gt;
Full OWASP Top 10 audit. Starts with secret detection (hardcoded API keys, AWS credentials, private keys, connection strings with passwords). Then dependency vulnerabilities via npm audit. Then injection checks: SQL injection (string concatenation in queries), command injection (user input in exec/spawn), XSS (dangerouslySetInnerHTML, innerHTML). Then auth audit: password hashing algorithm, JWT validation, rate limiting, session handling. Then CORS and security headers. Outputs a prioritized report with attack scenarios and exact code fixes.&lt;/p&gt;

&lt;p&gt;Refactoring Agent&lt;br&gt;
Only starts if tests pass. Finds the largest files, duplicate logic (files with similar imports), dead code (exports not imported anywhere), and complexity issues (deep nesting, long parameter lists, god objects). Executes one refactor at a time: dead code removal, extract shared logic, split long functions, guard clauses for nested conditions, named constants for magic numbers. Runs tests after every single change. If tests break, undoes the refactor. Never changes public APIs or behavior.&lt;/p&gt;

&lt;p&gt;CI/CD Pipeline Agent&lt;br&gt;
Reads your project structure, detects the framework and build commands, checks for existing workflow files, and generates or fixes GitHub Actions or GitLab CI configs. Knows about caching strategies for npm, pip, and Docker layers. Sets up proper job dependencies, artifacts, and environment variables. If a pipeline is failing, reads the error logs and traces the failure to the exact step and fix.&lt;/p&gt;

&lt;p&gt;Database Migration Agent&lt;br&gt;
Detects your ORM (Prisma, Drizzle, TypeORM, Knex, Django, Rails). Reads the current schema, compares it against the code that uses the database, and generates migration files for missing changes. For every migration, checks for data loss risks (dropping columns with data, changing types), generates a rollback script, estimates execution time for large tables, and suggests running during low-traffic windows if the migration locks tables.&lt;/p&gt;

&lt;p&gt;Performance Optimizer Agent&lt;br&gt;
Three-layer analysis. Frontend: runs bundle analysis, finds large dependencies, identifies components that re-render unnecessarily, checks for missing lazy loading and code splitting. Backend: finds N+1 queries, missing database indexes, API endpoints that could use caching, synchronous operations that should be async. Memory: looks for event listener leaks, growing arrays that are never cleared, unclosed connections. Every optimization includes before and after measurements.&lt;/p&gt;

&lt;p&gt;Onboarding Agent&lt;br&gt;
Maps the entire codebase. Identifies the framework, entry points, routing, data flow from request to database to response. Documents the directory structure with what each folder actually contains (not just the folder name). Finds conventions by reading existing code (naming patterns, error handling patterns, test patterns). Lists key files a new developer should read first. Identifies gotchas (unusual patterns, workarounds, things that look wrong but are intentional by checking git blame). Outputs a structured onboarding guide.&lt;/p&gt;

&lt;p&gt;How to use them&lt;/p&gt;

&lt;p&gt;Each agent is a .md file. You download it, put it in ~/.claude/skills/, and invoke it with /agent-name in Claude Code.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/pr-review-agent
/security-audit-agent
/test-writer-agent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That is it. No API keys, no setup, no subscriptions.&lt;/p&gt;

&lt;p&gt;All 10 agents plus 789 additional skills are free and open source at &lt;a href="https://clskills.in" rel="noopener noreferrer"&gt;clskills.in&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="//github.com/Samarth0211/claude-skills-hub"&gt;github.com/Samarth0211/claude-skills-hub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I would love feedback on which agents you find most useful and what other agents you would want built.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>opensource</category>
    </item>
    <item>
      <title>789 Free Skills to Supercharge Claude Code</title>
      <dc:creator>Samarth Bhamare</dc:creator>
      <pubDate>Wed, 25 Mar 2026 06:44:21 +0000</pubDate>
      <link>https://dev.to/samarth0211/789-free-skills-to-supercharge-claude-code-6</link>
      <guid>https://dev.to/samarth0211/789-free-skills-to-supercharge-claude-code-6</guid>
      <description>&lt;p&gt;If you use Claude Code, you already know it's one of the best AI coding tools out there. But most people don't know about its most powerful feature: skill files.&lt;/p&gt;

&lt;p&gt;A skill file is a .md file you drop into ~/.claude/skills/. Claude reads it automatically and follows those instructions whenever relevant. It's like giving Claude a cheat sheet written by a senior engineer.&lt;/p&gt;

&lt;p&gt;The problem? Writing good skill files takes time. A vague prompt gives you vague code. A detailed skill file with real commands, working examples, and specific pitfalls gives you production-ready output.&lt;/p&gt;

&lt;p&gt;So I built CLSkills — a free, searchable library of 789 skill files across 71 categories.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. Create the skills directory (one time)
mkdir -p ~/.claude/skills

# 2. Download any skill
curl -o ~/.claude/skills/docker-compose.md \
  https://clskills.in/skills/docker/docker-compose.md

# 3. Use Claude Code normally — it reads the skill automatically
claude "set up a docker compose for my Node app with PostgreSQL and Redis"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That's it. No config, no plugins, no setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's inside each skill
&lt;/h2&gt;

&lt;p&gt;Every skill follows the same structure:&lt;/p&gt;

&lt;p&gt;Expert role — Claude adopts a specific persona. Not "helpful assistant" but "senior DevOps engineer specializing in container orchestration."&lt;/p&gt;

&lt;p&gt;What to check first — prerequisites before starting. Does the project have a Dockerfile? Is Docker installed? What version?&lt;/p&gt;

&lt;p&gt;Step-by-step instructions — numbered steps with real commands. Not "set up the database" but "run docker compose up -d postgres and verify with docker compose ps."&lt;/p&gt;

&lt;p&gt;Working code — 25-40 lines of copy-paste ready code. Not pseudocode, not skeleton code. Real, working implementations.&lt;/p&gt;

&lt;p&gt;Pitfalls — things that actually go wrong with that specific technology. Not generic advice like "handle errors" but "PostgreSQL containers take 5-10 seconds to initialize — add a health check with pg_isready before starting dependent services."&lt;/p&gt;

&lt;h2&gt;
  
  
  Example: Two Factor Auth skill
&lt;/h2&gt;

&lt;p&gt;Here's what a real skill looks like:&lt;/p&gt;

&lt;p&gt;You are a security engineer implementing two-factor authentication.&lt;br&gt;
The user wants to add 2FA/MFA to an existing auth flow using TOTP&lt;br&gt;
and backup codes.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to check first
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Verify speakeasy or otplib is installed: npm list speakeasy&lt;/li&gt;
&lt;li&gt;Confirm database has twoFactorSecret, twoFactorEnabled, backupCodes columns&lt;/li&gt;
&lt;li&gt;Check JWT secret or session store is configured&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Steps
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Install packages: npm install speakeasy qrcode&lt;/li&gt;
&lt;li&gt;Create 2FA setup endpoint using speakeasy.generateSecret()
and return QR code via qrcode.toDataURL()&lt;/li&gt;
&lt;li&gt;Create verification endpoint with speakeasy.totp.verify()&lt;/li&gt;
&lt;li&gt;Generate 8-10 backup codes, store them hashed in the database&lt;/li&gt;
&lt;li&gt;Add 2FA check middleware on protected routes&lt;/li&gt;
&lt;li&gt;Create backup code redemption endpoint&lt;/li&gt;
&lt;li&gt;Implement "remember this device" with a 30-day cookie&lt;/li&gt;
&lt;li&gt;Add disable 2FA endpoint requiring password + valid TOTP token&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Pitfalls
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Never log TOTP secrets — they're equivalent to passwords&lt;/li&gt;
&lt;li&gt;Always hash backup codes with bcrypt before storing&lt;/li&gt;
&lt;li&gt;Use 30-second window for TOTP verification, not 60&lt;/li&gt;
&lt;li&gt;Rate limit verification attempts to prevent brute force&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When Claude reads this, it doesn't just "try its best." It follows these exact steps, uses the right packages, and avoids the specific mistakes that would bite you in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  What categories are covered
&lt;/h2&gt;

&lt;p&gt;71 categories across every area of development:&lt;/p&gt;

&lt;p&gt;Frontend: React, Vue, Angular, Svelte, Next.js, Tailwind&lt;br&gt;
Backend: Node.js, Python, Go, Rust, Java, GraphQL, REST, gRPC&lt;br&gt;
DevOps: Docker, Kubernetes, CI/CD, Terraform, Ansible&lt;br&gt;
Cloud: AWS, GCP, Azure, Serverless, Edge Functions&lt;br&gt;
Databases: PostgreSQL, MongoDB, Redis, Elasticsearch, Prisma&lt;br&gt;
Testing: Unit, Integration, E2E, React Testing Library, Playwright&lt;br&gt;
Security: Auth, OAuth, JWT, CORS, Secrets Management&lt;br&gt;
And more: AI/ML, Mobile, Desktop, CLI Tools, Payments, Search, Monitoring...&lt;br&gt;
Why this matters&lt;br&gt;
I've been using Claude Code daily for months. The difference between Claude with and without skill files is massive.&lt;/p&gt;

&lt;p&gt;Without skills: "Here's a basic Docker setup..." followed by a generic Dockerfile that works but misses health checks, multi-stage builds, non-root users, and layer caching.&lt;/p&gt;

&lt;p&gt;With skills: Claude produces output close to what a senior DevOps engineer would write, because it's following instructions written by one.&lt;/p&gt;

&lt;p&gt;The skills essentially encode expert knowledge into a format Claude can follow consistently. You get senior-level output every time, not just when Claude "feels like it."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try it&lt;/strong&gt;&lt;br&gt;
Browse all 789 skills at &lt;a href="https://clskills.in" rel="noopener noreferrer"&gt;clskills.in&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Everything is free, no login required, no paywall. Download individual .md files or grab entire collections as a ZIP.&lt;/p&gt;

&lt;p&gt;The project is open source: &lt;a href="https://github.com/Samarth0211/claude-skills-hub" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have skill ideas or want to contribute, join us at &lt;a href="https://www.reddit.com/r/ClaudeCodeSkills/" rel="noopener noreferrer"&gt;r/ClaudeCodeSkills&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;What skills do you use with Claude Code? I'd love to hear what workflows people are building.&lt;/p&gt;

</description>
      <category>claudecode</category>
      <category>webdev</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
