<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Toruk Makto</title>
    <description>The latest articles on DEV Community by Toruk Makto (@torukmakto2992).</description>
    <link>https://dev.to/torukmakto2992</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3882131%2F2995d75a-09c8-4a9f-af76-d045020f28cc.jpg</url>
      <title>DEV Community: Toruk Makto</title>
      <link>https://dev.to/torukmakto2992</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/torukmakto2992"/>
    <language>en</language>
    <item>
      <title>I turned 14 business books into Claude Code skills that auto-trigger based on your question</title>
      <dc:creator>Toruk Makto</dc:creator>
      <pubDate>Wed, 29 Apr 2026 02:08:28 +0000</pubDate>
      <link>https://dev.to/torukmakto2992/i-turned-14-business-books-into-claude-code-skills-that-auto-trigger-based-on-your-question-1567</link>
      <guid>https://dev.to/torukmakto2992/i-turned-14-business-books-into-claude-code-skills-that-auto-trigger-based-on-your-question-1567</guid>
      <description>&lt;p&gt;&lt;strong&gt;why this exists&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;been using claude for almost all my business planning - pricing, customer interviews, marketing strategy, sales calls. the problem is claude knows these books from training data but only surface level. ask it about The Mom Test and it'll say "ask open-ended questions." ask it to actually score your customer conversation and it makes up random criteria every time.&lt;/p&gt;

&lt;p&gt;wanted something structured. actual decision trees, scoring rubrics, templates that work the same way every time. started with The Mom Test after someone recommended it to me. turned it into a skill. then couldn't stop. 14 books later here we are.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;what's actually inside each skill&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;every skill follows the same structure:&lt;/p&gt;

&lt;p&gt;a decision tree at the top that tells you whether this is even the right framework for your problem. half the time founders think they have a messaging problem when it's actually distribution or pricing. the skill catches that before you waste time.&lt;/p&gt;

&lt;p&gt;scored checklists you can use in real situations. the mom test skill scores your customer conversations on 10 specific criteria. spin selling has a call planning worksheet. $100M offers has an offer scoring rubric.&lt;/p&gt;

&lt;p&gt;honest limitations. every skill tells you what the book got wrong, what's outdated, and when to stop using it. the lean startup skill flags that innovation accounting barely works outside software. crossing the chasm warns you the bowling alley model is mostly theoretical.&lt;/p&gt;

&lt;p&gt;conflict resolution between books. storybrand says position yourself as the guide. obviously awesome is more product-centric. the skills map exactly where two frameworks disagree and how to resolve it depending on your situation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;who this actually helps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;you're about to do customer interviews → mom test skill gives you exact questions to ask and a scoring rubric to evaluate answers&lt;/p&gt;

&lt;p&gt;you're pricing a new product → monetizing innovation walks you through willingness-to-pay research before you build&lt;/p&gt;

&lt;p&gt;you're writing your landing page → storybrand gives you a fill-in brandscript template so you stop talking about yourself and start talking about the customer's problem&lt;/p&gt;

&lt;p&gt;your marketing isn't converting → the skill figures out whether it's messaging (storybrand), positioning (obviously awesome), channels (traction), or your offer itself ($100M offers)&lt;/p&gt;

&lt;p&gt;you're preparing for a B2B sales call → spin selling gives you a call planner with situation, problem, implication, and need-payoff questions mapped out&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;how to use&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;clone the repo and symlink into claude code - skills auto-trigger based on your question. or just paste any SKILL.md into chatgpt/gemini/cursor as context. works the same way.&lt;/p&gt;
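&lt;p&gt;rough sketch of the clone + symlink step (the &lt;code&gt;skills/&lt;/code&gt; folder layout and the &lt;code&gt;~/.claude/skills&lt;/code&gt; path are my guesses here - check the repo readme for the exact structure):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/getagentseal/founder-playbook
mkdir -p ~/.claude/skills
# link each skill folder so claude code can discover it
ln -s "$PWD/founder-playbook/skills/"* ~/.claude/skills/
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;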

&lt;p&gt;&lt;a href="https://github.com/getagentseal/founder-playbook" rel="noopener noreferrer"&gt;https://github.com/getagentseal/founder-playbook&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;free and open source. genuinely curious what books you'd want added next.&lt;/p&gt;

</description>
      <category>claude</category>
      <category>agentskills</category>
      <category>ai</category>
      <category>founder</category>
    </item>
    <item>
      <title>Reading your AI coding logs: cache hits, retry loops, and other signals</title>
      <dc:creator>Toruk Makto</dc:creator>
      <pubDate>Thu, 16 Apr 2026 09:58:49 +0000</pubDate>
      <link>https://dev.to/torukmakto2992/reading-your-ai-coding-logs-cache-hits-retry-loops-and-other-signals-2o9p</link>
      <guid>https://dev.to/torukmakto2992/reading-your-ai-coding-logs-cache-hits-retry-loops-and-other-signals-2o9p</guid>
      <description>&lt;p&gt;Last week I checked my AI coding spend and it was higher than my AWS bill. I'm paying for Claude Code, Codex, Cursor, the occasional Opus burst, and I had no visibility into where any of it went. Just a number going up.&lt;/p&gt;

&lt;p&gt;Turns out every AI coding tool already writes session data to disk. Claude Code drops JSONL into &lt;code&gt;~/.claude/projects/&lt;/code&gt;. Codex writes to &lt;code&gt;~/.codex/sessions/YYYY/MM/DD/&lt;/code&gt;. Cursor and OpenCode use SQLite databases, and Pi uses JSONL. All of it is sitting there waiting to be read.&lt;/p&gt;
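&lt;p&gt;You can poke at the JSONL without any tooling. Here's a minimal &lt;code&gt;jq&lt;/code&gt; sketch over two hand-written sample lines; the &lt;code&gt;.message.usage&lt;/code&gt; path matches what I see in my own logs, so treat the field names as an assumption rather than a documented schema:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# sum output tokens across session entries (schema assumed from my own logs)
printf '%s\n' \
  '{"message":{"usage":{"input_tokens":50,"output_tokens":120,"cache_read_input_tokens":4000}}}' \
  '{"message":{"usage":{"input_tokens":900,"output_tokens":80,"cache_read_input_tokens":0}}}' \
  | jq -s '[.[] | .message.usage.output_tokens? // 0] | add'
# prints 200; point it at real files with: cat ~/.claude/projects/*/*.jsonl
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;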

&lt;p&gt;I started reading mine and the patterns are obvious once you look.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the data shows
&lt;/h2&gt;

&lt;p&gt;This is one week of my actual AI coding usage:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/your-screenshot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/your-screenshot.png" alt="codeburn dashboard showing $1274 weekly cost, 13634 calls across 1821 sessions, 98.3% cache hit, broken down by project, model, activity, tools, and shell commands" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A few things jumped out immediately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cache hit rate matters more than I thought.&lt;/strong&gt; Claude prices cache reads at 1/10th the cost of fresh input. Opus came in at 98.8% cache hits, which sounds great until I noticed Sonnet 4.6 was at 77.1%. That gap is real money. If your system prompt or the first few files in context are unstable, you're paying full price for the same tokens every turn.&lt;/p&gt;
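&lt;p&gt;The rate itself is cheap to recompute from the same files. A sketch, again assuming the &lt;code&gt;.message.usage&lt;/code&gt; field names and ignoring cache-creation tokens for simplicity:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# cache hit rate (%) = cache reads / (cache reads + fresh input)
cat ~/.claude/projects/*/*.jsonl | jq -s '
  ([.[] | .message.usage.cache_read_input_tokens? // 0] | add) as $cached
  | ([.[] | .message.usage.input_tokens? // 0] | add) as $fresh
  | if ($cached + $fresh) == 0 then 0
    else 100 * $cached / ($cached + $fresh) end'
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;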

&lt;p&gt;&lt;strong&gt;Tool counts tell you the agent's mood.&lt;/strong&gt; 2,126 Bash calls, 990 Reads, 742 Edits in a week. The Read:Edit ratio is roughly 1.3, which is fine. If Read had been 4x higher, I'd know the agent was spelunking instead of executing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One-shot rate is brutal honesty.&lt;/strong&gt; Coding shows 88% one shot. The other 12% needed retries (Edit → Bash → Edit). That's where time and tokens leak silently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model mix reveals overspending.&lt;/strong&gt; Opus 4.6 cost $1219 this week. Sonnet 4.6 cost $38. Some of those Opus turns were small Q&amp;amp;A that Sonnet would have handled fine. I haven't run the experiment of routing them yet, but the gap suggests there's real money on the table.&lt;/p&gt;

&lt;h2&gt;
  
  
  Patterns worth watching for
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Signal&lt;/th&gt;
&lt;th&gt;What it usually means&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Cache hit &amp;lt; 80%&lt;/td&gt;
&lt;td&gt;System prompt or context unstable, caching not configured&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lots of Read calls per session&lt;/td&gt;
&lt;td&gt;Agent re-reading files, missing context&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Low one-shot rate (Coding &amp;lt; 30%)&lt;/td&gt;
&lt;td&gt;Retry loops, agent struggling with edits&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Opus dominating cost on small turns&lt;/td&gt;
&lt;td&gt;Overpowered for the task&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;dispatch_agent&lt;/code&gt; heavy&lt;/td&gt;
&lt;td&gt;Sub-agent fan-out, expected or excessive&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;No MCP usage&lt;/td&gt;
&lt;td&gt;Either you don't use MCP, or your config is broken&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bash dominated by &lt;code&gt;git status&lt;/code&gt;, &lt;code&gt;ls&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Agent exploring instead of executing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Conversation category dominant&lt;/td&gt;
&lt;td&gt;Agent talking instead of doing&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These aren't verdicts, just starting points. A 60% cache hit on a one-off experiment is fine. A persistent 60% across weeks is a config issue.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I'm reading this data
&lt;/h2&gt;

&lt;p&gt;There's a tool called &lt;code&gt;codeburn&lt;/code&gt; that reads all the session formats and renders this dashboard in your terminal. It supports Claude Code, Codex, Cursor, OpenCode, and Pi. No proxy, no API keys; it just reads the local files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx codeburn report &lt;span class="nt"&gt;--period&lt;/span&gt; week
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Repo: &lt;a href="https://github.com/AgentSeal/codeburn" rel="noopener noreferrer"&gt;https://github.com/AgentSeal/codeburn&lt;/a&gt;&lt;br&gt;&lt;br&gt;
Open source, MIT.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters
&lt;/h2&gt;

&lt;p&gt;We obsess over model choice and pricing tier. We argue about Opus vs Sonnet vs GLM. The discussion online is almost entirely about which model to use, never about what your agent is actually doing once it's running.&lt;/p&gt;

&lt;p&gt;The session files have the answer. Every retry, every redundant Read, every cache miss, every misrouted model: it's all there. Looking at it once a week takes ten minutes and tells you more about your spend than any pricing comparison.&lt;/p&gt;

&lt;p&gt;Try reading your own sessions for a week. Even if you don't use any tool, just &lt;code&gt;cat&lt;/code&gt; a few JSONL files and look at the usage blocks. You'll spot at least one pattern you didn't expect.&lt;/p&gt;
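&lt;p&gt;For orientation, the usage block on a single entry in my logs is shaped roughly like this (illustrative numbers, not a schema guarantee):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{"input_tokens": 52, "cache_creation_input_tokens": 1190, "cache_read_input_tokens": 18427, "output_tokens": 341}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;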

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>opensource</category>
      <category>cli</category>
    </item>
    <item>
      <title>Where do your AI coding tokens actually go?</title>
      <dc:creator>Toruk Makto</dc:creator>
      <pubDate>Thu, 16 Apr 2026 09:49:05 +0000</pubDate>
      <link>https://dev.to/torukmakto2992/where-do-your-ai-coding-tokens-actually-go-3ggg</link>
      <guid>https://dev.to/torukmakto2992/where-do-your-ai-coding-tokens-actually-go-3ggg</guid>
      <description>&lt;p&gt;Last week I checked my AI coding spend and it was higher than my AWS bill. I'm paying for Claude Code, Codex, Cursor, the occasional Opus burst, and I had no visibility into where any of it went. Just a number going up.&lt;/p&gt;

&lt;p&gt;Turns out every AI coding tool already writes session data to disk. Claude Code drops JSONL into &lt;code&gt;~/.claude/projects/&lt;/code&gt;. Codex writes to &lt;code&gt;~/.codex/sessions/YYYY/MM/DD/&lt;/code&gt;. Cursor and OpenCode use SQLite databases, and Pi uses JSONL. All of it is sitting there waiting to be read.&lt;/p&gt;
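&lt;p&gt;You can poke at the JSONL without any tooling. Here's a minimal &lt;code&gt;jq&lt;/code&gt; sketch over two hand-written sample lines; the &lt;code&gt;.message.usage&lt;/code&gt; path matches what I see in my own logs, so treat the field names as an assumption rather than a documented schema:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# sum output tokens across session entries (schema assumed from my own logs)
printf '%s\n' \
  '{"message":{"usage":{"input_tokens":50,"output_tokens":120,"cache_read_input_tokens":4000}}}' \
  '{"message":{"usage":{"input_tokens":900,"output_tokens":80,"cache_read_input_tokens":0}}}' \
  | jq -s '[.[] | .message.usage.output_tokens? // 0] | add'
# prints 200; point it at real files with: cat ~/.claude/projects/*/*.jsonl
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;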

&lt;p&gt;I started reading mine and the patterns are obvious once you look.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the data shows
&lt;/h2&gt;

&lt;p&gt;This is one week of my actual AI coding usage:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/your-screenshot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/your-screenshot.png" alt="codeburn dashboard showing $1274 weekly cost, 13634 calls across 1821 sessions, 98.3% cache hit, broken down by project, model, activity, tools, and shell commands" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A few things jumped out immediately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cache hit rate matters more than I thought.&lt;/strong&gt; Claude prices cache reads at 1/10th the cost of fresh input. Opus came in at 98.8% cache hits, which sounds great until I noticed Sonnet 4.6 was at 77.1%. That gap is real money. If your system prompt or the first few files in context are unstable, you're paying full price for the same tokens every turn.&lt;/p&gt;
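&lt;p&gt;The rate itself is cheap to recompute from the same files. A sketch, again assuming the &lt;code&gt;.message.usage&lt;/code&gt; field names and ignoring cache-creation tokens for simplicity:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# cache hit rate (%) = cache reads / (cache reads + fresh input)
cat ~/.claude/projects/*/*.jsonl | jq -s '
  ([.[] | .message.usage.cache_read_input_tokens? // 0] | add) as $cached
  | ([.[] | .message.usage.input_tokens? // 0] | add) as $fresh
  | if ($cached + $fresh) == 0 then 0
    else 100 * $cached / ($cached + $fresh) end'
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;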

&lt;p&gt;&lt;strong&gt;Tool counts tell you the agent's mood.&lt;/strong&gt; 2,126 Bash calls, 990 Reads, 742 Edits in a week. The Read:Edit ratio is roughly 1.3, which is fine. If Read had been 4x higher, I'd know the agent was spelunking instead of executing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One-shot rate is brutal honesty.&lt;/strong&gt; Coding shows 88% one shot. The other 12% needed retries (Edit → Bash → Edit). That's where time and tokens leak silently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model mix reveals overspending.&lt;/strong&gt; Opus 4.6 cost $1219 this week. Sonnet 4.6 cost $38. Some of those Opus turns were small Q&amp;amp;A that Sonnet would have handled fine. I haven't run the experiment of routing them yet, but the gap suggests there's real money on the table.&lt;/p&gt;

&lt;h2&gt;
  
  
  Patterns worth watching for
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Signal&lt;/th&gt;
&lt;th&gt;What it usually means&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Cache hit &amp;lt; 80%&lt;/td&gt;
&lt;td&gt;System prompt or context unstable, caching not configured&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lots of Read calls per session&lt;/td&gt;
&lt;td&gt;Agent re-reading files, missing context&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Low one-shot rate (Coding &amp;lt; 30%)&lt;/td&gt;
&lt;td&gt;Retry loops, agent struggling with edits&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Opus dominating cost on small turns&lt;/td&gt;
&lt;td&gt;Overpowered for the task&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;dispatch_agent&lt;/code&gt; heavy&lt;/td&gt;
&lt;td&gt;Sub-agent fan-out, expected or excessive&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;No MCP usage&lt;/td&gt;
&lt;td&gt;Either you don't use MCP, or your config is broken&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bash dominated by &lt;code&gt;git status&lt;/code&gt;, &lt;code&gt;ls&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Agent exploring instead of executing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Conversation category dominant&lt;/td&gt;
&lt;td&gt;Agent talking instead of doing&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These aren't verdicts, just starting points. A 60% cache hit on a one-off experiment is fine. A persistent 60% across weeks is a config issue.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I'm reading this data
&lt;/h2&gt;

&lt;p&gt;There's a tool called &lt;code&gt;codeburn&lt;/code&gt; that reads all the session formats and renders this dashboard in your terminal. It supports Claude Code, Codex, Cursor, OpenCode, and Pi. No proxy, no API keys; it just reads the local files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx codeburn report &lt;span class="nt"&gt;--period&lt;/span&gt; week
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Repo: &lt;a href="https://github.com/AgentSeal/codeburn" rel="noopener noreferrer"&gt;https://github.com/AgentSeal/codeburn&lt;/a&gt;&lt;br&gt;&lt;br&gt;
Open source, MIT.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters
&lt;/h2&gt;

&lt;p&gt;We obsess over model choice and pricing tier. We argue about Opus vs Sonnet vs GLM. The discussion online is almost entirely about which model to use, never about what your agent is actually doing once it's running.&lt;/p&gt;

&lt;p&gt;The session files have the answer. Every retry, every redundant Read, every cache miss, every misrouted model: it's all there. Looking at it once a week takes ten minutes and tells you more about your spend than any pricing comparison.&lt;/p&gt;

&lt;p&gt;Try reading your own sessions for a week. Even if you don't use any tool, just &lt;code&gt;cat&lt;/code&gt; a few JSONL files and look at the usage blocks. You'll spot at least one pattern you didn't expect.&lt;/p&gt;
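&lt;p&gt;For orientation, the usage block on a single entry in my logs is shaped roughly like this (illustrative numbers, not a schema guarantee):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{"input_tokens": 52, "cache_creation_input_tokens": 1190, "cache_read_input_tokens": 18427, "output_tokens": 341}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;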

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>cli</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
