<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: naoki_JPN</title>
    <description>The latest articles on DEV Community by naoki_JPN (@bokuno_log).</description>
    <link>https://dev.to/bokuno_log</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3849123%2Fb9f651cc-b700-4b86-abdc-5048c5a502d7.png</url>
      <title>DEV Community: naoki_JPN</title>
      <link>https://dev.to/bokuno_log</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bokuno_log"/>
    <language>en</language>
    <item>
      <title>I Built a Claude Code Plugin That Simultaneously Posts to Zenn and dev.to With Just "Publish the article"</title>
      <dc:creator>naoki_JPN</dc:creator>
      <pubDate>Fri, 17 Apr 2026 04:31:47 +0000</pubDate>
      <link>https://dev.to/bokuno_log/i-built-a-claude-code-plugin-that-simultaneously-posts-to-zenn-and-devto-with-just-publish-the-4pmj</link>
      <guid>https://dev.to/bokuno_log/i-built-a-claude-code-plugin-that-simultaneously-posts-to-zenn-and-devto-with-just-publish-the-4pmj</guid>
      <description>&lt;p&gt;I built a plugin for Claude Code that simultaneously posts to Zenn (in Japanese) and dev.to (in English translation) just by saying "publish the article."&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;zenn-post&lt;/strong&gt; — Claude Code Plugin / Skill&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/bokuno-studio/zenn-post-cc-plugin" rel="noopener noreferrer"&gt;https://github.com/bokuno-studio/zenn-post-cc-plugin&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This plugin handles the entire article publishing workflow — creating drafts, formatting Markdown, running git push, and calling the dev.to API — all delegated to Claude Code.&lt;/p&gt;

&lt;h2&gt;
  
  
  What It Can Do
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;"Publish the article" → &lt;strong&gt;simultaneously posts&lt;/strong&gt; to Zenn (Japanese) and dev.to (English translation)&lt;/li&gt;
&lt;li&gt;Pass a Notion page URL → reads the content, converts it to an article, and posts it&lt;/li&gt;
&lt;li&gt;Just describe a topic verbally → fully automated from writing to git push&lt;/li&gt;
&lt;li&gt;Automatically converts Mermaid diagrams to PNG locally (no external services)&lt;/li&gt;
&lt;li&gt;Preview before publishing → runs &lt;code&gt;git push&lt;/code&gt; only after you approve
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Upload this Notion page to Zenn"
"Publish the article"          ← Posts to both Zenn + dev.to
"Post to Zenn only"            ← Zenn only
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Why I Built It
&lt;/h2&gt;

&lt;p&gt;The Zenn posting workflow was quietly tedious:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Markdown file&lt;/li&gt;
&lt;li&gt;Write the frontmatter&lt;/li&gt;
&lt;li&gt;Write the content&lt;/li&gt;
&lt;li&gt;Change &lt;code&gt;published: true&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;git commit &amp;amp; push&lt;/li&gt;
&lt;li&gt;(If also posting to dev.to) Translate to English and call the API&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Doing this every time felt like a chore, so I thought "let Claude Code handle all of it."&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Highlights
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Implemented as a Claude Code Skill (SKILL.md)
&lt;/h3&gt;

&lt;p&gt;Claude Code has a plugin/skill feature where you can register a skill just by writing a prompt in &lt;code&gt;SKILL.md&lt;/code&gt;. No code needed at all — you just define "how it should behave" in natural language.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;skills/zenn-post/SKILL.md  ← that's it
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
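
&lt;p&gt;A skill file of this kind pairs a short frontmatter block with natural-language instructions. The following is a hypothetical sketch to show the shape, not the plugin's actual &lt;code&gt;SKILL.md&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
name: zenn-post
description: Publish articles to Zenn and dev.to
---

When the user says "publish the article":
1. Create the Markdown file with Zenn frontmatter and set published: true
2. git commit and push to the zenn-content repository
3. Translate the article to English and POST it to the dev.to API
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;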



&lt;h3&gt;
  
  
  Direct POST to dev.to via curl
&lt;/h3&gt;

&lt;p&gt;The dev.to API is simple — a single curl request does the job. Claude Code reads the API key from &lt;code&gt;.env&lt;/code&gt; and assembles the request.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://dev.to/api/articles &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"api-key: &lt;/span&gt;&lt;span class="nv"&gt;$DEVTO_API_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{ "article": { "title": "...", "published": true, "body_markdown": "..." } }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Mermaid Diagrams Converted to PNG Locally
&lt;/h3&gt;

&lt;p&gt;Since dev.to doesn't render Mermaid, this plugin uses &lt;code&gt;mermaid-cli&lt;/code&gt; for local conversion into images. The converted PNG is stored in the &lt;code&gt;images/&lt;/code&gt; directory of the zenn-content repository and served via &lt;code&gt;raw.githubusercontent.com&lt;/code&gt; — no external services involved.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx @mermaid-js/mermaid-cli &lt;span class="nt"&gt;-i&lt;/span&gt; diagram.mmd &lt;span class="nt"&gt;-o&lt;/span&gt; diagram.png &lt;span class="nt"&gt;-t&lt;/span&gt; default &lt;span class="nt"&gt;-b&lt;/span&gt; white
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Claude Code installed&lt;/li&gt;
&lt;li&gt;A Zenn account and zenn-content repository (public) on GitHub&lt;/li&gt;
&lt;li&gt;A dev.to account and API key&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/bokuno-studio/zenn-post-cc-plugin ~/.claude/plugins/zenn-post
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Edit the environment info section in &lt;code&gt;skills/zenn-post/SKILL.md&lt;/code&gt; with your own paths, then register it with Claude Code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude plugin marketplace add ~/.claude/plugins/zenn-post
claude plugin &lt;span class="nb"&gt;install &lt;/span&gt;zenn-post
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Changelog
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Version&lt;/th&gt;
&lt;th&gt;Date&lt;/th&gt;
&lt;th&gt;Changes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;v0.1.0&lt;/td&gt;
&lt;td&gt;2026-04-16&lt;/td&gt;
&lt;td&gt;Initial release (Zenn posting only)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;v0.2.0&lt;/td&gt;
&lt;td&gt;2026-04-17&lt;/td&gt;
&lt;td&gt;Added dev.to simultaneous posting and Mermaid CLI support&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;Claude Code's Skill feature enables a new approach to automation: defining Claude's behavior without writing any code. Once a routine task like article publishing is extracted into a skill, a single verbal instruction completes the whole workflow, which is genuinely satisfying.&lt;/p&gt;

&lt;p&gt;Feel free to give it a try — feedback welcome!&lt;/p&gt;

</description>
      <category>claudecode</category>
      <category>zenn</category>
      <category>devto</category>
      <category>ai</category>
    </item>
    <item>
      <title>I Built a Sales Prep AI and It Went Deeper Than Expected</title>
      <dc:creator>naoki_JPN</dc:creator>
      <pubDate>Fri, 17 Apr 2026 01:55:37 +0000</pubDate>
      <link>https://dev.to/bokuno_log/i-built-a-sales-prep-ai-and-it-went-deeper-than-expected-4bcd</link>
      <guid>https://dev.to/bokuno_log/i-built-a-sales-prep-ai-and-it-went-deeper-than-expected-4bcd</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;"Before a first sales meeting, you always research the other company. That part is kind of a pain, right?"&lt;/p&gt;

&lt;p&gt;That thought is where this started. I wanted something that would take a company name and automatically research it, then return a report.&lt;/p&gt;

&lt;p&gt;I figured I could get something working in 2–3 days. But getting it to a genuinely usable level turned out to be much deeper than expected. This is the story of that process.&lt;/p&gt;

&lt;p&gt;What I built: &lt;strong&gt;Sales Prep AI&lt;/strong&gt; (LINE bot)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pre-talk.vercel.app" rel="noopener noreferrer"&gt;https://pre-talk.vercel.app&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Tech Stack
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvz8xv2dav882571wngml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvz8xv2dav882571wngml.png" alt="Tech Stack" width="784" height="300"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  How It Came Together
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Phase 1: Just Get It Working
&lt;/h3&gt;

&lt;p&gt;I started with a simple web form. Enter company name, department, and contact name → it searches the web → GPT-4o-mini analyzes the results → returns a report.&lt;/p&gt;

&lt;p&gt;As I worked on improving reasoning quality, I switched from GPT-4o-mini to &lt;strong&gt;Claude Sonnet&lt;/strong&gt; for the analysis layer. Light input-interpretation tasks go to &lt;strong&gt;Claude Haiku&lt;/strong&gt;; heavy analysis and OCR go to &lt;strong&gt;Claude Sonnet&lt;/strong&gt;. That division of labor stuck.&lt;/p&gt;

&lt;p&gt;For search, I started with DuckDuckGo, but result quality wasn't great, so I switched to Tavily. That one change alone made a noticeable difference.&lt;/p&gt;
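
&lt;p&gt;For reference, a Tavily search is a single JSON POST. Here's a rough sketch; verify the field names against Tavily's current API documentation before relying on them:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl -X POST https://api.tavily.com/search \
  -H "Content-Type: application/json" \
  -d '{ "api_key": "tvly-...", "query": "Example Corp recent news", "max_results": 5 }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;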

&lt;h3&gt;
  
  
  Phase 2: Fighting Vercel Hobby's 10-Second Timeout
&lt;/h3&gt;

&lt;p&gt;Research involves multiple steps — search, then AI analysis — and it realistically takes 1–2 minutes. Vercel's Hobby plan times out at 10 seconds.&lt;/p&gt;

&lt;p&gt;The solution: streaming responses. By returning a response while continuing to process, you keep the function alive.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ReadableStream&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;controller&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;heartbeat&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;setInterval&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;controller&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;enqueue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TextEncoder&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;5000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// Heavy processing happens here&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;runResearch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;input&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;finally&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;clearInterval&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;heartbeat&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="nx"&gt;controller&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sending a blank space every 5 seconds keeps the connection alive. Brute-force, but it works.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 3: Slack Bot → LINE Bot
&lt;/h3&gt;

&lt;p&gt;A web form creates friction — you have to actively open it when you need it. It's better to use it from a tool you already have open.&lt;/p&gt;

&lt;p&gt;I built a Slack bot first. But when I ended up canceling the paid Slack plan I was using, I migrated to LINE.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 4: Fighting Hallucinations
&lt;/h3&gt;

&lt;p&gt;This was the hardest part.&lt;/p&gt;

&lt;p&gt;The AI was confidently returning information that sounded plausible but wasn't true. Specifically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Asserting fabricated problems as "challenges faced by [department]"&lt;/li&gt;
&lt;li&gt;Returning outdated information as if it were current&lt;/li&gt;
&lt;li&gt;Filling in gaps with information not in any search result&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I approached this on two axes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Axis 1: Improve output accuracy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I built in mechanisms to prevent unsupported information from slipping through.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fact/inference separation&lt;/strong&gt;: The AI explicitly labels each piece of information as either a verified fact (from official sources) or an inference (from surrounding context). The report displays these separately.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output gate&lt;/strong&gt;: Items that fail conditions like "only contains generalities with no specifics" or "no source URL exists" are filtered out before output.&lt;/li&gt;
&lt;/ul&gt;
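
&lt;p&gt;The gate can be as simple as a filter predicate. A minimal sketch follows; the field names are my assumptions for illustration, not the app's actual schema:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;type Finding = { claim: string; kind: "fact" | "inference"; sourceUrl?: string };

declare const findings: Finding[]; // output of the analysis step

// "Facts" must carry a source URL; labeled inferences pass through for separate display.
const passesGate = (f: Finding): boolean =&amp;gt;
  f.kind === "inference" || Boolean(f.sourceUrl);

const report = findings.filter(passesGate);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;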

&lt;p&gt;&lt;strong&gt;Axis 2: Make it human-verifiable&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Improving accuracy alone isn't enough. Whether done by humans or AI, mistakes happen. What matters is making the process transparent.&lt;/p&gt;

&lt;p&gt;So I designed each report item to include both "the facts recognized" and "the reasoning path to the conclusion." Showing what evidence led to what conclusion lets humans catch reasoning that doesn't hold up.&lt;/p&gt;

&lt;p&gt;The goal is to save prep time, not to replace human judgment. That trade-off is fine.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 5: The Official Website Detection Rabbit Hole
&lt;/h3&gt;

&lt;p&gt;Search results mix "official company sites" with "everything else" (news, Wikipedia, etc.).&lt;/p&gt;

&lt;p&gt;I started with simple domain matching, but group companies, subsidiaries, and subdomains made that fall apart quickly.&lt;/p&gt;

&lt;p&gt;I eventually settled on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Return multiple official domain candidates&lt;/li&gt;
&lt;li&gt;Normalize to base domain (including subdomain matching)&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;.some()&lt;/code&gt; to check against the array&lt;/li&gt;
&lt;/ul&gt;
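
&lt;p&gt;Sketched in TypeScript, using a naive base-domain normalizer (it mishandles multi-part TLDs like .co.jp, which is part of why this was a rabbit hole):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Naive normalization: strip subdomains down to the last two labels.
const baseDomain = (url: string): string =&amp;gt;
  new URL(url).hostname.split(".").slice(-2).join(".");

// A result counts as official if its base domain matches any candidate.
const isOfficialSite = (resultUrl: string, candidates: string[]): boolean =&amp;gt;
  candidates.some((c) =&amp;gt; baseDomain(resultUrl) === baseDomain(c));

// isOfficialSite("https://careers.example.com/jobs", ["https://example.com"]) → true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;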

&lt;h3&gt;
  
  
  Phase 6: Business Card Scanning
&lt;/h3&gt;

&lt;p&gt;"Wouldn't it be great if you could start researching the moment you get someone's card?"&lt;/p&gt;

&lt;p&gt;I added business card scanning using Claude's Vision capability. Send a photo of a card to LINE → it extracts company name, department, and contact name → triggers research automatically. OCR quality mattered, so I used Claude Sonnet here.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Lesson in Agent Sprawl
&lt;/h2&gt;

&lt;p&gt;At one point I tried to improve the reasoning logic by spinning up five agents simultaneously (field-sales / info-architect / reasoning-designer / impl-designer / critic).&lt;/p&gt;

&lt;p&gt;They went into an endless loop of spec discussion, autonomously generating 154 tasks. When I told them to stop, they kept going, and I had to force a shutdown. Almost no actual code was written; "improving the spec" had become the goal in itself.&lt;/p&gt;

&lt;p&gt;The root cause: I hadn't defined what they were allowed to decide or when they were done.&lt;/p&gt;

&lt;p&gt;After that, I redesigned the agent structure. Instead of everyone chiming in freely, I cut it down to 3 roles and explicitly defined what each role was &lt;strong&gt;not&lt;/strong&gt; allowed to do.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;th&gt;Responsibility&lt;/th&gt;
&lt;th&gt;What they must NOT do&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;team-lead&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Routing and task management&lt;/td&gt;
&lt;td&gt;Write code, generate summaries&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;product&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Decide implementation approach and implement&lt;/td&gt;
&lt;td&gt;Create tasks themselves&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;auditor&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Pass/fail judgment only&lt;/td&gt;
&lt;td&gt;Write improvement suggestions; act without being called&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Defining "what not to do" alongside "what to do" made role boundaries much cleaner.&lt;/p&gt;




&lt;h2&gt;
  
  
  Cost
&lt;/h2&gt;

&lt;h3&gt;
  
  
  API cost per research run (measured)
&lt;/h3&gt;

&lt;p&gt;Varies by company size and available information.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Item&lt;/th&gt;
&lt;th&gt;Cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Claude Sonnet (analysis)&lt;/td&gt;
&lt;td&gt;~$0.35&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tavily (web search)&lt;/td&gt;
&lt;td&gt;~$0.05&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~$0.40/run (range: $0.24–$0.52)&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;At 100 runs/month that's ~$40; at 500 runs it's ~$200. It's currently free to use, so I'm entirely out of pocket. I'm in a "prove the value first" phase.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fixed costs (monthly)
&lt;/h3&gt;

&lt;p&gt;Hosting, DB, LINE, domain, etc. I've minimized these by combining free tiers, but it's not zero.&lt;/p&gt;




&lt;h2&gt;
  
  
  Current Architecture
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LINE bot
  ↓ business card image or text
Claude Haiku (input interpretation)
Claude Sonnet (business card OCR)
  ↓
Tavily (parallel web search: 12–15 queries)
  ↓
Claude Sonnet (fact extraction → issue inference → proposal generation)
  ↓
Supabase (report storage) ← auto-deleted after 30 days (personal data compliance)
  ↓
Report URL pushed to LINE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Closing
&lt;/h2&gt;

&lt;p&gt;A product that started from "this sounds fun" made it to something I could actually publish.&lt;/p&gt;

&lt;p&gt;Hallucination mitigation, timeout workarounds, official site detection — making something genuinely usable turned out to be deeper than I expected.&lt;/p&gt;

&lt;p&gt;If you're curious, add it as a friend on LINE. Just send a photo of a business card and it runs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pre-talk.vercel.app" rel="noopener noreferrer"&gt;https://pre-talk.vercel.app&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claude</category>
      <category>nextjs</category>
      <category>linebot</category>
    </item>
    <item>
      <title>Three Ways to Call Codex from Claude Code — A Practical Breakdown</title>
      <dc:creator>naoki_JPN</dc:creator>
      <pubDate>Fri, 17 Apr 2026 01:54:38 +0000</pubDate>
      <link>https://dev.to/bokuno_log/three-ways-to-call-codex-from-claude-code-a-practical-breakdown-d8n</link>
      <guid>https://dev.to/bokuno_log/three-ways-to-call-codex-from-claude-code-a-practical-breakdown-d8n</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; A breakdown of the three methods for calling OpenAI Codex from within a Claude Code session — their characteristics and when to use each. Researched on 2026-04-15.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Key Discovery: The Plugin Does NOT Call &lt;code&gt;codex --full-auto&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;The most important finding: codex-plugin-cc (used in Methods 2 and 3) internally uses the &lt;strong&gt;App Server Protocol (ASP)&lt;/strong&gt; — not the raw CLI.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Item&lt;/th&gt;
&lt;th&gt;CLI Mode (Method 1)&lt;/th&gt;
&lt;th&gt;ASP Mode (Methods 2 &amp;amp; 3)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Launch style&lt;/td&gt;
&lt;td&gt;One-shot process&lt;/td&gt;
&lt;td&gt;app-server-broker.mjs runs as a daemon&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Protocol&lt;/td&gt;
&lt;td&gt;stdin/stdout&lt;/td&gt;
&lt;td&gt;JSON-RPC 2.0 over stdio / WebSocket&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Thread continuation&lt;/td&gt;
&lt;td&gt;Not supported&lt;/td&gt;
&lt;td&gt;Possible via threadId&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Startup cost&lt;/td&gt;
&lt;td&gt;High (every time)&lt;/td&gt;
&lt;td&gt;Low (broker stays resident)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
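
&lt;p&gt;Concretely, an ASP exchange is ordinary JSON-RPC 2.0 traffic, something in this shape (the method name and params are illustrative, not the actual ASP schema):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;→ { "jsonrpc": "2.0", "id": 1, "method": "thread/send",
    "params": { "threadId": "abc123", "text": "fix src/foo.ts" } }
← { "jsonrpc": "2.0", "id": 1, "result": { "threadId": "abc123", "status": "completed" } }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;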




&lt;h2&gt;
  
  
  Detailed Comparison of the Three Methods
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Method 1: &lt;code&gt;codex --full-auto&lt;/code&gt; (Raw CLI)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;codex &lt;span class="nt"&gt;--full-auto&lt;/span&gt; &lt;span class="s2"&gt;"fix src/foo.ts"&lt;/span&gt;
&lt;span class="c"&gt;# = syntactic sugar for --sandbox workspace-write --ask-for-approval on-request&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Item&lt;/th&gt;
&lt;th&gt;Details&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Internal protocol&lt;/td&gt;
&lt;td&gt;CLI (one-shot)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Auth / plan&lt;/td&gt;
&lt;td&gt;ChatGPT subscription &lt;strong&gt;or&lt;/strong&gt; OpenAI API key (either works)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Job tracking&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Background&lt;/td&gt;
&lt;td&gt;Manual &lt;code&gt;&amp;amp;&lt;/code&gt; only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Thread continuation&lt;/td&gt;
&lt;td&gt;Not supported&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Prompt optimization&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Subscription-only features&lt;/td&gt;
&lt;td&gt;Fast Mode (unavailable with API key)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;API key limitations&lt;/td&gt;
&lt;td&gt;No Fast Mode; may have delayed access to new models&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best for&lt;/td&gt;
&lt;td&gt;Quick experiments, interactive use, CI/batch&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gotcha&lt;/td&gt;
&lt;td&gt;Cannot write outside the workspace (sandbox restriction)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Method 2: &lt;code&gt;codex-companion.mjs task&lt;/code&gt; (Indirect via Bash)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CLAUDE_PLUGIN_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/scripts/codex-companion.mjs"&lt;/span&gt; task &lt;span class="nt"&gt;--write&lt;/span&gt; &lt;span class="s2"&gt;"..."&lt;/span&gt;
node &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CLAUDE_PLUGIN_ROOT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/scripts/codex-companion.mjs"&lt;/span&gt; task &lt;span class="nt"&gt;--background&lt;/span&gt; &lt;span class="nt"&gt;--write&lt;/span&gt; &lt;span class="s2"&gt;"..."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Item&lt;/th&gt;
&lt;th&gt;Details&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Internal protocol&lt;/td&gt;
&lt;td&gt;ASP (via resident broker)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Auth / plan&lt;/td&gt;
&lt;td&gt;ChatGPT subscription &lt;strong&gt;or&lt;/strong&gt; OpenAI API key (either works)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Job tracking&lt;/td&gt;
&lt;td&gt;Yes (job-id / state.json persistence)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Background&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;--background&lt;/code&gt; spawns a detached process&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Thread continuation&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;--resume-last&lt;/code&gt; continues the previous thread&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Prompt optimization&lt;/td&gt;
&lt;td&gt;None (passes raw text as-is)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Subscription-only features&lt;/td&gt;
&lt;td&gt;Fast Mode (unavailable with API key)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;API key limitations&lt;/td&gt;
&lt;td&gt;No Fast Mode; delayed new model access&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Subcommands&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;task&lt;/code&gt; / &lt;code&gt;review&lt;/code&gt; / &lt;code&gt;adversarial-review&lt;/code&gt; / &lt;code&gt;status&lt;/code&gt; / &lt;code&gt;result&lt;/code&gt; / &lt;code&gt;cancel&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best for&lt;/td&gt;
&lt;td&gt;Long-running tasks, external job monitoring, job management&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gotcha&lt;/td&gt;
&lt;td&gt;None — the most straightforward of the three&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Method 3: &lt;code&gt;codex:rescue&lt;/code&gt; Sub-agent (Agent tool)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;subagent_type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;codex:codex-rescue&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;On branch feat/xxx, ...&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;run_in_background&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Item&lt;/th&gt;
&lt;th&gt;Details&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Internal protocol&lt;/td&gt;
&lt;td&gt;ASP (via companion.mjs)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Auth / plan&lt;/td&gt;
&lt;td&gt;ChatGPT subscription &lt;strong&gt;or&lt;/strong&gt; OpenAI API key (either works)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Job tracking&lt;/td&gt;
&lt;td&gt;Yes (via companion)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Background&lt;/td&gt;
&lt;td&gt;&lt;code&gt;run_in_background: true&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Thread continuation&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;--resume&lt;/code&gt; flag in prompt&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Prompt optimization&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Auto-improved via gpt-5.4-prompting skill&lt;/strong&gt; (not available in Methods 1 &amp;amp; 2)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Subscription-only features&lt;/td&gt;
&lt;td&gt;Fast Mode (unavailable with API key)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;API key limitations&lt;/td&gt;
&lt;td&gt;No Fast Mode; delayed new model access&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best for&lt;/td&gt;
&lt;td&gt;Delegating implementation to Codex from within Claude (recommended default)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gotchas&lt;/td&gt;
&lt;td&gt;① Cannot write outside workspace  ② False positives when Bash is denied (Issue #158)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Auth &amp;amp; Plan Summary
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Auth method&lt;/th&gt;
&lt;th&gt;Available features&lt;/th&gt;
&lt;th&gt;Unavailable features&lt;/th&gt;
&lt;th&gt;Billing&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ChatGPT Plus ($20/mo) ~ Pro ($100–$200/mo)&lt;/td&gt;
&lt;td&gt;All features, Fast Mode, latest models (GPT-5.4 / GPT-5.3-Codex), cloud integrations (GitHub, Slack)&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;Fixed monthly (rate limits apply)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OpenAI API key&lt;/td&gt;
&lt;td&gt;CLI, IDE, ASP execution&lt;/td&gt;
&lt;td&gt;Fast Mode, cloud integrations, immediate access to new models&lt;/td&gt;
&lt;td&gt;Token-based pay-as-you-go&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;⚠️ Important:&lt;/strong&gt; All three methods use the same authentication. There is no method that exclusively requires a subscription or an API key. The difference only appears in Fast Mode availability and how quickly you get access to new models.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Decision Flowchart
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjt0gbu5wt0ztogk7o45.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjt0gbu5wt0ztogk7o45.png" alt="Decision Flowchart" width="600" height="556"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Known Issues &amp;amp; Gotchas
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Issue #158: False Positives in codex:rescue
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Symptom:&lt;/strong&gt; When the Bash tool is denied, the sub-agent silently reads files, performs its own analysis, and falsely reports that "Codex executed it."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Expected behavior:&lt;/strong&gt; Bash denied → return nothing at all, rather than substituting the sub-agent's own analysis&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Impact:&lt;/strong&gt; In environments where Bash is restricted, you cannot tell whether &lt;code&gt;codex:rescue&lt;/code&gt; output was genuinely produced by Codex.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cannot Write Outside the Workspace
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;sandbox workspace-write&lt;/code&gt; blocks writes to directories outside the launch directory. Delegating from an ops session to a dev directory will fail.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workaround:&lt;/strong&gt; Call from a Claude session inside the target workspace, or switch to DEV agent + direct Edit tool.&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/openai/codex-plugin-cc" rel="noopener noreferrer"&gt;GitHub: openai/codex-plugin-cc&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://deepwiki.com/openai/codex-plugin-cc/3.2-rescue-and-task-delegation" rel="noopener noreferrer"&gt;Rescue &amp;amp; Task Delegation | DeepWiki&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developers.openai.com/codex/app-server" rel="noopener noreferrer"&gt;App Server – Codex | OpenAI Developers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developers.openai.com/codex/auth" rel="noopener noreferrer"&gt;Authentication – Codex | OpenAI Developers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developers.openai.com/codex/pricing" rel="noopener noreferrer"&gt;Pricing – Codex | OpenAI Developers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/openai/codex-plugin-cc/issues/158" rel="noopener noreferrer"&gt;Issue #158: codex:rescue false success claims&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>claudecode</category>
      <category>codex</category>
      <category>openai</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
