<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Teruo Kunihiro</title>
    <description>The latest articles on DEV Community by Teruo Kunihiro (@trknhr).</description>
    <link>https://dev.to/trknhr</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F41546%2F717de37a-8306-4328-8010-f38b0c5ed560.jpg</url>
      <title>DEV Community: Teruo Kunihiro</title>
      <link>https://dev.to/trknhr</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/trknhr"/>
    <language>en</language>
    <item>
      <title>Building a Home Personal Assistant with Claude Managed Agents</title>
      <dc:creator>Teruo Kunihiro</dc:creator>
      <pubDate>Mon, 13 Apr 2026 07:46:16 +0000</pubDate>
      <link>https://dev.to/trknhr/building-a-home-personal-assistant-with-claude-managed-agents-5a8f</link>
      <guid>https://dev.to/trknhr/building-a-home-personal-assistant-with-claude-managed-agents-5a8f</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Claude Managed Agents was just announced, so I tried using it to build a personal assistant for household tasks.&lt;/p&gt;

&lt;p&gt;What I wanted was pretty simple: an AI I can call from Slack that can handle family notes, tasks, reminders, and schedules without too much ceremony. Things like birthdays, what gifts I bought last year, school handouts, grocery co-op deadlines, and small day-to-day household tasks.&lt;/p&gt;

&lt;p&gt;My first impression was very positive. Claude Managed Agents solves a lot of the annoying parts up front:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I do not have to host the execution environment myself&lt;/li&gt;
&lt;li&gt;Vaults and sandboxes are built in from the start&lt;/li&gt;
&lt;li&gt;MCP and custom tools make it easier to build a safer architecture&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That said, it does not eliminate the need for surrounding application code. I still needed a Slack event endpoint, persistent task state, and scheduled execution. In the end, I landed on an architecture centered on Claude Managed Agents, with &lt;code&gt;Lambda + DynamoDB + EventBridge Scheduler&lt;/code&gt; around it.&lt;/p&gt;

&lt;p&gt;The overall setup looks like this:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7xqb1nb6rndum75152m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7xqb1nb6rndum75152m.png" alt=" " width="800" height="1200"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  What I wanted to build
&lt;/h2&gt;

&lt;p&gt;These were the rough requirements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Trigger the AI from Slack mentions for household tasks&lt;/li&gt;
&lt;li&gt;Let the AI take notes and transcribe things&lt;/li&gt;
&lt;li&gt;Connect with Google Calendar and Drive so important things are not missed&lt;/li&gt;
&lt;li&gt;Have the AI send a daily reminder about household tasks&lt;/li&gt;
&lt;li&gt;Let me send rough notes about finished tasks or recurring events and have the AI remember them in a useful way&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So far, the parts that are actually working are mainly &lt;code&gt;1 / 2 / 4 / 5&lt;/code&gt;. Calendar and Drive integration (item 3) is next.&lt;/p&gt;
&lt;h2&gt;
  
  
  Quickstart was genuinely useful
&lt;/h2&gt;

&lt;p&gt;I started from the Claude Console Quickstart:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://platform.claude.com/workspaces/default/agents/" rel="noopener noreferrer"&gt;https://platform.claude.com/workspaces/default/agents/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is a good way to get an initial agent configuration in place. You can shape the setup through conversation instead of writing everything from scratch. Japanese IME input still felt a little awkward (pressing Enter to confirm a conversion could submit the message too early), but overall it was fast enough to be useful.&lt;/p&gt;
&lt;h3&gt;
  
  
  Slack MCP
&lt;/h3&gt;

&lt;p&gt;On the Slack side, I created a bot account and added the scopes I needed. The main ones ended up being:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;app_mentions:read&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;chat:write&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;files:read&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Slack MCP lives on the Managed Agent side, but actual event ingestion and attachment retrieval are handled by Lambda. In practice, that split felt better than trying to force everything through MCP alone.&lt;/p&gt;
&lt;h3&gt;
  
  
  Sandbox
&lt;/h3&gt;

&lt;p&gt;Claude Managed Agents also gives you a managed execution environment. In this project I used a sandbox configured for Slack MCP calls and custom tool usage.&lt;/p&gt;

&lt;p&gt;I did &lt;strong&gt;not&lt;/strong&gt; let the agent touch DynamoDB directly. Instead, DynamoDB access goes through custom tools, and Lambda performs the actual reads and writes. That keeps the permission boundary clear and makes the update rules easier to control from the application side.&lt;/p&gt;

&lt;p&gt;In Anthropic's docs, this execution environment is modeled as an &lt;code&gt;Environment&lt;/code&gt;. An Environment is basically the container configuration where the agent runs. You create it once and refer to it by ID. Multiple sessions can reuse the same Environment definition, but each session gets its own isolated container instance, and filesystem state is not shared across sessions. In other words, configuration is reusable, but runtime state is isolated per session.&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://platform.claude.com/docs/en/managed-agents/overview" rel="noopener noreferrer"&gt;https://platform.claude.com/docs/en/managed-agents/overview&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://platform.claude.com/docs/en/managed-agents/environments" rel="noopener noreferrer"&gt;https://platform.claude.com/docs/en/managed-agents/environments&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That matters a lot. Even for a personal or family assistant, it means each run starts from a clean, isolated environment instead of inheriting leftovers from the previous run. Network settings are also part of the Environment, and Anthropic recommends using &lt;code&gt;limited&lt;/code&gt; networking with explicit &lt;code&gt;allowed_hosts&lt;/code&gt; for production. So the sandbox is not just “a safe box for Claude.” It is the unit that bundles isolation, dependency setup, and network permissions together.&lt;/p&gt;
&lt;h3&gt;
  
  
  Vault
&lt;/h3&gt;

&lt;p&gt;I stored the Slack MCP credentials in a Vault. Not having to place raw credentials directly into the agent configuration is a big win.&lt;/p&gt;

&lt;p&gt;The value of Vaults is pretty clear in Anthropic's docs. Vaults and credentials are treated as reusable authentication primitives that you register once and reference by ID. That means you do not need to run your own secret store for this part, pass tokens around on every request, or lose track of which credentials a session is using.&lt;/p&gt;

&lt;p&gt;Reference:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://platform.claude.com/docs/en/managed-agents/vaults" rel="noopener noreferrer"&gt;https://platform.claude.com/docs/en/managed-agents/vaults&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another important point is that MCP server definitions and authentication are separated. When you create the agent, you declare which MCP servers it can connect to. When you create a session, you pass &lt;code&gt;vault_ids&lt;/code&gt; to resolve authentication. Anthropic explicitly calls out that this separation keeps secrets out of reusable agent definitions while still letting each session authenticate with different credentials if needed. For a setup like this, where Slack MCP exists alongside application-managed Slack event handling, that split is very helpful.&lt;/p&gt;

&lt;p&gt;Reference:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://platform.claude.com/docs/en/managed-agents/mcp-connector" rel="noopener noreferrer"&gt;https://platform.claude.com/docs/en/managed-agents/mcp-connector&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  I still needed regular application code
&lt;/h2&gt;

&lt;p&gt;At first I thought Managed Agents might cover most of it. In practice, I still needed surrounding application code for three reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;an HTTP endpoint for Slack Events API&lt;/li&gt;
&lt;li&gt;asynchronous processing to stay within Slack’s 3-second response limit&lt;/li&gt;
&lt;li&gt;application state such as memory, tasks, sessions, and idempotency&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the architecture ended up looking like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Slack mention
  -&amp;gt; API Gateway
  -&amp;gt; Lambda (ingress)
  -&amp;gt; SQS
  -&amp;gt; Lambda (worker)
  -&amp;gt; Claude Managed Agent
  -&amp;gt; Slack reply

Daily reminder
  -&amp;gt; EventBridge Scheduler
  -&amp;gt; Lambda (scheduled runner)
  -&amp;gt; Claude Managed Agent
  -&amp;gt; Slack post

State
  -&amp;gt; DynamoDB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Slack mentions flow through &lt;code&gt;ingress Lambda -&amp;gt; SQS -&amp;gt; worker Lambda&lt;/code&gt;. Slack gets an immediate ACK, and the Claude interaction happens asynchronously in the background.&lt;/p&gt;
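&lt;p&gt;A minimal sketch of the ingress side. The function name and the &lt;code&gt;enqueue&lt;/code&gt; callable are illustrative, not the actual code; &lt;code&gt;enqueue&lt;/code&gt; stands in for an SQS &lt;code&gt;send_message&lt;/code&gt; call:&lt;/p&gt;

```python
import json

def handle_slack_event(raw_body: str, enqueue) -> dict:
    """Decide the immediate HTTP response for a Slack Events API call."""
    body = json.loads(raw_body or "{}")

    # Slack's one-time Request URL handshake: echo the challenge back.
    if body.get("type") == "url_verification":
        return {"statusCode": 200, "body": body.get("challenge", "")}

    # Anything else is queued for the worker Lambda; Slack just needs a
    # fast 200 ACK within 3 seconds, or it will retry the delivery.
    enqueue(json.dumps(body))
    return {"statusCode": 200, "body": ""}
```

&lt;p&gt;Keeping the ingress handler this thin is what makes the 3-second ACK reliable.&lt;/p&gt;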

&lt;p&gt;The daily reminder is triggered by EventBridge Scheduler. Right now it runs every day at &lt;code&gt;09:00 JST&lt;/code&gt; and posts a reminder for unfinished tasks.&lt;/p&gt;
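&lt;p&gt;The 09:00 JST trigger can be expressed directly in EventBridge Scheduler. A sketch of the &lt;code&gt;create_schedule&lt;/code&gt; parameters for the boto3 &lt;code&gt;scheduler&lt;/code&gt; client (the ARNs and the &lt;code&gt;Input&lt;/code&gt; payload are placeholders):&lt;/p&gt;

```python
def daily_reminder_schedule(runner_arn: str, role_arn: str) -> dict:
    """Keyword arguments for scheduler.create_schedule(**kwargs).

    The cron expression plus ScheduleExpressionTimezone produce the
    09:00 JST daily trigger without manual UTC offset math.
    """
    return {
        "Name": "daily-summary",
        "ScheduleExpression": "cron(0 9 * * ? *)",
        "ScheduleExpressionTimezone": "Asia/Tokyo",
        "FlexibleTimeWindow": {"Mode": "OFF"},
        "Target": {
            "Arn": runner_arn,    # the scheduled-runner Lambda
            "RoleArn": role_arn,  # role the scheduler assumes to invoke it
            "Input": '{"task": "daily-summary"}',
        },
    }
```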

&lt;h2&gt;
  
  
  What gets stored where
&lt;/h2&gt;

&lt;p&gt;This setup currently uses seven DynamoDB tables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;SlackThreadSessionsTable&lt;/code&gt;: mapping between Slack threads and Claude sessions&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ProcessedEventsTable&lt;/code&gt;: Slack event deduplication&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ScheduledTasksTable&lt;/code&gt;: scheduled task definitions&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;UserMemoriesTable&lt;/code&gt;: mapping to Claude memory stores&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;MemoryItemsTable&lt;/code&gt;: semi-structured memory persisted through custom tools&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TasksTable&lt;/code&gt;: current task state&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TaskEventsTable&lt;/code&gt;: task history&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;MemoryItemsTable&lt;/code&gt; and &lt;code&gt;TasksTable&lt;/code&gt; / &lt;code&gt;TaskEventsTable&lt;/code&gt; are the important ones here.&lt;/p&gt;

&lt;p&gt;For household use, the data I actually care about looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;whose birthday it is&lt;/li&gt;
&lt;li&gt;what I gave them last year&lt;/li&gt;
&lt;li&gt;what tasks are still unfinished&lt;/li&gt;
&lt;li&gt;whether a task is already done&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That kind of information is easier to manage if it lives in DynamoDB as the source of truth, with Claude pulling it through tools only when needed. That is the approach I took.&lt;/p&gt;
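&lt;p&gt;As an illustration of that source-of-truth idea, here is one possible item shape for &lt;code&gt;TasksTable&lt;/code&gt; and &lt;code&gt;TaskEventsTable&lt;/code&gt;. The key names are assumptions made for the sketch, not the actual schema:&lt;/p&gt;

```python
import time
import uuid

def new_task_item(family_id: str, title: str, due: str) -> dict:
    """Illustrative TasksTable item: one row per task, keyed by family."""
    return {
        "pk": f"FAMILY#{family_id}",
        "sk": f"TASK#{uuid.uuid4()}",
        "title": title,
        "due": due,
        "status": "open",
        "updated_at": int(time.time()),
    }

def task_event_item(task_sk: str, kind: str) -> dict:
    """Illustrative TaskEventsTable row: append-only history
    ("created", "marked_done") keyed by the task it belongs to."""
    return {
        "pk": task_sk,
        "sk": f"EVENT#{int(time.time() * 1000)}",
        "kind": kind,
    }
```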

&lt;h2&gt;
  
  
  Using custom tools for memory and tasks
&lt;/h2&gt;

&lt;p&gt;I ended up defining these five tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;search_memories&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;save_memory&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;list_tasks&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;upsert_task&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;mark_task_done&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When the Managed Agent calls one of them, it emits &lt;code&gt;agent.custom_tool_use&lt;/code&gt;. Lambda receives that request, updates DynamoDB, and returns the result via &lt;code&gt;user.custom_tool_result&lt;/code&gt;.&lt;/p&gt;
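&lt;p&gt;A sketch of what that dispatch can look like on the Lambda side. The five tool names come from the list above; the &lt;code&gt;store&lt;/code&gt; wrapper and the &lt;code&gt;is_error&lt;/code&gt; / &lt;code&gt;content&lt;/code&gt; result fields are illustrative assumptions, not the actual implementation:&lt;/p&gt;

```python
def dispatch_custom_tool(tool_name: str, tool_input: dict, store) -> dict:
    """Route an agent.custom_tool_use request to application handlers.

    `store` wraps the DynamoDB tables, so the agent itself never holds
    DynamoDB permissions. The return value becomes the payload of the
    user.custom_tool_result message sent back to the session.
    """
    handlers = {
        "search_memories": lambda a: store.search_memories(a.get("query", "")),
        "save_memory": lambda a: store.save_memory(a["text"]),
        "list_tasks": lambda a: store.list_tasks(a.get("status", "open")),
        "upsert_task": lambda a: store.upsert_task(a),
        "mark_task_done": lambda a: store.mark_task_done(a["task_id"]),
    }
    handler = handlers.get(tool_name)
    if handler is None:
        return {"is_error": True, "content": f"unknown tool: {tool_name}"}
    return {"is_error": False, "content": handler(tool_input)}
```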

&lt;p&gt;I like this pattern a lot. The agent never needs direct DynamoDB IAM permissions, which makes the boundary safer and gives the application control over how updates are applied.&lt;/p&gt;

&lt;p&gt;I verified the flow end to end:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;save_memory&lt;/code&gt; stored “Hanako’s birthday is 8/12”&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;upsert_task&lt;/code&gt; created a task for buying a birthday gift&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;mark_task_done&lt;/code&gt; updated that task to done&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TaskEventsTable&lt;/code&gt; recorded &lt;code&gt;created&lt;/code&gt; and &lt;code&gt;marked_done&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Slack mentions work naturally
&lt;/h2&gt;

&lt;p&gt;When I mention &lt;code&gt;@AI&lt;/code&gt; in Slack, the conversation continues in the same thread.&lt;/p&gt;

&lt;p&gt;What made this feel right was treating &lt;code&gt;Slack thread = Claude session&lt;/code&gt;. That aligns the Slack UX with the conversation context in a very natural way.&lt;/p&gt;

&lt;p&gt;I also added attachment handling on the Lambda side. With &lt;code&gt;files:read&lt;/code&gt;, Lambda can fetch PDFs or images from Slack’s &lt;code&gt;url_private&lt;/code&gt; endpoints and pass them to Claude as &lt;code&gt;document&lt;/code&gt; or &lt;code&gt;image&lt;/code&gt; blocks.&lt;/p&gt;
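&lt;p&gt;Once Lambda has fetched the file bytes from &lt;code&gt;url_private&lt;/code&gt; (an HTTP GET carrying the bot token in an &lt;code&gt;Authorization: Bearer&lt;/code&gt; header), building the block is straightforward. A sketch following the base64 source format of the Anthropic Messages API:&lt;/p&gt;

```python
import base64

def attachment_block(file_bytes: bytes, mimetype: str) -> dict:
    """Wrap fetched Slack file bytes as a document or image content block."""
    block_type = "image" if mimetype.startswith("image/") else "document"
    return {
        "type": block_type,
        "source": {
            "type": "base64",
            "media_type": mimetype,
            "data": base64.b64encode(file_bytes).decode("ascii"),
        },
    }
```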

&lt;p&gt;That makes flows like this possible:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;upload a school or daycare PDF&lt;/li&gt;
&lt;li&gt;let the AI read it&lt;/li&gt;
&lt;li&gt;extract tasks if needed&lt;/li&gt;
&lt;li&gt;save important details into memory&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Daily reminders also worked well
&lt;/h2&gt;

&lt;p&gt;For scheduled execution, I used EventBridge Scheduler rather than the older CloudWatch Events style rules.&lt;/p&gt;

&lt;p&gt;The current setup stores a &lt;code&gt;daily-summary&lt;/code&gt; task definition in DynamoDB. Every morning at 9 AM, the scheduled runner loads that definition, starts Claude, calls &lt;code&gt;list_tasks&lt;/code&gt; to fetch unfinished tasks, and posts a short reminder to Slack.&lt;/p&gt;

&lt;p&gt;What I like about this is that the reminder is not a fixed template. Claude can shape the wording based on the unfinished tasks in DynamoDB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Letting it read PDFs and remember things is surprisingly good
&lt;/h2&gt;

&lt;p&gt;This turned out to be one of the most promising parts for household use.&lt;/p&gt;

&lt;p&gt;If I can just upload a PDF to Slack and say &lt;code&gt;@AI take a look at this&lt;/code&gt;, the system can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;extract dates&lt;/li&gt;
&lt;li&gt;turn them into tasks&lt;/li&gt;
&lt;li&gt;save names or events into memory&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is exactly the kind of workflow that matters for family operations, where the problem is usually not a lack of information but forgetting things at the wrong time.&lt;/p&gt;

&lt;p&gt;In that sense, &lt;code&gt;save_memory&lt;/code&gt; and &lt;code&gt;search_memories&lt;/code&gt; seem especially useful.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pricing
&lt;/h2&gt;

&lt;p&gt;Cost is obviously a concern.&lt;/p&gt;

&lt;p&gt;According to Anthropic’s pricing page, the model I am using here, &lt;code&gt;Claude Sonnet 4.6&lt;/code&gt;, is priced at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Input: &lt;code&gt;$3 / MTok&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Output: &lt;code&gt;$15 / MTok&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Session runtime: &lt;code&gt;$0.08 / session-hour&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Reference:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://platform.claude.com/docs/en/about-claude/pricing" rel="noopener noreferrer"&gt;https://platform.claude.com/docs/en/about-claude/pricing&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For household use, a rough estimate still puts this in a pretty reasonable range, around &lt;code&gt;$10/month&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I used these assumptions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;5 Slack mentions per day&lt;/li&gt;
&lt;li&gt;1 daily reminder per day&lt;/li&gt;
&lt;li&gt;per mention: 12k input tokens / 1.2k output tokens / 20 seconds runtime&lt;/li&gt;
&lt;li&gt;per reminder: 15k input tokens / 1.5k output tokens / 15 seconds runtime&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That gives roughly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;mentions: &lt;code&gt;about $8.4 / month&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;reminders: &lt;code&gt;about $2.1 / month&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;total: &lt;code&gt;about $10.5 / month&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This will go up quickly if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;you read a lot of long PDFs&lt;/li&gt;
&lt;li&gt;you use web search or extra tools heavily&lt;/li&gt;
&lt;li&gt;conversations get long and context keeps expanding&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Still, for a personal household assistant with a small number of daily interactions, AWS costs are likely minor compared to Claude token costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Things that were tricky
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Slack Events configuration
&lt;/h3&gt;

&lt;p&gt;At first, I had the classic problem where the &lt;code&gt;Request URL&lt;/code&gt; was verified but no events were arriving. In the end, I had to carefully make sure that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Event Subscriptions were enabled&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;app_mention&lt;/code&gt; was added&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;files:read&lt;/code&gt; was added&lt;/li&gt;
&lt;li&gt;the Slack app was reinstalled after changing scopes&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Splitting responsibility between Slack MCP and Lambda
&lt;/h3&gt;

&lt;p&gt;Slack MCP is useful, but once you need external event ingestion, attachment handling, threaded replies, and idempotency, it is easier to keep Slack input/output under application control.&lt;/p&gt;

&lt;p&gt;The split that worked best here was:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lambda handles input and delivery&lt;/li&gt;
&lt;li&gt;Managed Agent handles reasoning and tool usage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That division felt clean.&lt;/p&gt;

&lt;h3&gt;
  
  
  Do not start with fully automatic memory saving
&lt;/h3&gt;

&lt;p&gt;This is more of an operational lesson than a technical one. Memory gets messy fast. Birthdays and gift history are good durable facts, but if you save every temporary request automatically, the memory store becomes noisy very quickly.&lt;/p&gt;

&lt;p&gt;For now, I prefer having an explicit &lt;code&gt;save_memory&lt;/code&gt; entry point. The agent can decide what looks durable, but the application still controls how it is persisted.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I want to do next
&lt;/h2&gt;

&lt;p&gt;These are the next things I want to add:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;register events in Google Calendar and link the returned event IDs to tasks&lt;/li&gt;
&lt;li&gt;read Google Drive documents and turn them into tasks or memories&lt;/li&gt;
&lt;li&gt;run weekly summaries of completed tasks&lt;/li&gt;
&lt;li&gt;add reminders like “a birthday is coming up” or “the co-op deadline is close”&lt;/li&gt;
&lt;li&gt;refine the memory persistence policy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Calendar integration feels especially important. The shape I want is: Claude registers something in Calendar, returns structured JSON, and the application syncs that result into DynamoDB task state.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing thoughts
&lt;/h2&gt;

&lt;p&gt;I came away with a very good impression.&lt;/p&gt;

&lt;p&gt;The managed aspect matters a lot. Availability, execution environments, credentials, and permission boundaries are all expensive to get right on your own. Claude Managed Agents makes that much easier to control.&lt;/p&gt;

&lt;p&gt;The pattern that currently feels best to me is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;reasoning and sandboxing in Managed Agents&lt;/li&gt;
&lt;li&gt;webhooks, state, and integration glue in Lambda&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That split worked well for a household assistant too. At this point I can already see a path where I throw rough notes into Slack and get “remember this,” “remind me later,” and “what is still unfinished?” out of the same system.&lt;/p&gt;

&lt;p&gt;The next step is to connect Calendar and Drive and see how far this can go in real day-to-day use.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Claude Managed Agents Quickstart: &lt;a href="https://platform.claude.com/docs/en/managed-agents/quickstart" rel="noopener noreferrer"&gt;https://platform.claude.com/docs/en/managed-agents/quickstart&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Claude Managed Agents Environments: &lt;a href="https://platform.claude.com/docs/en/managed-agents/environments" rel="noopener noreferrer"&gt;https://platform.claude.com/docs/en/managed-agents/environments&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Claude Managed Agents Events and Streaming: &lt;a href="https://platform.claude.com/docs/en/managed-agents/events-and-streaming" rel="noopener noreferrer"&gt;https://platform.claude.com/docs/en/managed-agents/events-and-streaming&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Claude Managed Agents Memory: &lt;a href="https://platform.claude.com/docs/en/managed-agents/memory" rel="noopener noreferrer"&gt;https://platform.claude.com/docs/en/managed-agents/memory&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Claude Managed Agents Vaults: &lt;a href="https://platform.claude.com/docs/en/managed-agents/vaults" rel="noopener noreferrer"&gt;https://platform.claude.com/docs/en/managed-agents/vaults&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Claude Managed Agents MCP Connector: &lt;a href="https://platform.claude.com/docs/en/managed-agents/mcp-connector" rel="noopener noreferrer"&gt;https://platform.claude.com/docs/en/managed-agents/mcp-connector&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Claude Pricing: &lt;a href="https://platform.claude.com/docs/en/about-claude/pricing" rel="noopener noreferrer"&gt;https://platform.claude.com/docs/en/about-claude/pricing&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>automation</category>
      <category>claude</category>
    </item>
    <item>
      <title>Semver in Retrograde</title>
      <dc:creator>Teruo Kunihiro</dc:creator>
      <pubDate>Wed, 08 Apr 2026 15:02:07 +0000</pubDate>
      <link>https://dev.to/trknhr/semver-in-retrograde-1oj3</link>
      <guid>https://dev.to/trknhr/semver-in-retrograde-1oj3</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/aprilfools-2026"&gt;DEV April Fools Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I built
&lt;/h2&gt;

&lt;p&gt;I built a dependency analysis tool that delivers executive-grade reports about your project's emotional state.&lt;br&gt;
It just happens to be astrology. The result is Semver in Retrograde.&lt;/p&gt;

&lt;p&gt;You paste a &lt;code&gt;package.json&lt;/code&gt;, click "Analyze my dependency aura", and get a straight-faced executive report about the project's emotional state. It gives you Aura Stability, Chaos Index, Peer Dependency Tension, Mercury Status, the dependency Big 3, a prophecy, a lucky command, and a share card that looks ready for an internal quarterly review.&lt;/p&gt;

&lt;p&gt;That contrast is the joke. The interface looks like a serious dashboard. The output is dependency mysticism delivered in the tone of an operations meeting.&lt;/p&gt;

&lt;p&gt;I also added one feature that makes me disproportionately happy: if you paste something that looks like &lt;code&gt;requirements.txt&lt;/code&gt; or a &lt;code&gt;Gemfile&lt;/code&gt;, the app returns &lt;strong&gt;418 I'm a teapot&lt;/strong&gt;. Wrong ecosystem, wrong beverage.&lt;/p&gt;
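&lt;p&gt;The detection can be as small as a few string checks. A sketch of the idea (the app's real heuristics may differ):&lt;/p&gt;

```python
import json

def manifest_status(text: str) -> int:
    """HTTP status for a pasted manifest: 200 for something that parses
    as package.json, 418 for other ecosystems' dependency files."""
    stripped = text.strip()
    if stripped.startswith("{"):
        try:
            json.loads(stripped)
            return 200
        except ValueError:
            return 400
    # requirements.txt pins (==) or a Gemfile: wrong ecosystem, wrong beverage.
    lowered = stripped.lower()
    if "==" in stripped or lowered.startswith("source ") or "gem " in lowered:
        return 418
    return 400
```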

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65tywvg645npj5xc4sjs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65tywvg645npj5xc4sjs.png" alt=" " width="800" height="121"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Live demo: &lt;a href="https://semver-in-retrograde.vercel.app/" rel="noopener noreferrer"&gt;https://semver-in-retrograde.vercel.app/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/trknhr/semver-in-retrograde" rel="noopener noreferrer"&gt;trknhr/semver-in-retrograde&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One practical note: the public deployment does not call Gemini in production. I turned that off to keep the joke within budget, so the hosted version runs in a fixed "budget committee safe mode" for the narrative copy. The full Gemini path is what I used in local development and in the eval run.&lt;/p&gt;

&lt;p&gt;This is the demo flow I used:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Paste a package.json&lt;/li&gt;
&lt;li&gt;Click "Analyze my dependency aura"&lt;/li&gt;
&lt;li&gt;Watch the dashboard appear like it's about to audit your org&lt;/li&gt;
&lt;li&gt;Then realize it's talking about your emotional instability&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;The code is here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/trknhr/semver-in-retrograde" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The app has a clean split. Local code parses and scores the manifest. Gemini writes the executive reading. So the same manifest always produces the same numbers, while the model handles the polished nonsense.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I built it
&lt;/h2&gt;

&lt;p&gt;I used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Next.js&lt;/li&gt;
&lt;li&gt;TypeScript&lt;/li&gt;
&lt;li&gt;Tailwind CSS&lt;/li&gt;
&lt;li&gt;server-side Gemini API&lt;/li&gt;
&lt;li&gt;Zod&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The architecture is more serious than the premise. That felt appropriate.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Deterministic manifest analysis
&lt;/h3&gt;

&lt;p&gt;The first step is completely local.&lt;/p&gt;

&lt;p&gt;The app parses &lt;code&gt;package.json&lt;/code&gt;, flattens the dependency sections, inspects the scripts block, and turns the manifest into a feature set. It looks at things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;dependency counts&lt;/li&gt;
&lt;li&gt;&lt;code&gt;peerDependencies&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;overrides&lt;/code&gt; / &lt;code&gt;resolutions&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;wildcard and &lt;code&gt;latest&lt;/code&gt; versions&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;pre*&lt;/code&gt; / &lt;code&gt;post*&lt;/code&gt; scripts&lt;/li&gt;
&lt;li&gt;&lt;code&gt;postinstall&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;package manager hints&lt;/li&gt;
&lt;li&gt;framework / test / build tool fingerprints&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those features feed a weighted scoring model. I wanted the joke to start from real manifest behavior, not from a model improvising a vibe.&lt;/p&gt;

&lt;p&gt;The scores break down like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Aura Stability&lt;/strong&gt;: pinned versions help it; wildcards, &lt;code&gt;latest&lt;/code&gt;, extra scripts, and override-heavy manifests drag it down&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Chaos Index&lt;/strong&gt;: climbs when the project has loose version ranges, lifecycle scripts, &lt;code&gt;postinstall&lt;/code&gt;, suspicious script names, or workspace sprawl&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Peer Dependency Tension&lt;/strong&gt;: rises when the package asks other people to satisfy more of its needs&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Boundary Issues&lt;/strong&gt;: really a score for governance by exception, so &lt;code&gt;overrides&lt;/code&gt;, &lt;code&gt;resolutions&lt;/code&gt;, and workspace hints push it upward&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Trust Issues&lt;/strong&gt;: gets worse when the manifest is private, carries a &lt;code&gt;postinstall&lt;/code&gt;, or leans on suspicious scripts and &lt;code&gt;latest&lt;/code&gt; tags&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mercury Status&lt;/strong&gt;: comes from lifecycle-script severity, especially &lt;code&gt;pre*&lt;/code&gt;, &lt;code&gt;post*&lt;/code&gt;, and &lt;code&gt;postinstall&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So yes, the result is silly. But it is silly in a deterministic way.&lt;/p&gt;

&lt;p&gt;Those signals show up in the product as Aura Stability, Chaos Index, Peer Dependency Tension, Boundary Issues, Trust Issues, and Mercury Status.&lt;/p&gt;

&lt;p&gt;All of this is computed locally so the core behavior stays deterministic.&lt;/p&gt;
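&lt;p&gt;To make the shape of that scoring concrete, here is a toy version of one score. The weights and feature names are invented for illustration, not the app's real ones:&lt;/p&gt;

```python
def chaos_index(features: dict) -> int:
    """Illustrative weighted scoring: same manifest in, same number out."""
    score = 0
    score += 8 * features.get("wildcard_ranges", 0)     # "*" and "latest"
    score += 12 * features.get("lifecycle_scripts", 0)  # pre*/post* hooks
    score += 20 * (1 if features.get("postinstall") else 0)
    score += 5 * features.get("workspaces", 0)
    return min(score, 100)  # clamp to a 0-100 dashboard scale
```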

&lt;h3&gt;
  
  
  2. Gemini for the narrative layer
&lt;/h3&gt;

&lt;p&gt;I used Gemini on the server for the parts that needed tone rather than math:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;executive summary&lt;/li&gt;
&lt;li&gt;sun / moon / rising interpretations&lt;/li&gt;
&lt;li&gt;red flags&lt;/li&gt;
&lt;li&gt;prophecy&lt;/li&gt;
&lt;li&gt;lucky command&lt;/li&gt;
&lt;li&gt;share caption&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Gemini does not decide the scores. It gets the extracted features and the computed numbers, then turns them into a dead-serious reading.&lt;/p&gt;

&lt;p&gt;The app asks for structured JSON and validates the result with Zod before rendering anything. That kept the product funny without handing core logic to the model.&lt;/p&gt;

&lt;p&gt;The public deployment does not hit Gemini live. I disabled that in production because paying for unlimited dependency clairvoyance for strangers seemed like a bad financial habit. So production serves a fixed, intentionally budget-conscious executive statement, while local development and evals use the real Gemini path.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. UI direction
&lt;/h3&gt;

&lt;p&gt;I did not want this to look like a horoscope app. I wanted it to look like a corporate audit dashboard that had developed a spiritual problem.&lt;/p&gt;

&lt;p&gt;The design goal was:&lt;/p&gt;

&lt;p&gt;"This should look like a compliance product that got trapped in a spiritual crisis."&lt;/p&gt;

&lt;h3&gt;
  
  
  4. My favorite April Fools detail
&lt;/h3&gt;

&lt;p&gt;If the input looks like Python or Ruby dependency files, the app returns &lt;strong&gt;418&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That part is useless, correct, and deeply satisfying.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Eval, because the joke works better if the nonsense is measured
&lt;/h3&gt;

&lt;p&gt;I did not want the AI layer to run on hope.&lt;/p&gt;

&lt;p&gt;So I added a small &lt;code&gt;promptfoo&lt;/code&gt; harness around the reading endpoint and treated it like a real structured-output feature.&lt;/p&gt;

&lt;p&gt;The eval setup has two layers. The first is deterministic and checks response contract, writing constraints, and fixture-specific signal coverage. The second uses LLM-as-a-judge rubrics for tone and grounding.&lt;/p&gt;

&lt;p&gt;The deterministic checks cover things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the endpoint returns the full expected JSON shape&lt;/li&gt;
&lt;li&gt;the response stays in &lt;code&gt;live&lt;/code&gt; mode for the eval fixtures&lt;/li&gt;
&lt;li&gt;the copy does not drift into practical engineering advice&lt;/li&gt;
&lt;li&gt;the &lt;code&gt;luckyCommand&lt;/code&gt; still looks like a shell command&lt;/li&gt;
&lt;li&gt;the response actually reflects the manifest signals it was supposed to notice&lt;/li&gt;
&lt;/ul&gt;
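&lt;p&gt;In &lt;code&gt;promptfoo&lt;/code&gt; terms, one test entry mixing deterministic checks with a rubric might look roughly like this (the fixture name and rubric wording are illustrative, not the project's actual config):&lt;/p&gt;

```yaml
tests:
  - vars:
      manifest: file://fixtures/haunted-library.json
    assert:
      - type: is-json          # full expected JSON shape comes back
      - type: javascript       # luckyCommand still looks like a shell command
        value: /^[a-z][\w .-]*$/.test(JSON.parse(output).luckyCommand)
      - type: llm-rubric       # judge-based tone check
        value: The reading stays polished, dead-serious, and vaguely B2B, with no practical engineering advice.
```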

&lt;p&gt;Then I added judge-based checks for the harder-to-measure parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;does this still sound polished, dead-serious, and vaguely B2B?&lt;/li&gt;
&lt;li&gt;is it funny through sincerity rather than random nonsense?&lt;/li&gt;
&lt;li&gt;does it stay grounded in the fixture instead of inventing facts?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That gave me a cleaner contract for the product:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;local code owns the real scoring logic&lt;/li&gt;
&lt;li&gt;Gemini owns the tone&lt;/li&gt;
&lt;li&gt;evals make sure those boundaries do not blur&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The runner hits the local Next.js app over HTTP, so the eval path matches the real product path instead of a helper in isolation.&lt;/p&gt;
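&lt;p&gt;The post does not include the harness config itself, but the shape of a setup like this, with the route path, fixture names, and field names as placeholders of my own, would look roughly like:&lt;/p&gt;

```yaml
# promptfooconfig.yaml -- illustrative sketch, not the project's actual config
providers:
  - id: https
    config:
      url: http://localhost:3000/api/reading   # the locally running Next.js route
      method: POST
      headers:
        Content-Type: application/json
      body:
        manifest: '{{manifest}}'

tests:
  - vars:
      manifest: file://fixtures/over-governed-next.json
    assert:
      - type: is-json         # deterministic: response parses as the expected shape
      - type: javascript      # deterministic: luckyCommand still looks like a shell command
        value: /^[a-z][a-z0-9-]*( .*)?$/.test(JSON.parse(output).luckyCommand)
      - type: llm-rubric      # judge: tone check
        value: Reads as polished, dead-serious, vaguely B2B copy, funny through sincerity.
```

&lt;p&gt;Deterministic assertions run locally; the &lt;code&gt;llm-rubric&lt;/code&gt; assertions call the configured judge model.&lt;/p&gt;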

&lt;h3&gt;
  
  
  6. Eval results
&lt;/h3&gt;

&lt;p&gt;The saved run I kept for the project was:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;eval-qw8-2026-04-08T00:18:21&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;public report: &lt;a href="https://semver-in-retrograde.vercel.app/evals/eval-qw8-2026-04-08T00:18:21" rel="noopener noreferrer"&gt;semver-in-retrograde.vercel.app/evals/eval-qw8-2026-04-08T00:18:21&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;raw JSON: &lt;a href="https://semver-in-retrograde.vercel.app/evals/eval-qw8-2026-04-08T00-18-21.json" rel="noopener noreferrer"&gt;semver-in-retrograde.vercel.app/evals/eval-qw8-2026-04-08T00-18-21.json&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That run used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;promptfoo&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;4 manifest fixtures&lt;/li&gt;
&lt;li&gt;8 expanded test cases&lt;/li&gt;
&lt;li&gt;concurrency set to &lt;code&gt;1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;light retrying around transient model-availability issues&lt;/li&gt;
&lt;li&gt;Gemini as the judge model&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Result:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;8 / 8 passing&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;0 failures&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;0 errors&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;runtime: about &lt;strong&gt;133 seconds&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The fixtures cover four different dependency personalities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a mildly over-governed Next.js workspace&lt;/li&gt;
&lt;li&gt;a commitment-avoidant Vite app with &lt;code&gt;latest&lt;/code&gt; and wildcard ranges&lt;/li&gt;
&lt;li&gt;a haunted library with overrides, resolutions, and lifecycle weirdness&lt;/li&gt;
&lt;li&gt;a relatively boring steady package that should not be over-dramatized&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That last case mattered. A joke product can always get louder. The harder part is keeping it funny without inventing drama the manifest did not earn.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prize category
&lt;/h2&gt;

&lt;p&gt;I am submitting this for &lt;strong&gt;Best Google AI Usage&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Google AI is central to the project. Gemini runs the narrative layer on the server, returns structured JSON instead of free-form prose, gets validated before display, and sits behind evals that check both hard constraints and tone. The product only works because of that split between deterministic scoring and AI-generated corporate mysticism.&lt;/p&gt;

&lt;p&gt;That is the role I wanted the model to play. It does not own the critical logic. It owns the polished nonsense.&lt;/p&gt;

&lt;p&gt;If your JavaScript project has unresolved dependency feelings, Semver in Retrograde is ready to misinterpret them at enterprise scale.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>418challenge</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Lessons from the Spring 2026 OSS Incidents: Hardening npm, pnpm, and GitHub Actions Against Supply-Chain Attacks</title>
      <dc:creator>Teruo Kunihiro</dc:creator>
      <pubDate>Thu, 02 Apr 2026 05:00:50 +0000</pubDate>
      <link>https://dev.to/trknhr/lessons-from-the-spring-2026-oss-incidents-hardening-npm-pnpm-and-github-actions-against-1jnp</link>
      <guid>https://dev.to/trknhr/lessons-from-the-spring-2026-oss-incidents-hardening-npm-pnpm-and-github-actions-against-1jnp</guid>
      <description>&lt;p&gt;March 2026 saw a rapid succession of OSS supply-chain incidents.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In Trivy, an attacker repointed 76 of the 77 version tags for &lt;code&gt;trivy-action&lt;/code&gt; and 7 tags for &lt;code&gt;setup-trivy&lt;/code&gt; to a malicious commit, and a tampered &lt;code&gt;v0.69.4&lt;/code&gt; binary was released.&lt;/li&gt;
&lt;li&gt;In LiteLLM, malicious &lt;code&gt;1.82.7&lt;/code&gt; and &lt;code&gt;1.82.8&lt;/code&gt; packages were uploaded to PyPI, and the maintainers later identified &lt;code&gt;1.83.0&lt;/code&gt; as the clean release.&lt;/li&gt;
&lt;li&gt;In axios, &lt;code&gt;1.14.1&lt;/code&gt; and &lt;code&gt;0.30.4&lt;/code&gt; were briefly published to npm, and the hidden dependency &lt;code&gt;plain-crypto-js&lt;/code&gt; used &lt;code&gt;postinstall&lt;/code&gt; to distribute a cross-platform RAT (remote access trojan). (&lt;a href="https://www.aquasec.com/blog/trivy-supply-chain-attack-what-you-need-to-know/" rel="noopener noreferrer"&gt;Aqua&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A common recommendation for preventing incidents like these is to enable npm’s &lt;code&gt;min-release-age&lt;/code&gt; or pnpm’s &lt;code&gt;minimumReleaseAge&lt;/code&gt;.&lt;br&gt;
npm’s &lt;code&gt;min-release-age&lt;/code&gt; refuses to install versions published more recently than a configured number of days ago, while pnpm’s &lt;code&gt;minimumReleaseAge&lt;/code&gt; applies the same idea with minute granularity.&lt;br&gt;
Both are highly effective at reducing the chance of immediately picking up a freshly published malicious release. But they only protect you at the &lt;strong&gt;moment of dependency resolution&lt;/strong&gt;. They do not stop automatic install script execution, CI pipelines that reference mutable tags, or long-lived publish tokens lingering in your environment. pnpm itself makes this distinction explicit: compromised packages are often detected relatively quickly, but there is still an unavoidable exposure window between publication and detection. (&lt;a href="https://docs.npmjs.com/cli/v11/using-npm/config/" rel="noopener noreferrer"&gt;npm Docs&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;The pnpm docs capture the direction of travel perfectly. In the current stable pnpm release, both &lt;code&gt;blockExoticSubdeps&lt;/code&gt; and &lt;code&gt;strictDepBuilds&lt;/code&gt; default to &lt;code&gt;false&lt;/code&gt;, but in the docs for the upcoming version and the v11 release notes, both move to &lt;code&gt;true&lt;/code&gt;. &lt;code&gt;blockExoticSubdeps&lt;/code&gt; prevents transitive dependencies from pulling from exotic sources such as git repos or tarball URLs, while &lt;code&gt;strictDepBuilds&lt;/code&gt; can fail installation when unreviewed build scripts are present.&lt;br&gt;
pnpm is clearly steering toward a security-first model: away from “install anything” and toward “resolve and execute only what has been explicitly trusted.” (&lt;a href="https://pnpm.io/settings" rel="noopener noreferrer"&gt;pnpm&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;This post breaks the defense surface into four layers:&lt;br&gt;
&lt;strong&gt;dependency resolution&lt;/strong&gt;, &lt;strong&gt;install-time execution&lt;/strong&gt;, &lt;strong&gt;CI execution&lt;/strong&gt;, and the &lt;strong&gt;publish path&lt;/strong&gt;.&lt;br&gt;
&lt;code&gt;min-release-age&lt;/code&gt; belongs primarily to the dependency-resolution layer.&lt;/p&gt;
&lt;h2&gt;
  
  
  Delay and lock dependency resolution
&lt;/h2&gt;

&lt;p&gt;The first thing to stabilize is &lt;strong&gt;which versions get resolved&lt;/strong&gt;. npm’s &lt;code&gt;min-release-age&lt;/code&gt; works in days, while pnpm’s &lt;code&gt;minimumReleaseAge&lt;/code&gt; works in minutes, allowing you to let newly published versions “cool off” before they are eligible for installation.&lt;br&gt;
In practice, though, you will eventually want exceptions for emergency security fixes or dependencies that you need to update immediately.&lt;/p&gt;

&lt;p&gt;pnpm also provides &lt;code&gt;minimumReleaseAgeExclude&lt;/code&gt;, which lets you carve out exceptions for specific packages or versions.&lt;br&gt;
Dependabot has &lt;code&gt;cooldown&lt;/code&gt;, a grace-period setting that delays version-update PRs for a configurable period after a new dependency version is published. That grace period applies only to version updates, not to security updates.&lt;br&gt;
So an operating model like “delay routine upgrades, but fast-track urgent security fixes” is perfectly workable in production. (&lt;a href="https://docs.npmjs.com/cli/v11/using-npm/config/" rel="noopener noreferrer"&gt;npm Docs&lt;/a&gt;)&lt;/p&gt;
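&lt;p&gt;For Dependabot, that model is a few lines of config. This is a sketch; the seven-day value is an example, not a recommendation:&lt;/p&gt;

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
    cooldown:
      default-days: 7   # delay routine version-update PRs by a week
    # security updates are not subject to cooldown, so urgent fixes still land fast
```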

&lt;p&gt;That said, delaying upgrades is not enough on its own. If the dependency graph resolved at one point in time cannot be reproduced consistently across your team and CI, different environments will drift onto different versions. That is where the lockfile becomes critical.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;package-lock.json&lt;/code&gt; records the exact dependency graph and versions that were actually resolved. Committing it makes it much easier to reproduce the same dependency set in development and CI. &lt;code&gt;npm ci&lt;/code&gt; is designed around the lockfile: it fails if &lt;code&gt;package.json&lt;/code&gt; and the lockfile are out of sync, and it never rewrites the lockfile. In CI, that makes &lt;code&gt;npm ci&lt;/code&gt; safer than &lt;code&gt;npm install&lt;/code&gt; from a reproducibility standpoint, and it also makes unintended dependency changes easier to spot in diffs. (&lt;a href="https://docs.npmjs.com/cli/v8/configuring-npm/package-lock-json/" rel="noopener noreferrer"&gt;npm Docs&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Lockfiles matter for security, too. In GitHub’s dependency graph, a lockfile gives GitHub a much more accurate picture of the dependencies you actually resolved than a manifest alone. Indirect dependencies inferred only from the manifest may be excluded from vulnerability checks. (&lt;a href="https://docs.github.com/en/code-security/concepts/supply-chain-security/dependency-graph-data" rel="noopener noreferrer"&gt;GitHub Docs&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;There is one more risk in a different category worth calling out: dependency confusion. As a mitigation against public packages colliding with private package names, npm strongly recommends scoped packages. Managing internal packages under a namespace like &lt;code&gt;@your-org/foo&lt;/code&gt; is not flashy, but it is effective. (&lt;a href="https://docs.npmjs.com/threats-and-mitigations/" rel="noopener noreferrer"&gt;npm Docs&lt;/a&gt;)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="c"&gt;# .npmrc
&lt;/span&gt;&lt;span class="py"&gt;min-release-age&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;3&lt;/span&gt;
&lt;span class="py"&gt;ignore-scripts&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pnpm-workspace.yaml&lt;/span&gt;
&lt;span class="na"&gt;minimumReleaseAge&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1440&lt;/span&gt;
&lt;span class="na"&gt;minimumReleaseAgeExclude&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;@your-org/*'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using npm’s &lt;code&gt;min-release-age&lt;/code&gt; or pnpm’s &lt;code&gt;minimumReleaseAge&lt;/code&gt; helps you avoid immediately consuming newly published versions. npm configures this in days, pnpm in minutes, and pnpm also applies it to transitive dependencies.&lt;/p&gt;

&lt;p&gt;But this is only a mechanism for delaying the adoption of new releases. It does not guarantee reproducibility by itself. If you want stable, repeatable installs, the baseline is still to commit the lockfile and enforce strict lockfile-based installs in CI with commands like &lt;code&gt;npm ci&lt;/code&gt; or &lt;code&gt;pnpm install --frozen-lockfile&lt;/code&gt;. (&lt;a href="https://docs.npmjs.com/cli/v11/using-npm/config/" rel="noopener noreferrer"&gt;npm Docs&lt;/a&gt;)&lt;/p&gt;
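&lt;p&gt;In a GitHub Actions job, that baseline is a single install step:&lt;/p&gt;

```yaml
# CI install step: fail fast if package.json and the lockfile disagree
steps:
  - run: npm ci
  # or, in a pnpm workspace:
  # - run: pnpm install --frozen-lockfile
```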

&lt;h2&gt;
  
  
  Treat install as code execution, not just downloading packages
&lt;/h2&gt;

&lt;p&gt;The axios incident is a perfect example. The problem was not the Axios code itself, but the &lt;code&gt;postinstall&lt;/code&gt; hook in the hidden package &lt;code&gt;plain-crypto-js&lt;/code&gt;. In other words, &lt;code&gt;npm install&lt;/code&gt; is not just artifact retrieval. Through dependency scripts, it is also &lt;strong&gt;code execution at install time&lt;/strong&gt;. (&lt;a href="https://snyk.io/blog/axios-npm-package-compromised-supply-chain-attack-delivers-cross-platform/" rel="noopener noreferrer"&gt;Snyk&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;npm has &lt;code&gt;ignore-scripts&lt;/code&gt;, and when set to &lt;code&gt;true&lt;/code&gt;, it suppresses automatic script execution from &lt;code&gt;package.json&lt;/code&gt; during installation. Explicitly invoked scripts such as &lt;code&gt;npm run&lt;/code&gt; or &lt;code&gt;npm test&lt;/code&gt; still work, but at minimum, you are no longer running every dependency’s &lt;code&gt;preinstall&lt;/code&gt; / &lt;code&gt;install&lt;/code&gt; / &lt;code&gt;postinstall&lt;/code&gt; hook by default. (&lt;a href="https://docs.npmjs.com/cli/v11/using-npm/config/" rel="noopener noreferrer"&gt;npm Docs&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;pnpm pushes this idea further. In its supply-chain security guidance, pnpm notes that many past compromised packages abused &lt;code&gt;postinstall&lt;/code&gt;, and that v10 stopped automatically executing dependency &lt;code&gt;postinstall&lt;/code&gt; hooks. The recommended model is to explicitly allow only trusted packages via &lt;code&gt;allowBuilds&lt;/code&gt;. In the stable docs, &lt;code&gt;allowBuilds&lt;/code&gt; supports per-package allow/deny rules, and with &lt;code&gt;strictDepBuilds&lt;/code&gt; enabled, installation can fail the moment an unreviewed build script appears. (&lt;a href="https://pnpm.io/supply-chain-security" rel="noopener noreferrer"&gt;pnpm&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;On top of that, enabling &lt;code&gt;blockExoticSubdeps&lt;/code&gt; prevents transitive dependencies from pulling from exotic sources such as git repositories or tarball URLs. &lt;code&gt;trustPolicy: no-downgrade&lt;/code&gt; can reject artifacts whose trust evidence is weaker than what was seen in earlier versions.&lt;br&gt;
All of these are ways to ensure that even if you do pull something bad, it does not automatically spread or execute. (&lt;a href="https://pnpm.io/settings" rel="noopener noreferrer"&gt;pnpm&lt;/a&gt;)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pnpm-workspace.yaml&lt;/span&gt;
&lt;span class="na"&gt;minimumReleaseAge&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1440&lt;/span&gt;
&lt;span class="na"&gt;blockExoticSubdeps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;strictDepBuilds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;allowBuilds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;esbuild&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;trustPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;no-downgrade&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In short, &lt;code&gt;min-release-age&lt;/code&gt; makes it less likely that you will ingest a freshly compromised release, while &lt;code&gt;ignore-scripts&lt;/code&gt; and &lt;code&gt;strictDepBuilds&lt;/code&gt; are about preventing it from executing automatically even if it does get in. (&lt;a href="https://docs.npmjs.com/cli/v11/using-npm/config/" rel="noopener noreferrer"&gt;npm Docs&lt;/a&gt;)&lt;/p&gt;

&lt;h2&gt;
  
  
  Run GitHub Actions with immutable refs and least privilege
&lt;/h2&gt;

&lt;p&gt;In GitHub Actions, the first rule is to &lt;strong&gt;pin workflow code to immutable references&lt;/strong&gt;. Tag references such as &lt;code&gt;@v1&lt;/code&gt; or &lt;code&gt;@v1.2.3&lt;/code&gt; are convenient, but tags can be retargeted after the fact. GitHub explicitly states that the only way to reference an Action immutably is to pin it to a &lt;strong&gt;full-length commit SHA&lt;/strong&gt;. So instead of &lt;code&gt;uses: owner/action@v1&lt;/code&gt;, the safer baseline is &lt;code&gt;uses: owner/action@&amp;lt;commit SHA&amp;gt;&lt;/code&gt;. If your workflow depends on a moving reference like a tag, the code that runs later can change even when the workflow file itself does not. (&lt;a href="https://docs.github.com/en/actions/reference/security/secure-use" rel="noopener noreferrer"&gt;GitHub Docs&lt;/a&gt;)&lt;/p&gt;
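&lt;p&gt;A common convention is to pin the SHA and keep the human-readable tag as a trailing comment (the SHA below is a placeholder, not a real commit):&lt;/p&gt;

```yaml
steps:
  # full-length commit SHA pin; the comment records which tag it corresponds to
  - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567 # v4
```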

&lt;p&gt;The next step is to &lt;strong&gt;minimize runtime privileges&lt;/strong&gt;. Keep &lt;code&gt;GITHUB_TOKEN&lt;/code&gt; permissions to the bare minimum, with defaults as narrow as &lt;code&gt;contents: read&lt;/code&gt;, and grant additional permissions only to the specific jobs that need them. Protect workflow files themselves with &lt;code&gt;CODEOWNERS&lt;/code&gt;, so changes to &lt;code&gt;.github/workflows&lt;/code&gt; require review. And for jobs that need cloud access, use OIDC instead of storing long-lived secrets in GitHub. Importantly, &lt;code&gt;permissions: id-token: write&lt;/code&gt; is only for minting an OIDC token to authenticate to an external service. It does not expand the workflow’s GitHub-side privileges. (&lt;a href="https://docs.github.com/en/actions/reference/security/secure-use" rel="noopener noreferrer"&gt;GitHub Docs&lt;/a&gt;)&lt;/p&gt;
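&lt;p&gt;The &lt;code&gt;CODEOWNERS&lt;/code&gt; part is a one-liner; the team name here is a placeholder:&lt;/p&gt;

```text
# .github/CODEOWNERS -- changes under .github/workflows require this team's review
/.github/workflows/ @your-org/workflow-reviewers
```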

&lt;p&gt;From there, the next defensive layer is to &lt;strong&gt;gate dependency changes at the PR boundary&lt;/strong&gt;. GitHub’s dependency review action checks dependencies added or updated in a pull request and can block merges when known vulnerabilities are introduced. In the review UI, you can inspect newly added or updated dependencies alongside release dates and vulnerability data. For example, the following workflow fails when the PR includes dependency changes with vulnerabilities rated high severity or above. (&lt;a href="https://docs.github.com/en/code-security/how-tos/secure-your-supply-chain/manage-your-dependency-security/configuring-the-dependency-review-action" rel="noopener noreferrer"&gt;GitHub Docs&lt;/a&gt;)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dependency-review&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;review&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@&amp;lt;FULL_LENGTH_SHA&amp;gt;&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/dependency-review-action@&amp;lt;FULL_LENGTH_SHA&amp;gt;&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;fail-on-severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;high&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is an important nuance here. The dependency review action is primarily a mechanism for checking the safety of &lt;strong&gt;dependency changes introduced via PRs&lt;/strong&gt;. GitHub also recognizes &lt;code&gt;uses:&lt;/code&gt; references in &lt;code&gt;.github/workflows/&lt;/code&gt; as dependencies in the dependency graph, but &lt;strong&gt;Dependabot alerts for Actions are only generated automatically for semver-based references&lt;/strong&gt;. &lt;strong&gt;SHA-pinned Actions do not receive those alerts&lt;/strong&gt;. In practice, that means external Actions should be pinned by SHA for safety, and then reviewed on a schedule as part of deliberate update work. The operating model becomes: stay safe by default with immutable references, and review upgrades intentionally when you choose to move them. (&lt;a href="https://docs.github.com/en/code-security/how-tos/secure-your-supply-chain/manage-your-dependency-security/configuring-the-dependency-review-action" rel="noopener noreferrer"&gt;GitHub Docs&lt;/a&gt;)&lt;/p&gt;

&lt;h2&gt;
  
  
  Protect the publish path itself
&lt;/h2&gt;

&lt;p&gt;If you publish npm packages yourself, the publish path can become the source of upstream compromise. npm’s trusted publishing uses OIDC so you do not need to keep long-lived npm tokens in CI. After you configure a trusted publisher, npm strongly recommends restricting legacy token-based publishing and enabling &lt;strong&gt;“Require two-factor authentication and disallow tokens”&lt;/strong&gt;. The docs even walk through revoking old automation tokens after the migration. (&lt;a href="https://docs.npmjs.com/trusted-publishers/" rel="noopener noreferrer"&gt;npm Docs&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;When trusted publishing is used from GitHub Actions or GitLab CI/CD, npm also generates provenance attestations automatically. npm provenance makes it publicly verifiable where a package was built and who published it. In other words, if you publish from GitHub Actions with a trusted publisher configured, you usually do not need to explicitly add &lt;code&gt;npm publish --provenance&lt;/code&gt;; provenance is attached automatically. (&lt;a href="https://docs.npmjs.com/generating-provenance-statements/" rel="noopener noreferrer"&gt;npm Docs&lt;/a&gt;)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;publish&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;release&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;published&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read&lt;/span&gt;
  &lt;span class="na"&gt;id-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;publish&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@&amp;lt;FULL_LENGTH_SHA&amp;gt;&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-node@&amp;lt;FULL_LENGTH_SHA&amp;gt;&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;node-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;24"&lt;/span&gt;
          &lt;span class="na"&gt;registry-url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://registry.npmjs.org"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm ci&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm publish&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is worth separating signatures from provenance here. npm’s ECDSA registry signatures are designed to verify that the distributed tarball was not tampered with in transit. For example, they can detect whether package contents were altered somewhere along the way by a mirror or proxy.&lt;/p&gt;

&lt;p&gt;Provenance, on the other hand, captures &lt;strong&gt;where a package came from, how it was built, and from which source code it was published&lt;/strong&gt;. So while signatures answer “Was the package that arrived here modified?”, provenance answers “Where did this package come from, and how was it produced?”&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm audit signatures&lt;/code&gt; can verify both registry signatures and provenance attestations. But it is best thought of as a complementary integrity-and-origin check, not the primary mechanism for day-to-day vulnerability detection. (&lt;a href="https://docs.npmjs.com/about-registry-signatures/" rel="noopener noreferrer"&gt;npm Docs&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;pnpm takes a slightly different posture. In addition to “verify later” mechanisms like npm’s signatures and provenance, pnpm can proactively block untrusted dependencies at install time with settings like &lt;code&gt;blockExoticSubdeps&lt;/code&gt; and &lt;code&gt;strictDepBuilds&lt;/code&gt;. In that sense, npm focuses more on verification, while pnpm also leans into prevention through install-time policy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cross-cutting controls: detect with SCA, block with package-manager policy
&lt;/h2&gt;

&lt;p&gt;This is where SCA becomes important. SCA (Software Composition Analysis) is the practice of enumerating the libraries your project depends on and continuously checking them for known vulnerabilities and license issues. It is the foundation for understanding what is actually in your stack and whether any of it is already known to be risky.&lt;/p&gt;

&lt;p&gt;In GitHub, that role is largely filled by the dependency graph. The dependency graph ingests dependencies from manifests and lockfiles, and dependencies that land in the graph can receive Dependabot alerts and security updates. GitHub also explicitly recommends lockfiles for building a more trustworthy graph. The flip side is that transitive dependencies resolved only at build time, or indirect dependencies inferred only from the manifest, can still be missed. (&lt;a href="https://docs.github.com/en/code-security/concepts/supply-chain-security/dependency-graph-data" rel="noopener noreferrer"&gt;GitHub Docs&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;That is what automatic dependency submission and the dependency submission API are for. They let you send not just lockfile-declared dependencies, but also the dependencies actually resolved by a real build, into the dependency graph. GitHub provides built-in workflows for this, and external CI/CD systems or custom build pipelines can also push dependency snapshots through the API. In other words, you can reflect not only &lt;strong&gt;statically visible dependencies&lt;/strong&gt;, but also &lt;strong&gt;the dependencies that were actually resolved at runtime&lt;/strong&gt;. (&lt;a href="https://docs.github.com/en/code-security/how-tos/secure-your-supply-chain/secure-your-dependencies/configuring-automatic-dependency-submission-for-your-repository" rel="noopener noreferrer"&gt;GitHub Docs&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;External tools are easier to reason about when you split them by role. Snyk Open Source is a classic SCA tool for open-source dependency vulnerabilities and license issues. OSV-Scanner supports major JavaScript lockfiles including &lt;code&gt;package-lock.json&lt;/code&gt;, &lt;code&gt;pnpm-lock.yaml&lt;/code&gt;, &lt;code&gt;yarn.lock&lt;/code&gt;, and &lt;code&gt;bun.lock&lt;/code&gt;. Trivy can emit GitHub dependency snapshots with &lt;code&gt;--format github&lt;/code&gt;, which makes it useful as a bridge for feeding dependencies observed from images or artifacts back into GitHub’s dependency graph. (&lt;a href="https://docs.snyk.io/scan-with-snyk/snyk-open-source" rel="noopener noreferrer"&gt;Snyk User Docs&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Many of these tools are strongest at known vulnerabilities, advisories, and license metadata. Socket is addressing a different problem: through static analysis, it looks for suspicious behavior such as install scripts, network requests, environment variable access, telemetry, and obfuscated code, including cases that have not yet become formal advisories.&lt;/p&gt;

&lt;p&gt;The key point is that SCA alone is not enough. It can catch known vulnerabilities, but there is always a lag for freshly published malware or suspicious packages that have not yet been assigned an advisory. As pnpm points out, there is an unavoidable gap between the publication of malware and its detection. In practice, that is why you should not rely on &lt;strong&gt;detection&lt;/strong&gt; alone. You also need &lt;strong&gt;preventive controls&lt;/strong&gt; at the package-manager level—such as &lt;code&gt;minimumReleaseAge&lt;/code&gt;, &lt;code&gt;ignore-scripts&lt;/code&gt;, &lt;code&gt;blockExoticSubdeps&lt;/code&gt;, and &lt;code&gt;strictDepBuilds&lt;/code&gt;—to make risky dependencies both harder to ingest and harder to execute in the first place. (&lt;a href="https://pnpm.io/supply-chain-security" rel="noopener noreferrer"&gt;pnpm&lt;/a&gt;)&lt;/p&gt;

&lt;h2&gt;
  
  
  The minimum baseline to put in place today
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Add &lt;code&gt;min-release-age=3&lt;/code&gt; and &lt;code&gt;ignore-scripts=true&lt;/code&gt; to &lt;code&gt;.npmrc&lt;/code&gt;. npm provides the former as a day-based maturity window and the latter as a way to suppress automatic script execution. (&lt;a href="https://docs.npmjs.com/cli/v11/using-npm/config/" rel="noopener noreferrer"&gt;npm Docs&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Always commit the lockfile, and use &lt;code&gt;npm ci&lt;/code&gt; in CI. &lt;code&gt;npm ci&lt;/code&gt; fails on lockfile mismatch and never rewrites the lockfile. (&lt;a href="https://docs.npmjs.com/cli/v11/commands/npm-ci/" rel="noopener noreferrer"&gt;npm Docs&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Scope private packages. It is a basic but effective mitigation against dependency confusion. (&lt;a href="https://docs.npmjs.com/threats-and-mitigations/" rel="noopener noreferrer"&gt;npm Docs&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;If you use pnpm, enable &lt;code&gt;minimumReleaseAge&lt;/code&gt;, &lt;code&gt;blockExoticSubdeps&lt;/code&gt;, &lt;code&gt;strictDepBuilds&lt;/code&gt;, and &lt;code&gt;allowBuilds&lt;/code&gt;, and consider going as far as &lt;code&gt;trustPolicy: no-downgrade&lt;/code&gt; if appropriate. (&lt;a href="https://pnpm.io/settings" rel="noopener noreferrer"&gt;pnpm&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;In GitHub Actions, combine full-length commit SHA pinning, least-privilege &lt;code&gt;GITHUB_TOKEN&lt;/code&gt; settings, and &lt;code&gt;CODEOWNERS&lt;/code&gt; review requirements for workflow changes. (&lt;a href="https://docs.github.com/en/actions/reference/security/secure-use" rel="noopener noreferrer"&gt;GitHub Docs&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Move cloud authentication to OIDC, and grant &lt;code&gt;id-token: write&lt;/code&gt; only to the jobs that need it. (&lt;a href="https://docs.github.com/actions/security-for-github-actions/security-hardening-your-deployments/about-security-hardening-with-openid-connect" rel="noopener noreferrer"&gt;GitHub Docs&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Add the dependency review action to PRs so dependency diffs are reviewed before merge. Use GitHub dependency graph / Dependabot as the baseline monitoring layer for dependency visibility. (&lt;a href="https://docs.github.com/en/code-security/how-tos/secure-your-supply-chain/manage-your-dependency-security/configuring-the-dependency-review-action" rel="noopener noreferrer"&gt;GitHub Docs&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;If you publish packages, migrate to trusted publishing, disable legacy tokens, and revoke the ones you no longer need. (&lt;a href="https://docs.npmjs.com/trusted-publishers/" rel="noopener noreferrer"&gt;npm Docs&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
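&lt;p&gt;The npm-side items above boil down to a tiny config change. Here is a minimal sketch of the &lt;code&gt;.npmrc&lt;/code&gt; described in the first bullet; the flag names come from the npm docs linked above, but double-check them against your npm version before relying on them:&lt;/p&gt;

```ini
# .npmrc — minimal supply-chain baseline (sketch; verify flag names for your npm version)

# Wait 3 days before freshly published versions become installable.
min-release-age=3

# Do not run install-time lifecycle scripts automatically.
ignore-scripts=true
```

&lt;p&gt;With &lt;code&gt;ignore-scripts=true&lt;/code&gt;, any dependency that genuinely needs a build step has to be handled explicitly, which is exactly the point: execution becomes opt-in rather than automatic.&lt;/p&gt;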

&lt;h2&gt;
  
  
  Closing thoughts
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Delay resolution. Prevent install-time auto-execution. Pin references and permissions in CI. Eliminate long-lived credentials from the publish path, attach provenance, and verify what you ship. Then use SCA to monitor dependency drift and known risk.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Only when these controls are combined can you say you have actually started defending against supply-chain attacks. (&lt;a href="https://docs.npmjs.com/cli/v11/using-npm/config/" rel="noopener noreferrer"&gt;npm Docs&lt;/a&gt;)&lt;/p&gt;

</description>
      <category>security</category>
    </item>
    <item>
      <title>What I Learned from Reading Claude Code’s Reconstructed Source</title>
      <dc:creator>Teruo Kunihiro</dc:creator>
      <pubDate>Thu, 02 Apr 2026 01:45:41 +0000</pubDate>
      <link>https://dev.to/trknhr/what-i-learned-from-reading-claude-codes-reconstructed-source-1ebf</link>
      <guid>https://dev.to/trknhr/what-i-learned-from-reading-claude-codes-reconstructed-source-1ebf</guid>
      <description>&lt;h2&gt;
  
  
  What I Learned from Reading Claude Code’s Reconstructed Source
&lt;/h2&gt;

&lt;p&gt;Around March 31, 2026, it became widely known that parts of Claude Code CLI’s implementation could be reconstructed from source maps that had remained in the npm package. A public mirror circulated for a while, but it was not an official open-source release by Anthropic, and it has since turned into a different project.&lt;/p&gt;

&lt;p&gt;This post is a memo of my own impressions after reading a reconstructed copy of the source that I had saved locally at the time. Rather than discussing the current state of any public mirror, I want to focus on the design characteristics that became visible from actually tracing through the code.&lt;/p&gt;

&lt;h2&gt;
  
  
  My first impression: this is a much larger product codebase than I expected
&lt;/h2&gt;

&lt;p&gt;The first thing that surprised me was the sheer size of the codebase. In the reconstructed source I had on hand, there were roughly 1,900 files and about 510,000 lines of code. This is not a small single-purpose CLI. It is a fairly large product codebase that bundles terminal UI, tool execution, safety controls, IDE integration, memory, and extension mechanisms into one system.&lt;/p&gt;

&lt;p&gt;Technically, the project appears to be centered on TypeScript, with Bun as the runtime and a React/Ink-style stack for the terminal UI. In other words, it felt less like “a small CLI with some AI added on top” and more like “a substantial TypeScript product with an AI experience layered into it.”&lt;/p&gt;

&lt;h2&gt;
  
  
  The prompts live on the client side more than I expected
&lt;/h2&gt;

&lt;p&gt;One of the easiest things to start tracing in this codebase is prompt construction. At least within the portion that could be reconstructed, a surprisingly large part of the system instruction layer is present in the client-side code, where runtime context is then injected into it.&lt;/p&gt;

&lt;p&gt;That runtime context includes things like the current date, Git state, recent commits, Git user information, and the contents of local instruction files. On top of that foundation, additional instructions and memory-related text are composed into something close to the final system prompt.&lt;/p&gt;

&lt;p&gt;What I found especially interesting was that the intuitive assumption that “the real prompt must be assembled as a black box on the server side” did not seem to hold very well here, at least not within the portion of the code I could inspect. That does not prove there is no additional server-side processing, of course. But it does show that a significant amount of the prompt logic also exists on the client side.&lt;/p&gt;
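&lt;p&gt;To make the shape of that pattern concrete, here is a minimal sketch of client-side prompt assembly. It is my own paraphrase of what the code does; every interface and function name below is invented for illustration and is not taken from Claude Code's actual source.&lt;/p&gt;

```typescript
// Illustrative only: a system prompt assembled on the client from static
// instructions plus injected runtime context. All names are hypothetical.
interface RuntimeContext {
  date: string;
  gitBranch: string;
  recentCommits: string[];
  localInstructions: string; // e.g. contents of a project instruction file
}

function buildSystemPrompt(baseInstructions: string, ctx: RuntimeContext): string {
  return [
    baseInstructions,
    `Today's date: ${ctx.date}`,
    `Current branch: ${ctx.gitBranch}`,
    ctx.recentCommits.length > 0
      ? `Recent commits:\n${ctx.recentCommits.join("\n")}`
      : "",
    ctx.localInstructions,
  ]
    // Drop empty sections so the prompt stays compact.
    .filter(Boolean)
    .join("\n\n");
}
```

&lt;p&gt;The notable property is that this is deterministic string assembly from local state: nothing about it requires a server-side black box.&lt;/p&gt;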

&lt;h2&gt;
  
  
  In tool design, what matters is not the number of tools but how they are exposed and controlled
&lt;/h2&gt;

&lt;p&gt;Another striking part of the design is the layer that decides which tools are visible to the model and the separate layer that manages execution permissions. The system is clearly feature-rich, but there is a fairly sharp distinction between tools that are exposed routinely and tools that are internal, behind feature flags, or otherwise conditionally enabled.&lt;/p&gt;

&lt;p&gt;My impression was fairly simple: this codebase does not look like it was built around the idea that “more tools automatically make the system stronger.” If anything, it seems closer to the opposite view: the surface that is exposed to the model in normal operation should be kept as narrow as possible.&lt;/p&gt;

&lt;p&gt;There are also implementation details suggesting that the tool list itself has to stay aligned with prompt caching. That means the number of tools and their schemas are not just implementation details; they appear to be part of stable prompt operation as well.&lt;/p&gt;

&lt;p&gt;This lines up quite well with the increasingly common intuition that “fewer tools often lead to more stable behavior.” That said, this is my interpretation of the code, not an explicit principle written down in those exact words.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bash is not “just a way to run shell commands”
&lt;/h2&gt;

&lt;p&gt;The shell execution layer was one of the most memorable parts of the codebase for me. What is going on there is not simply command execution.&lt;/p&gt;

&lt;p&gt;Commands are categorized into groups such as search-oriented commands, read-oriented commands, listing commands, and commands where silence on success is the natural behavior. Exit codes are also normalized in command-specific ways. For example, the &lt;code&gt;1&lt;/code&gt; returned by grep-like commands is not always treated as a plain error; it can be reinterpreted as “no match found.”&lt;/p&gt;
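&lt;p&gt;The exit-code normalization idea can be sketched roughly like this. Again, this is my paraphrase of the pattern, not Claude Code's implementation, and the type and function names are mine:&lt;/p&gt;

```typescript
// Hypothetical sketch: command-aware exit-code normalization.
type Outcome =
  | { kind: "success"; output: string }
  | { kind: "no-match" } // e.g. grep exiting 1
  | { kind: "error"; code: number };

const GREP_LIKE = new Set(["grep", "rg", "ag"]);

function normalizeExit(command: string, code: number, output: string): Outcome {
  const binary = command.trim().split(/\s+/)[0];
  if (code === 0) return { kind: "success", output };
  // grep-family convention: exit code 1 means "no lines matched", not failure.
  if (code === 1 && GREP_LIKE.has(binary)) return { kind: "no-match" };
  return { kind: "error", code };
}
```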

&lt;p&gt;On top of that, commands that are considered read-only are guarded by allowlist-based flag checks, path validation, sed-specific restrictions, sandbox eligibility checks, and even AST-based safety checks. For more complex compound commands, there are also explicit upper bounds on the fan-out of the safety analysis.&lt;/p&gt;

&lt;p&gt;So while Bash is clearly a powerful general-purpose tool inside Claude Code, it does not look like something the model is given raw. Instead, it seems to sit on top of a fairly thick deterministic scaffold before the model is allowed to use it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The comments are unusually good
&lt;/h2&gt;

&lt;p&gt;Another thing that stood out was the quality of the comments. By that, I do not just mean that there are many comments.&lt;/p&gt;

&lt;p&gt;In several places, the comments explain not only what the code is doing but why certain decisions were made: why a heavy operation needs to run before imports, why a given validator is necessary, or why a particular flag should not be treated as safe. They carry background reasoning, not just surface-level description.&lt;/p&gt;

&lt;p&gt;That makes the code easier for humans to follow, of course, but it also felt like the sort of writing that would remain legible to future code-completion systems or coding agents as well.&lt;/p&gt;

&lt;p&gt;People often say these days that comments should be kept to a minimum. But reading code like this is a good reminder that good comments are not clutter. They are part of the design.&lt;/p&gt;

&lt;h2&gt;
  
  
  Even the startup path shows product-level polish
&lt;/h2&gt;

&lt;p&gt;Looking around the entry path, it becomes clear that this product is not only concerned with adding features. It is also carefully tuned around perceived performance. The code is explicit about which side effects should run before heavier imports and what can be parallelized to reduce startup latency.&lt;/p&gt;

&lt;p&gt;When people talk about AI agents, attention tends to go first to prompts and loops. But in practice, details like startup optimization and other non-AI engineering work are often what determine how polished the product feels.&lt;/p&gt;

&lt;h2&gt;
  
  
  “Being visible” is not the same thing as “being open source”
&lt;/h2&gt;

&lt;p&gt;Finally, I want to emphasize the most important point.&lt;/p&gt;

&lt;p&gt;What became visible in this case was that some source code could be read because of the way published artifacts were left exposed. That is not the same thing as Anthropic officially releasing Claude Code as open source.&lt;/p&gt;

&lt;p&gt;Those two things need to be kept clearly separate. Anthropic’s current terms include restrictions aimed at preventing the construction of competing products, service replication, and reverse engineering. So treating this as an interesting code-reading exercise is one thing; assuming that the code can therefore be freely reused or redistributed is something else entirely.&lt;/p&gt;

&lt;p&gt;There is value in reading it. But “readable” and “freely usable” are not the same thing, and it is important not to blur that distinction.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;What made this source-reading exercise interesting was not a generic takeaway like “Claude Code runs an agentic loop.” The more interesting part was seeing, in concrete form, which parts were made deterministic, which parts were injected as runtime context, and where the safety mechanisms were made deliberately thick.&lt;/p&gt;

&lt;p&gt;At least within the portion that could be reconstructed, the prompts were more client-side than I expected, Bash was more heavily guarded than I expected, the tool surface was narrower than I expected, and the comments were more thoughtful than I expected. The overall codebase is well organized, but at the same time it still has a little of the human roughness you would expect in a real product—for example, the way prompt construction seems to be spread across multiple layers.&lt;/p&gt;

&lt;p&gt;That mix of order and messiness is part of what makes the codebase interesting to me. In the end, that is what I wanted to capture in this memo.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>cli</category>
      <category>javascript</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Code Review with multiple AIs</title>
      <dc:creator>Teruo Kunihiro</dc:creator>
      <pubDate>Fri, 19 Dec 2025 02:00:48 +0000</pubDate>
      <link>https://dev.to/trknhr/code-review-with-ais-dep</link>
      <guid>https://dev.to/trknhr/code-review-with-ais-dep</guid>
      <description>&lt;p&gt;Hello folks.&lt;br&gt;
Have you ever wanted to quickly run code reviews using multiple AIs? I have. If you really want to do something like this, you can have an AI generate a script and run it locally right away. Problem solved! …But if we stop there, the blog post ends immediately, so please stick with me for a little longer.&lt;/p&gt;
&lt;h2&gt;
  
  
  The problem I want to solve
&lt;/h2&gt;

&lt;p&gt;In most cases, that really does solve it—but scripts created this way often end up calling pay-as-you-go APIs such as the ChatGPT API. Calling APIs isn’t inherently a problem, but I personally wanted to keep these kinds of tasks within a subscription fee if possible. (Subscriptions also have usage limits, so they’re effectively usage-based too, but with how I use them I rarely hit the limit.)&lt;/p&gt;

&lt;p&gt;AI vendors also offer their own coding agents like Codex, Claude Code, Gemini CLI, and so on. By authenticating inside those coding agents, you can use them within your subscription plan. GitHub Copilot doesn’t develop its own models, but it’s appealing because it’s inexpensive and fixed-price, and lets you try a variety of models.&lt;/p&gt;

&lt;p&gt;So it seems promising to delegate code review to these fixed-price coding agents and compare their results. That way, without issuing API keys, you can internally call multiple coding agents you already use and instantly get second opinions on your code review.&lt;/p&gt;

&lt;p&gt;You might also want to use a team-standard prompt for code reviews. Even if you don’t fully standardize, it’s nice to avoid reinventing prompts each time and use a reasonably well-prepared team-specific one.&lt;/p&gt;
&lt;h3&gt;
  
  
  Then why not run the CLIs in CI?
&lt;/h3&gt;

&lt;p&gt;You’re absolutely right. But once you run it in CI, the next questions become bigger ones—how do we handle team-wide subscriptions, or would it be cheaper and higher-performing to adopt a dedicated SaaS, etc.&lt;/p&gt;

&lt;p&gt;So this time, I wanted something that runs locally at a script-like level. If you want to do it properly, as mentioned, moving it into CI is likely better. But even then, it might still be handy to quickly check multiple reviews locally.&lt;/p&gt;
&lt;h2&gt;
  
  
  The solution
&lt;/h2&gt;

&lt;p&gt;I’d like to introduce &lt;a href="https://github.com/trknhr/ai-utils" rel="noopener noreferrer"&gt;ai-utils&lt;/a&gt;, a tool I made that has made my development flow just a bit more convenient.&lt;/p&gt;
&lt;h3&gt;
  
  
  How it actually behaves
&lt;/h3&gt;

&lt;p&gt;Let’s look at a real review I ran when I added Copilot CLI support to &lt;a href="https://github.com/trknhr/ai-utils" rel="noopener noreferrer"&gt;ai-utils&lt;/a&gt;. Including full results for every AI would be too long, so I’ll show only excerpts.&lt;/p&gt;

&lt;p&gt;I reviewed &lt;a href="https://github.com/trknhr/ai-utils/commit/9c8db862016d71e492367c7d6dc0a16a4abd4480" rel="noopener noreferrer"&gt;this commit&lt;/a&gt;. In practice, this is an AI check against pre-commit (staged) files called &lt;code&gt;staged-check&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The check is performed using a prompt like the one linked here:&lt;br&gt;
&lt;a href="https://github.com/trknhr/ai-utils/blob/main/internal/template/prompts/staged-check.md" rel="noopener noreferrer"&gt;staged-check.md&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this case, Codex’s feedback captured the situation most accurately. Since this is a lightweight check, that’s understandable: because the prompt only passes &lt;code&gt;git diff&lt;/code&gt;, the LLM seems to struggle to grasp the repository as a whole. I’d like to improve that at some point.&lt;/p&gt;

&lt;p&gt;Below are the actual review outputs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude Code (Sonnet 4.5)&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;     The code quality is very good. The new Copilot provider implementation follows a structure consistent with the existing Codex provider. The `stripFlagWithValue` helper function is implemented in a properly reusable way and adheres to the DRY principle. Naming is clear and follows Go conventions.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gemini CLI (Gemini 2.5)&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;     * **Code duplication**:
         * **File**: `internal/provider/copilot.go`
         * **Issue**: The `stripFlagWithValue` function is newly defined in `copilot.go`, but this logic is also used for model flag handling in `codex.go`. Currently, `codex.go` cannot access this function defined in `copilot.go`, so similar logic would be (not visible here, but presumably) needed in `codex.go` as well. To avoid code duplication, this helper function should be moved to a shared utility file within the `provider` package (e.g., `provider/utils.go`).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Codex (GPT-5.2)&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;     - **Fatal timeout unit bug**: In `initConfig()` in `internal/cli/root.go`, `config.ProviderConfig.Timeout` is assigned `120`, but since the type is `time.Duration`, that becomes **120ns** (you likely expected `120*time.Second`). This same value is set when adding `copilot`, so with the default configuration it will time out immediately / become unstable. At minimum, the newly added `copilot` part must be fixed; ideally, fix existing `claude/gemini/codex` similarly.

     - **Where to place the utility function**: `stripFlagWithValue()` is defined in `internal/provider/copilot.go` and is also used from `internal/provider/codex.go` (it works because it’s the same package, but it hurts readability). Moving it to a shared file like `internal/provider/args.go` would make the intent clearer.

     - **Hard-coded strings**: Provider name `"copilot"` and flags like `"--model"` are scattered (e.g., `internal/cli/root.go`, `internal/provider/*`). Even small constants (at least provider names) would improve consistency.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Copilot CLI (Claude Sonnet 4.5)&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;     **`internal/provider/copilot.go`**: The `stripFlagWithValue` function is newly implemented, but the same logic is also used in `codex.go`. Following the DRY principle, this function should be moved to a common file such as `provider.go` so it can be shared by both providers.

     **`internal/provider/copilot_test.go`**: A trailing newline is missing at the end of the file (line 115). Go formatting conventions expect files to end with a newline.

     **`internal/provider/codex.go`**: An implementation of `stripFlagWithValue` has not been added. With this change, the codex provider may end up with an undefined reference when processing the model flag. The staging may be partial.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Even with some prompt issues, asking multiple AIs to review helped me avoid blindly trusting a single model’s feedback. I didn’t do it this time, but you can also compare SOTA models like Opus 4.5 and Gemini 3.0. In many cases you don’t know which model is best, so being able to benchmark and compare with a single command is very convenient.&lt;/p&gt;
&lt;h2&gt;
  
  
  My OSS project
&lt;/h2&gt;

&lt;p&gt;As mentioned above, &lt;a href="https://github.com/trknhr/ai-utils" rel="noopener noreferrer"&gt;ai-utils&lt;/a&gt; is my own OSS project. It’s small and functionally simple, but it seemed useful enough that I decided to build it.&lt;br&gt;
Details are here: &lt;a href="https://github.com/trknhr/ai-utils" rel="noopener noreferrer"&gt;ai-utils&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Concept
&lt;/h3&gt;

&lt;p&gt;Easily run multiple AIs locally within your existing subscription plans.&lt;/p&gt;
&lt;h3&gt;
  
  
  Problems it solves
&lt;/h3&gt;

&lt;p&gt;There are plenty of OSS tools like this. But the three things I specifically wanted to solve were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I don’t want to issue API keys&lt;/li&gt;
&lt;li&gt;I want to rewrite prompts in my own style&lt;/li&gt;
&lt;li&gt;I want to compare responses from multiple AIs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I couldn’t find an OSS project that satisfied all three, so I chose to build one. In the AI era, it’s easy to build what you want, so I was able to overcome the cost of “reinventing the wheel.”&lt;/p&gt;
&lt;h3&gt;
  
  
  How to use
&lt;/h3&gt;

&lt;p&gt;On macOS, you can install easily with Homebrew:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew tap trknhr/homebrew-tap

brew &lt;span class="nb"&gt;install &lt;/span&gt;aiu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On Linux, run the install shell script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sSfL&lt;/span&gt; https://raw.githubusercontent.com/trknhr/ai-utils/main/install.sh | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that it only works if supported coding agents such as Claude Code or Codex are already installed and authenticated.&lt;/p&gt;

&lt;h3&gt;
  
  
  Trying it out
&lt;/h3&gt;

&lt;p&gt;Using &lt;code&gt;commit-msg&lt;/code&gt;, you can generate a commit message based on staged files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aiu commit-msg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With &lt;code&gt;-m&lt;/code&gt;, you can run multiple AIs in parallel.&lt;/p&gt;

&lt;p&gt;You can also run your own prompts. Inside a prompt file, &lt;code&gt;{{$ }}&lt;/code&gt; executes a command, so you can dynamically pass the command output to the AI.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Just say {{$ date }}.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This passes the current time to the AI, and it will return only the current time. Using the same mechanism, the review task passes things like &lt;code&gt;git diff&lt;/code&gt;.&lt;/p&gt;
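&lt;p&gt;The mechanism itself is simple enough to sketch in a few lines. This is an illustration of the idea only; ai-utils is written in Go, and the function name below is my own:&lt;/p&gt;

```typescript
import { execSync } from "node:child_process";

// Sketch of the {{$ command }} expansion idea: replace each placeholder
// with the command's trimmed stdout before sending the prompt to the AI.
function expandPrompt(template: string): string {
  return template.replace(/\{\{\$\s*([^}]+?)\s*\}\}/g, (_match, cmd: string) =>
    execSync(cmd, { encoding: "utf8" }).trim()
  );
}

// expandPrompt("Just say {{$ date }}.") substitutes the current date.
```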

&lt;p&gt;So if your team wants custom prompts, you can place team-specific prompts under &lt;code&gt;.aiu/prompts/&lt;/code&gt; and run standardized reviews.&lt;/p&gt;

&lt;h2&gt;
  
  
  About development
&lt;/h2&gt;

&lt;p&gt;The implementation required for this app wasn’t challenging. AI is so good at implementing typical CLI applications that there wasn’t much I had to do myself. What I did was mostly defining the spec and writing tests, and I found myself thinking “So this is the AI era...” over and over.&lt;/p&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;This tool just calls the coding agents provided by each vendor, but wrapping it up as a CLI makes it surprisingly comfortable.&lt;/p&gt;

&lt;p&gt;Because the tool’s functionality is simple, it’s also an application where it’s easy to let AI handle most of the implementation. Probably about 95% of the code was written by AI.&lt;/p&gt;

&lt;p&gt;It won’t dramatically improve something by itself, but it helps you move through small daily tasks a little more smoothly.&lt;/p&gt;

&lt;p&gt;If you’re interested, please refer to the &lt;a href="https://github.com/trknhr/ai-utils" rel="noopener noreferrer"&gt;GitHub page&lt;/a&gt; and install it. If you have complaints or requests, please open an Issue.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>Assessing TOON Token Savings in an MCP Server</title>
      <dc:creator>Teruo Kunihiro</dc:creator>
      <pubDate>Thu, 20 Nov 2025 13:37:09 +0000</pubDate>
      <link>https://dev.to/trknhr/assessing-toon-token-savings-in-an-mcp-server-2b3i</link>
      <guid>https://dev.to/trknhr/assessing-toon-token-savings-in-an-mcp-server-2b3i</guid>
      <description>&lt;p&gt;I have been wiring &lt;a href="https://github.com/toon-format/toon" rel="noopener noreferrer"&gt;TOON&lt;/a&gt; support with &lt;a href="https://www.npmjs.com/package/toon-token-diff" rel="noopener noreferrer"&gt;&lt;code&gt;toon-token-diff&lt;/code&gt;&lt;/a&gt; into &lt;a href="https://github.com/trknhr/toon-token-diff" rel="noopener noreferrer"&gt;this MCP server&lt;/a&gt; to understand whether converting JSON payloads to TOON meaningfully reduces prompt costs. The short answer: TOON is elegant, but in my test harness it delivered microscopic savings for real-world workloads.&lt;/p&gt;

&lt;h2&gt;
  
  
  Environment
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Project mode&lt;/strong&gt;: &lt;code&gt;toon-token-diff&lt;/code&gt; in &lt;code&gt;libraryMode&lt;/code&gt; via &lt;code&gt;npm install toon-token-diff&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Models monitored&lt;/strong&gt;: &lt;code&gt;openai&lt;/code&gt; (tiktoken GPT-5 profile) and &lt;code&gt;claude&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration strategy&lt;/strong&gt;: lightweight instrumentation that appends token stats into a JSONL ledger for later analysis
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;estimateAndLog&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;toon-token-diff/libraryMode&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// inside my MCP tool handler&lt;/span&gt;
&lt;span class="nf"&gt;estimateAndLog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;models&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;openai&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;claude&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="na"&gt;file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./token-logs.jsonl&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;label&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mcp_tool_call&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This snippet runs after the MCP tool produces a JSON response. It serializes the payload, estimates TOON vs JSON tokens, and emits a structured record to &lt;code&gt;token-logs.jsonl&lt;/code&gt;. The rest of the MCP server stays untouched—no need to change transport or business logic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Observations
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Timestamp (UTC)&lt;/th&gt;
&lt;th&gt;openai JSON&lt;/th&gt;
&lt;th&gt;openai TOON&lt;/th&gt;
&lt;th&gt;openai Δ (%)&lt;/th&gt;
&lt;th&gt;claude JSON&lt;/th&gt;
&lt;th&gt;claude TOON&lt;/th&gt;
&lt;th&gt;claude Δ (%)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;2025-11-19T14:16:54.296Z&lt;/td&gt;
&lt;td&gt;127&lt;/td&gt;
&lt;td&gt;126&lt;/td&gt;
&lt;td&gt;0.79&lt;/td&gt;
&lt;td&gt;130&lt;/td&gt;
&lt;td&gt;129&lt;/td&gt;
&lt;td&gt;0.77&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2025-11-19T14:17:15.720Z&lt;/td&gt;
&lt;td&gt;53,703&lt;/td&gt;
&lt;td&gt;53,702&lt;/td&gt;
&lt;td&gt;0.0019&lt;/td&gt;
&lt;td&gt;54,977&lt;/td&gt;
&lt;td&gt;54,976&lt;/td&gt;
&lt;td&gt;0.0018&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2025-11-19T14:17:34.988Z&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;14.29&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;14.29&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2025-11-19T14:17:39.246Z&lt;/td&gt;
&lt;td&gt;53,703&lt;/td&gt;
&lt;td&gt;53,702&lt;/td&gt;
&lt;td&gt;0.0019&lt;/td&gt;
&lt;td&gt;54,977&lt;/td&gt;
&lt;td&gt;54,976&lt;/td&gt;
&lt;td&gt;0.0018&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2025-11-19T14:17:48.333Z&lt;/td&gt;
&lt;td&gt;29&lt;/td&gt;
&lt;td&gt;29&lt;/td&gt;
&lt;td&gt;0.00&lt;/td&gt;
&lt;td&gt;28&lt;/td&gt;
&lt;td&gt;28&lt;/td&gt;
&lt;td&gt;0.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2025-11-19T14:18:13.725Z&lt;/td&gt;
&lt;td&gt;91,729&lt;/td&gt;
&lt;td&gt;91,728&lt;/td&gt;
&lt;td&gt;0.0011&lt;/td&gt;
&lt;td&gt;98,607&lt;/td&gt;
&lt;td&gt;98,606&lt;/td&gt;
&lt;td&gt;0.0010&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2025-11-19T14:21:19.174Z&lt;/td&gt;
&lt;td&gt;127&lt;/td&gt;
&lt;td&gt;126&lt;/td&gt;
&lt;td&gt;0.79&lt;/td&gt;
&lt;td&gt;130&lt;/td&gt;
&lt;td&gt;129&lt;/td&gt;
&lt;td&gt;0.77&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2025-11-19T14:21:23.370Z&lt;/td&gt;
&lt;td&gt;91,729&lt;/td&gt;
&lt;td&gt;91,728&lt;/td&gt;
&lt;td&gt;0.0011&lt;/td&gt;
&lt;td&gt;98,607&lt;/td&gt;
&lt;td&gt;98,606&lt;/td&gt;
&lt;td&gt;0.0010&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2025-11-19T14:21:30.314Z&lt;/td&gt;
&lt;td&gt;53,703&lt;/td&gt;
&lt;td&gt;53,702&lt;/td&gt;
&lt;td&gt;0.0019&lt;/td&gt;
&lt;td&gt;54,977&lt;/td&gt;
&lt;td&gt;54,976&lt;/td&gt;
&lt;td&gt;0.0018&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Nine consecutive tool runs told the same story: production payloads barely moved. Only the intentionally tiny sample showed double-digit savings, which is irrelevant for backlog-scale prompts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the Reduction Rate Is Flat
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Content dominates token volume&lt;/strong&gt; – The payload body itself accounts for nearly every token, so TOON’s structural tweaks barely register in the total.&lt;/li&gt;
&lt;/ol&gt;
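&lt;p&gt;A quick way to convince yourself of this is to strip a payload down to its structural characters and compare lengths. The sketch below is a rough character-level illustration with invented names, not a real tokenizer:&lt;/p&gt;

```typescript
// Rough illustration: what fraction of a JSON payload is "structure"
// (braces, commas, keys) versus content? Character-based, not token-based.
function structuralShare(payload: unknown): number {
  const json = JSON.stringify(payload);
  const structural = json
    .replace(/"(?:[^"\\]|\\.)*"\s*:/g, "k:") // collapse keys to a tiny marker
    .replace(/"(?:[^"\\]|\\.)*"/g, "")       // drop string values (content)
    .replace(/-?\d+(?:\.\d+)?/g, "");        // drop numeric values (content)
  return structural.length / json.length;
}

// A payload dominated by one long text field is almost all content,
// so even eliminating every structural character barely moves the total.
const contentHeavy = structuralShare({ id: 1, body: "x".repeat(10_000) });
```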

&lt;h2&gt;
  
  
  Practical Guidance
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Keep TOON handy as a normalization format, but don't promise cost savings without benchmarking your actual payloads.&lt;/li&gt;
&lt;li&gt;Instrument with the libraryMode snippet above before shipping; it gives you historical evidence of whether TOON helps.&lt;/li&gt;
&lt;li&gt;If savings are negligible, redirect effort toward higher-impact tactics: pruning unused fields, batching small tool calls, or applying semantic compression upstream.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Next Experiments
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Compare with alternative tokenizers (Gemini, Llama) to see whether non-GPT vocabularies respond differently.&lt;/li&gt;
&lt;li&gt;Add diff tooling that highlights specific fields TOON shrinks, so we can manually prune them if needed.&lt;/li&gt;
&lt;li&gt;Explore policy-driven trimming (e.g., dropping debug blobs) prior to TOON conversion.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;TOON remains a clever serialization trick, but as my MCP experiment showed, it is not an automatic token economy lever. Measure, log, and decide based on real numbers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/toon-token-diff" rel="noopener noreferrer"&gt;&lt;code&gt;toon-token-diff&lt;/code&gt; on npm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/toon-format/toon" rel="noopener noreferrer"&gt;toon-format/toon on GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/trknhr/toon-token-diff" rel="noopener noreferrer"&gt;trknhr/toon-token-diff on GitHub&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>toon</category>
    </item>
    <item>
      <title>ai-docs managing AI generated context files</title>
      <dc:creator>Teruo Kunihiro</dc:creator>
      <pubDate>Sun, 06 Jul 2025 14:50:41 +0000</pubDate>
      <link>https://dev.to/trknhr/managing-ai-generated-context-files-with-ai-docs-keep-your-main-branch-clean-1lcl</link>
      <guid>https://dev.to/trknhr/managing-ai-generated-context-files-with-ai-docs-keep-your-main-branch-clean-1lcl</guid>
      <description>&lt;h1&gt;
  
  
  Why I Built &lt;code&gt;ai-docs&lt;/code&gt;: Managing the Growing Chaos of AI Context Files
&lt;/h1&gt;

&lt;p&gt;When developing alongside AI agents, one of the first headaches that arises is how to manage the flood of context files they generate.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Here are a few specific challenges I kept facing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As your AI coding assistant evolves, you naturally want to externalize and back up its memory files.&lt;/li&gt;
&lt;li&gt;These files are not deterministic and will inevitably differ across local environments and between developers.&lt;/li&gt;
&lt;li&gt;Git merges often lead to nasty conflicts.&lt;/li&gt;
&lt;li&gt;During code review, these files just get in the way.&lt;/li&gt;
&lt;li&gt;Yet simply ignoring them with &lt;code&gt;.gitignore&lt;/code&gt; risks losing them entirely. You still want to back them up remotely.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s when I realized: maybe these files don't belong in the main branch at all. And that's how &lt;code&gt;ai-docs&lt;/code&gt; was born.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/trknhr/ai-docs" rel="noopener noreferrer"&gt;GitHub - trknhr/ai-docs&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Spark
&lt;/h2&gt;

&lt;p&gt;The idea hit me during a casual meeting. What if we isolate AI-related files on a separate Git branch and mount them as a worktree? That way, we could keep them versioned and visible, without polluting the main development flow.&lt;/p&gt;

&lt;p&gt;Two days and one impulsive coding spree later, I had a working prototype. Like any proper AI-era project, I co-built it with ChatGPT and Claude.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;I brainstormed the ideal workflow with ChatGPT.&lt;/li&gt;
&lt;li&gt;When the conversation alone didn’t give me clarity, I prototyped locally using Git worktrees.&lt;/li&gt;
&lt;li&gt;I summarized everything into a spec file and let Claude Code scaffold the CLI.&lt;/li&gt;
&lt;li&gt;Then I tested, tweaked, and patched wherever things didn’t behave as expected.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What &lt;code&gt;ai-docs&lt;/code&gt; Does
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;ai-docs&lt;/code&gt; is a CLI tool that helps you manage AI assistant context files by separating them into an isolated Git branch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates an isolated branch named &lt;code&gt;@ai-docs/{username}&lt;/code&gt;, where &lt;code&gt;{username}&lt;/code&gt; is resolved from your config file, &lt;code&gt;git user.name&lt;/code&gt;, or hostname&lt;/li&gt;
&lt;li&gt;Mounts this branch locally at &lt;code&gt;.ai-docs/&lt;/code&gt; via Git worktree&lt;/li&gt;
&lt;li&gt;Moves files like &lt;code&gt;memory-bank/&lt;/code&gt; and &lt;code&gt;CLAUDE.md&lt;/code&gt; to this branch&lt;/li&gt;
&lt;li&gt;Automatically updates &lt;code&gt;.gitignore&lt;/code&gt; in &lt;code&gt;main&lt;/code&gt; to prevent tracking those files&lt;/li&gt;
&lt;li&gt;Provides &lt;code&gt;pull&lt;/code&gt; and &lt;code&gt;push&lt;/code&gt; commands to sync changes&lt;/li&gt;
&lt;/ul&gt;
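
&lt;p&gt;Under the hood this is plain Git plumbing. Here is a minimal sketch of the worktree mechanics the tool automates (the &lt;code&gt;alice&lt;/code&gt; branch name and throwaway repo are hypothetical; the real CLI adds config resolution, safety checks, and sync logic on top):&lt;/p&gt;

```shell
# Minimal sketch of the git-worktree mechanics behind ai-docs
# (hypothetical branch name, throwaway repo for demonstration).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# 1. Create the isolated branch for AI context files
git branch @ai-docs/alice

# 2. Mount it locally at .ai-docs/ via a worktree
git worktree add --quiet .ai-docs @ai-docs/alice

# 3. Keep main clean: ignore the mount point
echo ".ai-docs/" >> .gitignore

git worktree list
```

&lt;p&gt;Because the worktree is a real checkout of a real branch, the AI context files stay versioned and pushable while never appearing in &lt;code&gt;main&lt;/code&gt;'s history.&lt;/p&gt;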

&lt;h2&gt;
  
  
  Challenges I Faced
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Claude Code and the Danger of &lt;code&gt;rm -rf&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;The initial versions made liberal use of &lt;code&gt;rm -rf&lt;/code&gt;, which ended up deleting my &lt;code&gt;.git&lt;/code&gt; folder. A brutal reminder that you should &lt;em&gt;never&lt;/em&gt; blindly run AI-generated code.&lt;/p&gt;

&lt;p&gt;I later restricted file deletions to cases where the &lt;code&gt;--force&lt;/code&gt; flag is used, and leaned more heavily on safe &lt;code&gt;git&lt;/code&gt; commands.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. GitHub Actions: Trial and (Mostly) Error
&lt;/h3&gt;

&lt;p&gt;I wanted to set up automatic releases using GoReleaser + GitHub Actions. But it was a frustrating loop of misconfigurations, outdated AI suggestions, and documentation-diving. I learned a lot, but definitely want to improve my speed here next time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Usage (macOS Recommended)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew tap trknhr/homebrew-tap
brew &lt;span class="nb"&gt;install &lt;/span&gt;ai-docs

&lt;span class="c"&gt;# First-time setup (may need to run twice to initialize config)&lt;/span&gt;
ai-docs init &lt;span class="nt"&gt;-v&lt;/span&gt;

&lt;span class="c"&gt;# Push local AI context files to remote&lt;/span&gt;
ai-docs push &lt;span class="nt"&gt;-v&lt;/span&gt;

&lt;span class="c"&gt;# Pull updates from remote&lt;/span&gt;
ai-docs pull &lt;span class="nt"&gt;-v&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Options like &lt;code&gt;--dry-run&lt;/code&gt; and &lt;code&gt;--force&lt;/code&gt; are supported and useful during testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary: A Clean Home for Your AI Files
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;ai-docs&lt;/code&gt; helps you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Keep your working branches clean&lt;/strong&gt;: AI context files live elsewhere&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access files easily&lt;/strong&gt;: via &lt;code&gt;.ai-docs/&lt;/code&gt; worktree&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sync with ease&lt;/strong&gt;: using simple &lt;code&gt;push&lt;/code&gt; and &lt;code&gt;pull&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s still a rough-around-the-edges tool, but it works well enough to use daily.&lt;/p&gt;

&lt;p&gt;If you're building with AI and want to keep things organized, give &lt;code&gt;ai-docs&lt;/code&gt; a try. Feedback on GitHub or X (Twitter) would be amazing!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/trknhr/ai-docs" rel="noopener noreferrer"&gt;GitHub - trknhr/ai-docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy vibe coding!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>coding</category>
      <category>go</category>
    </item>
    <item>
      <title>Cha Cha Chat with AI in Local</title>
      <dc:creator>Teruo Kunihiro</dc:creator>
      <pubDate>Tue, 19 Dec 2023 04:33:31 +0000</pubDate>
      <link>https://dev.to/trknhr/cha-cha-chat-with-ai-in-local-a37</link>
      <guid>https://dev.to/trknhr/cha-cha-chat-with-ai-in-local-a37</guid>
      <description>&lt;p&gt;Hello everyone. I've recently joined a generative AI team on the &lt;a href="https://nulab.com" rel="noopener noreferrer"&gt;current company&lt;/a&gt;. I don't have much experience with generative AI though, I've been experimenting with running a Large Language Model (LLM) locally to prepare for any future requests to develop AI chat app like ChatGPT. Since I'm a Japanese speaker, I look for LLMs for Japanese one in this article.&lt;/p&gt;

&lt;p&gt;Let's get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  About PC Specifications
&lt;/h2&gt;

&lt;p&gt;All experiments in this article were run on the following environment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Model&lt;/strong&gt;: MacBook Pro 14-inch, 2023&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chip&lt;/strong&gt;: Apple M2 Max&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory&lt;/strong&gt;: 64GB&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OS&lt;/strong&gt;: macOS 14.1 &lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  About Large Language Models
&lt;/h1&gt;

&lt;p&gt;There are various types of Large Language Models (LLMs), like the well-known GPT, BERT, LLaMA, etc. I won't dive into their differences or specifics given my current knowledge, but for this article I chose LLaMA, which is popular among third parties for its accuracy and commercial viability.&lt;/p&gt;

&lt;h1&gt;
  
  
  Just Want to Get It Running
&lt;/h1&gt;

&lt;p&gt;I knew that publicly available LLMs could be found on a site called Hugging Face, but I had no idea how to run them locally. My aim was to create something like ChatGPT to seed future app implementation ideas.&lt;/p&gt;

&lt;p&gt;After some research, I came across open-source (OSS) projects such as &lt;a href="https://github.com/lm-sys/FastChat/tree/main" rel="noopener noreferrer"&gt;FastChat&lt;/a&gt; and &lt;a href="https://github.com/oobabooga/text-generation-webui" rel="noopener noreferrer"&gt;Text generation web UI&lt;/a&gt;. With these, I was able to run llama2 locally and chat with it.&lt;/p&gt;

&lt;p&gt;For those who just want to try llama2, Hugging Face has a demo page, which is probably the quickest way to experience it: &lt;a href="https://huggingface.co/spaces/ysharma/Explore_llamav2_with_TGI" rel="noopener noreferrer"&gt;Hugging Face Demo for Llama2&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  About Japanese Models
&lt;/h1&gt;

&lt;p&gt;While llama2 performs well in English, it seems far from the level of ChatGPT in Japanese. The responses in Japanese often include English words or are expressed in romanized Japanese. So, I looked for Japanese models.&lt;/p&gt;

&lt;h2&gt;
  
  
  About Youri7B
&lt;/h2&gt;

&lt;p&gt;This is a model pre-trained in Japanese by rinna Co., Ltd., based on llama2. I tried running it using the 'Text generation web UI' mentioned earlier. &lt;a href="https://rinna.co.jp/news/2023/10/20231031.html" rel="noopener noreferrer"&gt;Rinna Youri-7B&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, it didn't work as expected. The model seemed to load correctly in the UI, but all responses were in English, and I couldn't figure out why.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running Python Files
&lt;/h2&gt;

&lt;p&gt;I tried running the Python scripts described on the Hugging Face Youri-7B page. It looked simpler than using third-party UIs, and I could have embedded it in an API once it worked, but between my limited Python knowledge and the script consuming about 30GB of memory, my PC crashed.&lt;/p&gt;

&lt;h1&gt;
  
  
  Discovering Ollama
&lt;/h1&gt;

&lt;p&gt;There were several reasons I couldn't get some LLMs running in my local environment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lack of Python Knowledge&lt;/li&gt;
&lt;li&gt;Many dependencies caused difficulties and frustrations &lt;/li&gt;
&lt;li&gt;Wanted to avoid dealing with runtime environments&lt;/li&gt;
&lt;li&gt;Wanted to avoid troubleshooting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Summing up these points, what I was looking for was an OSS tool with a chat UI that doesn't require specific knowledge of Python or its dependency management, with clear documentation on how to use models from Hugging Face.&lt;/p&gt;

&lt;p&gt;While drifting around the internet, I stumbled upon Ollama. Its documentation seemed minimal but sufficient for my needs. Ollama operates like Docker, with model configuration files and instructions for using models downloaded from Hugging Face. That's exactly what I wanted!&lt;/p&gt;

&lt;h2&gt;
  
  
  Trying Ollama
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Run an LLM for Japanese
&lt;/h3&gt;

&lt;p&gt;I wanted to run the Japanese model Youri, so I set up the Modelfile as suggested in the documentation, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ./models/rinna-youri-7b-chat-q6_K.gguf

TEMPLATE """[INST] {{ .Prompt }} [/INST] """
PARAMETER num_ctx 4096
PARAMETER stop "[INST]"
PARAMETER stop "[/INST]"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Additionally, I used a gguf model converted by a volunteer from this &lt;a href="https://huggingface.co/mmnga/rinna-youri-7b-chat-gguf" rel="noopener noreferrer"&gt;Hugging Face page&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running as a server
&lt;/h3&gt;

&lt;p&gt;Ollama can run a local server while the app is running, and it's totally easy. Take a look at the &lt;a href="https://github.com/jmorganca/ollama?tab=readme-ov-file#start-ollama" rel="noopener noreferrer"&gt;README.md&lt;/a&gt; to launch the server. I tried one of the user-provided UIs called &lt;a href="https://github.com/ollama-ui/ollama-ui" rel="noopener noreferrer"&gt;ollama-ui&lt;/a&gt; and asked it a question about Japanese history. But the quality of the Japanese responses was lower than the English ones.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6i229xxblkx7qqqc8cc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6i229xxblkx7qqqc8cc.png" alt="Ask the history of Japan in Japanese. AI responses short answer." width="800" height="158"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fca3dnd9h67yn8nmmvirq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fca3dnd9h67yn8nmmvirq.png" alt="Ask the history of Japan in English. AI responses with a enough brief overview of Japan's history in English" width="800" height="944"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Insights Gained While Running Ollama
&lt;/h2&gt;

&lt;p&gt;While exploring the Ollama repository, I noticed it was written in Go, which piqued my interest in how it runs LLaMA. It turns out that Ollama uses llama.cpp for execution, a project designed to run LLaMA smoothly on Mac. llama.cpp itself doesn't depend on Python; it uses C++ instead, wrapping up the complex parts and making it accessible even to those with little understanding, like myself.&lt;/p&gt;

&lt;h1&gt;
  
  
  Exploring Frontend LLM
&lt;/h1&gt;

&lt;p&gt;I had heard rumors about running LLaMA as WebAssembly (WASM) on the frontend. So, I looked into some ambitious projects like &lt;a href="https://github.com/dmarcos/llama2.c-web" rel="noopener noreferrer"&gt;llama2.c-web&lt;/a&gt; and &lt;a href="https://github.com/mlc-ai/web-llm" rel="noopener noreferrer"&gt;WebLLM&lt;/a&gt;, which run LLMs on WASM. Running LLMs on the frontend is fascinating: it allows immediate responses without network dependency, ideal for quick-response needs like voice input or text summarization. I tried both projects, and they worked impressively.&lt;br&gt;
A configuration where lightweight, rapid-response tasks are handled at the edge, while relatively heavier tasks are managed by server-based LLMs, seems to have high potential for scalability.&lt;/p&gt;

&lt;p&gt;Chat with llama2 on a web browser.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqah6prokvyu471o565m8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqah6prokvyu471o565m8.png" alt="The image depicts a chat interface where a user is asking about the capital of Japan." width="800" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check those demos out! They are fantastic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://webllm.mlc.ai/#chat-demo" rel="noopener noreferrer"&gt;https://webllm.mlc.ai/#chat-demo&lt;/a&gt;&lt;br&gt;
&lt;a href="https://diegomarcos.com/llama2.c-web" rel="noopener noreferrer"&gt;https://diegomarcos.com/llama2.c-web&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Try WebLLM
&lt;/h2&gt;

&lt;p&gt;WebLLM is one of the MLC-LLM projects that compiles LLMs for web execution. By compiling the models, it enables them to run on various device runtimes prepared by MLC-LLM. This means you can create LLMs that run in the browser's WASM runtime without depending on Python modules. For users, it's quite amazing that simply loading the model in the browser can start a chat like magic.&lt;/p&gt;

&lt;p&gt;Reference:&lt;a href="https://llm.mlc.ai/docs/get_started/project_overview.html" rel="noopener noreferrer"&gt;MLC-LLM Project Overview&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To run youri7b-chat, as described above, the model needs to be compiled first. For this, I referred to the following documentation and proceeded with the compilation:&lt;br&gt;
&lt;a href="https://llm.mlc.ai/docs/compilation/compile_models.html" rel="noopener noreferrer"&gt;Compile Models - MLC-LLM&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While going through the documentation, I realized that emscripten also needs to be installed, so I prepared that as well:&lt;br&gt;
&lt;a href="https://emscripten.org/docs/getting_started/downloads.html#installation-instructions-using-the-emsdk-recommended" rel="noopener noreferrer"&gt;Emscripten Installation Instructions&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once everything was ready and the compilation was done, I found something called simple-chat in the examples directory of webllm, which I decided to run locally:&lt;br&gt;
&lt;a href="https://github.com/mlc-ai/web-llm/tree/main/examples/simple-chat" rel="noopener noreferrer"&gt;Simple-Chat Example - WebLLM&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The compilation and web server setup went smoothly, but in the end it didn't work, and I have no idea how to fix it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fneq6pbeev3nxpz3uk8a2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fneq6pbeev3nxpz3uk8a2.png" alt="It depicts a chat interface where a user is asking about the capital of Japan.But an error happens associated with WebGPU" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Wrap-up
&lt;/h1&gt;

&lt;p&gt;This journey was solely about exploring and running OSS in my local environment; I didn't write a single line of code. It highlighted the power of the OSS community and deepened my respect for everyone developing OSS. I hope to contribute to the LLM ecosystem in some way in the future.&lt;/p&gt;

&lt;p&gt;In conclusion, while there were many challenges, it was a learning experience. M2 Macs can handle these models surprisingly well, encouraging me to keep experimenting. Goodbye for now.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Easy ruby Web server </title>
      <dc:creator>Teruo Kunihiro</dc:creator>
      <pubDate>Sat, 01 Dec 2018 13:04:21 +0000</pubDate>
      <link>https://dev.to/knhr__/easy-ruby-web-server--48al</link>
      <guid>https://dev.to/knhr__/easy-ruby-web-server--48al</guid>
      <description>

&lt;p&gt;I read &lt;code&gt;Working With Unix Processes&lt;/code&gt;. It's a very interesting book because I didn't know about Unix processes before. I've used web servers in my work without knowing how they work, so I wanted to write this article to remember what I learned.&lt;br&gt;
In this article, I try to create an easy web server using Ruby.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is a Web server
&lt;/h1&gt;

&lt;p&gt;It accepts requests from clients (users), then returns a response. The web server I'm creating here is very simple: it only handles HTTP GET requests.&lt;/p&gt;

&lt;h1&gt;
  
  
  Launch TCP server
&lt;/h1&gt;

&lt;p&gt;Ruby provides the &lt;code&gt;TCPServer&lt;/code&gt; class, which makes it easy for me to create a TCP connection.&lt;br&gt;
server.rb&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'socket'&lt;/span&gt;

&lt;span class="n"&gt;server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;TCPServer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'0.0.0.0'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5678&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;connection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;accept&lt;/span&gt;
  &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt; &lt;span class="s2"&gt;"Hello world!!"&lt;/span&gt;
  &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt; 
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Since keepalive is not supported, we can close the client connection immediately after writing the body.&lt;/p&gt;

&lt;p&gt;I can launch the server with &lt;code&gt;ruby server.rb&lt;/code&gt;; if you access it with &lt;code&gt;curl http://localhost:5678&lt;/code&gt;, it shows &lt;code&gt;Hello world!!&lt;/code&gt;&lt;br&gt;
&lt;code&gt;server.rb&lt;/code&gt; handles your request and then shuts the connection down. Now I'm ready to make an HTTP server, because HTTP sits on top of TCP; it just adds some header information.&lt;/p&gt;

&lt;h1&gt;
  
  
  Getting started on HTTP Server
&lt;/h1&gt;

&lt;p&gt;Although this communication happens at the TCP layer, I have to speak the HTTP protocol to achieve my goal.&lt;br&gt;
As I said, HTTP needs some header information, so I add some HTTP headers.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight ruby"&gt;&lt;code&gt;  &lt;span class="n"&gt;head&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"HTTP/1.1 200&lt;/span&gt;&lt;span class="se"&gt;\r\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="p"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"Date: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="no"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;httpdate&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\r\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="p"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"Content-Length: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;length&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_s&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\r\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Here is the whole code.&lt;/p&gt;

&lt;p&gt;server.rb&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'socket'&lt;/span&gt;
&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'time'&lt;/span&gt;

&lt;span class="n"&gt;server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;TCPServer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'0.0.0.0'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5678&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;connection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;accept&lt;/span&gt;

  &lt;span class="n"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Hello world!"&lt;/span&gt;

  &lt;span class="n"&gt;head&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"HTTP/1.1 200&lt;/span&gt;&lt;span class="se"&gt;\r\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="p"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"Date: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="no"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;httpdate&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\r\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="p"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"Content-Length: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;length&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_s&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\r\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 

  &lt;span class="c1"&gt;# 1&lt;/span&gt;
  &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt; &lt;span class="n"&gt;head&lt;/span&gt;

  &lt;span class="c1"&gt;# 2&lt;/span&gt;
  &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\r\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

  &lt;span class="c1"&gt;# 3&lt;/span&gt;
  &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;

  &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;
  &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt; 
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;It adds the HTTP header info&lt;/li&gt;
&lt;li&gt;It writes the separator (a CRLF) between the header and the body&lt;/li&gt;
&lt;li&gt;It adds the body (this is what's shown in the browser; normally it's HTML, JSON, XML, etc.)&lt;/li&gt;
&lt;/ol&gt;
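
&lt;p&gt;To see why that blank line matters, here is a tiny standalone sketch that builds the same kind of response as a single string. The raw bytes a browser receives are just text: a status line, headers, a blank line (CRLF), then the body.&lt;/p&gt;

```ruby
# A minimal sketch: the whole HTTP response is just one text blob.
body = "Hello world!"

response = "HTTP/1.1 200 OK\r\n" \
           "Content-Length: #{body.length}\r\n" \
           "\r\n" \
           "#{body}"

puts response
```

&lt;p&gt;Writing &lt;code&gt;head&lt;/code&gt;, &lt;code&gt;"\r\n"&lt;/code&gt;, and &lt;code&gt;body&lt;/code&gt; separately, as the server above does, produces exactly this byte stream.&lt;/p&gt;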

&lt;p&gt;Then, finally, we can see the response in the browser. It returns 200 with a "Hello world" response if you access &lt;code&gt;localhost:5678&lt;/code&gt;.&lt;br&gt;
Now I understand that HTTP is just header information added on top of a TCP server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real world
&lt;/h2&gt;

&lt;p&gt;This server only handles GET and always responds 200, which doesn't look like a real web server.&lt;br&gt;
So as the next step, I want to control the status, headers, and body as I like. First, I install &lt;code&gt;Rack&lt;/code&gt;, a famous middleware interface that connects web servers and applications (like Rails).&lt;br&gt;
&lt;a href="http://rack.github.io/"&gt;Rack&lt;/a&gt; provides just the interface. &lt;/p&gt;

&lt;p&gt;The code will be like this&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'socket'&lt;/span&gt;
&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'time'&lt;/span&gt;
&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'rack/utils'&lt;/span&gt;

&lt;span class="n"&gt;server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;TCPServer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'0.0.0.0'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5678&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# 1&lt;/span&gt;
&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Proc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="n"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Hello world!"&lt;/span&gt;
  &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'200'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s1"&gt;'Content-Type'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'text/html'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"Content-Length"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;length&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_s&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Hello world!"&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;connection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;accept&lt;/span&gt;
  &lt;span class="c1"&gt;# 2&lt;/span&gt;
  &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;({})&lt;/span&gt;

  &lt;span class="n"&gt;head&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"HTTP/1.1 200&lt;/span&gt;&lt;span class="se"&gt;\r\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="p"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"Date: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="no"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;httpdate&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\r\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="p"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"Status: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="no"&gt;Rack&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Utils&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;HTTP_STATUS_CODES&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\r\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 

  &lt;span class="c1"&gt;# 3&lt;/span&gt;
  &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;each&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
    &lt;span class="n"&gt;head&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\r\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;head&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\r\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

  &lt;span class="c1"&gt;# 3&lt;/span&gt;
  &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;each&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;part&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt; 
    &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt; &lt;span class="n"&gt;part&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;respond_to?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:close&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt; 
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;It's the setup for the interface that Rack provides&lt;/li&gt;
&lt;li&gt;It returns three values: the status, the headers, and the body&lt;/li&gt;
&lt;li&gt;It writes the headers and the body. Since both can be iterated over, &lt;code&gt;each&lt;/code&gt; is used in this code&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Reading request
&lt;/h2&gt;

&lt;p&gt;You can read the request line using &lt;code&gt;connection.gets&lt;/code&gt;.&lt;br&gt;
&lt;code&gt;connection.gets&lt;/code&gt; returns something like &lt;code&gt;GET / HTTP/1.1&lt;/code&gt;, so I can extract the request method, path, and so on from the client request.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# 1&lt;/span&gt;
&lt;span class="nb"&gt;method&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;full_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;' '&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# 2&lt;/span&gt;
&lt;span class="n"&gt;path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;full_path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'?'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And inside &lt;code&gt;Proc.new&lt;/code&gt; I can route each request by its path.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app = Proc.new do |env|
  req = Rack::Request.new(env)
  case req.path
  when "/"
    body = "Hello world!"
    [200, {'Content-Type' =&amp;gt; 'text/html', "Content-Length" =&amp;gt; body.length.to_s}, [body]]
  when /^\/name\/(.*)/
    body = "Hello, #{$1}!"
    [200, {'Content-Type' =&amp;gt; 'text/html', "Content-Length" =&amp;gt; body.length.to_s}, [body]]
  else 
    [404, {"Content-Type" =&amp;gt; "text/html"}, ["Ah!!!"]]
  end
end
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Finally, it can do routing. The request method and query parameters are just as easy to handle, as in the code below.&lt;/p&gt;

&lt;p&gt;The whole code:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'socket'&lt;/span&gt;
&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'time'&lt;/span&gt;
&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'rack'&lt;/span&gt;
&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'rack/utils'&lt;/span&gt;

&lt;span class="c1"&gt;# app = Rack::Lobster.new&lt;/span&gt;
&lt;span class="n"&gt;server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;TCPServer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'0.0.0.0'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5678&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Proc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="n"&gt;req&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Rack&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;path&lt;/span&gt;
  &lt;span class="k"&gt;when&lt;/span&gt; &lt;span class="s2"&gt;"/"&lt;/span&gt;
    &lt;span class="n"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Hello world!"&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s1"&gt;'Content-Type'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'text/html'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"Content-Length"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;length&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_s&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;
  &lt;span class="k"&gt;when&lt;/span&gt; &lt;span class="sr"&gt;/^\/name\/(.*)/&lt;/span&gt;
    &lt;span class="n"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Hello, &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="vg"&gt;$1&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;!"&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s1"&gt;'Content-Type'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'text/html'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"Content-Length"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;length&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_s&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;
  &lt;span class="k"&gt;else&lt;/span&gt; 
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;404&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"Content-Type"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"text/html"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Ah!!!"&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;connection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;accept&lt;/span&gt;
  &lt;span class="n"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;gets&lt;/span&gt;
  &lt;span class="c1"&gt;# 1&lt;/span&gt;
  &lt;span class="nb"&gt;method&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;full_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;' '&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="c1"&gt;# 2&lt;/span&gt;
  &lt;span class="n"&gt;path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;full_path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'?'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="c1"&gt;# 1&lt;/span&gt;
  &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="s1"&gt;'REQUEST_METHOD'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;method&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s1"&gt;'PATH_INFO'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;path&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;

  &lt;span class="n"&gt;head&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"HTTP/1.1 200&lt;/span&gt;&lt;span class="se"&gt;\r\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="p"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"Date: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="no"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;httpdate&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\r\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="p"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"Status: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="no"&gt;Rack&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Utils&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;HTTP_STATUS_CODES&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\r\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 

  &lt;span class="c1"&gt;# 1&lt;/span&gt;
  &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;each&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
    &lt;span class="n"&gt;head&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\r\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;head&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\r\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

  &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;each&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;part&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt; 
    &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt; &lt;span class="n"&gt;part&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;respond_to?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:close&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt; 
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Next =&amp;gt; an easy prefork webserver (if I have a freeeeeee time)&lt;/p&gt;


</description>
      <category>ruby</category>
      <category>webserver</category>
    </item>
    <item>
      <title>Continue goto in Golang</title>
      <dc:creator>Teruo Kunihiro</dc:creator>
      <pubDate>Wed, 21 Mar 2018 07:14:37 +0000</pubDate>
      <link>https://dev.to/trknhr/continue-goto-in-golang-31lk</link>
      <guid>https://dev.to/trknhr/continue-goto-in-golang-31lk</guid>
      <description>&lt;p&gt;Sometimes We need to go out from loops or don't want to execute unnecessary part in the loop.&lt;br&gt;
At that time, we use &lt;code&gt;continue&lt;/code&gt; or &lt;code&gt;break&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;break&lt;/code&gt; exits the innermost loop.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;continue&lt;/code&gt; skips the rest of the current iteration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But in nested loops, that gets complicated, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;list&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;6&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;9&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;anothers&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,}&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;list&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;found&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;another&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;anothers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;another&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;found&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
            &lt;span class="k"&gt;break&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;found&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;continue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"bingo"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://play.golang.org/p/7Q0UMjVHcF-" rel="noopener noreferrer"&gt;https://play.golang.org/p/7Q0UMjVHcF-&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yeah, of course it's a little bit ugly.&lt;/p&gt;

&lt;p&gt;So you can make it cleaner with a labeled &lt;code&gt;continue&lt;/code&gt;, which works like a &lt;code&gt;goto&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;list&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;6&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;9&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;anothers&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,}&lt;/span&gt;
&lt;span class="n"&gt;LabelLabel&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;list&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;another&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;anothers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;another&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;continue&lt;/span&gt; &lt;span class="n"&gt;LoopLabel&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"bingo"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://play.golang.org/p/AB6ASJaL7TR" rel="noopener noreferrer"&gt;https://play.golang.org/p/AB6ASJaL7TR&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;continue LoopLabel&lt;/code&gt; continues the &lt;code&gt;LoopLabel&lt;/code&gt; loop, which is the outer loop, not the inner one.&lt;/p&gt;

&lt;p&gt;This is cleaner than the previous version. So in nested loops you can use a label with &lt;code&gt;continue&lt;/code&gt;, just as you would a &lt;code&gt;goto&lt;/code&gt;.&lt;/p&gt;
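&lt;p&gt;A label also works with &lt;code&gt;break&lt;/code&gt; in the same way. Here is my own small sketch (the label name is arbitrary): &lt;code&gt;break Search&lt;/code&gt; exits both loops at once, not just the inner one.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import "fmt"

func main() {
    matrix := [][]int{{1, 2}, {3, 4}}
Search:
    for i, row := range matrix {
        for j, v := range row {
            if v == 3 {
                fmt.Printf("found at (%d, %d)\n", i, j)
                break Search // exits the outer loop, too
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;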

</description>
      <category>go</category>
    </item>
    <item>
      <title>Variadic functions of Golang</title>
      <dc:creator>Teruo Kunihiro</dc:creator>
      <pubDate>Mon, 04 Dec 2017 15:27:24 +0000</pubDate>
      <link>https://dev.to/trknhr/variadic-functions-of-golang-dlc</link>
      <guid>https://dev.to/trknhr/variadic-functions-of-golang-dlc</guid>
      <description>&lt;h1&gt;
  
  
  Three dots
&lt;/h1&gt;

&lt;p&gt;Variadic functions can be called with any number of arguments. The three dots mean the function accepts an arbitrary number of values of type T (all of the same type).&lt;/p&gt;

&lt;p&gt;Inside the function, the parameter can be used as a slice. From the caller's side, variadic functions are very flexible: it doesn't matter whether you pass one argument, many arguments, or a whole slice.&lt;br&gt;
Here is an example.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;numbers&lt;/span&gt; &lt;span class="o"&gt;...&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
    &lt;span class="n"&gt;sum&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;num&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;numbers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;sum&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;num&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;sum&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="n"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;//6&lt;/span&gt;
&lt;span class="n"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;([]&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;...&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;//6&lt;/span&gt;
&lt;span class="n"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;//1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Syntax is simple
&lt;/h1&gt;

&lt;p&gt;If you have a function that should receive any number of arguments, you are better off using them.&lt;br&gt;
Without them, there are two ways to handle this situation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func f(ids []int){
//
}
func service(id int){
    f([]int{id})
}
func service2(id []int){
    f(id)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In &lt;code&gt;service&lt;/code&gt;, a new slice has to be created just to fit the arguments of &lt;code&gt;f&lt;/code&gt;.&lt;br&gt;
It's a little bit troublesome.&lt;/p&gt;

&lt;p&gt;Let's use a variadic function instead of a slice parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func f(ids ...int){
//
}
func service(id int){
    f(id)
}
func service2(id []int){
    f(id...)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This time, to pass a slice you add three dots after its name, which spreads the slice into the variadic parameter.&lt;/p&gt;
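&lt;p&gt;The built-in &lt;code&gt;append&lt;/code&gt; works the same way. Here is a small sketch of my own showing the spread syntax with it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import "fmt"

func main() {
    a := []int{1, 2}
    b := []int{3, 4}
    // append is variadic, so a slice argument must be spread with ...
    a = append(a, b...)
    fmt.Println(a)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;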

&lt;p&gt;Variadic functions are not only syntactic sugar for slice parameters; they also help keep the code base simple and the functions easy to use.&lt;/p&gt;

</description>
      <category>go</category>
    </item>
    <item>
      <title>Summary of pointer of Golang</title>
      <dc:creator>Teruo Kunihiro</dc:creator>
      <pubDate>Mon, 27 Nov 2017 13:56:08 +0000</pubDate>
      <link>https://dev.to/trknhr/summary-of-pointer-of-golang-6bo</link>
      <guid>https://dev.to/trknhr/summary-of-pointer-of-golang-6bo</guid>
      <description>&lt;p&gt;Sometimes I confuse how to retrieve from pointer variable. So I wanna summary the pointer of Golang not to forget them.&lt;br&gt;
Because I haven't used languages which have pointer variable. &lt;/p&gt;

&lt;p&gt;So it's just a memo for me.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&amp;amp; gives the memory address of a value.&lt;/li&gt;
&lt;li&gt;* (in front of a pointer value) gives the pointer's underlying value.&lt;/li&gt;
&lt;li&gt;* (in front of a type name) declares a pointer, which stores the address of another variable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's an example.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="s"&gt;"fmt"&lt;/span&gt;

&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;H&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;

    &lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c"&gt;//0x10410020&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;//10&lt;/span&gt;

    &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="m"&gt;11&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c"&gt;//0x10410020&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;q&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="m"&gt;90&lt;/span&gt;
    &lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;H&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;   &lt;span class="c"&gt;//0x1041002c&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c"&gt;//90&lt;/span&gt;

    &lt;span class="n"&gt;q&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;   &lt;span class="c"&gt;//0x1041002c&lt;/span&gt;
    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c"&gt;//80&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
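
&lt;p&gt;One more case worth remembering (my own example): passing a pointer to a function lets the function change the caller's variable through it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import "fmt"

func increment(n *int) {
    *n = *n + 1 // dereference the pointer and change the caller's variable
}

func main() {
    x := 10
    increment(&amp;amp;x)
    fmt.Println(x)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;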



</description>
      <category>go</category>
    </item>
  </channel>
</rss>
