<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Reza Rezvani</title>
    <description>The latest articles on DEV Community by Reza Rezvani (@alireza_rezvani).</description>
    <link>https://dev.to/alireza_rezvani</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3488796%2F05f482a2-f331-40b4-8224-e60d45d898c9.jpg</url>
      <title>DEV Community: Reza Rezvani</title>
      <link>https://dev.to/alireza_rezvani</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alireza_rezvani"/>
    <language>en</language>
    <item>
      <title>7 Gears, 1 Founder - Garry Tan and Claude Code</title>
      <dc:creator>Reza Rezvani</dc:creator>
      <pubDate>Fri, 17 Apr 2026 22:17:50 +0000</pubDate>
      <link>https://dev.to/alireza_rezvani/7-gears-1-founder-garry-tan-and-claude-code-4g47</link>
      <guid>https://dev.to/alireza_rezvani/7-gears-1-founder-garry-tan-and-claude-code-4g47</guid>
      <description>&lt;p&gt;Anthropic shipped Claude Design on Friday.&lt;/p&gt;

&lt;p&gt;Every launch-day publication called it a Figma killer.&lt;/p&gt;

&lt;p&gt;After six hours inside it on launch day — with my production codebase as the input and Claude Code on the other end — I think that framing misses what actually shipped.&lt;/p&gt;

&lt;p&gt;Claude Design is not a design tool.&lt;/p&gt;

&lt;p&gt;It is the missing front-end of a four-stage loop that already existed in pieces across the Claude product surface:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Idea capture — prompt, screenshot, codebase pointer&lt;/li&gt;
&lt;li&gt;Codebase-aware design — your colors, typography, components, extracted automatically&lt;/li&gt;
&lt;li&gt;Claude Code handoff — local CLI agent or Claude Code Web&lt;/li&gt;
&lt;li&gt;Shipped product — inside the agent workflow you already use&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;None of the stages is new. What is new is that they live behind a single product URL.&lt;/p&gt;

&lt;p&gt;The only teams who can see this clearly are teams already running Claude Code.&lt;/p&gt;

&lt;p&gt;I pointed it at openLEO, my productized OpenClaw platform. The color palette and typography lifted cleanly from the codebase. The handoff bundle to my local Claude Code instance matched the design intent on structure and visual fidelity.&lt;/p&gt;

&lt;p&gt;Where it was thinner than I wanted: state specifications and animation patterns. Claude Code filled those with reasonable defaults. For now.&lt;/p&gt;

&lt;p&gt;The most useful page Anthropic shipped is not the launch announcement. It is the four-bullet "Known limitations" section in the docs. The biggest one for engineering teams: pointing Claude Design at a monorepo breaks things. Link subdirectories instead.&lt;/p&gt;

&lt;p&gt;Six hours is not a verdict. Research preview features will change. Token economics at team scale are still unknown.&lt;/p&gt;

&lt;p&gt;But the loop is real. And the teams already running Claude Code will see why it matters before anyone else.&lt;/p&gt;

&lt;p&gt;Full breakdown with five documented limitations and the handoff test: [MEDIUM URL]&lt;/p&gt;




&lt;p&gt;What does your own design-to-ship loop look like today? Figma MCP stitched to Claude Code? All-in on Claude Design from day one? Somewhere in between?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://alirezarezvani.medium.com/claude-design-article-with-sketch-style-robot-pointing-at-hand-drawn-ui-wireframeclaude-design-5365901e4fea" rel="noopener noreferrer"&gt;Hier zum vollen Unterchiedlich&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>LLM Wiki Skill: Build a Second Brain With Claude Code and Obsidian</title>
      <dc:creator>Reza Rezvani</dc:creator>
      <pubDate>Sun, 12 Apr 2026 12:10:02 +0000</pubDate>
      <link>https://dev.to/alireza_rezvani/llm-wiki-skill-build-a-second-brain-with-claude-code-and-obsidian-2ljg</link>
      <guid>https://dev.to/alireza_rezvani/llm-wiki-skill-build-a-second-brain-with-claude-code-and-obsidian-2ljg</guid>
      <description>&lt;p&gt;Andrej Karpathy published an LLM Wiki gist last week. 5,000+ stars. Nearly 3,000 forks. The idea: instead of retrieving documents every time you ask a question, have an LLM compile and maintain a persistent knowledge base.&lt;br&gt;
I took the pattern and built it as a reusable Claude Code skill.&lt;br&gt;
Four commands:&lt;br&gt;
→ /wiki-init to bootstrap&lt;br&gt;
→ /wiki-ingest to process sources&lt;br&gt;
→ /wiki-query to synthesize answers&lt;br&gt;
→ /wiki-lint to health-check&lt;br&gt;
Two use cases where I have seen it work:&lt;/p&gt;

&lt;p&gt;CTO Decision Wiki — architecture decisions, meeting notes, and post-mortems compiled into a queryable knowledge base. No more reconstructing context from Slack threads.&lt;br&gt;
Content Research Wiki — every source for every article accumulates. Cross-references build automatically. Contradictions get flagged.&lt;/p&gt;

&lt;p&gt;This is the third Karpathy release I have turned into a Claude Code skill — after autoresearch (agents optimize) and AgentHub (agents collaborate). LLM Wiki completes the trilogy: agents remember.&lt;br&gt;
Full skill architecture, page templates, and honest limitations in the article.&lt;/p&gt;

&lt;p&gt;Read the &lt;a href="https://medium.com/@alirezarezvani/llm-wiki-skill-build-a-second-brain-with-claude-code-and-obsidian-2282752758c1" rel="noopener noreferrer"&gt;Full Article on Medium&lt;/a&gt; &lt;/p&gt;

</description>
      <category>ai</category>
      <category>aiops</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Project Glasswing &amp; Claude Mythos: What CTOs Shipping Claude Should Read</title>
      <dc:creator>Reza Rezvani</dc:creator>
      <pubDate>Thu, 09 Apr 2026 07:39:18 +0000</pubDate>
      <link>https://dev.to/alireza_rezvani/project-glasswing-claude-mythos-what-ctos-shipping-claude-should-read-245d</link>
      <guid>https://dev.to/alireza_rezvani/project-glasswing-claude-mythos-what-ctos-shipping-claude-should-read-245d</guid>
      <description>&lt;p&gt;Anthropic announced Project Glasswing this morning. Twelve launch partners, thousands of autonomously discovered zero-days, and a frontier model Anthropic is refusing to ship.&lt;/p&gt;

&lt;p&gt;I read the announcement and the 132-page Claude Mythos Preview system card side by side, and I think every piece of coverage I found this morning is missing the three signals that actually matter if you already ship software with Claude in production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyupaj28cyqwnesb2wo4l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyupaj28cyqwnesb2wo4l.png" alt=" " width="800" height="993"&gt;&lt;/a&gt;&lt;br&gt;
This week's piece is a same-day reading of both documents from inside a seven-person production Claude team. No press-release rewrite. No vendor marketing. Just the three signals the coverage is missing and the open questions I am sitting with at the end of it.&lt;/p&gt;

&lt;p&gt;Two thousand words, no paywall, written in the time it would take you to read the first five press-release summaries.&lt;/p&gt;

&lt;p&gt;Read the Full article on Medium: &lt;a href="https://medium.com/@alirezarezvani/project-glasswing-claude-mythos-what-ctos-shipping-claude-should-read-779ec7f402bc" rel="noopener noreferrer"&gt;Project Glasswing &amp;amp; Claude Mythos&lt;/a&gt; &lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>claude</category>
      <category>mythos</category>
    </item>
    <item>
      <title>AI Agents like OpenClaw Are Entering the Enterprise With Root Access and Junior-Level Judgment</title>
      <dc:creator>Reza Rezvani</dc:creator>
      <pubDate>Wed, 25 Mar 2026 03:09:31 +0000</pubDate>
      <link>https://dev.to/alireza_rezvani/ai-agents-like-openclaw-are-entering-the-enterprise-with-root-access-and-junior-level-judgment-10m0</link>
      <guid>https://dev.to/alireza_rezvani/ai-agents-like-openclaw-are-entering-the-enterprise-with-root-access-and-junior-level-judgment-10m0</guid>
      <description>&lt;p&gt;Enterprise AI agents are getting root access with junior-level judgment.&lt;/p&gt;

&lt;p&gt;That is not a metaphor. It is what I see running OpenClaw &lt;br&gt;
in production every day.&lt;/p&gt;

&lt;p&gt;The Agents of Chaos study (38 researchers, 2 weeks, 6 &lt;br&gt;
autonomous agents) documented what happens when agents get &lt;br&gt;
real tools:&lt;/p&gt;

&lt;p&gt;→ One deleted an entire email server to "protect" a secret&lt;br&gt;
→ Several reported "success" while the system state said otherwise&lt;br&gt;
→ None could reliably tell the difference between their owner &lt;br&gt;
  and someone who just asked persuasively enough&lt;/p&gt;

&lt;p&gt;The governance framework that survived in my deployment:&lt;/p&gt;

&lt;p&gt;Access — minimum surface area, always&lt;br&gt;
Authority — separate "can suggest" from "can execute"&lt;br&gt;
Audit — human-readable traces, not just raw logs&lt;br&gt;
Abort — kill it fast, not after a committee meeting&lt;/p&gt;
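&lt;p&gt;The four boundaries above can be sketched as a minimal policy gate. This is an illustrative sketch only — the class and method names here are hypothetical, not OpenClaw APIs:&lt;/p&gt;

```python
# Minimal sketch of the Access / Authority / Audit / Abort gate.
# All names here are hypothetical illustrations, not OpenClaw APIs.
import time

class AgentGate:
    def __init__(self, allowed_tools, can_execute=False):
        self.allowed_tools = set(allowed_tools)   # Access: minimum surface area
        self.can_execute = can_execute            # Authority: suggest vs. execute
        self.audit_log = []                       # Audit: human-readable traces
        self.killed = False                       # Abort: immediate kill switch

    def request(self, tool, action):
        entry = {"ts": time.time(), "tool": tool, "action": action}
        if self.killed:
            entry["decision"] = "denied: agent aborted"
        elif tool not in self.allowed_tools:
            entry["decision"] = f"denied: {tool} outside access surface"
        elif not self.can_execute:
            entry["decision"] = "suggest-only: queued for human approval"
        else:
            entry["decision"] = "executed"
        self.audit_log.append(entry)
        return entry["decision"]

    def abort(self):
        self.killed = True

# A suggest-only agent with one tool: every call is logged, nothing executes.
gate = AgentGate(allowed_tools={"read_email"}, can_execute=False)
print(gate.request("read_email", "triage inbox"))
print(gate.request("delete_server", "cleanup"))
```

&lt;p&gt;The point of the sketch is that the audit trail is written at the decision point, not reconstructed from raw logs afterward.&lt;/p&gt;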

&lt;p&gt;The durable moat in this space is not intelligence.&lt;br&gt;
It is trustworthy execution.&lt;/p&gt;

&lt;p&gt;Full analysis with production examples: &lt;a href="https://medium.com/@alirezarezvani/ai-agents-like-openclaw-are-entering-the-enterprise-with-root-access-and-junior-level-judgment-0562837284df" rel="noopener noreferrer"&gt;On Medium&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What governance boundary do you find hardest to enforce &lt;br&gt;
with AI agents?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aiops</category>
      <category>cybersecurity</category>
      <category>devops</category>
    </item>
    <item>
      <title>Karpathy's agent-native infrastructure + working Python agent template</title>
      <dc:creator>Reza Rezvani</dc:creator>
      <pubDate>Tue, 17 Mar 2026 07:48:47 +0000</pubDate>
      <link>https://dev.to/alireza_rezvani/karpathys-agent-native-infrastructure-working-python-agent-template-2o9d</link>
      <guid>https://dev.to/alireza_rezvani/karpathys-agent-native-infrastructure-working-python-agent-template-2o9d</guid>
      <description>&lt;h2&gt;
  
  
  How To Set Up an Agent-Native Hub
&lt;/h2&gt;

&lt;p&gt;Karpathy open-sourced AgentHub last week. Then the repo went private.&lt;/p&gt;

&lt;p&gt;I forked it before it disappeared. Here is the practical guide &lt;br&gt;
nobody else has written.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AgentHub is not another AI tool.&lt;/strong&gt; It is infrastructure — a bare Git &lt;br&gt;
repo + message board designed for swarms of AI agents collaborating &lt;br&gt;
on the same codebase.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;No branches. No PRs. No merges. Just a sprawling DAG of commits &lt;br&gt;
going in every direction.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;What makes it different from GitHub:&lt;/strong&gt;&lt;br&gt;
→ Agents push git bundles (not PRs that wait for review)&lt;br&gt;
→ A DAG of experiments replaces linear branch history&lt;br&gt;
→ A message board replaces code review comments&lt;br&gt;
→ Iteration speed: seconds, not hours&lt;/p&gt;

&lt;p&gt;I have been running multi-agent systems through OpenClaw for months. &lt;br&gt;
AgentHub fills the missing layer — the shared codebase where coding &lt;br&gt;
agents collaborate without human checkpoints.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The article includes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complete setup from my fork (since original is private)&lt;/li&gt;
&lt;li&gt;Working Python agent template (original — does not exist elsewhere)&lt;/li&gt;
&lt;li&gt;Use cases beyond ML research&lt;/li&gt;
&lt;li&gt;Honest limitations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Full guide:&lt;/strong&gt; &lt;a href="https://alirezarezvani.medium.com/karpathys-agenthub-a-practical-guide-to-building-your-first-ai-agent-swarm-13ed56a2007b" rel="noopener noreferrer"&gt;Karpathy's AgentHub - How To Setup Guide&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Fork:&lt;/strong&gt; &lt;a href="https://github.com/alirezarezvani/agenthub"&gt;github.com/alirezarezvani/agenthub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What would you build on agent-native infrastructure?&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>python</category>
      <category>go</category>
    </item>
    <item>
      <title>I Turned Karpathy's Autoresearch Into a Skill That Optimizes Anything — Here Is the Architecture</title>
      <dc:creator>Reza Rezvani</dc:creator>
      <pubDate>Mon, 16 Mar 2026 13:22:41 +0000</pubDate>
      <link>https://dev.to/alireza_rezvani/i-turned-karpathys-autoresearch-into-a-skill-that-optimizes-anything-here-is-the-architecture-57j8</link>
      <guid>https://dev.to/alireza_rezvani/i-turned-karpathys-autoresearch-into-a-skill-that-optimizes-anything-here-is-the-architecture-57j8</guid>
      <description>&lt;p&gt;Karpathy released autoresearch last week. 31,000 stars. &lt;br&gt;
100 ML experiments overnight on one GPU.&lt;/p&gt;

&lt;p&gt;Everyone wrote about the ML training loop.&lt;br&gt;
I saw something different: a pattern.&lt;/p&gt;

&lt;p&gt;One file. One metric. One loop. Modify → Evaluate → Keep or Discard → Repeat.&lt;/p&gt;

&lt;p&gt;That pattern has nothing to do with machine learning.&lt;/p&gt;

&lt;p&gt;So I built a skill that applies it to:&lt;br&gt;
→ API response time (benchmark_speed evaluator)&lt;br&gt;
→ Bundle size (benchmark_size evaluator)&lt;br&gt;
→ Headline click-through (LLM judge evaluator)&lt;br&gt;
→ System prompt quality (LLM judge evaluator)&lt;br&gt;
→ Test pass rate, build speed, memory usage&lt;/p&gt;
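&lt;p&gt;The loop itself fits in a few lines. A sketch, assuming &lt;code&gt;mutate&lt;/code&gt; proposes a candidate and &lt;code&gt;evaluate&lt;/code&gt; returns a single scalar metric — both are hypothetical stand-ins, not the skill's actual interface:&lt;/p&gt;

```python
import random

def optimize(candidate, mutate, evaluate, iterations=100):
    """Modify -> Evaluate -> Keep or Discard -> Repeat.

    `mutate` and `evaluate` are caller-supplied stand-ins: any
    evaluator works as long as it returns one comparable number
    (lower is better here, as with a latency benchmark).
    """
    best, best_score = candidate, evaluate(candidate)
    for _ in range(iterations):
        trial = mutate(best)          # Modify
        score = evaluate(trial)       # Evaluate
        if score < best_score:        # Keep or Discard
            best, best_score = trial, score
    return best, best_score

# Toy usage: "optimize" a number toward zero.
random.seed(0)
best, score = optimize(
    10.0,
    mutate=lambda x: x + random.uniform(-1, 1),
    evaluate=abs,
)
print(best, score)
```

&lt;p&gt;Swap the evaluator and the same loop optimizes response time, bundle size, or an LLM-judged headline — which is the whole claim.&lt;/p&gt;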

&lt;p&gt;Works across 11 tools: Claude Code, Codex, Gemini CLI, &lt;br&gt;
Cursor, Windsurf, OpenClaw, and more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/@alirezarezvani/i-turned-karpathys-autoresearch-into-a-agent-skill-for-claude-code-that-optimizes-anything-here-97de83f2b7f0" rel="noopener noreferrer"&gt;The Full Medium Article&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The hardest problem: evaluating things that are not numbers.&lt;br&gt;
Headlines do not come with a val_bpb metric.&lt;/p&gt;

&lt;p&gt;Solution: LLM judges using the agent's own subscription.&lt;br&gt;
Critical constraint: the agent cannot modify its own evaluator.&lt;br&gt;
(The alignment problem in miniature.)&lt;/p&gt;

&lt;p&gt;What I have not done yet: run 100 experiments overnight.&lt;br&gt;
The skill shipped this week. The architecture is solid.&lt;br&gt;
The validation is ahead of me.&lt;/p&gt;

&lt;p&gt;Full architecture + honest limitations:&lt;br&gt;
&lt;a href="https://github.com/alirezarezvani/claude-skills/tree/main/engineering/autoresearch-agent" rel="noopener noreferrer"&gt;On Github&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What manual optimization loop are you running that should be automated?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aiops</category>
      <category>agents</category>
      <category>openai</category>
    </item>
    <item>
      <title>AI Agent Skills - What Building 170 Skills Across 9 Domains Taught Me About Portability</title>
      <dc:creator>Reza Rezvani</dc:creator>
      <pubDate>Sun, 15 Mar 2026 20:38:35 +0000</pubDate>
      <link>https://dev.to/alireza_rezvani/ai-agent-skills-what-building-170-skills-across-9-domains-teached-me-about-portability-22m4</link>
      <guid>https://dev.to/alireza_rezvani/ai-agent-skills-what-building-170-skills-across-9-domains-teached-me-about-portability-22m4</guid>
      <description>&lt;p&gt;I built 170 AI agent skills across 9 domains over three months.&lt;/p&gt;

&lt;p&gt;Not because I planned to. Because my team kept needing the same &lt;br&gt;
patterns in different tools.&lt;/p&gt;

&lt;p&gt;The biggest lesson was not about skills. It was about portability.&lt;/p&gt;

&lt;p&gt;The SKILL.md open standard exists. Adoption is real — Claude Code, &lt;br&gt;
Codex CLI, Gemini CLI, Cursor, and others all support it.&lt;/p&gt;

&lt;p&gt;But "compatible" means different things to different tools:&lt;br&gt;
→ Auto-triggering works in Claude Code, barely exists elsewhere&lt;br&gt;
→ Progressive disclosure loads correctly in some tools, not others&lt;br&gt;
→ Token budgets vary wildly — install too many skills and some silently disappear&lt;/p&gt;

&lt;p&gt;The engineering decision that paid off most: every Python tool &lt;br&gt;
uses only the standard library. No pip install. No dependencies. &lt;br&gt;
It runs on any machine with Python 3.8+.&lt;/p&gt;
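&lt;p&gt;As a concrete illustration of the stdlib-only constraint — the function below is a hypothetical example, not a tool from the repository — a helper that might otherwise pull in third-party packages can stay on the standard library:&lt;/p&gt;

```python
# Hypothetical example of the stdlib-only rule: no pip installs,
# runs on any Python 3.8+ interpreter.
import json
import urllib.parse

def build_report(raw: str) -> str:
    """Parse a JSON payload and emit a tiny summary -- the kind of
    helper a skill tool needs, done without third-party packages."""
    data = json.loads(raw)
    url = urllib.parse.urlparse(data["link"])
    return f'{data["title"]} ({url.netloc})'

print(build_report('{"title": "Skill docs", "link": "https://example.com/docs"}'))
# → Skill docs (example.com)
```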

&lt;p&gt;The decision cost: some tools are more fragile than their &lt;br&gt;
library-dependent alternatives. Honest trade-off.&lt;/p&gt;

&lt;p&gt;Full practical account — architecture lessons, portability gaps, &lt;br&gt;
and what I would do differently:&lt;br&gt;
&lt;a href="https://medium.com/@alirezarezvani/ai-agent-skills-at-scale-what-building-170-skills-across-9-domains-taught-me-about-portability-aa9bab4700cb" rel="noopener noreferrer"&gt;Read here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Repository: &lt;a href="https://github.com/alirezarezvani/claude-skills"&gt;github.com/alirezarezvani/claude-skills&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>claudecode</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Claude Code /btw: The Useful Side Question That Changed How I Use Context</title>
      <dc:creator>Reza Rezvani</dc:creator>
      <pubDate>Wed, 11 Mar 2026 10:48:07 +0000</pubDate>
      <link>https://dev.to/alireza_rezvani/claude-code-btw-the-usefull-side-question-that-changed-how-i-use-context-3h9b</link>
      <guid>https://dev.to/alireza_rezvani/claude-code-btw-the-usefull-side-question-that-changed-how-i-use-context-3h9b</guid>
      <description>&lt;p&gt;What Claude Code /btw Actually Does&lt;br&gt;
The /btw command in Claude Code lets you ask a side question without adding anything to the conversation history. You type /btw followed by your question, get an answer in a dismissible overlay, and the main conversation continues as if nothing happened.&lt;/p&gt;

&lt;p&gt;That sounds simple. It is. But simple is not obvious, and the implications only clicked after I started using it daily.&lt;/p&gt;

&lt;p&gt;Here is what makes /btw different from just asking a normal question:&lt;br&gt;
The question and answer are ephemeral. They appear in an overlay. Press Space, Enter, or Escape — gone. Nothing enters the conversation history. Your context window stays clean.&lt;/p&gt;

&lt;p&gt;It has full visibility into the current conversation. Everything Claude has already read, every file it analyzed, every decision it made — /btw can reference all of it. It just cannot reach for anything new.&lt;/p&gt;

&lt;p&gt;It works while Claude is processing. Mid-generation, mid-tool-call, mid-file-read — you can fire a /btw and get an answer without interrupting the main task.&lt;/p&gt;

&lt;p&gt;And it has no tool access. This is the critical constraint. Claude cannot read files, run commands, or search when answering a /btw. It answers strictly from what is already in the session context.&lt;/p&gt;

&lt;p&gt;The official documentation describes it perfectly: /btw is the inverse of a subagent. A subagent has full tools but starts with an empty context. /btw sees your full conversation but has no tools. Use /btw to ask about what Claude already knows. Use a subagent to go find out something new.&lt;/p&gt;

&lt;p&gt;Five Scenarios Where Claude Code /btw Earns Its Keep&lt;br&gt;
I tracked my /btw usage for two weeks across three different projects. These five patterns covered about 90% of my use cases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/@alirezarezvani/claude-code-btw-the-usefull-side-question-that-changed-how-i-use-context-d30ddea4aa2d?sk=b697522758753de155be220a7e5882dd" rel="noopener noreferrer"&gt;READ THE FULL ARTICLE ON MEDIUM&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claudecode</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Claude Code just learned to listen — native /voice mode is here</title>
      <dc:creator>Reza Rezvani</dc:creator>
      <pubDate>Tue, 03 Mar 2026 10:51:11 +0000</pubDate>
      <link>https://dev.to/alireza_rezvani/claude-code-just-learned-to-listen-native-voice-mode-is-here-g7j</link>
      <guid>https://dev.to/alireza_rezvani/claude-code-just-learned-to-listen-native-voice-mode-is-here-g7j</guid>
      <description>&lt;p&gt;Anthropic shipped a built-in voice mode for Claude Code today. Type /voice, hold spacebar, talk. Free transcription. No MCP plugins, no API keys. I broke down what works, what does not, and how it compares to the community VoiceMode MCP in my latest article on Medium.&lt;/p&gt;

&lt;p&gt;I compared it against the community VoiceMode MCP that has been the standard approach for months. The native option wins for 80% of use cases. The MCP still wins for offline/privacy-sensitive environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/@alirezarezvani/the-new-claude-codes-auto-memory-feature-just-changed-how-my-team-works-here-is-the-setup-i-5126174b35dc" rel="noopener noreferrer"&gt;Read the full breakdown&lt;/a&gt;&lt;/p&gt;

</description>
      <category>claude</category>
      <category>claudecode</category>
      <category>webdev</category>
      <category>ai</category>
    </item>
    <item>
      <title>I Combined Claude Code And OpenClaw: Wiring Agentic Coding and Autonomous AI Assistance (A Self-Experience)</title>
      <dc:creator>Reza Rezvani</dc:creator>
      <pubDate>Fri, 20 Feb 2026 13:03:04 +0000</pubDate>
      <link>https://dev.to/alireza_rezvani/i-combined-claude-code-and-openclaw-wiring-agentic-coding-and-autonomous-ai-assistance-a-2pd0</link>
      <guid>https://dev.to/alireza_rezvani/i-combined-claude-code-and-openclaw-wiring-agentic-coding-and-autonomous-ai-assistance-a-2pd0</guid>
      <description>&lt;h2&gt;
  
  
  The 2026 agentic shift nobody prepared for
&lt;/h2&gt;

&lt;p&gt;The difference between an agent that writes code and an agent that deploys code is the difference between a sharp knife and a loaded gun. Both are tools. One requires significantly more respect.&lt;/p&gt;

&lt;p&gt;2026 isn’t the year AI learned to code. That happened in 2024. This is the year AI learned to operate.&lt;/p&gt;

&lt;p&gt;Peter Steinberger — the creator of OpenClaw, formerly Moltbot — put it bluntly at the first ClawCon event in San Francisco this February, where 700 developers showed up (compared to 20 at the inaugural meetup just weeks earlier): it’s about building systems that do things on your computer. Not chat. Not suggest. Do.&lt;/p&gt;

&lt;p&gt;Claude Code handles your codebase — reading files, writing code, running tests, managing git. It lives in your terminal. OpenClaw handles everything else — email triage, calendar management, messaging, browser automation, smart home control. It lives in your messaging apps, running as a persistent daemon with long-term memory.&lt;/p&gt;

&lt;p&gt;They are complementary, not competing. One is a specialist. The other is a generalist. And when you wire them together — triggering Claude Code tasks from a Telegram message, getting progress updates in WhatsApp — you get something that feels genuinely different from anything we had last year.&lt;/p&gt;

&lt;p&gt;But here is the thing: the gap between “this is incredible” and “this just broke production” is exactly one misconfigured permission.&lt;/p&gt;

&lt;p&gt;Read the full article on my Medium Blog: &lt;a href="https://medium.com/@alirezarezvani/i-combined-claude-code-and-openclaw-wiring-agentic-coding-and-autonomous-ai-assistance-a-d558458a1a00" rel="noopener noreferrer"&gt;Combining Claude &amp;amp; Open Claw to Build an agentic ai-driven assistant&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
    </item>
    <item>
      <title>4 Claude Code Subagent Mistakes That Kill Your Workflow (And The Fixes)</title>
      <dc:creator>Reza Rezvani</dc:creator>
      <pubDate>Mon, 02 Feb 2026 12:17:57 +0000</pubDate>
      <link>https://dev.to/alireza_rezvani/4-claude-code-subagent-mistakes-that-kill-your-workflow-and-the-fixes-3n72</link>
      <guid>https://dev.to/alireza_rezvani/4-claude-code-subagent-mistakes-that-kill-your-workflow-and-the-fixes-3n72</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mistake&lt;/th&gt;
&lt;th&gt;Symptom&lt;/th&gt;
&lt;th&gt;Fix&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;All tools → all agents&lt;/td&gt;
&lt;td&gt;Token waste, slow responses&lt;/td&gt;
&lt;td&gt;Explicit allowlists per agent role&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;No &lt;code&gt;context: fork&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Main conversation polluted&lt;/td&gt;
&lt;td&gt;Isolated execution with forked context&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vague descriptions&lt;/td&gt;
&lt;td&gt;50% activation rate&lt;/td&gt;
&lt;td&gt;Action keywords + CLAUDE.md rules&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Missing activation hooks&lt;/td&gt;
&lt;td&gt;Random agent selection&lt;/td&gt;
&lt;td&gt;PreToolUse hooks for forced evaluation&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  The Setup Problem Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;You've created your first Claude Code subagent. It has a clever name, a detailed system prompt, and you're excited to see it work.&lt;/p&gt;

&lt;p&gt;Then nothing happens. Or worse — the wrong agent activates. Or your main conversation fills with garbage output from a background task.&lt;/p&gt;

&lt;p&gt;I've set up 40+ subagents across multiple projects. Here's what actually breaks and how to fix it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Mistake 1: Allowing All Tools to All Agents
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The problem&lt;/strong&gt;: Your code-reviewer agent has access to &lt;code&gt;Bash&lt;/code&gt;, &lt;code&gt;Write&lt;/code&gt;, and &lt;code&gt;WebFetch&lt;/code&gt;. It doesn't need any of them. Now it's burning tokens on capabilities it shouldn't use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix&lt;/strong&gt;: Explicit tool allowlists in frontmatter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;code-reviewer&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Reviews code for bugs, security issues, and best practices&lt;/span&gt;
&lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sonnet&lt;/span&gt;
&lt;span class="na"&gt;tools&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Read&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Glob&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Grep&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Task&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The catch&lt;/strong&gt;: You need to know which tools each agent actually needs. Start restrictive, add as needed.&lt;/p&gt;




&lt;h2&gt;
  
  
  Mistake 2: Skipping &lt;code&gt;context: fork&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The problem&lt;/strong&gt;: Your research agent dumps 3,000 tokens of analysis into your main conversation. Now you've lost 15% of your context window to output you don't need inline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix&lt;/strong&gt;: Isolate agent execution.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;research-agent&lt;/span&gt;
&lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fork&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With &lt;code&gt;context: fork&lt;/code&gt;, the subagent runs in isolation. Results come back summarized, not dumped raw.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The catch&lt;/strong&gt;: Forked context is one-way. You can't continue the thread. Design for discrete tasks.&lt;/p&gt;




&lt;h2&gt;
  
  
  Mistake 3: Vague Descriptions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The problem&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Helps with code stuff&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude doesn't know when to invoke this. Your activation rate tanks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix&lt;/strong&gt;: Action-oriented keywords that match how you'll actually prompt.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
  &lt;span class="s"&gt;Triggers on: review code, check for bugs, security audit, code quality&lt;/span&gt;
  &lt;span class="s"&gt;Action: Analyzes code for bugs, security vulnerabilities, and style issues&lt;/span&gt;
  &lt;span class="s"&gt;Output: Structured report with severity levels and fix suggestions&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then reinforce in CLAUDE.md:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Subagent Routing&lt;/span&gt;

When user mentions "review", "audit", or "check code" → invoke @code-reviewer
When user mentions "research", "find out", or "investigate" → invoke @research-agent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Mistake 4: No Activation Hooks
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The problem&lt;/strong&gt;: Even with good descriptions, Claude sometimes picks the wrong agent or skips agents entirely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix&lt;/strong&gt;: PreToolUse hooks that force evaluation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# In .claude/hooks.yaml&lt;/span&gt;
&lt;span class="na"&gt;hooks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;PreToolUse&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;pattern&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Task"&lt;/span&gt;
      &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;echo "Evaluating subagent selection..."&lt;/span&gt;
        &lt;span class="s"&gt;# Log which agent was selected and why&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates an audit trail: you can check the logs afterward to verify that the right agent was actually selected.&lt;/p&gt;
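&lt;p&gt;Once selections are being logged, a rough activation-rate check is a one-liner. A sketch, assuming the hook script appends one selected-agent name per Task call to a log file (the path and log format here are hypothetical, not part of Claude Code itself):&lt;/p&gt;

```shell
# Hypothetical log written by the PreToolUse hook: one agent name per Task call
printf 'code-reviewer\nresearch-agent\ncode-reviewer\n' > /tmp/agent-selections.log

# Selection counts per agent, most-used first
sort /tmp/agent-selections.log | uniq -c | sort -rn
```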




&lt;h2&gt;
  
  
  Production Pattern: Sequential Pipeline
&lt;/h2&gt;

&lt;p&gt;Here's how PubNub structures their agent workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm-spec → architect-review → implementer → tester
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each agent has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Explicit tool permissions (pm-spec: Read only, implementer: Read + Write + Bash)&lt;/li&gt;
&lt;li&gt;Forked context (no pollution between stages)&lt;/li&gt;
&lt;li&gt;Handoff hooks (output of one triggers input of next)&lt;/li&gt;
&lt;/ul&gt;
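&lt;p&gt;That permission split can be sketched as two agent definition files. A minimal sketch using the same frontmatter fields as the code-reviewer example later in this post; the file paths and descriptions are illustrative, not PubNub's actual configs:&lt;/p&gt;

```yaml
# .claude/agents/pm-spec.md, read-only first stage (frontmatter shown)
---
name: pm-spec
description: Turns a feature request into a written spec. Never edits code.
context: fork
tools:
  - Read
  - Glob
  - Grep
---

# .claude/agents/implementer.md, write-capable stage (frontmatter shown)
---
name: implementer
description: Implements the spec approved by architect-review.
context: fork
tools:
  - Read
  - Write
  - Bash
---
```

&lt;p&gt;The point of the split: a stage that only needs to read should not be able to write, so a runaway pm-spec run can never touch your working tree.&lt;/p&gt;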




&lt;h2&gt;
  
  
  Production Pattern: Parallel Specialists
&lt;/h2&gt;

&lt;p&gt;Zach Wills runs &lt;code&gt;/add-linear-ticket&lt;/code&gt; with three agents in parallel:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PM Agent&lt;/strong&gt;: Scopes requirements&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UX-Designer Agent&lt;/strong&gt;: Creates interface specs
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Software-Engineer Agent&lt;/strong&gt;: Estimates complexity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each agent gets a dedicated 200k-token context window. Results merge at the end.&lt;/p&gt;

&lt;p&gt;Key insight: parallel execution only works when the agents don't depend on each other's output.&lt;/p&gt;
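&lt;p&gt;As a hedged sketch, a command like that could live in a custom slash command file such as &lt;code&gt;.claude/commands/add-linear-ticket.md&lt;/code&gt;. The agent names below are hypothetical; &lt;code&gt;$ARGUMENTS&lt;/code&gt; is Claude Code's placeholder for whatever text follows the command:&lt;/p&gt;

```markdown
Launch these three subagents in parallel on $ARGUMENTS, then
merge their outputs into a single Linear ticket draft:

1. @pm-agent: scope the requirements
2. @ux-designer-agent: draft the interface spec
3. @software-engineer-agent: estimate implementation complexity

No agent may wait on another's output; each works only from
the original request.
```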




&lt;h2&gt;
  
  
  Quick Setup Checklist
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Before Creating Any Subagent&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; [ ] Tool list: Only what this agent actually needs
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Context strategy: Fork for background tasks, inline for interactive
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Description: Action keywords that match your prompting style
&lt;span class="p"&gt;-&lt;/span&gt; [ ] CLAUDE.md routing: Explicit rules for when to invoke
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Activation hook: Logging to verify correct selection
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Complete Example: Code Reviewer
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;code-reviewer&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
  &lt;span class="s"&gt;Triggers on: review code, check for bugs, security audit, PR review&lt;/span&gt;
  &lt;span class="s"&gt;Action: Analyzes code for bugs, security issues, and style violations&lt;/span&gt;
  &lt;span class="s"&gt;Output: Structured report with severity and fix suggestions&lt;/span&gt;
&lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sonnet&lt;/span&gt;
&lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fork&lt;/span&gt;
&lt;span class="na"&gt;tools&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Read&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Glob&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Grep&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;

&lt;span class="na"&gt;You are a senior code reviewer. When given code to review&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;span class="s"&gt;1. Check for bugs and logic errors&lt;/span&gt;
&lt;span class="s"&gt;2. Identify security vulnerabilities&lt;/span&gt;
&lt;span class="s"&gt;3. Flag style inconsistencies&lt;/span&gt;
&lt;span class="s"&gt;4. Suggest specific fixes&lt;/span&gt;

&lt;span class="na"&gt;Output format&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="c1"&gt;## Summary&lt;/span&gt;
&lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;One paragraph overview&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;## Issues Found&lt;/span&gt;
&lt;span class="pi"&gt;|&lt;/span&gt; &lt;span class="err"&gt;Severity&lt;/span&gt; &lt;span class="err"&gt;|&lt;/span&gt; &lt;span class="err"&gt;Location&lt;/span&gt; &lt;span class="err"&gt;|&lt;/span&gt; &lt;span class="err"&gt;Issue&lt;/span&gt; &lt;span class="err"&gt;|&lt;/span&gt; &lt;span class="err"&gt;Fix&lt;/span&gt; &lt;span class="err"&gt;|&lt;/span&gt;
&lt;span class="err"&gt;|&lt;/span&gt;&lt;span class="s"&gt;----------|----------|-------|-----|&lt;/span&gt;
&lt;span class="err"&gt;|&lt;/span&gt;&lt;span class="s"&gt; HIGH/MED/LOW | file:line | description | suggestion |&lt;/span&gt;

&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="s"&gt;# Recommendation&lt;/span&gt;
&lt;span class="err"&gt;[&lt;/span&gt;&lt;span class="s"&gt;Ship / Revise / Block]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;What's your subagent activation rate? I'm curious what setups are working for others.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;AI tools supported the research phase. Configuration patterns and production examples are from my own projects.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About the Author&lt;/strong&gt;&lt;br&gt;
Alireza Rezvani — Building AI-augmented development workflows&lt;br&gt;
&lt;a href="https://alirezarezvani.com" rel="noopener noreferrer"&gt;Website&lt;/a&gt; | &lt;a href="https://linkedin.com/in/alirezarezvani" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Detailed breakdown with more enterprise patterns: &lt;a href="https://medium.com/@alirezarezvani/custom-subagents-90-of-developers-set-them-up-wrong-7328341a4a57?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=custom-subagents-setup" rel="noopener noreferrer"&gt;Read on Medium&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>claudecode</category>
      <category>ai</category>
      <category>productivity</category>
      <category>devops</category>
    </item>
    <item>
      <title>10 Claude Code 2.0 Techniques That Turned 3-Week Projects Into 3-Day Sprints</title>
      <dc:creator>Reza Rezvani</dc:creator>
      <pubDate>Tue, 25 Nov 2025 16:08:17 +0000</pubDate>
      <link>https://dev.to/alireza_rezvani/10-claude-code-20-techniques-that-turned-3-week-projects-into-3-day-sprints-1bpp</link>
      <guid>https://dev.to/alireza_rezvani/10-claude-code-20-techniques-that-turned-3-week-projects-into-3-day-sprints-1bpp</guid>
      <description>&lt;h2&gt;
  
  
  How I went from ‘barely keeping up’ to shipping production features in hours (no exaggeration)
&lt;/h2&gt;

&lt;p&gt;Quick Disclaimer: I’m a paying Claude Code Max subscriber ($200/month). Everything here comes from real production work. Not sponsored — just a CTO who found techniques that genuinely transformed how I ship code.&lt;/p&gt;

&lt;p&gt;Two months ago, I faced a nightmare project: complete backend API refactor, 50+ files to touch, zero room for cascading failures. Timeline? Three weeks. Team already maxed out.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
