<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Masaaki Hirano</title>
    <description>The latest articles on DEV Community by Masaaki Hirano (@fl4tlin3).</description>
    <link>https://dev.to/fl4tlin3</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F53098%2Fdd136651-d9a8-4e65-acba-c19ac23bb725.jpeg</url>
      <title>DEV Community: Masaaki Hirano</title>
      <link>https://dev.to/fl4tlin3</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/fl4tlin3"/>
    <language>en</language>
    <item>
      <title>Introducing Perstack: Agentic AI in 12 Lines of TOML, No Framework Required</title>
      <dc:creator>Masaaki Hirano</dc:creator>
      <pubDate>Thu, 12 Feb 2026 14:00:00 +0000</pubDate>
      <link>https://dev.to/fl4tlin3/introducing-perstack-agentic-ai-in-12-lines-of-toml-no-framework-required-41d7</link>
      <guid>https://dev.to/fl4tlin3/introducing-perstack-agentic-ai-in-12-lines-of-toml-no-framework-required-41d7</guid>
      <description>&lt;p&gt;Every agent framework I've used buries what the agent &lt;em&gt;does&lt;/em&gt; inside the code that makes it &lt;em&gt;run&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Agentic AI -- an LLM looping through tool use, planning, and reflection -- is how tools like Claude Code and OpenClaw operate autonomously. The "agentic" part isn't the model; it's &lt;strong&gt;the loop around the model&lt;/strong&gt;. But in every framework I've tried, that loop is tangled with your application code: HTTP handlers, state management, orchestration. Changing what the agent says means deploying your app. Letting a domain expert tune prompts means giving them the codebase.&lt;/p&gt;

&lt;p&gt;I built &lt;a href="https://perstack.ai" rel="noopener noreferrer"&gt;Perstack&lt;/a&gt; to fix this -- an open-source runtime that separates agent definitions from agent execution. The definition is what changes most; it shouldn't be the hardest to change.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real problem: agents defined in code
&lt;/h2&gt;

&lt;p&gt;Here's what a typical agent definition looks like in a framework:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;SecurityReviewer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;claude-sonnet-4-5&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tools&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;FileReader&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="nc"&gt;CodeAnalyzer&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;system_prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
            You are a security-focused code reviewer.
            Check for SQL injection, XSS, and auth bypass.
            Explain findings with severity ratings.
        &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="c1"&gt;# orchestration logic
&lt;/span&gt;        &lt;span class="c1"&gt;# tool calling logic
&lt;/span&gt;        &lt;span class="c1"&gt;# state management
&lt;/span&gt;        &lt;span class="c1"&gt;# error handling
&lt;/span&gt;        &lt;span class="c1"&gt;# ...
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The three lines that actually define what this agent does are buried inside a class that handles &lt;em&gt;how&lt;/em&gt; it runs. The behavior and the machinery share a file, a deployment pipeline, and a test suite.&lt;/p&gt;

&lt;p&gt;This creates three structural problems:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Framework lock-in.&lt;/strong&gt; Your agent definition is expressed in the framework's API. Switching to a different runtime means rewriting every agent, not because the behavior changed, but because the packaging did.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developer-gated iteration.&lt;/strong&gt; The person who knows the domain -- the security expert, the support lead, the analyst -- can't touch the agent definition without a developer. Prompt tuning becomes a JIRA ticket.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No standalone testing.&lt;/strong&gt; You can't run the agent without running the app. Feedback loops don't start until the application is wired up, so weeks of work go in before you discover the agent doesn't handle your edge cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  12 lines of TOML
&lt;/h2&gt;

&lt;p&gt;Here's the same agent, defined outside the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[experts."security-reviewer"]&lt;/span&gt;
&lt;span class="py"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Reviews code for security vulnerabilities"&lt;/span&gt;
&lt;span class="py"&gt;instruction&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"""
You are a security-focused code reviewer.
Check for SQL injection, XSS, and authentication bypass.
Explain each finding with a severity rating and a suggested fix.
"""&lt;/span&gt;

&lt;span class="nn"&gt;[experts."security-reviewer".skills."@perstack/base"]&lt;/span&gt;
&lt;span class="py"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"mcpStdioSkill"&lt;/span&gt;
&lt;span class="py"&gt;command&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"npx"&lt;/span&gt;
&lt;span class="py"&gt;packageName&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"@perstack/base"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Twelve lines. The agent's identity, behavior, and tool access -- all in a single TOML file called &lt;code&gt;perstack.toml&lt;/code&gt;. No imports, no classes, no orchestration code.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;instruction&lt;/code&gt; field is natural language. The &lt;code&gt;skills&lt;/code&gt; section declares tool access via &lt;a href="https://modelcontextprotocol.io/" rel="noopener noreferrer"&gt;MCP&lt;/a&gt; -- &lt;code&gt;@perstack/base&lt;/code&gt; is Perstack's built-in tool server, but any MCP-compatible server works (the same standard that Claude Desktop, Cursor, and other tools use). The runtime handles everything else: model access, tool execution, state management, context windows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is not a simplification. It's a separation.&lt;/strong&gt; The agent definition is what changes hourly -- prompts get tuned, tools get added, delegation chains get restructured. The runtime is what changes quarterly. Coupling them in the same codebase means they deploy together, break together, and bottleneck each other.&lt;/p&gt;

&lt;h2&gt;
  
  
  From idea to running agent
&lt;/h2&gt;

&lt;p&gt;You don't even need to write TOML by hand. &lt;code&gt;create-expert&lt;/code&gt; generates it from a description:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx create-expert &lt;span class="s2"&gt;"A code reviewer that checks for security vulnerabilities and suggests fixes"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This isn't scaffolding. &lt;code&gt;create-expert&lt;/code&gt; is itself an agent that generates the &lt;code&gt;perstack.toml&lt;/code&gt;, test-runs the resulting Expert against sample inputs, analyzes the execution, and iterates on the definition until behavior stabilizes. You get a working agent -- not a template.&lt;/p&gt;

&lt;p&gt;Run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx perstack start security-reviewer &lt;span class="s2"&gt;"Review this login handler"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;perstack start&lt;/code&gt; opens a text-based interactive UI. You see the agent reason, call tools, and produce output in real time. No application to deploy. No environment to configure beyond an LLM API key.&lt;/p&gt;

&lt;p&gt;Want headless output for CI?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx perstack run security-reviewer &lt;span class="s2"&gt;"Review this login handler"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;JSON events to stdout. Pipe it wherever you want.&lt;/p&gt;
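&lt;p&gt;Since each line is one JSON object, any NDJSON-aware consumer works. A minimal Python sketch of a consumer -- the event field names here (&lt;code&gt;type&lt;/code&gt;, &lt;code&gt;tool&lt;/code&gt;) are illustrative assumptions, not Perstack's documented schema:&lt;/p&gt;

```python
import json

# NOTE: the event shape below is an illustrative assumption,
# not Perstack's documented schema.
SAMPLE_STREAM = """\
{"type": "stepStarted", "expert": "security-reviewer", "step": 1}
{"type": "toolCall", "expert": "security-reviewer", "tool": "readFile", "step": 1}
{"type": "stepFinished", "expert": "security-reviewer", "step": 1}
"""

def tool_calls(ndjson: str) -> list[str]:
    """Collect tool names from 'toolCall' events in an NDJSON stream."""
    calls = []
    for line in ndjson.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        if event.get("type") == "toolCall":
            calls.append(event["tool"])
    return calls

print(tool_calls(SAMPLE_STREAM))  # ['readFile']
```

&lt;p&gt;In CI you'd read the same lines from the process's stdout instead of a string.&lt;/p&gt;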

&lt;h2&gt;
  
  
  Multi-agent collaboration in TOML
&lt;/h2&gt;

&lt;p&gt;Experts that need to collaborate? Same file. The &lt;code&gt;delegates&lt;/code&gt; field defines which Experts can call which:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[experts."security-reviewer"]&lt;/span&gt;
&lt;span class="py"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Coordinates security review across the codebase"&lt;/span&gt;
&lt;span class="py"&gt;instruction&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"""
Conduct a comprehensive security review.
Delegate file-level analysis to the file-reviewer.
Aggregate findings into a prioritized report.
"""&lt;/span&gt;
&lt;span class="py"&gt;delegates&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"@security-reviewer/file-reviewer"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="nn"&gt;[experts."@security-reviewer/file-reviewer"]&lt;/span&gt;
&lt;span class="py"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Reviews individual files for security issues"&lt;/span&gt;
&lt;span class="py"&gt;instruction&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Analyze the given file for SQL injection, XSS, CSRF, and auth bypass vulnerabilities."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The coordinator delegates to specialists. Each Expert runs in its own context window -- no prompt bloat from cramming everything into one conversation. The runtime handles the delegation, result aggregation, and checkpoint management.&lt;/p&gt;

&lt;h2&gt;
  
  
  From prototype to production
&lt;/h2&gt;

&lt;p&gt;The CLI is for prototyping. For production, Perstack provides lockfile-based deployment and runtime embedding via &lt;code&gt;@perstack/runtime&lt;/code&gt;. Execution is event-driven -- every step emits structured events, so it fits naturally into containerized environments where you need to stream progress back to your application. The same &lt;code&gt;perstack.toml&lt;/code&gt; drives all of it -- the definition doesn't change because the deployment target changed. The &lt;a href="https://perstack.ai/docs/getting-started/walkthrough/" rel="noopener noreferrer"&gt;getting started walkthrough&lt;/a&gt; covers the full path from CLI to application integration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Runtime vs. framework: why the distinction matters
&lt;/h2&gt;

&lt;p&gt;Frameworks are opinionated about how you build your application. They provide agent classes, memory abstractions, tool registries, orchestration APIs. Your agent lives inside the framework.&lt;/p&gt;

&lt;p&gt;A runtime is opinionated about how agents &lt;em&gt;execute&lt;/em&gt;. It doesn't care how your application is built. Your application talks to the runtime over an API. The agent definition is data, not code.&lt;/p&gt;

&lt;p&gt;This distinction has real consequences.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Event-sourced execution.&lt;/strong&gt; Every step the agent takes is recorded as a structured event with step-level checkpoints. You can resume from any point, replay to debug, and diff across model or provider changes. This isn't a logging feature -- it's the execution model. Non-deterministic behavior becomes inspectable.&lt;/p&gt;
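&lt;p&gt;The mechanics are plain event sourcing: the state at any checkpoint is a fold over the recorded events, so replaying the log up to step N reproduces the state at step N. A toy Python sketch of that idea -- the event names are invented for illustration, not Perstack's actual schema:&lt;/p&gt;

```python
# Illustrative event log -- field names are assumptions, not Perstack's schema.
EVENT_LOG = [
    {"step": 1, "type": "toolCall", "tool": "readFile"},
    {"step": 2, "type": "toolResult", "tool": "readFile"},
    {"step": 3, "type": "toolCall", "tool": "codeAnalyzer"},
]

def replay(events, up_to_step):
    """Rebuild execution state by folding events up to a checkpoint."""
    state = {"completed_steps": 0, "pending_tool": None}
    for event in events:
        if event["step"] > up_to_step:
            break
        if event["type"] == "toolCall":
            state["pending_tool"] = event["tool"]
        elif event["type"] == "toolResult":
            state["pending_tool"] = None
        state["completed_steps"] = event["step"]
    return state

# Resume from step 2: the readFile call has resolved, nothing is pending.
print(replay(EVENT_LOG, up_to_step=2))
# {'completed_steps': 2, 'pending_tool': None}
```

&lt;p&gt;Resuming, replaying, and diffing all fall out of the same fold: run it over different prefixes of the log, or over logs recorded against different models.&lt;/p&gt;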

&lt;p&gt;&lt;strong&gt;Isolation by design.&lt;/strong&gt; Each Expert runs in its own context. Workspace boundaries, environment sandboxing, tool whitelisting. When you deploy to a container platform, the isolation model maps directly to infrastructure -- one container, one Expert, one job.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Independent lifecycles.&lt;/strong&gt; The agent definition updates hourly. The application code deploys weekly. Environment secrets rotate on their own schedule. User conversations are real-time. A runtime lets these four axes move independently. A framework couples them into one deployment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Provider independence.&lt;/strong&gt; Eight LLM providers, one config change. Anthropic, OpenAI, Google, DeepSeek, Ollama, Azure, Bedrock, Vertex. The agent definition doesn't mention the provider.&lt;/p&gt;
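&lt;p&gt;For illustration, a sketch of what that one config change could look like -- the key names here are hypothetical, not Perstack's documented schema; the point is only that the provider lives in runtime config while the Expert definition stays untouched:&lt;/p&gt;

```toml
# Hypothetical runtime section -- key names are illustrative assumptions.
[runtime]
provider = "anthropic"        # swap to "openai", "google", "ollama", ...
model = "claude-sonnet-4-5"

# The Expert definition never names the provider.
[experts."security-reviewer"]
description = "Reviews code for security vulnerabilities"
```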

&lt;h2&gt;
  
  
  The methodology shift
&lt;/h2&gt;

&lt;p&gt;The deeper point isn't about TOML syntax or CLI commands. It's about who owns what.&lt;/p&gt;

&lt;p&gt;When agent definitions are code, developers own everything. When agent definitions are natural language in a config file, &lt;strong&gt;domain experts own behavior and developers own integration&lt;/strong&gt;. Each side ships on its own schedule. The prompt specialist doesn't wait for a deploy. The developer doesn't review prompt tweaks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is the same separation that happened with infrastructure (Terraform), CI/CD (YAML pipelines), and containerization (Dockerfiles).&lt;/strong&gt; The pattern is: extract the thing that changes most into a declarative format, give it its own lifecycle, and let a runtime execute it.&lt;/p&gt;

&lt;p&gt;This separation is overdue for agentic AI.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://perstack.ai" rel="noopener noreferrer"&gt;Perstack&lt;/a&gt; is open source under Apache 2.0. The &lt;a href="https://perstack.ai/docs/getting-started/walkthrough/" rel="noopener noreferrer"&gt;getting started walkthrough&lt;/a&gt; covers everything in this article and more. The source is on &lt;a href="https://github.com/perstack-ai/perstack" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I'm building this. If the separation between agent definition and agent execution matters to you, I'd like to hear how you're thinking about it.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>opensource</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Prototyping for Agent-First Apps</title>
      <dc:creator>Masaaki Hirano</dc:creator>
      <pubDate>Wed, 11 Feb 2026 14:00:00 +0000</pubDate>
      <link>https://dev.to/fl4tlin3/prototyping-for-agent-first-apps-54j7</link>
      <guid>https://dev.to/fl4tlin3/prototyping-for-agent-first-apps-54j7</guid>
      <description>&lt;p&gt;When you build an agent-powered app, the instinct is to start with the app — set up a project, install dependencies, write scaffolding. Then somewhere in the middle, you start figuring out what the agent should actually do.&lt;/p&gt;

&lt;p&gt;This is backwards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent-first means starting with the agent.&lt;/strong&gt; Get the brain working first. Once the agent behaves the way you want, expand outward: add tools, then build the shell around it. The agent is the product — everything else is infrastructure.&lt;/p&gt;

&lt;p&gt;This matters because the agent will keep evolving. Prompts change, capabilities expand, behavior gets refined. If the agent is tangled with your application code, every change risks breaking something unrelated. Keep the brain separate from the body, and both can evolve on their own terms.&lt;/p&gt;

&lt;p&gt;This guide uses &lt;a href="https://perstack.ai" rel="noopener noreferrer"&gt;Perstack&lt;/a&gt; — a toolkit for agent-first development. In Perstack, agents are called &lt;strong&gt;Experts&lt;/strong&gt;: modular micro-agents defined in plain text (&lt;code&gt;perstack.toml&lt;/code&gt;), executed by a runtime that handles model access, tool orchestration, and state management. Perstack supports multiple LLM providers including Anthropic, OpenAI, and Google. You define what the agent should do; the runtime makes it work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt; Node.js 22+ and an LLM API key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ANTHROPIC_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sk-ant-...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What an Expert looks like
&lt;/h2&gt;

&lt;p&gt;An Expert is defined in a &lt;code&gt;perstack.toml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[experts."reviewer"]&lt;/span&gt;
&lt;span class="py"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Reviews code for security issues"&lt;/span&gt;
&lt;span class="py"&gt;instruction&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"""
You are a security-focused code reviewer.
Check for SQL injection, XSS, and authentication bypass.
Explain each finding with a severity rating and a suggested fix.
"""&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's the entire definition. No SDK, no boilerplate, no orchestration code. Run it immediately:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx perstack start reviewer &lt;span class="s2"&gt;"Review this login handler"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;perstack start&lt;/code&gt; opens a text-based interactive UI where you can watch the Expert reason and act in real time.&lt;/p&gt;

&lt;h2&gt;
  
  
  From idea to agent in one command
&lt;/h2&gt;

&lt;p&gt;Writing TOML by hand works, but there's a faster way. &lt;a href="https://www.npmjs.com/package/create-expert" rel="noopener noreferrer"&gt;&lt;code&gt;create-expert&lt;/code&gt;&lt;/a&gt; is a CLI that generates Expert definitions from natural language descriptions — it's itself an Expert that builds other Experts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx create-expert &lt;span class="s2"&gt;"A code review assistant that checks for security vulnerabilities, suggests fixes, and explains the reasoning behind each finding"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;create-expert&lt;/code&gt; takes your description, generates a &lt;code&gt;perstack.toml&lt;/code&gt;, test-runs the Expert against sample inputs, and iterates on the definition until behavior stabilizes. You get a working Expert — no code, no setup.&lt;/p&gt;

&lt;p&gt;The description doesn't need to be precise. Start vague:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx create-expert &lt;span class="s2"&gt;"Something that helps with onboarding new team members"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;create-expert&lt;/code&gt; will interpret your intent, make decisions about scope and behavior, and produce a testable Expert. You can always refine from there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Iterate by talking
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;create-expert&lt;/code&gt; reads the existing &lt;code&gt;perstack.toml&lt;/code&gt; in your current directory. Run it again with a refinement instruction, and it modifies the definition in place:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx create-expert &lt;span class="s2"&gt;"Make it more concise. It's too verbose when explaining findings"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx create-expert &lt;span class="s2"&gt;"Add a severity rating to each finding: critical, warning, or info"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx create-expert &lt;span class="s2"&gt;"Run 10 tests with different code samples and show me the results"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each iteration refines the definition. The Expert gets better, and you never open an editor.&lt;/p&gt;

&lt;h2&gt;
  
  
  Test with real scenarios
&lt;/h2&gt;

&lt;p&gt;Prototyping isn't just about getting the agent to run — it's about finding where it fails.&lt;/p&gt;

&lt;p&gt;Write a test case that your agent should catch. For the code reviewer, create a file with a deliberate vulnerability:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx create-expert &lt;span class="s2"&gt;"Read the file test/vulnerable.py and review it. It contains a SQL injection — make sure the reviewer catches it and suggests a parameterized query fix"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the reviewer misses it, you've found a gap in the instruction. Refine and test again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx create-expert &lt;span class="s2"&gt;"The reviewer missed the SQL injection in the raw query on line 12. Update the instruction to pay closer attention to string concatenation in SQL statements"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the feedback loop that matters: &lt;strong&gt;write a scenario the agent should handle, test it, fix the instruction when it fails, repeat.&lt;/strong&gt; By the time you build the app around it, you already know what the agent can and can't do.&lt;/p&gt;

&lt;h2&gt;
  
  
  Evaluate with others
&lt;/h2&gt;

&lt;p&gt;At some point you need feedback beyond your own testing. &lt;code&gt;perstack start&lt;/code&gt; makes this easy — hand someone the &lt;code&gt;perstack.toml&lt;/code&gt; and they can run the Expert themselves:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx perstack start reviewer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The interactive UI lets them try their own queries and see how the Expert responds. No app to deploy, no environment to configure beyond the API key.&lt;/p&gt;

&lt;p&gt;Every execution is recorded as checkpoints in the local &lt;code&gt;perstack/&lt;/code&gt; directory. After a round of feedback, inspect what happened:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx perstack log
npx perstack log &lt;span class="nt"&gt;--tools&lt;/span&gt;    &lt;span class="c"&gt;# what tools were called&lt;/span&gt;
npx perstack log &lt;span class="nt"&gt;--errors&lt;/span&gt;   &lt;span class="c"&gt;# what went wrong&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can review specific runs, filter by step, or export as JSON for deeper analysis. See the &lt;a href="https://perstack.ai/docs/references/cli/" rel="noopener noreferrer"&gt;CLI Reference&lt;/a&gt; for the full set of options.&lt;/p&gt;

&lt;p&gt;This gives you a lightweight evaluation workflow: distribute the TOML, collect usage, analyze the logs, refine the instruction.&lt;/p&gt;
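&lt;p&gt;Once the logs are exported as JSON, the analysis step is ordinary scripting. A minimal Python sketch, assuming a hypothetical export shape (the real export format may differ):&lt;/p&gt;

```python
import json
from collections import Counter

# Hypothetical exported log -- the real export shape may differ.
EXPORT = """\
[{"runId": "a1", "tool": "readFile", "error": null},
 {"runId": "a1", "tool": "codeAnalyzer", "error": "timeout"},
 {"runId": "b2", "tool": "readFile", "error": null}]
"""

def summarize(raw: str) -> dict:
    """Count tool usage and collect errors across recorded runs."""
    steps = json.loads(raw)
    return {
        "tool_counts": dict(Counter(s["tool"] for s in steps)),
        "errors": [s["error"] for s in steps if s["error"]],
    }

print(summarize(EXPORT))
# {'tool_counts': {'readFile': 2, 'codeAnalyzer': 1}, 'errors': ['timeout']}
```

&lt;p&gt;Feed a summary like this back into the next &lt;code&gt;create-expert&lt;/code&gt; refinement and the loop closes.&lt;/p&gt;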

&lt;h2&gt;
  
  
  When your prototype grows
&lt;/h2&gt;

&lt;p&gt;At some point, your prototype will need more. The same &lt;code&gt;perstack.toml&lt;/code&gt; scales — you're not throwing away work.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The agent needs tools&lt;/strong&gt; — search the web, query a database, call an API → &lt;a href="https://perstack.ai/docs/guides/extending-with-tools/" rel="noopener noreferrer"&gt;Extending with Tools&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The prompt is getting long&lt;/strong&gt; — split into multiple Experts that collaborate → &lt;a href="https://perstack.ai/docs/guides/taming-prompt-sprawl/" rel="noopener noreferrer"&gt;Taming Prompt Sprawl&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The prototype works&lt;/strong&gt; — embed it into your application → &lt;a href="https://perstack.ai/docs/guides/adding-ai-to-your-app/" rel="noopener noreferrer"&gt;Adding AI to Your App&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/perstack-ai/perstack" rel="noopener noreferrer"&gt;Perstack Repository&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>opensource</category>
      <category>javascript</category>
    </item>
    <item>
      <title>The Hell of AI Agent Development</title>
      <dc:creator>Masaaki Hirano</dc:creator>
      <pubDate>Wed, 10 Dec 2025 16:19:28 +0000</pubDate>
      <link>https://dev.to/fl4tlin3/the-hell-of-ai-agent-development-47b4</link>
      <guid>https://dev.to/fl4tlin3/the-hell-of-ai-agent-development-47b4</guid>
      <description>&lt;p&gt;This isn't engineering. It's prayer.&lt;/p&gt;

&lt;p&gt;AI agent development right now is a muddy hell.&lt;/p&gt;

&lt;p&gt;The prompt that worked perfectly yesterday lies to your face today.&lt;br&gt;
The framework labeled "cutting edge" will be called "legacy" next week, and its maintainers will disappear the month after.&lt;br&gt;
And the more you look at logs to debug, the more you realize you're wasting your life just trying to guess the LLM's "feelings" instead of checking your logic.&lt;/p&gt;

&lt;p&gt;Remember the web development scene around 2005, when people screamed "Explode, IE!"?&lt;br&gt;
The chaos here makes that look cute.&lt;/p&gt;

&lt;p&gt;For the lucky Gen Z devs whose biggest trauma is a React hydration error, let me explain the vibe:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Your model is a moody teenager.&lt;/strong&gt; It might run your logic, or it might decide to write a poem about strawberries instead. You don't know until you pay the API bill.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Debugging is a gacha game.&lt;/strong&gt; You pull the lever (run the prompt), spend $0.10, and hope you get a 5-star response (valid JSON). Usually, you get a 1-star trash item (a hallucination).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Documentation is folklore.&lt;/strong&gt; The only "docs" are a 6-hour-old tweet from the founder, a Discord channel where everyone is just screaming "LFG", or an indecipherable arXiv paper written by someone who has never left the lab.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Definition of "Agent" Died Three Times
&lt;/h2&gt;

&lt;p&gt;On November 30, 2022, ChatGPT appeared, and we all happily jumped into the fire.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2023: The Year of Pointless Optimization&lt;/strong&gt;&lt;br&gt;
We called them "ReAct." We felt like wizards tuning prompt chains.&lt;br&gt;
We spent hundreds of hours fine-tuning chunk sizes for Vector DBs. We debated "RecursiveRetrieval" vs "ParentDocumentRetrieval" as if it were religious dogma.&lt;br&gt;
We thought we were building assets. We were just building sandcastles that would be washed away by the next model release.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2024: The Year the Credit Card Died&lt;/strong&gt;&lt;br&gt;
Cline and Manus arrived. The definition changed from "answering a question" to "completing a task."&lt;br&gt;
Great, right? No.&lt;br&gt;
The agent would get stuck in a loop, fixing the same line of code for 4 hours while you slept.&lt;br&gt;
You woke up not to a finished feature, but to a $500 OpenAI bill and a git repo in a detached HEAD state.&lt;br&gt;
We realized "autonomy" often just meant "autonomous money burning."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2025: The Year "Legacy" Became a Daily Event&lt;/strong&gt;&lt;br&gt;
Computer Use and Claude Code pulled the trigger. Agents now control the browser and terminal directly.&lt;br&gt;
RAG pipelines we built in 2023? &lt;strong&gt;Dead.&lt;/strong&gt;&lt;br&gt;
The custom tool definitions we agonized over in 2024? &lt;strong&gt;Dead.&lt;/strong&gt;&lt;br&gt;
I personally led a heavy RAG project. I spent months optimizing retrieval key-value stores. In one morning update, a model with a massive context window and direct filesystem access turned my life's work into "legacy bloat."&lt;br&gt;
I watched my code rot in real-time. It wasn't just obsolete; it was embarrassing.&lt;/p&gt;

&lt;p&gt;Now, top-runner agents rewrite the rules daily. The definition of an agent isn't something you learn anymore. It's something that gets forced upon you while you're trying to fix yesterday's breaking change.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Reality on the Development Floor
&lt;/h2&gt;

&lt;p&gt;While the industry burns outside, inside the office, we are drowning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Client's "Just make it like ChatGPT"&lt;/strong&gt;&lt;br&gt;
Clients see Sam Altman's tweet and say, "Can we just add this?"&lt;br&gt;
They don't know that the "this" they want requires a complete re-architecture of the agentic loop we spent 3 months building.&lt;br&gt;
The benchmark we targeted 6 months ago is now considered "dumb." You built a calculator; they now want a mathematician.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The "SOTA" Curse&lt;/strong&gt;&lt;br&gt;
Is the internal tool you built with LangChain in early 2024 still running?&lt;br&gt;
Was that complex CrewAI multi-agent swarm actually better than a single well-prompted Claude 3.5 Sonnet call?&lt;br&gt;
Be honest.&lt;br&gt;
Most of our "cutting-edge" projects from 6 months ago are now just... awkward. Like a flip phone in the iPhone era.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The 95% Failure Rate&lt;/strong&gt;&lt;br&gt;
MIT researchers report that 95% of corporate AI pilots fail.&lt;br&gt;
We know why. It's not because the tech is bad. It's because by the time you deploy, the tech is &lt;strong&gt;gone&lt;/strong&gt;.&lt;br&gt;
We are building bridges while the river keeps moving 5 miles downstream every week.&lt;br&gt;
The result is always the same: "Abandon" or "Rebuild." There is no "Maintain."&lt;/p&gt;

&lt;h2&gt;
  
  
  Prompts Bloat and No One Touches Them
&lt;/h2&gt;

&lt;p&gt;There is also a technical hell.&lt;/p&gt;

&lt;p&gt;The funny thing about prompts is that they rot faster than milk in the sun.&lt;/p&gt;

&lt;p&gt;You add an edge case here, an XML tag there, and a "please don't hallucinate" prayer at the bottom.&lt;br&gt;
Suddenly, your sleek system instruction looks like a conspiracy theorist's manifesto.&lt;br&gt;
And the LLM? It starts acting like a tired intern on a Friday afternoon. You ask for JSON, it hands you a poem. You ask it to check the docs, it hallucinates a library that doesn't exist. It ignores your all-caps instructions because it got fascinated by a typo in line 400.&lt;/p&gt;

&lt;p&gt;But the technical reality is serious. &lt;strong&gt;Context Rot.&lt;/strong&gt;&lt;br&gt;
Research on the "Lost in the Middle" phenomenon shows that as token count grows, retrieval accuracy drops: performance scales inversely with input length.&lt;br&gt;
A 200k context window is meaningless if the effective attention span is only 8k. It is a fundamental trade-off between capacity and precision that no amount of prompt engineering can fully solve.&lt;/p&gt;

&lt;p&gt;And then? No one touches that prompt anymore. It becomes "The Sacred Text." It is legacy code, but worse—it's legacy code that costs money every time you run it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frameworks: A Bullet Train to the Graveyard
&lt;/h2&gt;

&lt;p&gt;Even more troublesome is framework dependency.&lt;/p&gt;

&lt;p&gt;Can you port an agent written in LangChain to Mastra?&lt;br&gt;
No. You're not just porting code; you're rewriting your entire mental model of how an agent thinks. The prompt structure, tool definitions, and memory handling are all tightly coupled to the framework's opinion.&lt;/p&gt;

&lt;p&gt;When you &lt;code&gt;uv add&lt;/code&gt; an agent framework, you aren't just adding a dependency. You are buying a ticket on a high-speed train with no brakes.&lt;br&gt;
You have committed to dying with that framework.&lt;br&gt;
And frameworks &lt;em&gt;do&lt;/em&gt; die. Or worse, they "evolve."&lt;br&gt;
How many of you felt a piece of your soul wither away during the migration from LangChain 0.1 to 0.2? We know exactly what "breaking change" means here. It means "rewrite everything or stay on the vulnerable version forever."&lt;/p&gt;

&lt;p&gt;Accumulated assets—prompts, tool designs, agent configurations—lose their value along with the death of the framework.&lt;/p&gt;

&lt;p&gt;But is that struggle just a developer's problem?&lt;/p&gt;

&lt;p&gt;If you switch frameworks, the agent's behavior changes. If you rewrite the prompt, the tone of the response changes. Features that were usable yesterday disappear today due to "spec changes."&lt;br&gt;
From the user's perspective, that might be a "deterioration" rather than an "improvement." Familiar operations suddenly change, and expected responses no longer return.&lt;/p&gt;

&lt;p&gt;We developers call it "paying off technical debt." But for users, it's simply "it got harder to use."&lt;br&gt;
Framework convenience, model convenience, development team convenience—everything is passed on to the user experience.&lt;br&gt;
This is the moment when the hell of agent development spreads to the user.&lt;/p&gt;

&lt;p&gt;Naturally, we know this risk, so we hesitate to move at all.&lt;br&gt;
The churn is an order of magnitude worse than anything we faced choosing a web framework for REST APIs.&lt;br&gt;
Picking an agent framework is no longer a software decision; it's choosing which coffin will be most comfortable to lie in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security? You Mean "RCE as a Service"?
&lt;/h2&gt;

&lt;p&gt;And then there is the production environment. Multi-user. Security.&lt;/p&gt;

&lt;p&gt;Let's be honest about what we are building. We are essentially building &lt;strong&gt;Remote Code Execution as a Service&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We are giving a probabilistic token generator—which we know hallucinates—the ability to execute shell commands, read files, and access the network.&lt;br&gt;
In any other context, this would be a security nightmare. A critical vulnerability designed by us.&lt;br&gt;
But in 2025, if you don't allow this, your agent is "dumb." If you restrict permissions, it can't do the job.&lt;/p&gt;

&lt;p&gt;So you are forced to choose:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Build a Crippled Agent&lt;/strong&gt;: Safely useless.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Build a Security Time Bomb&lt;/strong&gt;: Use standard frameworks that run agents inside your app process. One prompt injection, and your server is gone.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Become a Platform Team&lt;/strong&gt;: Build custom sandboxes, gVisor clusters, and ephemeral VMs just to run a single logical loop.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;"Just use a sandbox."&lt;br&gt;
Easier said than done. Are you ready to maintain a complex infrastructure layer just to safely let your agent read a text file?&lt;/p&gt;

&lt;h2&gt;
  
  
  Is There a Way Out of Hell?
&lt;/h2&gt;

&lt;p&gt;If you've read this far and your eye is twitching, you are not alone.&lt;br&gt;
Welcome to the club. The coffee is stale, the Jira tickets are endless, and the hell is just getting started.&lt;/p&gt;

&lt;p&gt;The problem isn't the technology. It's that we haven't agreed on the &lt;strong&gt;boundaries&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Remember why IE6 was a nightmare? Because Microsoft tried to bundle the &lt;em&gt;browser&lt;/em&gt; with the &lt;em&gt;web&lt;/em&gt;.&lt;br&gt;
If you used their browser, you had to use their non-standard tags.&lt;br&gt;
That is exactly what is happening today. Every LLM provider and framework author is trying to build their own "IE6" ecosystem.&lt;/p&gt;

&lt;p&gt;We need to define the "HTML" of agents. We need &lt;strong&gt;Separation of Concerns&lt;/strong&gt; before we all go insane:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Model ≠ Agent&lt;/strong&gt;: Your logic shouldn't break just because you switched from GPT-5 to Claude 4.5.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Framework ≠ Agent&lt;/strong&gt;: Your prompts and assets are &lt;em&gt;yours&lt;/em&gt;. They shouldn't die just because LangChain released v2.0.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;App ≠ Agent&lt;/strong&gt;: The agent is the worker. The app is just the office. Don't weld the worker to the desk.&lt;/li&gt;
&lt;/ol&gt;
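
&lt;p&gt;To make the boundary concrete: when the definition is plain data, the model line and the prompt live outside both the framework and the app. A hypothetical sketch in Perstack-style TOML (the expert name and instruction are illustrative):&lt;/p&gt;

```toml
# Hypothetical agent definition kept as portable data.
# Model ≠ Agent: swapping models is a one-line change.
model = "claude-sonnet-4-5"

# Framework ≠ Agent: the prompt is yours, not the framework's.
[experts."support-bot"]
description = "Answers questions about the Acme project"
instruction = """
You are a support bot for the Acme project.
Check the docs/ directory before answering.
"""
```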

&lt;p&gt;We did this for the Web. HTML for structure, CSS for style, JS for logic.&lt;br&gt;
Because we drew those lines, the ecosystem exploded.&lt;br&gt;
Until we draw these lines for Agents, we are just building legacy code on a treadmill. And that is why we are in hell.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Are We Heading?
&lt;/h2&gt;

&lt;p&gt;Honestly? The hell will continue until we stop accepting it.&lt;/p&gt;

&lt;p&gt;It took 10 years for the Web to get standard HTML5. We are in year 3 of LLMs. We are barely at the starting line.&lt;/p&gt;

&lt;p&gt;But history is clear on one thing: &lt;strong&gt;Lock-in always loses.&lt;/strong&gt;&lt;br&gt;
IE died. Flash died. Proprietary walled gardens eventually get demolished by Open Source bulldozers.&lt;br&gt;
The winner isn't decided by Sam Altman or Dario Amodei. It is decided by &lt;strong&gt;us&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Every time you choose a framework, you are casting a vote.&lt;br&gt;
Every time you contribute to an open standard, you are building the exit door from this hell.&lt;br&gt;
The ecosystem is shaped by what &lt;em&gt;we&lt;/em&gt; use, not what &lt;em&gt;they&lt;/em&gt; sell.&lt;/p&gt;

&lt;p&gt;So, what can we grassroots developers do?&lt;br&gt;
Actually, we hold all the cards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build, Choose, or Speak Up.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you Build: Isolate.&lt;/strong&gt;&lt;br&gt;
Write code that doesn't care which LLM it's talking to. Write prompts that don't care which framework runs them. Don't wait for a standard; be the one who creates the boundary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you Choose: Resist.&lt;/strong&gt;&lt;br&gt;
Convenience is the bait. Lock-in is the hook. Don't bite.&lt;br&gt;
Choose tools that let you leave. If a framework demands 100% of your soul, give it 0%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you Speak Up: Unite.&lt;/strong&gt;&lt;br&gt;
Your struggle is not unique. Share your "hell." The only thing proprietary vendors fear is a united developer community saying "No."&lt;/p&gt;




&lt;p&gt;It is December 2025. I am still in hell.&lt;br&gt;
But looking around, I see I am not burning alone.&lt;/p&gt;

&lt;p&gt;The "Top Runners" are done with their releases for the year. They are probably heading to Hawaii.&lt;br&gt;
We, on the other hand, have hallucinations to debug and breaking changes to fix.&lt;/p&gt;

&lt;p&gt;Enjoy the holidays. Try to disconnect.&lt;br&gt;
We have a massive amount of "legacy" code to rewrite in 2026.&lt;/p&gt;

&lt;p&gt;See you in hell.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Your turn: What’s your AI agent hell story? Share it in the comments.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>discuss</category>
      <category>blogging</category>
    </item>
    <item>
      <title>MCP Wins, But the War Just Started: Why Standardization Won't Save Us from Lock-in</title>
      <dc:creator>Masaaki Hirano</dc:creator>
      <pubDate>Tue, 09 Dec 2025 19:48:06 +0000</pubDate>
      <link>https://dev.to/fl4tlin3/mcp-wins-but-the-war-just-started-why-standardization-wont-save-us-from-lock-in-53a4</link>
      <guid>https://dev.to/fl4tlin3/mcp-wins-but-the-war-just-started-why-standardization-wont-save-us-from-lock-in-53a4</guid>
      <description>&lt;p&gt;December 9, 2025 marked a historical turning point.&lt;/p&gt;

&lt;p&gt;Anthropic &lt;a href="https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation" rel="noopener noreferrer"&gt;announced&lt;/a&gt; that it has donated its &lt;strong&gt;Model Context Protocol (MCP)&lt;/strong&gt; to the Linux Foundation.&lt;br&gt;
Concurrently, a new governing body, the &lt;strong&gt;Agentic AI Foundation (AAIF)&lt;/strong&gt;, has been established under the Linux Foundation umbrella.&lt;/p&gt;

&lt;p&gt;This is not merely the transfer of a single technology.&lt;br&gt;
The founding members of this new foundation include not only Anthropic but also &lt;strong&gt;OpenAI, Block, Google, Microsoft, and AWS&lt;/strong&gt;—giant tech companies that should theoretically be competitors.&lt;/p&gt;

&lt;p&gt;The chaotic AI industry, where each company had been locking users in with proprietary standards, has finally moved unanimously toward the "standardization of Agentic AI."&lt;/p&gt;

&lt;h2&gt;
  
  
  The Current State of MCP by the Numbers: Why is it the De Facto Standard?
&lt;/h2&gt;

&lt;p&gt;MCP's momentum has become unstoppable.&lt;br&gt;
Its track record since its introduction a year ago makes it clear why so many companies decided to join.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;10,000+&lt;/strong&gt;: The number of active public MCP servers.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;97M+&lt;/strong&gt;: Monthly downloads of Python and TypeScript SDKs.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Full Adoption by Major Platforms&lt;/strong&gt;: ChatGPT, Cursor, Gemini, Microsoft Copilot, VS Code—virtually all tools used by developers now support MCP.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Immediate Infrastructure Support&lt;/strong&gt;: AWS, Cloudflare, Google Cloud, and Microsoft Azure officially provide deployment environments for MCP servers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is safe to say that MCP is no longer "Anthropic's house standard"; it has become the common "plumbing for running AI agents."&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the Linux Foundation?
&lt;/h2&gt;

&lt;p&gt;The greatest significance of this donation lies in the establishment of &lt;strong&gt;Vendor Neutrality&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Until now, no matter how open it was, MCP was "Anthropic's standard." For other companies, betting fully on a competitor's technology that could change specs at any time was a risk.&lt;br&gt;
However, by donating it to the Linux Foundation, MCP has become a "public good" rather than the property of a specific company.&lt;/p&gt;

&lt;p&gt;Anthropic itself explicitly stated the reason for the donation: "to ensure that MCP remains open source, community-led, and vendor-neutral."&lt;br&gt;
Future development and maintenance will be determined not by the intentions of a specific company but by community input and a transparent governance model. This is why OpenAI and Google were able to participate with confidence.&lt;/p&gt;

&lt;h2&gt;
  
  
  AAIF: The Movement for "Agent Standardization" Beyond MCP
&lt;/h2&gt;

&lt;p&gt;In addition to MCP, other critical projects were donated simultaneously to the newly established AAIF (Agentic AI Foundation).&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;AGENTS.md (Donated by OpenAI)&lt;/strong&gt;:
A simple, open format for giving coding agents project-specific instructions, essentially a README for agents. The "instructions for agents," which had been disparate across companies, will now be unified.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;goose (Donated by Block)&lt;/strong&gt;:
An open-source, extensible AI agent for automating engineering tasks.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By managing these under the same foundation, interoperability between tools is expected to improve dramatically. A future where "an agent built with goose reads the specs defined in AGENTS.md and executes tools via MCP" is becoming a reality, even with a mix of different vendors.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Updates: Evolution to Practical Use
&lt;/h2&gt;

&lt;p&gt;Beyond standardization, technical progress has not stopped. The spec update released on November 25 added the following features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Asynchronous operations&lt;/strong&gt;: Agents can now proceed without waiting for time-consuming processes.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Statelessness&lt;/strong&gt;: Reduces the burden of state management on the server side, improving scalability.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Server identity&lt;/strong&gt;: Enhanced verification of connection legitimacy.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Official extensions&lt;/strong&gt;: A mechanism to safely extend standard features.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Also, the Claude directory already lists &lt;strong&gt;over 75 connectors&lt;/strong&gt;, and features like "Tool Search" and "Programmatic Tool Calling" via API are becoming robust. A world where "you just plug it in and it works" is right before our eyes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Will Anything Change?
&lt;/h2&gt;

&lt;p&gt;We are all too familiar with the current state of AI development—a miserable loop where standards proliferate and code becomes obsolete in six months.&lt;br&gt;
Seeing this news, did you think, "This is the end of hell"?&lt;/p&gt;

&lt;p&gt;Let me give you the conclusion. &lt;strong&gt;Nothing will change. At least, not for now.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;MCP has won. The standardization war is over.&lt;br&gt;
However, this merely means the "shape of the power outlet" has been decided. Just because the outlet is unified doesn't mean the power supply becomes stable or the appliances become high-performance.&lt;/p&gt;

&lt;p&gt;As Ilya Sutskever said on the Dwarkesh Podcast, &lt;a href="https://www.dwarkesh.com/p/ilya-sutskever-2" rel="noopener noreferrer"&gt;"We're moving from the age of scaling to the age of research"&lt;/a&gt;, and we are still squarely in that &lt;strong&gt;"Age of Research."&lt;/strong&gt;&lt;br&gt;
Models are still "Jagged": brilliant at some tasks, yet prone to incredibly stupid mistakes at others. This fundamental problem will not be solved just because the protocol has been unified.&lt;/p&gt;

&lt;p&gt;The attitude of the vendors won't change either.&lt;br&gt;
They will standardize the troublesome part of "connection," but will then shift all their resources to &lt;strong&gt;locking you in with "Intelligence" and "Ecosystems"&lt;/strong&gt; beyond that connection.&lt;/p&gt;

&lt;p&gt;The evidence is clear.&lt;br&gt;
Almost simultaneously with the MCP donation, Anthropic is heavily pushing &lt;a href="https://www.anthropic.com/engineering/equipping-agents-for-the-real-world-with-agent-skills" rel="noopener noreferrer"&gt;Agent Skills&lt;/a&gt;. This packages procedural knowledge—"how to make the agent behave"—but its execution environment deeply depends on the Claude ecosystem.&lt;br&gt;
On the OpenAI side, Codex has experimentally started similar "Skills" support.&lt;/p&gt;

&lt;p&gt;Even more serious is the &lt;strong&gt;"Siloization of Troublesome Operations."&lt;/strong&gt;&lt;br&gt;
Data, runtime, and long-term memory management. These are massive management costs for humans. That's why vendors whisper sweetly, "We'll handle all the messy stuff on the platform side," and actively push for siloization.&lt;br&gt;
Where is the data generated by the agent? The execution logs? The state?&lt;br&gt;
Before you know it, all of it is imprisoned inside a specific vendor's black box, and we aren't even given the key—such a future is steadily being built.&lt;/p&gt;

&lt;p&gt;In other words, even if "anything can connect via MCP," the "Brain (Skills)" to use those connected tools wisely will be described in each company's proprietary specs and locked in.&lt;br&gt;
The structure where "the plumbing is common, but the water (intelligence) flowing through it only comes from our faucet" is actually being reinforced.&lt;/p&gt;

&lt;p&gt;But we can't blame them.&lt;br&gt;
Fundamentally, &lt;strong&gt;the "Agent-first" paradigm shift itself was a nuclear bomb that filled in the vendors' moats.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the "Human-first" world, the cost of learning a new tool (learning cost) was the biggest barrier. We couldn't switch OSs or leave Office because relearning was a hassle for humans.&lt;br&gt;
But in the "Agent-first" world, it's the agent, not the human, that uses the tool. The agent adapts to the new environment without complaint, hits the API, and completes the task.&lt;/p&gt;

&lt;p&gt;By removing the human from the loop, switching costs have effectively become zero.&lt;br&gt;
If "today's strongest model" changes tomorrow, you can switch your entire workflow to the latest intelligence by rewriting a single line in a config file. Now that connection is standardized with MCP, that fluidity is at its peak.&lt;/p&gt;
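
&lt;p&gt;That "single line" is literal. In a runtime that treats the agent definition as data (a hypothetical TOML sketch; the field name is illustrative), swapping to the strongest model of the day is a one-line diff:&lt;/p&gt;

```toml
# Yesterday's workflow:
# model = "gpt-5"

# Today's: one line rewritten, same tools, same prompts.
model = "claude-sonnet-4-5"
```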

&lt;p&gt;That is precisely why vendors have no choice but to be desperate. If they don't forcibly tie users down with value-adds like "Skills" and "Ecosystems," they will become mere plumbing providers.&lt;/p&gt;

&lt;p&gt;Their desperation is the flip side of their fear of becoming "replaceable commodities."&lt;/p&gt;

&lt;p&gt;So, the reality in front of us—the pain of prompt engineering, model hallucinations, the battle against non-deterministic behavior—will still be there tomorrow.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>opensource</category>
      <category>discuss</category>
    </item>
    <item>
      <title>DIY GitHub Issue Bot — Just Your LLM API Key</title>
      <dc:creator>Masaaki Hirano</dc:creator>
      <pubDate>Fri, 05 Dec 2025 22:01:17 +0000</pubDate>
      <link>https://dev.to/fl4tlin3/diy-github-issue-bot-just-your-llm-api-key-3plf</link>
      <guid>https://dev.to/fl4tlin3/diy-github-issue-bot-just-your-llm-api-key-3plf</guid>
      <description>&lt;p&gt;You've seen those fancy AI bots that answer GitHub issues. Most of them require subscriptions, SaaS accounts, or complex setups.&lt;/p&gt;

&lt;p&gt;What if you could build one yourself with just your LLM API key?&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://github.com/perstack-ai/perstack" rel="noopener noreferrer"&gt;Perstack&lt;/a&gt;, you can. Copy two files, add one secret, done.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;See it in action:&lt;/strong&gt; &lt;a href="https://github.com/perstack-ai/perstack/issues/55" rel="noopener noreferrer"&gt;This issue&lt;/a&gt; was answered by the bot reading the actual codebase.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;📚 &lt;strong&gt;New to Perstack?&lt;/strong&gt; Check out the &lt;a href="https://docs.perstack.ai" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; and &lt;a href="https://docs.perstack.ai/getting-started" rel="noopener noreferrer"&gt;getting started guide&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What You Get
&lt;/h2&gt;

&lt;p&gt;An AI bot that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;👀 Reacts to show it's processing&lt;/li&gt;
&lt;li&gt;🔍 Actually reads your codebase (not just guessing)&lt;/li&gt;
&lt;li&gt;💬 Posts answers with activity logs&lt;/li&gt;
&lt;li&gt;💰 Costs only what you use (your LLM API)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Activity log example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;💭 I need to understand how the runtime state machine works...
📁 Listing: packages/runtime/src
📖 Reading: runtime-state-machine.ts
💭 Found the state transitions...
✅ Done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setup (5 Minutes)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Copy Files
&lt;/h3&gt;

&lt;p&gt;Copy from the &lt;a href="https://github.com/perstack-ai/perstack/tree/main/examples/github-issue-bot" rel="noopener noreferrer"&gt;example repo&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;your-repo/
├── .github/workflows/issue-bot.yml   ← workflow
└── scripts/checkpoint-filter.ts      ← formats output
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The workflow runs on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;New issue opened&lt;/li&gt;
&lt;li&gt;Comment containing &lt;code&gt;@perstack-issue-bot&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
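
&lt;p&gt;In GitHub Actions terms, those two triggers map to a standard &lt;code&gt;on:&lt;/code&gt; block plus a job-level &lt;code&gt;if:&lt;/code&gt; guard for the mention. A minimal sketch (the real job body lives in the example repo's &lt;code&gt;issue-bot.yml&lt;/code&gt;):&lt;/p&gt;

```yaml
on:
  issues:
    types: [opened]
  issue_comment:
    types: [created]

jobs:
  answer:
    runs-on: ubuntu-latest
    # Run on every new issue, but only on comments that mention the bot.
    if: github.event_name == 'issues' || contains(github.event.comment.body, '@perstack-issue-bot')
```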

&lt;h3&gt;
  
  
  Step 2: Add Secret
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Settings → Secrets → Actions → &lt;code&gt;ANTHROPIC_API_KEY&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That's it. Open an issue and watch it work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Customize It
&lt;/h2&gt;

&lt;p&gt;Want custom behavior? Here's the agent-first development workflow:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Create &lt;code&gt;perstack.toml&lt;/code&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="py"&gt;model&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"claude-sonnet-4-5"&lt;/span&gt;

&lt;span class="nn"&gt;[provider]&lt;/span&gt;
&lt;span class="py"&gt;providerName&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"anthropic"&lt;/span&gt;

&lt;span class="nn"&gt;[experts."my-issue-bot"]&lt;/span&gt;
&lt;span class="py"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Custom issue bot for my project"&lt;/span&gt;
&lt;span class="py"&gt;instruction&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"""
You are an issue bot for the Acme project.

## Rules
- Always check docs/ directory first
- Add labels: use `gh issue edit --add-label`
  - "bug" for bug reports
  - "feature" for feature requests  
  - "question" for questions
- If the issue is about authentication, mention @security-team
- Keep answers under 500 words
"""&lt;/span&gt;

&lt;span class="nn"&gt;[experts."my-issue-bot".skills."@perstack/base"]&lt;/span&gt;
&lt;span class="py"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"mcpStdioSkill"&lt;/span&gt;
&lt;span class="py"&gt;command&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"npx"&lt;/span&gt;
&lt;span class="py"&gt;packageName&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"@perstack/base"&lt;/span&gt;
&lt;span class="py"&gt;requiredEnv&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"GH_TOKEN"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"GITHUB_REPO"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"ISSUE_NUMBER"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Test Locally with &lt;code&gt;perstack start&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;Make sure you have &lt;a href="https://cli.github.com/" rel="noopener noreferrer"&gt;GitHub CLI&lt;/a&gt; installed and authenticated.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ANTHROPIC_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-key
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GH_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;gh auth token&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GITHUB_REPO&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;owner/repo
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ISSUE_NUMBER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;123

npx perstack start my-issue-bot &lt;span class="s2"&gt;"Answer issue #&lt;/span&gt;&lt;span class="nv"&gt;$ISSUE_NUMBER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This opens an interactive TUI where you can watch the bot think, read files, and generate answers in real-time. Tweak the instructions, run again, iterate.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Push When Happy
&lt;/h3&gt;

&lt;p&gt;Update the workflow to use your config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npx perstack run --config ./path/to/perstack.toml my-issue-bot "Answer issue&lt;/span&gt; &lt;span class="c1"&gt;#$ISSUE_NUMBER"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Push to your branch, and your customized bot is live.&lt;/p&gt;

&lt;p&gt;This is &lt;strong&gt;agent-first development&lt;/strong&gt; — define behavior in text, test interactively, deploy when ready. No code changes, just prompts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Why Agent-First Works
&lt;/h3&gt;

&lt;p&gt;The magic is in Perstack's runtime. Your Expert definition — just text in a TOML file — gets executed by the runtime, not compiled into code.&lt;/p&gt;

&lt;p&gt;This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Same behavior everywhere&lt;/strong&gt;: Local machine, CI, production — the Expert runs identically&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Iterate without rebuilding&lt;/strong&gt;: Change the prompt, run again, no compile step&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Portable&lt;/strong&gt;: Push to a branch, it just works&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The runtime handles everything: connecting to LLMs, managing tool calls, streaming events. You just define &lt;em&gt;what&lt;/em&gt; the Expert should do.&lt;/p&gt;

&lt;h3&gt;
  
  
  Skills &amp;amp; MCP
&lt;/h3&gt;

&lt;p&gt;Experts interact with the world through &lt;strong&gt;Skills&lt;/strong&gt; — MCP servers that expose tools.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;@perstack/base&lt;/code&gt; provides file ops (&lt;code&gt;readTextFile&lt;/code&gt;, &lt;code&gt;listDirectory&lt;/code&gt;), command execution (&lt;code&gt;exec&lt;/code&gt;), and more. The runtime spins up the MCP server, passes environment variables, and routes tool calls automatically.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[experts."my-bot".skills."@perstack/base"]&lt;/span&gt;
&lt;span class="py"&gt;requiredEnv&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"GH_TOKEN"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;  &lt;span class="c"&gt;# Runtime passes this to the MCP server&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Event Stream
&lt;/h3&gt;

&lt;p&gt;Everything is observable. The runtime emits JSON events to stdout:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"callTool"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"toolCall"&lt;/span&gt;&lt;span class="p"&gt;:{&lt;/span&gt;&lt;span class="nl"&gt;"toolName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"readTextFile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:{&lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"src/index.ts"&lt;/span&gt;&lt;span class="p"&gt;}}}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"completeRun"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"text"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"Here's the answer..."&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pipe this to a script for real-time UIs, logging, or integration with your systems. See &lt;a href="https://github.com/perstack-ai/perstack/blob/main/scripts/checkpoint-filter.ts" rel="noopener noreferrer"&gt;checkpoint-filter.ts&lt;/a&gt; for an example.&lt;/p&gt;
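
&lt;p&gt;A tiny filter in the spirit of &lt;code&gt;checkpoint-filter.ts&lt;/code&gt; only needs to parse one JSON object per line and switch on &lt;code&gt;type&lt;/code&gt;. A sketch, assuming only the two event shapes shown above; any other fields are guesses and get skipped:&lt;/p&gt;

```typescript
// Minimal event-stream filter sketch. Only the "callTool" and
// "completeRun" shapes shown above are assumed; everything else is ignored.
export function formatEvent(line: string): string | null {
  let event: any;
  try {
    event = JSON.parse(line);
  } catch {
    return null; // ignore non-JSON noise on stdout
  }
  if (event.type === "callTool") {
    return `🔧 ${event.toolCall.toolName} ${JSON.stringify(event.toolCall.args)}`;
  }
  if (event.type === "completeRun") {
    return `✅ ${event.text}`;
  }
  return null;
}

// Demo on the two sample events from above:
const sample = [
  '{"type":"callTool","toolCall":{"toolName":"readTextFile","args":{"path":"src/index.ts"}}}',
  '{"type":"completeRun","text":"Here is the answer..."}',
];
for (const line of sample) {
  const out = formatEvent(line);
  if (out) console.log(out);
}
```

&lt;p&gt;In a real pipeline you would replace the demo array with a line-by-line reader over stdin and pipe the runtime's output into it.&lt;/p&gt;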

&lt;h2&gt;
  
  
  What's Perstack?
&lt;/h2&gt;

&lt;p&gt;Think npm for AI agents. Define modular agents, publish to a registry, compose them like packages.&lt;/p&gt;

&lt;p&gt;No vendor lock-in. No subscriptions. Just your code and your API key.&lt;br&gt;
I'm building this — feedback welcome!&lt;/p&gt;

&lt;h2&gt;
  
  
  Get started
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/perstack-ai/perstack/tree/main/examples/github-issue-bot" rel="noopener noreferrer"&gt;GitHub Issue Bot Example&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/perstack-ai/perstack" rel="noopener noreferrer"&gt;Perstack GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.perstack.ai" rel="noopener noreferrer"&gt;Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>github</category>
      <category>automation</category>
      <category>perstack</category>
    </item>
    <item>
      <title>What is your motivation to use WebComponents?</title>
      <dc:creator>Masaaki Hirano</dc:creator>
      <pubDate>Thu, 25 Oct 2018 05:04:51 +0000</pubDate>
      <link>https://dev.to/fl4tlin3/what-is-your-motivation-to-use-webcomponents-2129</link>
      <guid>https://dev.to/fl4tlin3/what-is-your-motivation-to-use-webcomponents-2129</guid>
      <description>&lt;p&gt;If you have already started to write WebComponents code in production/personal use, please let me know what is your motivation to use it.&lt;/p&gt;

&lt;p&gt;Because it's a standard? Because you don't want to use React/Vue? Or some other reason?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>javascript</category>
      <category>webcomponents</category>
    </item>
  </channel>
</rss>
