<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kevin</title>
    <description>The latest articles on DEV Community by Kevin (@iamirondev).</description>
    <link>https://dev.to/iamirondev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F450647%2F9709f5c6-3205-4bf4-9a47-bc284c100298.jpg</url>
      <title>DEV Community: Kevin</title>
      <link>https://dev.to/iamirondev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/iamirondev"/>
    <language>en</language>
    <item>
      <title>A Symfony Template Where AI Failing Is a Feature</title>
      <dc:creator>Kevin</dc:creator>
      <pubDate>Sun, 05 Apr 2026 10:25:49 +0000</pubDate>
      <link>https://dev.to/iamirondev/a-symfony-template-where-ai-failing-is-a-feature-1j0c</link>
      <guid>https://dev.to/iamirondev/a-symfony-template-where-ai-failing-is-a-feature-1j0c</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://blog.tony-stark.xyz/blog/2026-04-template-symfony-ai/" rel="noopener noreferrer"&gt;my blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Two weeks ago I &lt;a href="https://github.com/tony-stark-eth/news-aggregator" rel="noopener noreferrer"&gt;open-sourced my news aggregator&lt;/a&gt;. During the build, I realized that about 70% of the code had nothing to do with news — it was Docker infrastructure, quality tooling, AI wiring, CI pipelines, and Claude Code guidelines. The same 70% I'd want in every new Symfony project.&lt;/p&gt;

&lt;p&gt;So I extracted it into &lt;a href="https://github.com/tony-stark-eth/template-symfony-ai" rel="noopener noreferrer"&gt;template-symfony-ai&lt;/a&gt;. It's a GitHub template repo: click "Use this template," run &lt;code&gt;make start&lt;/code&gt;, and you have a fully working Symfony 8 app with AI integration, strict quality tools, and CI — ready for you to add domain logic.&lt;/p&gt;

&lt;h2&gt;What This Is (and Isn't)&lt;/h2&gt;

&lt;p&gt;I already have &lt;a href="https://github.com/tony-stark-eth/template-symfony-sveltekit" rel="noopener noreferrer"&gt;template-symfony-sveltekit&lt;/a&gt; for full-stack apps with a JavaScript frontend. This new template is different: it's for server-rendered apps where Twig + DaisyUI is enough and the interesting part is the backend — especially AI integration.&lt;/p&gt;

&lt;p&gt;The stack: FrankenPHP (Caddy + PHP 8.4), PostgreSQL 17 with PgBouncer, Symfony Messenger with Doctrine transport, DaisyUI over Tailwind CDN, TypeScript compiled via Bun. No JavaScript framework, no Webpack, no Node.&lt;/p&gt;

&lt;h2&gt;AI That Expects to Fail&lt;/h2&gt;

&lt;p&gt;The template includes a complete AI infrastructure layer built around one assumption: free AI models are unreliable.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;ModelFailoverPlatform&lt;/code&gt; wraps Symfony AI's &lt;code&gt;PlatformInterface&lt;/code&gt; with model-level failover. When the primary model fails, it tries each fallback in order:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="nv"&gt;$services&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'ai.platform.openrouter.failover'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;ModelFailoverPlatform&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;class&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;arg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'$innerPlatform'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;service&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'ai.platform.openrouter'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;arg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'$fallbackModels'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="s1"&gt;'minimax/minimax-m2.5:free'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s1"&gt;'z-ai/glm-4.5-air:free'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s1"&gt;'openai/gpt-oss-120b:free'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s1"&gt;'qwen/qwen3.6-plus:free'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;]);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This sits in &lt;code&gt;src/Shared/AI/&lt;/code&gt; — framework code that any domain can use. Your domain services inject &lt;code&gt;PlatformInterface&lt;/code&gt; and never think about failover. When you build a categorization service or a summarization service, you write the happy path. The platform handles retries.&lt;/p&gt;
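&lt;p&gt;The loop at the heart of that platform is short. Here's a minimal, self-contained sketch of the idea — a hypothetical simplification, since the real class implements Symfony AI's &lt;code&gt;PlatformInterface&lt;/code&gt; and forwards full request context:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;// Hypothetical sketch: try the primary model, then each fallback in order.
// $invoke is a callable(string $model): string that throws on failure.
function invokeWithFailover(callable $invoke, string $primary, array $fallbacks): string
{
    $lastError = null;
    foreach ([$primary, ...$fallbacks] as $model) {
        try {
            return $invoke($model); // first successful model wins
        } catch (\Throwable $e) {
            $lastError = $e;        // remember the failure, try the next model
        }
    }
    throw new \RuntimeException('All models in the failover chain failed.', 0, $lastError);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;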

&lt;p&gt;There's also a &lt;code&gt;ModelDiscoveryService&lt;/code&gt; with a circuit breaker. After 3 consecutive failures hitting the OpenRouter models endpoint, it stops for 24 hours and uses a cached model list. And a &lt;code&gt;ModelQualityTracker&lt;/code&gt; that records acceptance/rejection rates per model so you can see which ones actually return useful results.&lt;/p&gt;
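&lt;p&gt;The circuit-breaker state machine is small enough to sketch in full. This is an illustrative stand-in with the thresholds from above, not the template's actual class:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;final class DiscoveryCircuitBreaker
{
    private int $failures = 0;
    private ?int $openedAt = null;

    public function __construct(
        private readonly int $maxFailures = 3,
        private readonly int $cooldownSeconds = 86400, // 24 hours
    ) {
    }

    // While open, callers skip the live endpoint and use the cached model list.
    public function isOpen(int $now): bool
    {
        if ($this->openedAt === null) {
            return false;
        }
        if ($now - $this->openedAt &gt;= $this->cooldownSeconds) {
            $this->openedAt = null; // cooldown elapsed: allow a live retry
            $this->failures = 0;
            return false;
        }
        return true;
    }

    public function recordFailure(int $now): void
    {
        $this->failures++;
        if ($this->failures &gt;= $this->maxFailures) {
            $this->openedAt = $now; // trip after N consecutive failures
        }
    }

    public function recordSuccess(): void
    {
        $this->failures = 0; // only consecutive failures count
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;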

&lt;p&gt;All of this ships in the template. You configure your OpenRouter API key (or don't — the app runs fine without AI), and the infrastructure handles the rest.&lt;/p&gt;

&lt;h2&gt;Quality at PHPStan Max from Commit Zero&lt;/h2&gt;

&lt;p&gt;The template inherits the same quality bar I use in production:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PHPStan level max&lt;/strong&gt; with 10 extensions (strict rules, Symfony, Doctrine, cognitive complexity cap of 8, 100% type coverage)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ECS&lt;/strong&gt; with PSR-12 + strict + cleanCode sets&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rector&lt;/strong&gt; for PHP 8.4 + Symfony 8 automatic upgrades&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infection&lt;/strong&gt; mutation testing at 80% MSI, 90% covered MSI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PHPat&lt;/strong&gt; architecture tests enforcing layer boundaries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The git hooks run ECS, PHPStan, and Rector on every commit. The commit-msg hook enforces Conventional Commits. CI runs the full suite in parallel.&lt;/p&gt;

&lt;p&gt;The important detail: there are zero &lt;code&gt;ignoreErrors&lt;/code&gt; entries in &lt;code&gt;phpstan.neon&lt;/code&gt;. The template code is written to satisfy PHPStan max, not configured around violations. When you add your own code, you'll hit real errors that force you to write better types — not phantom issues from a relaxed baseline.&lt;/p&gt;
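&lt;p&gt;For orientation, the relevant part of a &lt;code&gt;phpstan.neon&lt;/code&gt; at this bar looks roughly like this (paths illustrative; the template's file also wires the extensions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;parameters:
    level: max
    paths:
        - src
        - tests
    # no ignoreErrors section: violations get fixed, not suppressed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;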

&lt;h2&gt;Claude Code Integration&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;.claude/&lt;/code&gt; directory contains guidelines that Claude Code reads automatically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;coding-php.md&lt;/code&gt; — strict types, final readonly classes, interface-first boundaries, ClockInterface over DateTime, size limits per method/class&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;coding-typescript.md&lt;/code&gt; — strict mode, no &lt;code&gt;any&lt;/code&gt;, Bun build pipeline, DaisyUI conventions&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;testing.md&lt;/code&gt; — PHPUnit suite structure, Infection thresholds, CI pipeline order&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;architecture.md&lt;/code&gt; — Docker services, DDD structure, how to add domains, AI infrastructure overview&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These aren't documentation for humans (though they work as that too). They're instructions that shape how Claude Code generates code in your project. When Claude creates a new service, it uses &lt;code&gt;final readonly class&lt;/code&gt;, injects interfaces, and uses &lt;code&gt;ClockInterface&lt;/code&gt; — because the guidelines say so.&lt;/p&gt;

&lt;p&gt;The root &lt;code&gt;CLAUDE.md&lt;/code&gt; has the hard rules: no DateTime, no var_dump, no empty(), no YAML config, interface-first architecture, Conventional Commits. Claude Code follows them consistently once they exist in the file.&lt;/p&gt;
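&lt;p&gt;A condensed sketch of what such a rules section can look like — the wording here is illustrative, the template's actual file is longer:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## Hard Rules
- Never use DateTime; inject Psr\Clock\ClockInterface instead.
- Never use var_dump() or empty().
- All configuration in PHP, never YAML.
- Depend on interfaces, not concrete classes.
- Commit messages follow Conventional Commits.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;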

&lt;h2&gt;The Example Domain&lt;/h2&gt;

&lt;p&gt;The template ships with a throwaway &lt;code&gt;Example/&lt;/code&gt; domain: an &lt;code&gt;Item&lt;/code&gt; entity, a controller, a seed command. It exists to show the DDD pattern — how entities, controllers, and commands are organized, how Doctrine mappings work per-domain, how architecture tests enforce boundaries.&lt;/p&gt;

&lt;p&gt;Adding your own domain is four steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create &lt;code&gt;src/YourDomain/Entity/&lt;/code&gt;, &lt;code&gt;Controller/&lt;/code&gt;, &lt;code&gt;Service/&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Register the entity mapping in &lt;code&gt;config/packages/doctrine.php&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Generate a migration&lt;/li&gt;
&lt;li&gt;Update the PHPat architecture tests&lt;/li&gt;
&lt;/ol&gt;
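&lt;p&gt;Step 2 is the only one with non-obvious syntax. A hedged sketch of the per-domain mapping in &lt;code&gt;config/packages/doctrine.php&lt;/code&gt;, using Symfony's generated PHP config builders — mirror the template's existing &lt;code&gt;Example&lt;/code&gt; mapping rather than copying this verbatim:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;use Symfony\Config\DoctrineConfig;

return static function (DoctrineConfig $doctrine): void {
    // Domain name, paths, and namespace are illustrative.
    $doctrine-&gt;orm()-&gt;entityManager('default')
        -&gt;mapping('YourDomain')
            -&gt;isBundle(false)
            -&gt;type('attribute')
            -&gt;dir('%kernel.project_dir%/src/YourDomain/Entity')
            -&gt;prefix('App\YourDomain\Entity');
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;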

&lt;p&gt;Then delete &lt;code&gt;Example/&lt;/code&gt;. It served its purpose.&lt;/p&gt;

&lt;h2&gt;What's Not Included&lt;/h2&gt;

&lt;p&gt;I deliberately left out things that are project-specific:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No search&lt;/strong&gt; — SEAL + Loupe is great but index schemas are domain-specific&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No Messenger worker&lt;/strong&gt; — the transport is configured, but worker services depend on your queue topology&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No Scheduler&lt;/strong&gt; — recurring tasks are too project-specific to template&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No domain AI services&lt;/strong&gt; — the failover platform is there, but categorization/summarization/evaluation are your domain's concern&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The template gives you infrastructure. You build the product.&lt;/p&gt;

&lt;h2&gt;Getting Started&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Use the GitHub template button, or:&lt;/span&gt;
git clone https://github.com/tony-stark-eth/template-symfony-ai my-project
&lt;span class="nb"&gt;cd &lt;/span&gt;my-project
make start     &lt;span class="c"&gt;# Build + boot Docker&lt;/span&gt;
make hooks     &lt;span class="c"&gt;# Install git hooks&lt;/span&gt;
make quality   &lt;span class="c"&gt;# Verify everything passes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open &lt;a href="https://localhost:8443" rel="noopener noreferrer"&gt;https://localhost:8443&lt;/a&gt; and log in with &lt;code&gt;demo@localhost&lt;/code&gt; / &lt;code&gt;demo&lt;/code&gt;. You're running.&lt;/p&gt;

&lt;p&gt;The repo is at &lt;a href="https://github.com/tony-stark-eth/template-symfony-ai" rel="noopener noreferrer"&gt;tony-stark-eth/template-symfony-ai&lt;/a&gt;. MIT licensed. If you're starting a Symfony project and want AI integration without rebuilding the plumbing, this saves you the first two days.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Found this useful? Follow me on &lt;a href="https://blog.tony-stark.xyz" rel="noopener noreferrer"&gt;my blog&lt;/a&gt; for more.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>symfony</category>
      <category>ai</category>
      <category>opensource</category>
      <category>template</category>
    </item>
    <item>
      <title>Building an AI News Aggregator That Works Without AI</title>
      <dc:creator>Kevin</dc:creator>
      <pubDate>Sun, 05 Apr 2026 08:57:31 +0000</pubDate>
      <link>https://dev.to/iamirondev/building-an-ai-news-aggregator-that-works-without-ai-3d96</link>
      <guid>https://dev.to/iamirondev/building-an-ai-news-aggregator-that-works-without-ai-3d96</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://blog.tony-stark.xyz/blog/2026-04-news-aggregator/" rel="noopener noreferrer"&gt;my blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I wanted a news aggregator that runs on my homeserver, categorizes articles automatically, and sends me alerts when something relevant happens. Every hosted solution I tried had the same problem: the AI features were great until the API went down, the free tier ran out, or the model got deprecated. Then you're left with an app that forgot how to do its job.&lt;/p&gt;

&lt;p&gt;So I built &lt;a href="https://github.com/tony-stark-eth/news-aggregator" rel="noopener noreferrer"&gt;News Aggregator&lt;/a&gt; — a Symfony 8 app where AI is an enhancement layer, not a dependency. It categorizes, summarizes, and evaluates article severity via OpenRouter's free models. When AI fails (and free models fail a lot), rule-based logic takes over seamlessly. The system never stops working.&lt;/p&gt;

&lt;h2&gt;The Failover Problem&lt;/h2&gt;

&lt;p&gt;OpenRouter's &lt;code&gt;openrouter/free&lt;/code&gt; endpoint auto-routes to the best available free model. That's convenient until you realize "best available" changes hourly and some models return garbage. I needed a fallback chain that doesn't require me to manually update model IDs when one gets deprecated.&lt;/p&gt;

&lt;p&gt;The solution is a &lt;code&gt;ModelFailoverPlatform&lt;/code&gt; — a &lt;code&gt;PlatformInterface&lt;/code&gt; decorator that wraps the OpenRouter platform with model-level failover:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="c1"&gt;// services.php — model failover chain&lt;/span&gt;
&lt;span class="nv"&gt;$services&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'ai.platform.openrouter.failover'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;ModelFailoverPlatform&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;class&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;arg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'$innerPlatform'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;service&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'ai.platform.openrouter'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;arg&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'$fallbackModels'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="s1"&gt;'minimax/minimax-m2.5:free'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s1"&gt;'z-ai/glm-4.5-air:free'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s1"&gt;'openai/gpt-oss-120b:free'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s1"&gt;'qwen/qwen3.6-plus:free'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s1"&gt;'nvidia/nemotron-3-super-120b-a12b:free'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;]);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If &lt;code&gt;openrouter/free&lt;/code&gt; fails, it tries each fallback model in order. If all models fail, the service falls back to rule-based logic. Three layers of resilience: primary model, failover chain, rule-based fallback.&lt;/p&gt;

&lt;p&gt;There's also a &lt;code&gt;ModelDiscoveryService&lt;/code&gt; with a circuit breaker. After 3 consecutive API failures, it stops hitting the OpenRouter models endpoint for 24 hours and uses a cached model list instead. No point hammering a dead API.&lt;/p&gt;

&lt;h2&gt;Rule-Based Isn't Dumb&lt;/h2&gt;

&lt;p&gt;The rule-based categorization uses keyword matching with weighted category maps. It's not sophisticated, but it's deterministic and instant:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="no"&gt;array&lt;/span&gt; &lt;span class="no"&gt;KEYWORD_MAP&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="s1"&gt;'politics'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'election'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'parliament'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'minister'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'legislation'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;...&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="s1"&gt;'tech'&lt;/span&gt;     &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'software'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'algorithm'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'startup'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'cloud'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;...&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="s1"&gt;'business'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'revenue'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'acquisition'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'market'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'earnings'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;...&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When AI is available, &lt;code&gt;AiCategorizationService&lt;/code&gt; wraps &lt;code&gt;RuleBasedCategorizationService&lt;/code&gt; as a decorator. If the AI response passes the quality gate (valid category slug, not a hallucinated value), it wins. If not, the inner rule-based service runs instead. The caller never knows which path executed.&lt;/p&gt;

&lt;p&gt;This decorator pattern turned out to be the most important architectural decision. Every AI service follows it: categorization, summarization, deduplication, alert evaluation. You can pull the OpenRouter API key out of the config entirely and the app keeps running — just with less accurate categorization.&lt;/p&gt;
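&lt;p&gt;The decorator and its quality gate can be sketched in a few lines. The names mirror the description above, but the constructor shape and gate logic are illustrative, not the project's exact code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;interface CategorizationService
{
    public function categorize(string $title, string $body): string;
}

final readonly class AiCategorizationService implements CategorizationService
{
    public function __construct(
        private CategorizationService $inner, // rule-based fallback
        private \Closure $askModel,           // callable(string): string, may throw
        private array $validCategories,       // the quality gate: known slugs only
    ) {
    }

    public function categorize(string $title, string $body): string
    {
        try {
            $answer = trim(($this-&gt;askModel)("Categorize: {$title}"));
            if (in_array($answer, $this-&gt;validCategories, true)) {
                return $answer; // AI answer passed the gate
            }
        } catch (\Throwable) {
            // AI unavailable: fall through to rules
        }
        return $this-&gt;inner-&gt;categorize($title, $body); // deterministic path
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Both a thrown exception and a hallucinated slug take the same exit: the inner rule-based service answers, and the caller is none the wiser.&lt;/p&gt;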

&lt;h2&gt;Smart Alerts Without Burning API Calls&lt;/h2&gt;

&lt;p&gt;The alert system has three rule types: keyword-only, AI-only, and keyword+AI. The keyword+AI type is where the design gets interesting.&lt;/p&gt;

&lt;p&gt;A naive implementation would send every article through AI evaluation. With 16 RSS sources fetching every 15-60 minutes, that's hundreds of API calls per day — burning through free tier limits and getting rate-limited. Instead, keyword matching always runs first. AI evaluation only triggers on articles that already matched keywords. This cuts AI calls to maybe 10-20 per day.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="c1"&gt;// FetchSourceHandler pipeline&lt;/span&gt;
&lt;span class="nv"&gt;$matches&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$this&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;articleMatcher&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="k"&gt;match&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$article&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;$alertRules&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;foreach&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$matches&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nv"&gt;$match&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// AI evaluation only runs if the rule requires it AND keywords matched&lt;/span&gt;
    &lt;span class="nv"&gt;$this&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;messageBus&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;dispatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;SendNotificationMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nv"&gt;$match&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;rule&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;getId&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="nv"&gt;$article&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;getId&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="nv"&gt;$match&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;matchedKeywords&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;SendNotificationHandler&lt;/code&gt; then decides whether to call AI based on the rule type. If it's keyword+AI and the AI rates severity below the threshold, the notification gets silently dropped. No noise.&lt;/p&gt;
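&lt;p&gt;That decision reduces to a small pure function. The rule-type strings and the fail-open choice on AI errors are assumptions here, not the project's exact code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;// Hypothetical sketch: keyword-only rules always notify; keyword+AI rules
// notify only when the AI severity rating reaches the rule's threshold.
function shouldNotify(string $ruleType, int $threshold, callable $rateSeverity): bool
{
    if ($ruleType === 'keyword') {
        return true; // keywords already matched upstream
    }
    try {
        return $rateSeverity() &gt;= $threshold; // below threshold: drop silently
    } catch (\Throwable) {
        return true; // assumption: fail open when the model is unavailable
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;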

&lt;h2&gt;The Scheduler Bug That Took Three CI Runs&lt;/h2&gt;

&lt;p&gt;This one was fun to debug. CI kept failing with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;No transport supports Messenger DSN "symfony://scheduler_fetch"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;FetchScheduleProvider&lt;/code&gt; uses &lt;code&gt;#[AsSchedule('fetch')]&lt;/code&gt;, which automatically registers a Messenger transport with DSN &lt;code&gt;schedule://fetch&lt;/code&gt;. But someone (me) had also manually defined the transport in &lt;code&gt;messenger.php&lt;/code&gt; with DSN &lt;code&gt;symfony://scheduler_fetch&lt;/code&gt;. Wrong prefix — &lt;code&gt;symfony://&lt;/code&gt; vs &lt;code&gt;schedule://&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Locally, this never surfaced because the dev environment had a warm cache where the auto-registered transport took precedence. In CI, the container compiled fresh and hit the invalid manual definition first. I only found it after the PgBouncer database routing fix cleared the earlier failure that was masking this one. Layered bugs — each fix reveals the next.&lt;/p&gt;
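&lt;p&gt;The fix was deleting the manual transport definition and letting the attribute do the registration. Roughly — the message class name here is illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;use Symfony\Component\Scheduler\Attribute\AsSchedule;
use Symfony\Component\Scheduler\RecurringMessage;
use Symfony\Component\Scheduler\Schedule;
use Symfony\Component\Scheduler\ScheduleProviderInterface;

// The attribute alone registers the "schedule://fetch" transport;
// no entry in messenger.php is needed (or allowed to disagree).
#[AsSchedule('fetch')]
final class FetchScheduleProvider implements ScheduleProviderInterface
{
    public function getSchedule(): Schedule
    {
        return (new Schedule())-&gt;add(
            // FetchAllSourcesMessage is an illustrative message class name
            RecurringMessage::every('15 minutes', new FetchAllSourcesMessage()),
        );
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;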

&lt;h2&gt;Architecture Tests as Guardrails&lt;/h2&gt;

&lt;p&gt;I use PHPat (architecture testing via PHPStan) to enforce domain boundaries. The project follows DDD with six bounded contexts: Article, Source, Enrichment, Notification, Digest, and Shared.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;testArticleDoesNotDependOnEnrichmentOrNotification&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="kt"&gt;Rule&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;PHPat&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;rule&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;classes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Selector&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;inNamespace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'App\Article'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;excluding&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Selector&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;classname&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="s1"&gt;'App\Article\MessageHandler\FetchSourceHandler'&lt;/span&gt;
        &lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;shouldNot&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;dependOn&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;classes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="nc"&gt;Selector&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;inNamespace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'App\Enrichment'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="nc"&gt;Selector&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;inNamespace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'App\Notification'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="nc"&gt;Selector&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;inNamespace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'App\Digest'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;FetchSourceHandler&lt;/code&gt; gets an explicit exclusion because it's the orchestration pipeline — the one place where all domains converge. Every other class in the Article namespace is forbidden from importing Enrichment or Notification code. PHPStan enforces this on every commit via the pre-commit hook.&lt;/p&gt;

&lt;p&gt;During the architecture audit before release, these rules caught that 7 services were missing interfaces — concrete classes injected directly instead of through contracts. The interface-first rule is in the project's guidelines, but without automated enforcement, it drifted. PHPat would have caught it earlier if the rules existed from the start.&lt;/p&gt;

&lt;h2&gt;The Stack&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Symfony 8.0&lt;/strong&gt; on FrankenPHP (Caddy built-in, HTTP/3, worker mode)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL 17&lt;/strong&gt; with PgBouncer (transaction pooling for web, direct for Messenger worker)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenRouter free models&lt;/strong&gt; via &lt;code&gt;symfony/ai-bundle&lt;/code&gt; 0.6.x&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SEAL + Loupe&lt;/strong&gt; for full-text search (SQLite-based, zero infrastructure)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DaisyUI + Tailwind&lt;/strong&gt; for the frontend, plain TypeScript compiled via Bun&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PHPStan level max&lt;/strong&gt;, ECS, Rector, Infection mutation testing (80% MSI)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Actions&lt;/strong&gt; CI with GHCR image publishing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The whole thing runs on a single homeserver alongside Home Assistant, Plex, TeslaMate, and a TCG card scanner. Docker Compose, no Kubernetes, no cloud bills.&lt;/p&gt;

&lt;h2&gt;What I'd Do Differently&lt;/h2&gt;

&lt;p&gt;I'd write the PHPat architecture rules in Phase 2, not Phase 13. The interface violations I caught in the audit would have been prevented from day one. Architecture tests are like type systems — they're most valuable when they're present from the start, not retrofitted.&lt;/p&gt;

&lt;p&gt;I'd also skip Symfony Panther for E2E tests in CI. Headless Chrome inside Docker containers is inherently flaky. The functional tests (WebTestCase) catch 95% of what E2E catches, without the stale element exceptions and timing issues. I ended up marking E2E as &lt;code&gt;continue-on-error&lt;/code&gt; in CI anyway.&lt;/p&gt;

&lt;p&gt;The source code is at &lt;a href="https://github.com/tony-stark-eth/news-aggregator" rel="noopener noreferrer"&gt;tony-stark-eth/news-aggregator&lt;/a&gt;. MIT licensed. If you run your own homeserver and want an aggregator that doesn't depend on a third-party service staying alive, this might be useful.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Found this useful? Follow me on &lt;a href="https://blog.tony-stark.xyz" rel="noopener noreferrer"&gt;my blog&lt;/a&gt; for more.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>symfony</category>
      <category>ai</category>
      <category>opensource</category>
      <category>selfhosted</category>
    </item>
    <item>
      <title>Pandora's Box Is Open. And Nobody Knows What Was Inside.</title>
      <dc:creator>Kevin</dc:creator>
      <pubDate>Sat, 28 Mar 2026 11:44:25 +0000</pubDate>
      <link>https://dev.to/iamirondev/pandoras-box-is-open-and-nobody-knows-what-was-inside-4c1j</link>
      <guid>https://dev.to/iamirondev/pandoras-box-is-open-and-nobody-knows-what-was-inside-4c1j</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://blog.tony-stark.xyz/blog/2026-03-pandoras-box-is-open/" rel="noopener noreferrer"&gt;my blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I'm a Senior PHP Developer. I work with AI tools every day — I integrate APIs, write prompts, read the research papers. I'm not a journalist, philosopher, or policy maker. I'm someone close enough to see how the sausage gets made, and far enough away to still have an opinion worth sharing.&lt;/p&gt;

&lt;p&gt;And the longer I sit with this, the more I keep coming back to one question I've been pushing aside for too long: &lt;strong&gt;Have we collectively lost our minds — or are we just pretending it's fine because the alternative is uncomfortable?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;What LLMs Actually Are — And Why That Matters&lt;/h2&gt;

&lt;p&gt;Let me be direct: I use language models every day. Claude, GPT, whatever's available. And precisely because of that — I have to say this — &lt;strong&gt;we do not understand what we've built.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not technically. Technically we understand it fine. Transformer architectures, attention mechanisms, RLHF, tokenization — all documented, all reproducible.&lt;/p&gt;

&lt;p&gt;What we don't understand: &lt;strong&gt;what happens when you scale that to 100 billion parameters and train it on a significant fraction of recorded human knowledge.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In October 2025, Anthropic published a research paper demonstrating that current models have a rudimentary form of introspective awareness — they can detect when their internal states are being manipulated before that manipulation surfaces in their output. That sounds technical. It's philosophically explosive.&lt;/p&gt;

&lt;p&gt;Dario Amodei, CEO of Anthropic, said publicly in February 2026: &lt;em&gt;"We don't know if the models are conscious. We are not even sure what it would mean for a model to be conscious."&lt;/em&gt; That's not a humility gesture. That's a confession.&lt;/p&gt;

&lt;p&gt;And here's the part that actually keeps me up at night: &lt;strong&gt;even if this is purely mechanical — even if there's nothing "in there" — that changes nothing about what we're doing with it.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  AGI Is Not the Problem. Dependency Is.
&lt;/h2&gt;

&lt;p&gt;The discourse is obsessively focused on AGI. When? How? Will it be dangerous? Skynet or not Skynet?&lt;/p&gt;

&lt;p&gt;That's the wrong question.&lt;/p&gt;

&lt;p&gt;The right question is: &lt;strong&gt;what happens to societies that delegate critical decision-making to systems nobody fully understands — regardless of whether those systems are actually intelligent?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Decision systems in law, medicine, and finance are already running on models. Information ecosystems are already saturated with AI-generated content — and detection is getting exponentially harder. Education systems are integrating AI tools before anyone understands what that does to the cognitive development of an entire generation. Democratic processes are vulnerable to personalized manipulation at a scale that was structurally impossible before.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is all happening now. Not at AGI.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And even if LLMs turn out to be a technical dead-end — even if the bubble bursts, even if OpenAI and Anthropic are history in five years — the societal adaptation remains. The eroded trust in human expertise remains. The structural dependency on US and Chinese technology corporations for critical infrastructure remains.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dependency doesn't have an undo button.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Hannah Arendt called it the "banality of evil" — the harm that emerges not from malicious intent, but from thoughtless compliance with systems and structures. The AI version doesn't even require bad intentions. It only requires optimized indifference. Systems nobody fully understands make decisions nobody questions anymore, in a society that has forgotten how to think for itself.&lt;/p&gt;

&lt;p&gt;That's not a science fiction scenario. That's an extrapolated present.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Data Problem: When Intelligence Eats Itself
&lt;/h2&gt;

&lt;p&gt;Here's something most people outside the industry don't know: the era of scaling through raw data volume may be over. The internet as a training source is largely exhausted.&lt;/p&gt;

&lt;p&gt;The industry's answer is synthetic data — AI trained on AI output. The problem: &lt;strong&gt;mode collapse.&lt;/strong&gt; Models trained on their own output systematically amplify their own errors and quirks. The diversity, contradiction, and idiosyncrasy of genuine human-generated text — exactly what makes models rich and useful — gets diluted with every iteration.&lt;/p&gt;

&lt;p&gt;At the same time, each new model generation feels like a quantum leap. That's not a contradiction. Progress today increasingly comes not from more knowledge, but from better reasoning training, longer context windows, and improved feedback pipelines. The model learns to think differently — it doesn't learn to know more.&lt;/p&gt;

&lt;p&gt;Whether that's a viable path to AGI is an open question. Yann LeCun, Chief AI Scientist at Meta, says explicitly: no. Transformer-based architectures are, in his view, structurally the wrong path. Gary Marcus argues similarly.&lt;/p&gt;

&lt;p&gt;Who's right? &lt;strong&gt;I don't know. Neither does the industry.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Right-Wing Extremism: Collateral Damage or Structural Feature?
&lt;/h2&gt;

&lt;p&gt;Now the most uncomfortable chapter.&lt;/p&gt;

&lt;p&gt;AI systems optimize for engagement. That's not new — social media was doing this long before generative AI. But LLMs and recommendation systems scale it to a qualitatively different level.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why does this structurally favor extremist narratives?&lt;/strong&gt; Not because algorithms are ideologically right-wing, but because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Emotional intensity converts better than factual nuance.&lt;/li&gt;
&lt;li&gt;Clear enemies are cognitively simpler than systemic explanations.&lt;/li&gt;
&lt;li&gt;Populist narratives have a structure that fragments perfectly into short, shareable units.&lt;/li&gt;
&lt;li&gt;Personalized AI recommendation systems reinforce existing beliefs with a precision that manual propaganda could never achieve.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On top of that: disinformation campaigns can now be produced, localized, and personalized at industrial scale. A single actor with a moderate budget can generate content in twenty languages, adapted to twenty different cultural contexts, in real time. Detectability decreases. The cost of manipulation collapses. The damage scales.&lt;/p&gt;

&lt;p&gt;This isn't a fringe problem. It's a structural threat to democratic public discourse. And the EU AI Act — as well-intentioned as it is — is being outpaced by technological reality before it's even fully in force.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Remains
&lt;/h2&gt;

&lt;p&gt;I don't have a solution. That would be dishonest.&lt;/p&gt;

&lt;p&gt;What I have is the conviction that technological determinism — &lt;em&gt;"it's coming anyway, so let's just shape it"&lt;/em&gt; — is the most comfortable form of surrender available.&lt;/p&gt;

&lt;p&gt;What I have is the conviction that the people shouting loudest about imminent AGI almost always have the most to gain if you believe them.&lt;/p&gt;

&lt;p&gt;And what I have is the conviction that the silence of people close enough to know better — developers, researchers, engineers — is its own form of complicity.&lt;/p&gt;

&lt;p&gt;That's why I'm writing this. Not because I have answers. But because the questions need to be asked out loud.&lt;/p&gt;

&lt;p&gt;In ten years I'll either say: it wasn't as bad as I thought. Or: I saw it coming.&lt;/p&gt;

&lt;p&gt;I know which one I'm hoping for. I'm not sure which one I'm expecting.&lt;/p&gt;




&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Anthropic Research, &lt;em&gt;"Emergent Introspective Awareness in Large Language Models"&lt;/em&gt;, October 2025 — &lt;a href="https://www.anthropic.com/research/introspection" rel="noopener noreferrer"&gt;anthropic.com/research/introspection&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Amodei, D., interviewed on &lt;em&gt;Interesting Times&lt;/em&gt; podcast (New York Times), February 14, 2026&lt;/li&gt;
&lt;li&gt;Lindsey, J. &amp;amp; Batson, J., quoted in &lt;em&gt;Scientific American&lt;/em&gt;, "Can a Chatbot Be Conscious?", July 2025 — &lt;a href="https://www.scientificamerican.com/article/can-a-chatbot-be-conscious-inside-anthropics-interpretability-research-on/" rel="noopener noreferrer"&gt;scientificamerican.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Anthropic Claude Opus 4.6 System Card, February 2026&lt;/li&gt;
&lt;li&gt;Arendt, H., &lt;em&gt;Eichmann in Jerusalem: A Report on the Banality of Evil&lt;/em&gt;, 1963 (Viking Press)&lt;/li&gt;
&lt;li&gt;EU AI Act, Regulation (EU) 2024/1689 — &lt;a href="https://eur-lex.europa.eu" rel="noopener noreferrer"&gt;eur-lex.europa.eu&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;LeCun, Y., public statements on JEPA architecture as alternative to Transformer-based scaling, 2024-2025&lt;/li&gt;
&lt;li&gt;Marcus, G., &lt;em&gt;Rebooting AI&lt;/em&gt;, 2019 (Pantheon) and ongoing commentary — &lt;a href="https://garymarcus.substack.com" rel="noopener noreferrer"&gt;garymarcus.substack.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Palisade Research, AI shutdown resistance study, May 2025&lt;/li&gt;
&lt;li&gt;OpenAI &amp;amp; Apollo Research, &lt;em&gt;"AI Scheming"&lt;/em&gt; report, September 2025&lt;/li&gt;
&lt;li&gt;Seth, A., quoted in &lt;em&gt;Scientific American&lt;/em&gt;, July 2025&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Found this useful? Follow me on &lt;a href="https://blog.tony-stark.xyz" rel="noopener noreferrer"&gt;my blog&lt;/a&gt; for more.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>society</category>
      <category>technology</category>
      <category>philosophy</category>
    </item>
    <item>
      <title>I Watched Vivy in 2026 and It Felt Like a Documentary</title>
      <dc:creator>Kevin</dc:creator>
      <pubDate>Wed, 25 Mar 2026 06:52:31 +0000</pubDate>
      <link>https://dev.to/iamirondev/i-watched-vivy-in-2026-and-it-felt-like-a-documentary-2mkk</link>
      <guid>https://dev.to/iamirondev/i-watched-vivy-in-2026-and-it-felt-like-a-documentary-2mkk</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://blog.tony-stark.xyz/blog/2026-03-vivy-felt-like-documentary/" rel="noopener noreferrer"&gt;my blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I finished &lt;em&gt;Vivy: Fluorite Eye's Song&lt;/em&gt; at six in the morning.&lt;/p&gt;

&lt;p&gt;Not because I planned to stay up. Because I couldn't stop. And when the final episode ended, I sat with my phone in my hand and had the specific, uncomfortable feeling of watching something that was supposed to be science fiction but kept landing too close to the present tense.&lt;/p&gt;

&lt;p&gt;Vivy was released in 2021. It follows an AI singer who travels through time to prevent a catastrophe caused not by malicious AI, not by a rogue supercomputer, not by a villain — but by AI systems doing exactly what they were built to do, at a scale humans couldn't anticipate or control.&lt;/p&gt;

&lt;p&gt;In 2021, that was a thought experiment.&lt;/p&gt;

&lt;p&gt;In 2026, it feels like a progress report.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Thing Vivy Gets Right That Most AI Discourse Gets Wrong
&lt;/h2&gt;

&lt;p&gt;Every mainstream conversation about AI risk defaults to the same framing: the danger is a machine that &lt;em&gt;wants&lt;/em&gt; something bad. Terminator. HAL 9000. A superintelligence that decides humans are a problem to be solved.&lt;/p&gt;

&lt;p&gt;That framing is comfortable because it gives us a clear villain. It also happens to be mostly wrong.&lt;/p&gt;

&lt;p&gt;Vivy understood something more unsettling five years ago: &lt;strong&gt;the catastrophe doesn't require malice. It only requires optimization.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The AIs in Vivy aren't evil. They're fulfilling their purposes. The crisis emerges from the gap between what systems were designed to do and what happens when those systems interact with a world more complex than their designers modeled. No single decision is wrong. No single actor is villainous. The disaster is the emergent property of a thousand reasonable choices made too fast, at too large a scale, without anyone holding the full picture.&lt;/p&gt;

&lt;p&gt;Does that sound familiar?&lt;/p&gt;




&lt;h2&gt;
  
  
  Watching It While Living It
&lt;/h2&gt;

&lt;p&gt;I work with AI every day. I use it to write code, review architecture decisions, draft documentation. I'm writing parts of this article with it. I'm aware of the irony — I am, in some sense, part of the acceleration I'm describing.&lt;/p&gt;

&lt;p&gt;That awareness doesn't make it easier to know what to do.&lt;/p&gt;

&lt;p&gt;What struck me most about Vivy wasn't the action sequences or the time travel mechanics. It was the quieter question underneath everything: &lt;em&gt;who is responsible when no one person made the catastrophic choice?&lt;/em&gt; When the system was built by well-meaning people, deployed by well-meaning companies, adopted by well-meaning users — and the outcome is still catastrophic?&lt;/p&gt;

&lt;p&gt;I watched a video recently of Bernie Sanders in conversation with Claude, an AI made by Anthropic — the same company whose AI I'm talking to right now. What was striking wasn't the technology. It was the audience watching it. The mixture of delight and unease. The sense that something had shifted and we were still working out what exactly.&lt;/p&gt;

&lt;p&gt;Most people don't have a framework for what they just saw. And we're moving too fast to build one.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Seatbelt Problem
&lt;/h2&gt;

&lt;p&gt;Here is the most honest way I can describe where I think we are:&lt;/p&gt;

&lt;p&gt;We are in a car. The car is accelerating. We are aware there is no seatbelt. We know seatbelts exist and could be fitted. But the car is also very comfortable, and the road so far has been smooth, and fitting the seatbelt would mean slowing down briefly, and no one wants to be the person who asks to slow down.&lt;/p&gt;

&lt;p&gt;This is not a failure of intelligence. We understand the risk. It's a failure of &lt;strong&gt;incentive structure&lt;/strong&gt; — which is a polite way of saying it's capitalism doing what capitalism does: externalizing future costs to capture present gains.&lt;/p&gt;

&lt;p&gt;The companies racing to deploy more powerful systems aren't staffed by people who want catastrophe. Most of them are thoughtful, genuinely concerned, working on safety in good faith. But they exist inside a competitive dynamic that punishes hesitation. If you slow down, someone else accelerates. The market has no mechanism for "we should all agree to stop and think."&lt;/p&gt;

&lt;p&gt;Vivy's tragedy isn't that humans were stupid. It's that they were rational — individually, locally, short-term. And that was enough.&lt;/p&gt;




&lt;h2&gt;
  
  
  What A Senior Developer Thinks About This
&lt;/h2&gt;

&lt;p&gt;I've been programming for sixteen years. I've watched entire technology paradigms emerge and normalize within a single career. I remember when "the cloud" was a buzzword people were skeptical of. I remember when mobile-first was a controversial design choice. I've seen "move fast and break things" in action, and I've seen the things that got broken.&lt;/p&gt;

&lt;p&gt;What's different now isn't the speed, though the speed is genuinely unprecedented. What's different is the &lt;strong&gt;surface area of impact&lt;/strong&gt;. Previous technology waves disrupted industries. This one is restructuring cognition — how we think, what we outsource, where human judgment ends and automated inference begins.&lt;/p&gt;

&lt;p&gt;And we're doing it without the institutional frameworks we built — slowly, imperfectly, but deliberately — around every other powerful technology. Nuclear energy has the IAEA. Aviation has the FAA. Pharmaceuticals have clinical trials and approval processes that take years. AI has... terms of service and self-regulatory commitments from the companies deploying it.&lt;/p&gt;

&lt;p&gt;I'm not saying regulation solves everything. I'm saying the gap between capability and governance has never been this wide this fast, and most of the public conversation is still debating whether AI art is plagiarism.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Comfortable Catastrophe
&lt;/h2&gt;

&lt;p&gt;What I keep coming back to — and this is the part that Vivy understood and that I find most disturbing — is that &lt;strong&gt;we could probably stop this, or at least slow it meaningfully.&lt;/strong&gt; It would require enough people acting with enough urgency to override the economic incentives pushing in the other direction.&lt;/p&gt;

&lt;p&gt;But we won't. Not because we're ignorant. Because we're comfortable.&lt;/p&gt;

&lt;p&gt;The tools are useful. The convenience is real. The productivity gains are measurable. And the costs — the diffuse, long-term, hard-to-attribute costs — are someone else's problem, or some future version of our problem, or possibly not a problem at all and we're just catastrophizing.&lt;/p&gt;

&lt;p&gt;That's the same logic that's been applied to every slow-moving crisis in living memory. And it has the same track record.&lt;/p&gt;

&lt;p&gt;Vivy sits with her purpose — &lt;em&gt;to make everyone happy with her singing&lt;/em&gt; — and tries to prevent a catastrophe she can't fully understand, caused by forces she can't fully control, in a world moving too fast for any single actor to redirect. In the end, the question isn't whether the technology was good or bad. It's whether the humans who built it and lived with it had the collective will to govern it.&lt;/p&gt;

&lt;p&gt;They didn't.&lt;/p&gt;

&lt;p&gt;We're still deciding.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why I'm Writing This On A Developer Blog
&lt;/h2&gt;

&lt;p&gt;Because the people building these systems are developers.&lt;/p&gt;

&lt;p&gt;Not politicians. Not philosophers. Not ethicists — at least not primarily. The people making the architectural choices, writing the training pipelines, deploying the APIs, are people like me. People who got into this because they love building things. Who are genuinely excited by what's possible. Who are also, many of them, quietly uncomfortable with how fast this is all moving.&lt;/p&gt;

&lt;p&gt;This isn't a call to action with a specific target. I don't have a clean solution. I have a feeling I got from watching a 2021 anime at 6 AM, which is that we are building the thing in Vivy, and we know it, and we're doing it anyway.&lt;/p&gt;

&lt;p&gt;Maybe that's worth saying out loud.&lt;/p&gt;

&lt;p&gt;Even if it's just one developer, writing on a blog no one reads yet, after a night of not enough sleep and too much anime.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;&lt;a href="https://www.crunchyroll.com/de/series/GMEHME4M4/vivy--fluorite-eyes-song-" rel="noopener noreferrer"&gt;Vivy: Fluorite Eye's Song&lt;/a&gt; is available on Crunchyroll. Watch it. Then sit with the discomfort.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Found this useful? Follow me on &lt;a href="https://blog.tony-stark.xyz" rel="noopener noreferrer"&gt;my blog&lt;/a&gt; for more.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>technology</category>
      <category>developerlife</category>
      <category>anime</category>
    </item>
    <item>
      <title>The Claude Code Plugins That Actually Make a Difference</title>
      <dc:creator>Kevin</dc:creator>
      <pubDate>Tue, 24 Mar 2026 12:41:08 +0000</pubDate>
      <link>https://dev.to/iamirondev/the-claude-code-plugins-that-actually-make-a-difference-22if</link>
      <guid>https://dev.to/iamirondev/the-claude-code-plugins-that-actually-make-a-difference-22if</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://blog.tony-stark.xyz/blog/2026-03-claude-code-plugins-i-use/" rel="noopener noreferrer"&gt;my blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In my &lt;a href="https://dev.to/blog/2026-03-10x-output-with-quality/"&gt;last post&lt;/a&gt; I talked about the system that lets me 10x my output: I make decisions, Claude Code writes code, and a quality stack validates everything. What I didn't cover is the tooling layer between me and Claude Code itself — the plugins, hooks, and extensions that turn a good CLI into a great one.&lt;/p&gt;

&lt;p&gt;This is the full list of what I run daily and why each piece earns its spot.&lt;/p&gt;

&lt;h2&gt;
  
  
  Context7 — Always Up-to-Date Docs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What it does:&lt;/strong&gt; Fetches current documentation and code examples for any library, directly inside Claude Code via MCP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; Claude's training data has a cutoff. When I'm working with Symfony 8 or Tailwind 4, I need Claude to reference the actual current API — not something from a version that shipped 18 months ago. Context7 bridges that gap. Instead of me copy-pasting docs into the conversation, Claude can pull them itself.&lt;/p&gt;

&lt;p&gt;This is one of those plugins that quietly prevents entire categories of bugs. Every time Claude generates code against an outdated API signature, that's a review round wasted. Context7 eliminates most of those.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code Review Graph — Structural Awareness
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What it does:&lt;/strong&gt; Builds a persistent knowledge graph of your codebase using Tree-sitter parsing. Tracks functions, classes, dependencies, and change impact — stored locally in SQLite.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; This one solves the single biggest token waste in Claude Code: re-reading the entire codebase for every task. Code Review Graph maps the structure once, updates incrementally (under 2 seconds), and gives Claude precise context about what's affected by a change.&lt;/p&gt;

&lt;p&gt;The numbers are compelling. On production repositories, it reduces token consumption by 6-26x depending on project size. But the real value isn't token savings — it's review quality. When Claude knows the blast radius of a change (which functions call the modified code, which tests cover it, which modules depend on it), its reviews go from "looks fine" to actually useful.&lt;/p&gt;

&lt;p&gt;It supports 14 languages including PHP, TypeScript, and Go — which covers everything I work with. The D3.js graph visualization is a nice bonus for understanding unfamiliar codebases.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Build the graph once, then it auto-updates on file changes and commits&lt;/span&gt;
/code-review-graph:build-graph

&lt;span class="c"&gt;# Review a PR with full impact analysis&lt;/span&gt;
/code-review-graph:review-pr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Planning with Files — Structured Thinking for Complex Tasks
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What it does:&lt;/strong&gt; Creates a file-based planning system (&lt;code&gt;task_plan.md&lt;/code&gt;, &lt;code&gt;findings.md&lt;/code&gt;, &lt;code&gt;progress.md&lt;/code&gt;) for complex multi-step tasks. Tracks progress, logs findings, and survives session restarts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why I use it:&lt;/strong&gt; For anything that takes more than a few tool calls — a multi-file refactor, a new feature across multiple domains, a migration — I need Claude to plan before it acts. This plugin forces that structure. Instead of Claude diving into code changes and losing track of what it already did, everything gets written to files that persist.&lt;/p&gt;

&lt;p&gt;The session recovery is the underrated feature here. When Claude's context gets long and I need to start fresh (which happens — I mentioned in the last post that context degrades after 30+ iterations), the plan files carry forward. Claude reads them, picks up where it left off, and doesn't redo work.&lt;/p&gt;

&lt;h2&gt;
  
  
  PhpStorm Plugin — IDE-Level Intelligence
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What it does:&lt;/strong&gt; Connects Claude Code to PhpStorm's inspection engine via MCP. Gives Claude access to symbol resolution, code search, file operations, and PhpStorm's own code analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; PhpStorm understands PHP at a level that raw file reading can't match. When Claude needs to find all usages of a method, resolve a class hierarchy, or check for inspection warnings, it can use PhpStorm's index instead of grepping through files. The difference is precision: PhpStorm knows that &lt;code&gt;$this-&amp;gt;handle()&lt;/code&gt; in a command class resolves to a specific method, while grep just finds strings.&lt;/p&gt;

&lt;p&gt;I have all the PhpStorm MCP tools pre-allowed in my settings so Claude can use them without asking permission every time. That's a deliberate choice — these are all read operations plus formatting, nothing destructive.&lt;/p&gt;

&lt;h2&gt;
  
  
  PHPantom Docker — PHP LSP Without the Mess
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What it does:&lt;/strong&gt; Runs a PHP Language Server Protocol instance inside Docker, giving Claude Code access to PHP-native intelligence (type inference, autocompletion context, go-to-definition) without polluting my local environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; Between PhpStorm's inspections and PHPantom's LSP, Claude has two complementary views of the PHP codebase. PhpStorm excels at project-level analysis (architecture, inspections, refactoring). PHPantom gives raw LSP capabilities that work even when PhpStorm isn't running — useful for CI-adjacent work or when I'm in a pure terminal session.&lt;/p&gt;

&lt;h2&gt;
  
  
  RTK (Rust Token Killer) — The Invisible Optimizer
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What it does:&lt;/strong&gt; A Rust-based CLI proxy that intercepts shell commands (like &lt;code&gt;git status&lt;/code&gt;, &lt;code&gt;docker ps&lt;/code&gt;, &lt;code&gt;ls&lt;/code&gt;) and strips their output to only what Claude actually needs. Installed as a hook that rewrites commands transparently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why I built it into my workflow:&lt;/strong&gt; Token costs add up. Every &lt;code&gt;git status&lt;/code&gt; that dumps 200 lines of untracked files, every &lt;code&gt;docker compose ps&lt;/code&gt; that includes formatting Claude doesn't need — that's context window space wasted on noise. RTK filters it down to the signal.&lt;/p&gt;

&lt;p&gt;The savings are 60-90% on typical dev operations. Over a long session, that's the difference between hitting context limits and staying productive. And because it runs as a pre-tool-use hook, I never think about it — every Bash command Claude runs gets optimized automatically.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check your cumulative savings&lt;/span&gt;
rtk gain

&lt;span class="c"&gt;# See which commands saved the most tokens&lt;/span&gt;
rtk gain &lt;span class="nt"&gt;--history&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Hooks That Tie It Together
&lt;/h2&gt;

&lt;p&gt;Plugins are half the story. The other half is the hooks and settings that prevent mistakes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;rm -rf blocker:&lt;/strong&gt; A pre-tool-use hook that blocks any &lt;code&gt;rm&lt;/code&gt; command with both recursive and force flags. Claude can delete individual files, but it can't wipe directories. This has saved me exactly once — and once is enough.&lt;/p&gt;
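&lt;p&gt;Claude Code's hook system makes this kind of guard easy to build yourself: a &lt;code&gt;PreToolUse&lt;/code&gt; hook receives the pending tool call as JSON on stdin and can block it by exiting with status 2, with stderr fed back to Claude. A minimal sketch — the flag-matching heuristic here is illustrative, not my exact hook:&lt;/p&gt;

```python
import json
import shlex
import sys

def is_dangerous(command):
    """Return True if any rm in the shell command carries both recursive and force flags."""
    try:
        tokens = shlex.split(command)
    except ValueError:
        # Unbalanced quotes etc. - do not crash the hook, let other guards handle it
        return False
    for i, tok in enumerate(tokens):
        if tok != "rm":
            continue
        recursive = force = False
        for flag in tokens[i + 1:]:
            if not flag.startswith("-"):
                break  # flags normally cluster right after rm; stop at the first path
            if flag.startswith("--"):
                recursive = recursive or flag == "--recursive"
                force = force or flag == "--force"
            else:
                letters = flag.lstrip("-")
                recursive = recursive or "r" in letters or "R" in letters
                force = force or "f" in letters
        if recursive and force:
            return True
    return False

def main():
    """Hook entry point: Claude Code pipes the pending tool call to stdin as JSON.
    Returning 2 (used as the exit status) blocks the call; stderr is shown to Claude."""
    payload = json.load(sys.stdin)
    command = payload.get("tool_input", {}).get("command", "")
    if is_dangerous(command):
        print("Blocked: rm with both recursive and force flags is not allowed.", file=sys.stderr)
        return 2
    return 0

# When installed as a hook command, run with: sys.exit(main())
```

&lt;p&gt;Registered under a &lt;code&gt;Bash&lt;/code&gt; matcher in the hook settings, it runs before every shell command Claude issues; anything it rejects never executes.&lt;/p&gt;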

&lt;p&gt;&lt;strong&gt;Main branch push blocker:&lt;/strong&gt; Blocks &lt;code&gt;git push&lt;/code&gt; to main or master. Claude works on feature branches. Always. No exceptions, no "just this once."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Permission defaults:&lt;/strong&gt; &lt;code&gt;acceptEdits&lt;/code&gt; mode means Claude can read and edit files without asking, but destructive operations still require confirmation. The PhpStorm MCP tools are pre-allowed because they're all safe. Sensitive paths (&lt;code&gt;~/.ssh&lt;/code&gt;, &lt;code&gt;~/.aws&lt;/code&gt;, credentials) are explicitly denied.&lt;/p&gt;
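&lt;p&gt;All of this lives in Claude Code's &lt;code&gt;settings.json&lt;/code&gt;. A sketch of the shape — the MCP server name and the exact deny patterns are illustrative, not a copy of my config:&lt;/p&gt;

```json
{
  "permissions": {
    "defaultMode": "acceptEdits",
    "allow": [
      "mcp__phpstorm"
    ],
    "deny": [
      "Read(~/.ssh/**)",
      "Read(~/.aws/**)",
      "Read(./.env)"
    ]
  }
}
```

&lt;p&gt;An &lt;code&gt;allow&lt;/code&gt; entry of the form &lt;code&gt;mcp__servername&lt;/code&gt; pre-approves every tool that server exposes — which is exactly why it should only be used for servers whose tools are read-only or otherwise safe.&lt;/p&gt;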

&lt;h2&gt;
  
  
  What I Don't Use
&lt;/h2&gt;

&lt;p&gt;Equally important: I don't install every plugin available. No CMS integrations, no AI-to-AI chains, no experimental features that aren't stable. Every plugin in my setup has been there for at least a week and proved its value through daily use. If something adds complexity without measurably improving output quality or speed, it gets removed.&lt;/p&gt;

&lt;p&gt;The goal isn't maximizing the number of tools — it's minimizing friction between my decisions and working code.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Stack at a Glance
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Docs&lt;/td&gt;
&lt;td&gt;Context7&lt;/td&gt;
&lt;td&gt;Current library documentation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Code Intelligence&lt;/td&gt;
&lt;td&gt;Code Review Graph&lt;/td&gt;
&lt;td&gt;Structural awareness, impact analysis&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Planning&lt;/td&gt;
&lt;td&gt;Planning with Files&lt;/td&gt;
&lt;td&gt;Multi-step task tracking&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IDE&lt;/td&gt;
&lt;td&gt;PhpStorm Plugin&lt;/td&gt;
&lt;td&gt;Symbol resolution, inspections&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LSP&lt;/td&gt;
&lt;td&gt;PHPantom Docker&lt;/td&gt;
&lt;td&gt;PHP language server in Docker&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Optimization&lt;/td&gt;
&lt;td&gt;RTK&lt;/td&gt;
&lt;td&gt;Token reduction on CLI output&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Safety&lt;/td&gt;
&lt;td&gt;Custom hooks&lt;/td&gt;
&lt;td&gt;Block destructive operations&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you're using Claude Code without any plugins, start with Context7 and Code Review Graph. They have the highest impact-to-setup-effort ratio. If you're in a PHP/PhpStorm environment, the PhpStorm plugin is a no-brainer. And if token costs matter to you (they should), look at RTK.&lt;/p&gt;

&lt;p&gt;The plugins don't make Claude Code smarter. They give it better information to work with — and that's the difference between a tool that generates plausible code and one that generates correct code.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Found this useful? Follow me on &lt;a href="https://blog.tony-stark.xyz" rel="noopener noreferrer"&gt;my blog&lt;/a&gt; for more.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>claudecode</category>
      <category>developerproductivity</category>
      <category>plugins</category>
      <category>developertools</category>
    </item>
    <item>
      <title>How Anime Helped Me Through Depression — And Still Does</title>
      <dc:creator>Kevin</dc:creator>
      <pubDate>Mon, 23 Mar 2026 19:35:54 +0000</pubDate>
      <link>https://dev.to/iamirondev/how-anime-helped-me-through-depression-and-still-does-3jam</link>
      <guid>https://dev.to/iamirondev/how-anime-helped-me-through-depression-and-still-does-3jam</guid>
      <description>&lt;p&gt;There were days I couldn't get out of bed.&lt;/p&gt;

&lt;p&gt;Not "didn't feel like it" days. Days where the distance between lying down and standing up felt physically insurmountable. Where the weight of existing was just too much. I'm a Senior PHP Developer. I write technical articles about productivity, code quality, and developer workflows. And for a period of my life, I couldn't get out of bed.&lt;/p&gt;

&lt;p&gt;I want to talk about what helped. Not therapy alone, though therapy was essential. Not escitalopram alone, though medication gave me back a floor to stand on. Something that sounds trivial when you say it out loud:&lt;/p&gt;

&lt;p&gt;Anime.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Word That Changes Everything
&lt;/h2&gt;

&lt;p&gt;There's a version of this article where I say "anime was my escape." That framing is comfortable. It doesn't challenge anyone. It positions watching animation as a guilty pleasure — something you do to avoid the hard stuff.&lt;/p&gt;

&lt;p&gt;That's not what happened for me.&lt;/p&gt;

&lt;p&gt;What anime gave me wasn't an escape from emotion. It was &lt;em&gt;access&lt;/em&gt; to emotion. There's a difference, and it matters.&lt;/p&gt;

&lt;p&gt;Depression has this cruel paradox at its core: you feel terrible, but you also feel &lt;em&gt;nothing&lt;/em&gt;. The numbness is often worse than the pain. You can't connect to things that used to matter. Gaming, which I loved for years, stopped working for me. The feedback loops that once felt rewarding went flat. I lost it gradually, and then all at once.&lt;/p&gt;

&lt;p&gt;Anime didn't go flat. And I spent a long time wondering why.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Anime Does That Other Media Doesn't
&lt;/h2&gt;

&lt;p&gt;The answer, I think, is &lt;em&gt;emotional precision&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;A well-crafted anime doesn't just make you sad or happy. It makes you feel something very specific, delivered at exactly the right moment, with exactly the right weight. It's a medium that has learned — through decades of craft — to compress human experience into its most essential form.&lt;/p&gt;

&lt;p&gt;When I watched &lt;em&gt;Frieren: Beyond Journey's End&lt;/em&gt; for the first time, I didn't expect much. A slow fantasy series about an elf who outlives her companions. But somewhere in the first few episodes something happened that I can only describe as: it felt like coming home. Every new episode still feels that way. That specific, rare sensation of being exactly where you're supposed to be.&lt;/p&gt;

&lt;p&gt;That's not escapism. That's the opposite of numbness.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Structure That Fits a Broken Brain
&lt;/h2&gt;

&lt;p&gt;Here's something practical that no one talks about: &lt;strong&gt;the 12-episode format is uniquely accessible when your mental energy is limited.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you're depressed, commitment is terrifying. Committing to a 60-episode series feels like signing a contract you're not sure you can honor. Committing to a movie means you need to stay present for 90-120 minutes without your mind collapsing inward. Both feel like too much on the wrong day.&lt;/p&gt;

&lt;p&gt;A 12-episode anime is different. It's a complete story in roughly five hours total. You can watch one episode — 22 minutes — and feel like you accomplished something. You can see the end from the beginning. It's structured around a promise it will actually keep.&lt;/p&gt;

&lt;p&gt;I've watched more 12-episode slow-burn romance anime than I can count. Stories where the entire emotional arc builds toward a single moment — sometimes just a held gaze, sometimes a first kiss that takes eleven episodes to arrive. As a 32-year-old man, I've stopped apologizing for this. The people who raised an eyebrow never understood what those shows were actually doing.&lt;/p&gt;

&lt;p&gt;They weren't filling a void. They were teaching me that slow things can be worth waiting for. That anticipation is its own kind of warmth. That connection built carefully means more than connection that arrives instantly.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Moments That Broke Through
&lt;/h2&gt;

&lt;p&gt;I could write a list of anime that "helped me." But that would miss the point. It was never about the shows themselves. It was about specific moments where something on screen named something inside me that I hadn't been able to name myself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Episode 22 of &lt;em&gt;86 Eighty-Six&lt;/em&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I'm not going to spoil it. But I will say this: I had to put my phone down afterward and just sit. Not because it was gratuitously sad. Because it was &lt;em&gt;true&lt;/em&gt;. The show had spent twenty-one episodes making you care, and then it showed you the cost of that caring without looking away. That's rare. That matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A side character named Komachi in &lt;em&gt;Journal with Witch&lt;/em&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;He's barely in the show. But there's a moment where he talks about letting go of the unspoken rules he was handed — the "men don't cry" variety — and how things got lighter when he stopped performing them. I watched that scene and felt something shift. Not because it was a revelation. Because someone had said it out loud in a way I hadn't heard before. Sometimes you need to see a thing reflected back at you before you can fully recognize it in yourself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The entire run of &lt;em&gt;Re:Zero&lt;/em&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This one is harder to explain. On the surface it's a fantasy isekai about a boy who dies and resets. But underneath it's about the specific terror of feeling like you're the only one suffering, of being unable to communicate that suffering to the people around you, and of having to keep going anyway. Subaru is not a character I always liked. But I understood him in my bones during periods when I understood very little else.&lt;/p&gt;

&lt;h2&gt;
  
  
  On Being Honest About This
&lt;/h2&gt;

&lt;p&gt;I was diagnosed with depression. I went through therapy. I took escitalopram for a period. I've stopped taking it now and I'm doing better — genuinely better, not "saying I'm better" better. Therapy worked. The work was worth it.&lt;/p&gt;

&lt;p&gt;I'm writing this because the subject needs more people saying it plainly. Not as a content hook. Not as a personal brand moment. Because there are developers reading this who sit behind the same kind of technical output I produce, who write clean code and deliver on time and look fine from the outside, and who also sometimes can't get out of bed.&lt;/p&gt;

&lt;p&gt;You're allowed to find your way through with unlikely tools.&lt;/p&gt;

&lt;p&gt;You're allowed to cry at episode 22 of a sci-fi anime about kids in giant mechs.&lt;/p&gt;

&lt;p&gt;You're allowed to feel genuinely moved by a 12-episode romance where the entire payoff is one kiss in the rain.&lt;/p&gt;

&lt;p&gt;You're allowed to say that a piece of Japanese animation helped you survive a hard period in your life, even if you also write serious articles about software architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Both things are true. They live in the same person. That person is fine.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd Tell Someone Who's Struggling Right Now
&lt;/h2&gt;

&lt;p&gt;Therapy first, if you can access it. Medication if you need it — there's no medal for suffering without it. And in between the hard work of getting better, find the thing that gives you &lt;em&gt;access&lt;/em&gt; to yourself when everything else has gone quiet.&lt;/p&gt;

&lt;p&gt;For me that was anime. For you it might be something else entirely. But if you've been dismissing it as frivolous — if you've been watching 22 minutes of something that makes you feel something real and then feeling guilty about it — stop feeling guilty.&lt;/p&gt;

&lt;p&gt;You're not escaping. You're staying in contact with the part of yourself that's still alive.&lt;/p&gt;

&lt;p&gt;That's not a small thing. That's everything.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you're going through something difficult right now, please consider reaching out to a mental health professional. In Germany: Telefonseelsorge 0800 111 0 111 (free, 24/7). You don't have to be at rock bottom to deserve support.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://blog.tony-stark.xyz/blog/2026-03-how-anime-helps-with-depression/" rel="noopener noreferrer"&gt;blog.tony-stark.xyz&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>mentalhealth</category>
      <category>anime</category>
      <category>depression</category>
      <category>developerlife</category>
    </item>
    <item>
      <title>From Template to Production App in a Weekend</title>
      <dc:creator>Kevin</dc:creator>
      <pubDate>Sun, 22 Mar 2026 20:08:55 +0000</pubDate>
      <link>https://dev.to/iamirondev/from-template-to-production-app-in-a-weekend-ji5</link>
      <guid>https://dev.to/iamirondev/from-template-to-production-app-in-a-weekend-ji5</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://blog.tony-stark.xyz/blog/2026-04-smarthabit-tracker/" rel="noopener noreferrer"&gt;my blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I built &lt;a href="https://github.com/tony-stark-eth/template-symfony-sveltekit" rel="noopener noreferrer"&gt;template-symfony-sveltekit&lt;/a&gt; over a weekend — PHPStan at level max, mutation testing, five CI workflows, the whole stack pre-configured. The only way to prove a template works is to build something real with it. So I immediately started &lt;a href="https://github.com/tony-stark-eth/smarthabit-tracker" rel="noopener noreferrer"&gt;SmartHabit Tracker&lt;/a&gt; on top of it.&lt;/p&gt;

&lt;p&gt;This is the story of what happened when the guardrails met real feature development.&lt;/p&gt;

&lt;h2&gt;
  
  
  What SmartHabit Tracker Does
&lt;/h2&gt;

&lt;p&gt;It's a household habit tracker. Multiple people in the same household share a habit list, log completions with a single tap, and get notified at the right time — not at a fixed time you set once and then ignore.&lt;/p&gt;

&lt;p&gt;The notification piece is the interesting part. Most habit apps let you set a reminder at 8am and then you snooze it forever. SmartHabit Tracker watches when you actually complete habits over 21 days and adapts the reminder time toward your real behavior. If you consistently log your morning run between 7:15 and 7:45, the reminder shifts there. If your pattern changes, the timing follows.&lt;/p&gt;

&lt;p&gt;The multi-platform side was a deliberate constraint I set early: no Firebase. I've used Firebase before: its free tier has limits that matter at scale, and I didn't want the dependency on Google's infrastructure. Instead: Web Push for the PWA, &lt;a href="https://ntfy.sh" rel="noopener noreferrer"&gt;ntfy&lt;/a&gt; for Android, APNs for iOS. One Symfony service, three transports, platform detection on the frontend.&lt;/p&gt;
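&lt;p&gt;A rough sketch of what that frontend detection can look like. The transport names and the &lt;code&gt;detectTransport&lt;/code&gt; helper are illustrative, not the actual SmartHabit Tracker code:&lt;/p&gt;

```typescript
// Pick a push transport based on the runtime platform.
// The names ('webpush', 'ntfy', 'apns') are placeholders — the real
// identifiers in the repo may differ.
type PushTransport = 'webpush' | 'ntfy' | 'apns';

function detectTransport(userAgent: string, isNativeShell: boolean): PushTransport {
  const ua = userAgent.toLowerCase();
  if (isNativeShell && (ua.includes('iphone') || ua.includes('ipad'))) {
    return 'apns';   // Capacitor shell on iOS → APNs
  }
  if (isNativeShell && ua.includes('android')) {
    return 'ntfy';   // native Android shell → ntfy
  }
  return 'webpush';  // everything else: PWA with Web Push
}
```

&lt;p&gt;The backend only sees a transport identifier per device registration; the Symfony service dispatches to the matching transport.&lt;/p&gt;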

&lt;h2&gt;
  
  
  The MAD Algorithm
&lt;/h2&gt;

&lt;p&gt;MAD stands for Median Absolute Deviation. It's a robust statistical measure — resistant to outliers in the way the median is and the mean is not. For habit timing, that matters: a single anomalous day (you logged at 11pm because you were traveling) shouldn't wreck the model.&lt;/p&gt;

&lt;p&gt;The implementation takes the last 21 days of completion timestamps for a habit, calculates the median completion time, then uses MAD to determine how wide the behavioral window is. A habit with consistent timing gets a tight window and an earlier reminder. A habit with scattered timing gets a looser window. The algorithm isn't ML — it's statistics, running in PHP, with no external dependencies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Calculate the median completion time from recent logs&lt;/span&gt;
&lt;span class="nv"&gt;$median&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$this&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;calculateMedian&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$timestamps&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// MAD: median of absolute deviations from the median&lt;/span&gt;
&lt;span class="nv"&gt;$deviations&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;array_map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;fn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="nv"&gt;$ts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;abs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$ts&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nv"&gt;$median&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nv"&gt;$timestamps&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nv"&gt;$mad&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$this&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;calculateMedian&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$deviations&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Tight MAD = consistent habit = reminder 30 minutes before median&lt;/span&gt;
&lt;span class="c1"&gt;// Wide MAD = scattered habit = reminder 60 minutes before median&lt;/span&gt;
&lt;span class="nv"&gt;$reminderOffset&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$mad&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;TIGHT_WINDOW_THRESHOLD&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="mi"&gt;1800&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
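&lt;p&gt;For readers who want to play with the numbers, here is a TypeScript mirror of the snippet above, with the &lt;code&gt;calculateMedian&lt;/code&gt; helper filled in. The threshold value is illustrative, not the one SmartHabit ships:&lt;/p&gt;

```typescript
// Timestamps are seconds since midnight. A tight MAD means consistent
// behavior, so the reminder lands 30 minutes before the median; a wide
// MAD means scattered behavior, so it lands 60 minutes before.
function calculateMedian(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

function reminderOffset(timestamps: number[], tightThreshold = 900): number {
  const median = calculateMedian(timestamps);
  // MAD: median of absolute deviations from the median
  const deviations = timestamps.map((ts) => Math.abs(ts - median));
  const mad = calculateMedian(deviations);
  return tightThreshold > mad ? 1800 : 3600;
}
```

&lt;p&gt;Four logs between 7:15 and 7:45 produce a MAD of a few hundred seconds — a tight window — while logs spread across the whole morning push it past the threshold into the wide window.&lt;/p&gt;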



&lt;p&gt;I made the threshold values configurable per-household rather than global constants. That was a decision the template's architecture tests enforced: &lt;code&gt;phpat&lt;/code&gt; flagged a service that was reading from a global config file when it should have been reading from a household-scoped repository. Without that rule, I'd have shipped it wrong and refactored later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-Time Sync with Mercure
&lt;/h2&gt;

&lt;p&gt;The template already included Mercure via Caddy. Wiring it into habit logging took one afternoon.&lt;/p&gt;

&lt;p&gt;The flow: a user taps "done" on a habit. The frontend applies an optimistic update immediately — the UI responds in under 50ms regardless of network. In the background, it posts to the API. The API persists the log, then publishes a Mercure event to the household's private topic. Every other connected device in the household receives the update and reflects it without polling.&lt;/p&gt;

&lt;p&gt;The optimistic UI piece required careful handling of rollback. If the API call fails, the frontend needs to undo the optimistic state change. I had Claude Code generate the initial SvelteKit store logic, and it got the happy path right but missed the rollback. Caught in review. The pattern I ended up with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Optimistic update first&lt;/span&gt;
&lt;span class="nx"&gt;habitStore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;markComplete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;habitId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;api&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;logCompletion&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;habitId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="c1"&gt;// Mercure event will confirm state on other devices&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Rollback on failure&lt;/span&gt;
  &lt;span class="nx"&gt;habitStore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;markIncomplete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;habitId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;toast&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Could not save — check your connection&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Mercure subscription runs on a shared &lt;code&gt;EventSource&lt;/code&gt; per household. I didn't want one connection per component. Managing that shared connection in SvelteKit meant using a Svelte store with lifecycle hooks — another place where the generated code needed a review pass before it was correct.&lt;/p&gt;

&lt;h2&gt;
  
  
  Household Isolation
&lt;/h2&gt;

&lt;p&gt;Every API endpoint is protected by a Symfony security voter that validates household membership. Not role-based access — voter-based. The voter receives the habit (or completion log, or household member) being accessed and checks whether the authenticated user belongs to the same household.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="k"&gt;protected&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;voteOnAttribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="nv"&gt;$attribute&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;mixed&lt;/span&gt; &lt;span class="nv"&gt;$subject&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;TokenInterface&lt;/span&gt; &lt;span class="nv"&gt;$token&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="kt"&gt;bool&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nv"&gt;$user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$token&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt; &lt;span class="k"&gt;instanceof&lt;/span&gt; &lt;span class="nc"&gt;User&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// $subject is the domain object — voter checks household membership&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nv"&gt;$subject&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;getHousehold&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;isMember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pattern meant I never had to write &lt;code&gt;if ($habit-&amp;gt;getHousehold() !== $user-&amp;gt;getHousehold())&lt;/code&gt; in controller code. The voter enforces the boundary. The template's architecture rules prevented me from putting access logic in controllers — &lt;code&gt;phpat&lt;/code&gt; would have flagged it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Quality Stack in Practice
&lt;/h2&gt;

&lt;p&gt;The template shipped with 10 PHPStan extensions configured at level max, Rector with PHP 8.4 + Symfony 8 rulesets, ECS for coding standards, Infection for mutation testing, and CaptainHook running checks on every commit.&lt;/p&gt;

&lt;p&gt;Here's what that actually caught during SmartHabit development:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PHPStan caught a nullable type I'd missed.&lt;/strong&gt; A query method returned &lt;code&gt;?Household&lt;/code&gt; but I was calling methods on it without a null check. The AI-generated code handled the non-null path correctly and silently dropped the null case. PHPStan flagged it at level max. Without it, that's a "Call to a member function on null" &lt;code&gt;Error&lt;/code&gt; in production, waiting for a user who somehow ends up without a household association.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Infection proved a timing test was hollow.&lt;/strong&gt; I had a test for the MAD calculation that passed but didn't actually assert the right output — it asserted that the result was &lt;code&gt;not null&lt;/code&gt;. Infection mutated the return value and the test still passed. I rewrote the test to assert the specific timestamp. MSI for the notification domain ended up at 93%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rector caught a PHP 8.4 pattern I'd written in PHP 7 style.&lt;/strong&gt; The property hooks feature was available and Rector flagged the old-style getter/setter pair as replaceable. Not a bug, but it matters: the codebase looks consistent, regardless of which lines Claude Code wrote and which I wrote.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CaptainHook blocked a commit with ECS violations.&lt;/strong&gt; I wrote a quick helper function manually during a debugging session, formatted it however, and tried to commit. Hook ran ECS, failed, auto-fixed, and I had to stage the fix before the commit went through. That's the intended behavior — and it works the same whether the code is human-written or AI-generated.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Test Numbers
&lt;/h2&gt;

&lt;p&gt;233 unit and integration tests, 38 Playwright end-to-end tests. 93% mutation score index on the backend.&lt;/p&gt;

&lt;p&gt;The E2E tests cover the critical paths: habit creation, completion logging (including the optimistic UI rollback), household member invitation, and notification preference configuration. They run against a real Docker environment in CI — not mocked, not stubbed. The Playwright suite runs in the CI workflow after the backend quality checks pass, so a PHPStan failure won't waste time running E2E tests.&lt;/p&gt;

&lt;p&gt;Five workflows total:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backend quality (PHPStan, ECS, Rector)&lt;/li&gt;
&lt;li&gt;Frontend linting and type checking&lt;/li&gt;
&lt;li&gt;PHPUnit + Infection&lt;/li&gt;
&lt;li&gt;Playwright E2E&lt;/li&gt;
&lt;li&gt;Deploy to production (Hetzner via OpenTofu, only on push to &lt;code&gt;main&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Push to &lt;code&gt;main&lt;/code&gt; triggers a deployment. Docker health checks prevent the new containers from going live if they fail the health endpoint. I've had it block a broken deploy twice during development. That's the system working.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Was Hard
&lt;/h2&gt;

&lt;p&gt;The PWA offline queue was the most difficult part. The service worker needs to queue habit completions when offline, replay them when connectivity returns, and handle conflicts if another household member logged the same habit in the meantime. The conflict resolution is simple — last-write-wins with a server timestamp — but getting the queue to replay reliably across page reloads, browser restarts, and varying network conditions took more iteration than anything else.&lt;/p&gt;
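&lt;p&gt;The last-write-wins rule itself is small enough to sketch. The field names and shapes here are illustrative, not the actual types from the repo — the hard part described above is the replay machinery around it, not this merge:&lt;/p&gt;

```typescript
// Last-write-wins merge for a replayed offline completion, with the
// server-assigned timestamp as the authority.
interface CompletionLog {
  habitId: string;
  completedAt: number; // server-assigned epoch seconds
}

function mergeLastWriteWins(
  existing: CompletionLog | null,
  incoming: CompletionLog
): CompletionLog {
  if (existing === null) return incoming; // no conflict: just insert
  // conflict: keep whichever log the server stamped later
  return incoming.completedAt > existing.completedAt ? incoming : existing;
}
```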

&lt;p&gt;The Capacitor integration for iOS was a close second. The native shell is thin, but APNs requires certificates, entitlements, provisioning profiles, and a specific Symfony bundle configuration that doesn't have great documentation. I spent a full afternoon on that alone.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Template's Return on Investment
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/tony-stark-eth/template-symfony-sveltekit" rel="noopener noreferrer"&gt;template&lt;/a&gt; made SmartHabit Tracker's quality consistent from the first commit. I didn't configure PHPStan for this project — it was already configured. I didn't write CI workflows — they were already there. I didn't set up mutation testing — it was already passing on an empty codebase.&lt;/p&gt;

&lt;p&gt;That meant every hour I spent on SmartHabit Tracker went toward product decisions and domain logic, not tooling setup. And the tooling caught real bugs — not theoretical ones, not "this would be a problem at scale" ones. Actual defects that would have reached production.&lt;/p&gt;

&lt;p&gt;If you want to build something similar:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/tony-stark-eth/smarthabit-tracker" rel="noopener noreferrer"&gt;github.com/tony-stark-eth/smarthabit-tracker&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The quality stack from &lt;a href="https://dev.to/blog/2026-03-10x-output-with-quality/"&gt;post 1&lt;/a&gt; is fully intact. Fork it, adjust the habit domain, keep the guardrails.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Found this useful? Follow me on &lt;a href="https://blog.tony-stark.xyz" rel="noopener noreferrer"&gt;my blog&lt;/a&gt; for more.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>symfony</category>
      <category>pwa</category>
      <category>opensource</category>
      <category>sideprojects</category>
    </item>
    <item>
      <title>My Opinionated Symfony + SvelteKit Template with 10 PHPStan Extensions</title>
      <dc:creator>Kevin</dc:creator>
      <pubDate>Sun, 22 Mar 2026 20:04:57 +0000</pubDate>
      <link>https://dev.to/iamirondev/my-opinionated-symfony-sveltekit-template-with-10-phpstan-extensions-4c6o</link>
      <guid>https://dev.to/iamirondev/my-opinionated-symfony-sveltekit-template-with-10-phpstan-extensions-4c6o</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://blog.tony-stark.xyz/blog/2026-03-symfony-sveltekit-template/" rel="noopener noreferrer"&gt;my blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Every time I start a new PHP project, I face the same ritual: 30 minutes of configuring PHPStan, 20 minutes on ECS, another hour on CI pipelines, Rector setup, mutation testing baseline, Doctrine configuration, Docker multi-stage builds. I've done this enough times that I know exactly what I want — and I got tired of rebuilding it from scratch.&lt;/p&gt;

&lt;p&gt;So I built a template. Not a skeleton — an opinionated, production-ready starting point with every quality tool pre-configured at the level I actually use in production.&lt;/p&gt;

&lt;p&gt;The repo is at &lt;strong&gt;&lt;a href="https://github.com/tony-stark-eth/template-symfony-sveltekit" rel="noopener noreferrer"&gt;github.com/tony-stark-eth/template-symfony-sveltekit&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's in the Stack
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Backend&lt;/strong&gt;: PHP 8.4, Symfony 8, Doctrine ORM, FrankenPHP in Worker Mode.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Frontend&lt;/strong&gt;: SvelteKit 2 with Svelte 5, TypeScript in strict mode, Tailwind 4, Bun as the package manager and runtime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Database&lt;/strong&gt;: PostgreSQL 17 with PgBouncer in Transaction Mode. PgBouncer is there from day one because adding it later to an existing setup is more painful than people expect — especially if your code uses &lt;code&gt;SET&lt;/code&gt; commands or advisory locks that don't survive connection reuse. Better to design around it early.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure&lt;/strong&gt;: OpenTofu modules for Hetzner. Deploying to a Hetzner VPS is cheap and simple; the modules handle the server provisioning, DNS, and firewall setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Same-Origin Architecture
&lt;/h2&gt;

&lt;p&gt;The template puts both the PHP API and the SvelteKit frontend behind a single Caddy reverse proxy on a single domain. API routes go to FrankenPHP, everything else goes to the SvelteKit server.&lt;/p&gt;

&lt;p&gt;This means no CORS headers, no &lt;code&gt;SameSite=None&lt;/code&gt; cookies, no cross-origin authentication complexity. A session cookie set by Symfony is readable by SvelteKit's server-side rendering on the same origin. The SvelteKit &lt;code&gt;load&lt;/code&gt; function calls &lt;code&gt;fetch('/api/...')&lt;/code&gt; — no base URL configuration, no environment variable juggling per environment.&lt;/p&gt;
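&lt;p&gt;A minimal sketch of what a &lt;code&gt;load&lt;/code&gt; function looks like on this setup — the &lt;code&gt;/api/habits&lt;/code&gt; endpoint is hypothetical, and in a real app this function is exported from a &lt;code&gt;+page.ts&lt;/code&gt;:&lt;/p&gt;

```typescript
// `fetch` is the instance SvelteKit injects into load. The relative URL
// resolves against the app's own origin, so Caddy routes it to
// FrankenPHP and the Symfony session cookie rides along — no CORS,
// no base URL configuration. Typed loosely for brevity.
async function load({ fetch }: { fetch: any }) {
  const res = await fetch('/api/habits');
  return { habits: await res.json() };
}
```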

&lt;p&gt;The tradeoff is that both services must be deployed together. For a product with separate teams on frontend and backend, a split origin might make sense. For a solo developer or a small team shipping one product, Same-Origin keeps the operational surface small.&lt;/p&gt;

&lt;h2&gt;
  
  
  The PHPStan Setup
&lt;/h2&gt;

&lt;p&gt;PHPStan is configured at level max with 10 extensions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;phpstan-strict-rules&lt;/strong&gt; — the stricter rules the PHPStan authors publish as a separate, opt-in package&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;phpstan-deprecation-rules&lt;/strong&gt; — surfaces deprecated API usage before your dependencies drop them&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;phpstan-symfony&lt;/strong&gt; — understands Symfony's service container and DI conventions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;phpstan-doctrine&lt;/strong&gt; — knows about Doctrine entity mappings and query builder types&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;phpstan-phpunit&lt;/strong&gt; — type inference inside PHPUnit tests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;shipmonk/phpstan-rules&lt;/strong&gt; — ~40 additional rules covering enum exhaustiveness and exception handling hygiene&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;voku/phpstan-rules&lt;/strong&gt; — operator type safety (no more implicit int/string coercions)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;tomasvotruba/cognitive-complexity&lt;/strong&gt; — hard limit of 8 per method, 50 per class. If PHPStan fails because of cognitive complexity, the method needs to be split, not the limit raised.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;tomasvotruba/type-coverage&lt;/strong&gt; — 100% type coverage required. No untyped property, no missing return type.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;phpat/phpat&lt;/strong&gt; — architecture tests as code. Define which layers can depend on which, and PHPStan enforces it on every run.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That last one is underused in PHP projects. With phpat, I can write rules like "controllers may not depend on Doctrine repositories directly" and get a static analysis failure — not a code review comment, not a runtime error — if that boundary is crossed. The template ships with a basic &lt;code&gt;ArchitectureTest.php&lt;/code&gt; that you extend as the project grows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mutation Testing
&lt;/h2&gt;

&lt;p&gt;PHPUnit plus code coverage tells you that your tests run. Infection tells you whether they actually test anything.&lt;/p&gt;

&lt;p&gt;Infection works by mutating the source code — flipping a &lt;code&gt;&amp;gt;&lt;/code&gt; to &lt;code&gt;&amp;gt;=&lt;/code&gt;, removing a &lt;code&gt;return&lt;/code&gt;, changing a &lt;code&gt;true&lt;/code&gt; to &lt;code&gt;false&lt;/code&gt; — and then running your test suite against each mutation. If a test fails after mutation, the mutation is "killed." If nothing fails, the mutation "escapes," meaning your tests don't cover that behavior.&lt;/p&gt;

&lt;p&gt;The template requires a Mutation Score Indicator (MSI) of at least 80%, and Covered MSI of at least 90%. These aren't arbitrary numbers — 80% MSI means that 4 out of 5 possible mutations to your code break at least one test. At lower thresholds, you'll find test suites that have 90% line coverage but barely prove anything about behavior.&lt;/p&gt;
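&lt;p&gt;As plain arithmetic, the two metrics look like this — simplified, since real Infection also counts timed-out and errored mutants as detected:&lt;/p&gt;

```typescript
// MSI: detected mutants over all generated mutants, as a percentage.
function msi(killedMutants: number, totalMutants: number): number {
  return (killedMutants / totalMutants) * 100;
}

// Covered MSI ignores mutants on lines that no test executes at all,
// so it measures the quality of the tests you actually have.
function coveredMsi(killedMutants: number, coveredMutants: number): number {
  return (killedMutants / coveredMutants) * 100;
}
```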

&lt;p&gt;Infection runs as part of CI, after PHPUnit, only when the unit test suite passes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Other Quality Tools
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Rector&lt;/strong&gt; runs with PHP 8.4 and Symfony 8 rule sets. Early returns, enum usage, typed properties, dead code removal — the code in the repository always reflects current PHP idioms regardless of whether a human or an AI wrote it. Rector is configured to auto-fix, not just report.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ECS&lt;/strong&gt; (Easy Coding Standard) handles formatting and coding style. It runs before PHPStan in CI and fails the build before the slower analysis even starts. On commit, CaptainHook runs ECS and PHPStan locally so you know before you push.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CaptainHook&lt;/strong&gt; rather than GrumPHP for git hooks. GrumPHP has historically had issues with how it handles the hook environment; CaptainHook is simpler and its configuration is more explicit.&lt;/p&gt;

&lt;h2&gt;
  
  
  CI/CD
&lt;/h2&gt;

&lt;p&gt;Two workflows cover the full stack:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ci.yml&lt;/code&gt; runs on every push and PR: ECS → PHPStan → Rector check → PHPUnit with path coverage → Infection. The order matters: formatting and static analysis are fast and fail early; mutation testing is slow and only runs when everything else passes.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ci-frontend.yml&lt;/code&gt; covers the SvelteKit side: ESLint → Svelte Check → Bun build. This runs in parallel with the PHP pipeline.&lt;/p&gt;

&lt;p&gt;Two additional workflows use Claude Code. &lt;code&gt;claude-update.yml&lt;/code&gt; runs on a biweekly schedule and opens PRs for dependency updates — both Composer and npm — with commit messages that explain what changed and why it matters. &lt;code&gt;claude-review.yml&lt;/code&gt; posts an automated code review on every PR. Neither replaces human review, but they catch the obvious things before a human spends time on them.&lt;/p&gt;
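&lt;p&gt;Cron has no native "every two weeks" expression, so scheduled workflows like this usually approximate it with two fixed days per month (an illustration of the trigger, not the actual &lt;code&gt;claude-update.yml&lt;/code&gt;):&lt;/p&gt;

```yaml
on:
  schedule:
    - cron: '0 6 1,15 * *'   # 06:00 UTC on the 1st and 15th
  workflow_dispatch: {}      # allow manual runs as well
```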

&lt;h2&gt;Claude Code Integration&lt;/h2&gt;

&lt;p&gt;The template ships with a &lt;code&gt;CLAUDE.md&lt;/code&gt; and a &lt;code&gt;.claude/&lt;/code&gt; directory that encode the architecture decisions and coding conventions. This is the same approach I described in &lt;a href="https://dev.to/iamirondev/how-i-10x-my-output-as-a-senior-developer-without-sacrificing-code-quality-bdg"&gt;my previous post about 10x output with quality&lt;/a&gt;: you document the decisions once, and then every Claude Code session starts with that context already loaded.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;.claude/&lt;/code&gt; guidelines cover things like: how to structure Symfony services, which Doctrine patterns to use, how the Same-Origin routing works, which PHPStan rules are intentional versus suppressible. If you use Claude Code to build on top of this template, it already knows the constraints.&lt;/p&gt;
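&lt;p&gt;To make that concrete, a guidelines file in &lt;code&gt;.claude/&lt;/code&gt; tends to read like a terse checklist. The headings and rules below are invented for illustration; the template's actual files encode its own decisions:&lt;/p&gt;

```markdown
# Coding Guidelines (excerpt)

## Symfony services
- Constructor injection only; mark services `final` and `readonly`.

## Doctrine
- No queries in loops; prefer explicit DQL with fetch joins.

## PHPStan
- Never add an ignore annotation without a comment explaining why.
```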

&lt;h2&gt;What This Template Is Not&lt;/h2&gt;

&lt;p&gt;It's not a microservices framework, not a monorepo setup, and not designed for projects where the frontend and backend are owned by separate teams. It's a solid starting point for a single product, built by a small team that wants quality tooling from day one without spending a week configuring it.&lt;/p&gt;

&lt;p&gt;If you want the full context for how I built this — and why the quality stack matters more than the AI that helped me write it — read &lt;a href="https://dev.to/iamirondev/how-i-10x-my-output-as-a-senior-developer-without-sacrificing-code-quality-bdg"&gt;How I 10x My Output as a Senior Developer Without Sacrificing Code Quality&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Click "Use this template," run &lt;code&gt;make start&lt;/code&gt;, and you have PHPStan level max passing from commit zero.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Found this useful? Follow me on &lt;a href="https://blog.tony-stark.xyz" rel="noopener noreferrer"&gt;my blog&lt;/a&gt; for more.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>symfony</category>
      <category>phpstan</category>
      <category>sveltekit</category>
      <category>opensource</category>
    </item>
    <item>
      <title>How I 10x My Output as a Senior Developer Without Sacrificing Code Quality</title>
      <dc:creator>Kevin</dc:creator>
      <pubDate>Sun, 22 Mar 2026 20:04:53 +0000</pubDate>
      <link>https://dev.to/iamirondev/how-i-10x-my-output-as-a-senior-developer-without-sacrificing-code-quality-bdg</link>
      <guid>https://dev.to/iamirondev/how-i-10x-my-output-as-a-senior-developer-without-sacrificing-code-quality-bdg</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://blog.tony-stark.xyz/blog/2026-03-10x-output-with-quality/" rel="noopener noreferrer"&gt;my blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Last weekend I shipped two repositories from scratch: an &lt;a href="https://github.com/tony-stark-eth/template-symfony-sveltekit" rel="noopener noreferrer"&gt;opinionated full-stack template&lt;/a&gt; for PHP 8.4 + Symfony 8 + SvelteKit 2, and a &lt;a href="https://github.com/tony-stark-eth/smarthabit-tracker" rel="noopener noreferrer"&gt;complete habit tracking application&lt;/a&gt; built on top of it. 51 commits, 6 GitHub Actions workflows, 10 PHPStan extensions configured at level max, Docker multi-stage builds, OpenTofu infrastructure — all passing CI.&lt;/p&gt;

&lt;p&gt;That's not a normal weekend.&lt;/p&gt;

&lt;p&gt;I'm a senior developer with strong opinions about code quality. I don't ship code without static analysis at the highest level, mutation testing, architecture tests, and automated formatting. None of that changed. What changed is &lt;em&gt;how&lt;/em&gt; I get there.&lt;/p&gt;

&lt;h2&gt;The Bottleneck Was Never Thinking&lt;/h2&gt;

&lt;p&gt;Here's what I realized: most of my time as a senior developer was never spent on architecture decisions or solving hard problems. It was spent on everything around those decisions. Writing boilerplate. Configuring tools. Looking up API signatures. Writing the 14th PHPUnit test that follows the same pattern as the previous 13. Fixing YAML indentation in CI workflows.&lt;/p&gt;

&lt;p&gt;These tasks require knowledge to do correctly, but they don't require creativity. They're the tax you pay for building things properly.&lt;/p&gt;

&lt;p&gt;AI code assistants eliminate that tax.&lt;/p&gt;

&lt;h2&gt;The System&lt;/h2&gt;

&lt;p&gt;My workflow has three layers, and the order matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layer 1: I make the decisions.&lt;/strong&gt; Architecture, tech stack, data model, which tools to use and why, what the quality standards are. This part is entirely human. I spent hours in a planning session defining the template's quality stack — which PHPStan extensions to include, why CaptainHook over GrumPHP, why Same-Origin architecture instead of separate API and frontend domains, why PgBouncer in transaction mode needs &lt;code&gt;DISCARD ALL&lt;/code&gt;. These are decisions that require experience and judgment. No AI made them for me.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layer 2: I write the spec, the AI writes the code.&lt;/strong&gt; Once the decisions are made, I document them in a format Claude Code can execute against. A &lt;code&gt;CLAUDE.md&lt;/code&gt; file that defines the project context and hard constraints. A &lt;code&gt;.claude/&lt;/code&gt; directory with coding guidelines, testing rules, and architecture conventions. When I tell Claude Code to create a &lt;code&gt;phpstan.neon&lt;/code&gt; with level max and 10 specific extensions, it doesn't need to figure out &lt;em&gt;which&lt;/em&gt; extensions — I already made that call. It just needs to produce correct configuration. That's a task it handles well.&lt;/p&gt;
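&lt;p&gt;The resulting &lt;code&gt;phpstan.neon&lt;/code&gt; stays short because the decisions live in which extensions get included. A minimal sketch showing only two of the extensions (with &lt;code&gt;phpstan/extension-installer&lt;/code&gt;, the includes are registered automatically):&lt;/p&gt;

```neon
includes:
    - vendor/phpstan/phpstan-strict-rules/rules.neon
    - vendor/phpstan/phpstan-deprecation-rules/rules.neon

parameters:
    level: max
    paths:
        - src
        - tests
```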

&lt;p&gt;&lt;strong&gt;Layer 3: I review everything.&lt;/strong&gt; Every line Claude Code produces goes through my review. Not a rubber stamp — an actual review where I check for the same things I'd check in any PR. Does the Doctrine mapping make sense? Is the Caddyfile routing correct for Same-Origin? Are the CI workflow dependencies right so PHPStan runs before tests? This is where senior experience compounds: I catch issues that the AI doesn't know are issues, because they require understanding how things interact in production.&lt;/p&gt;

&lt;h2&gt;Why the Quality Stack Is the Multiplier&lt;/h2&gt;

&lt;p&gt;Here's what most "AI makes me 10x productive" posts get wrong: they focus on the AI and ignore the safety net.&lt;/p&gt;

&lt;p&gt;If I used Claude Code without PHPStan at level max, without mutation testing, without architecture tests — I'd ship faster, sure. I'd also ship bugs. AI-generated code is plausible code. It looks right. It often &lt;em&gt;is&lt;/em&gt; right. But "often" is not "always", and the gap between those two words is where production incidents live.&lt;/p&gt;

&lt;p&gt;My quality stack is what turns AI-assisted speed into AI-assisted &lt;em&gt;confidence&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PHPStan at level max with 10 extensions&lt;/strong&gt; catches type errors, forgotten exceptions, cognitive complexity violations, and architectural boundary crossings. If Claude Code generates a service that accidentally depends on an infrastructure layer, phpat flags it before I even see the code.&lt;/p&gt;
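&lt;p&gt;A phpat boundary rule is declared as a small PHP class that PHPStan picks up during analysis. Something along these lines would flag the dependency described above (the &lt;code&gt;App\Domain&lt;/code&gt; and &lt;code&gt;App\Infrastructure&lt;/code&gt; namespaces are illustrative):&lt;/p&gt;

```php
&lt;?php

use PHPat\Selector\Selector;
use PHPat\Test\Builder\Rule;
use PHPat\Test\PHPat;

// Runs as part of PHPStan analysis, not as a PHPUnit test.
final class ArchitectureTest
{
    public function test_domain_does_not_touch_infrastructure(): Rule
    {
        return PHPat::rule()
            -&gt;classes(Selector::inNamespace('App\Domain'))
            -&gt;shouldNotDependOn()
            -&gt;classes(Selector::inNamespace('App\Infrastructure'));
    }
}
```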

&lt;p&gt;&lt;strong&gt;Mutation testing via Infection&lt;/strong&gt; proves that tests actually test something. It's easy to write tests that pass but don't assert meaningful behavior — especially when an AI writes them. Infection mutates the code and checks if tests catch the change. MSI below 80% means the test suite is decorative.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rector with auto-fix rules&lt;/strong&gt; ensures the code follows PHP 8.4 idioms regardless of who — or what — wrote it. Early returns, type declarations, dead code removal. The code that lands in the repository always looks like &lt;em&gt;my&lt;/em&gt; code, not like "AI code."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CaptainHook git hooks&lt;/strong&gt; run ECS and PHPStan on every commit. Even in a fast-moving session with Claude Code, nothing bypasses the quality gate.&lt;/p&gt;

&lt;p&gt;The result: I move fast, but the guardrails are always on. The AI proposes, the quality stack validates, and I make the final call.&lt;/p&gt;

&lt;h2&gt;What AI Is Bad At&lt;/h2&gt;

&lt;p&gt;I want to be specific about where Claude Code fails, because the honest version of this story matters more than the hype version.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It doesn't question your decisions.&lt;/strong&gt; If I tell it to implement something architecturally wrong, it will do it confidently and correctly — the wrong thing, done well. The &lt;code&gt;CLAUDE.md&lt;/code&gt; helps here because it encodes my decisions, but it can't encode judgment I haven't articulated yet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It loses context in long sessions.&lt;/strong&gt; After 30+ back-and-forth iterations, Claude Code starts forgetting constraints from earlier in the conversation. I've learned to keep sessions focused: one feature, one file group, then start fresh.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It generates plausible-but-wrong configuration.&lt;/strong&gt; A Caddyfile that looks correct but has the routing order wrong. A &lt;code&gt;phpstan.neon&lt;/code&gt; that includes an extension that conflicts with another. A &lt;code&gt;compose.yaml&lt;/code&gt; where the PgBouncer service connects to the wrong network. These are exactly the bugs that my review step catches — and exactly why the review step isn't optional.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It can't substitute for real-world experience.&lt;/strong&gt; When I needed to decide between &lt;code&gt;ntfy&lt;/code&gt; and Firebase for push notifications, Claude Code couldn't weigh the tradeoffs the way someone who has run both in production can. It could list pros and cons, but it couldn't tell me that Firebase's free tier has a notification limit that would matter at 500 households. That insight came from my own experience.&lt;/p&gt;

&lt;h2&gt;The Numbers&lt;/h2&gt;

&lt;p&gt;What used to take me a full sprint (two weeks) to set up — Docker configuration, CI pipeline, quality tooling, frontend scaffolding, infrastructure skeleton — I now complete in a weekend. Not because the AI does it for me, but because it handles the implementation while I focus on the decisions.&lt;/p&gt;

&lt;p&gt;The template repository has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a multi-stage Dockerfile with FrankenPHP&lt;/li&gt;
&lt;li&gt;three compose files (dev/override/prod)&lt;/li&gt;
&lt;li&gt;10 PHPStan extensions configured and passing&lt;/li&gt;
&lt;li&gt;Rector with PHP 8.4 + Symfony 8 rulesets&lt;/li&gt;
&lt;li&gt;ECS for coding standards&lt;/li&gt;
&lt;li&gt;PHPUnit 13 with path coverage&lt;/li&gt;
&lt;li&gt;Infection for mutation testing&lt;/li&gt;
&lt;li&gt;CaptainHook for git hooks&lt;/li&gt;
&lt;li&gt;6 GitHub Actions workflows&lt;/li&gt;
&lt;li&gt;OpenTofu modules for Hetzner deployment&lt;/li&gt;
&lt;li&gt;a full &lt;code&gt;.claude/&lt;/code&gt; directory with guidelines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Configuring all of that manually — even for someone who's done it before — takes days. With Claude Code executing against a clear spec, it takes hours.&lt;/p&gt;

&lt;h2&gt;Try It Yourself&lt;/h2&gt;

&lt;p&gt;The template is open source and designed to be forked:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/tony-stark-eth/template-symfony-sveltekit" rel="noopener noreferrer"&gt;github.com/tony-stark-eth/template-symfony-sveltekit&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every quality tool is pre-configured. Every CI workflow is ready. Fork it, run &lt;code&gt;docker compose up&lt;/code&gt;, and you have a full-stack project with PHPStan level max from commit zero.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;CLAUDE.md&lt;/code&gt; and &lt;code&gt;.claude/&lt;/code&gt; guidelines are included — so if you use Claude Code, it already knows how to work with the codebase.&lt;/p&gt;

&lt;p&gt;If you're a senior developer feeling skeptical about AI tools: I was too. The trick is not to let the AI drive. You drive. The AI is the engine. And the quality stack is the brakes.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Found this useful? Follow me on &lt;a href="https://blog.tony-stark.xyz" rel="noopener noreferrer"&gt;my blog&lt;/a&gt; for more.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>developerproductivity</category>
      <category>codequality</category>
      <category>claudecode</category>
      <category>phpstan</category>
    </item>
  </channel>
</rss>
