<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ww-w.ai</title>
    <description>The latest articles on DEV Community by ww-w.ai (@ww-w-ai).</description>
    <link>https://dev.to/ww-w-ai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3911526%2Fc2aad5e2-072c-4e92-9596-8f49c5cf03a2.jpeg</url>
      <title>DEV Community: ww-w.ai</title>
      <link>https://dev.to/ww-w-ai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ww-w-ai"/>
    <language>en</language>
    <item>
      <title>Lorem Ipsum Makes LLMs Smarter. No, Seriously.</title>
      <dc:creator>ww-w.ai</dc:creator>
      <pubDate>Mon, 11 May 2026 17:32:06 +0000</pubDate>
      <link>https://dev.to/ww-w-ai/lorem-ipsum-makes-llms-smarter-no-seriously-1j8l</link>
      <guid>https://dev.to/ww-w-ai/lorem-ipsum-makes-llms-smarter-no-seriously-1j8l</guid>
      <description>&lt;p&gt;You know Lorem Ipsum. The placeholder text designers have been slapping into mockups since the 1960s. Turns out, it might be one of the most effective tools for making language models better at math.&lt;/p&gt;

&lt;p&gt;A paper dropped last week — "Nonsense Helps: Prompt Space Perturbation Broadens Reasoning Exploration" (Huang et al., May 2026) — and the core finding is wild: prepending random Lorem Ipsum text before math problems during reinforcement learning training produces models that solve problems they otherwise never could.&lt;/p&gt;

&lt;p&gt;Let me walk through why this works, because it is genuinely clever once you see the mechanism.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: When Every Answer Is Wrong, Nobody Learns
&lt;/h2&gt;

&lt;p&gt;Modern LLM training uses reinforcement learning after the initial pretraining phase. One popular method is GRPO (Group Relative Policy Optimization), where you sample multiple candidate answers for a question, then reward the good ones and penalize the bad ones.&lt;/p&gt;

&lt;p&gt;Here is the catch. For hard questions, &lt;em&gt;all&lt;/em&gt; sampled answers might be wrong. When that happens, every candidate gets the same score. The relative advantage between them collapses to zero. No gradient. No learning signal. The model just shrugs and moves on.&lt;/p&gt;

&lt;p&gt;This is called the &lt;strong&gt;zero-advantage problem&lt;/strong&gt;, and it hits hardest on the exact questions you most want the model to learn from — the difficult ones sitting at the frontier of its capability.&lt;/p&gt;
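
&lt;p&gt;To make the collapse concrete, here is a minimal sketch of group-relative advantage normalization. This is a simplification for illustration; real GRPO trainers add clipping, KL penalties, and an epsilon in the denominator:&lt;/p&gt;

```python
import statistics

def group_advantages(rewards):
    """Normalize each reward against its group's mean and std."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0:
        # Every candidate scored the same: no relative signal at all.
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]

# A hard question where all 8 sampled answers are wrong:
print(group_advantages([0, 0, 0, 0, 0, 0, 0, 0]))  # all zeros, no gradient

# A mixed group still produces a learning signal:
print(group_advantages([1, 0, 0, 1]))
```

&lt;p&gt;With all-zero rewards the advantages vanish identically, so that question contributes nothing to the gradient.&lt;/p&gt;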

&lt;p&gt;Previous fixes tried resampling (just roll the dice again) or adjusting reward scaling. They help a little, but fundamentally you are still asking the same question the same way, hoping for a different result.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Fix: Just Jam Some Latin In There
&lt;/h2&gt;

&lt;p&gt;LoPE — Lorem Perturbation for Exploration — does something that sounds like a prank. When the model fails on a hard question, LoPE prepends a randomly assembled chunk of Lorem Ipsum text before the prompt and resamples.&lt;/p&gt;

&lt;p&gt;So instead of:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Solve: What is the integral of x^2 from 0 to 3?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The model sees:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Solve: What is the integral of x^2 from 0 to 3?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And somehow, this works. The nonsense prefix perturbs the model's internal state just enough to push it down different reasoning paths. Think of it like giving a stuck hiker a gentle shove in a random direction — sometimes that is all you need to find a trail you could not see before.&lt;/p&gt;
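
&lt;p&gt;A minimal sketch of that perturb-and-resample step, assuming the paper's setup; &lt;code&gt;rollout&lt;/code&gt; and &lt;code&gt;all_wrong&lt;/code&gt; stand in for the real sampler and reward check, and the prefix length here is an arbitrary choice, not the paper's setting:&lt;/p&gt;

```python
import random

# Word pool for assembling nonsense prefixes (abridged).
LOREM = ("lorem ipsum dolor sit amet consectetur adipiscing elit "
         "sed do eiusmod tempor incididunt ut labore et dolore").split()

def lorem_prefix(n_words, rng):
    """Assemble a random chunk of Lorem Ipsum words."""
    return " ".join(rng.choice(LOREM) for _ in range(n_words))

def lope_resample(prompt, rollout, all_wrong, n_words=32, seed=0):
    """Retry a failed question with a nonsense prefix prepended.

    rollout and all_wrong stand in for the real sampler and
    reward check; only the prefixing logic is the point here.
    """
    if not all_wrong:
        return rollout(prompt)  # normal sampling path
    rng = random.Random(seed)
    perturbed = lorem_prefix(n_words, rng) + "\n" + prompt
    return rollout(perturbed)   # resample from a nudged internal state
```

&lt;p&gt;Dropping this into a GRPO loop only changes the prompt string; the reward function and optimizer never need to know.&lt;/p&gt;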

&lt;h2&gt;
  
  
  Why Latin and Not Just Random Characters?
&lt;/h2&gt;

&lt;p&gt;The authors tested this systematically. Not all perturbations are equal. What works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Latin-based vocabulary&lt;/strong&gt; (Lorem Ipsum words)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low perplexity&lt;/strong&gt; (around 25) — the text needs to "look like language" to the model, even if it is meaningless&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What does not work well:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Random character strings (too alien; the model either ignores them or destabilizes)&lt;/li&gt;
&lt;li&gt;High-perplexity gibberish&lt;/li&gt;
&lt;li&gt;Perturbations in the model's primary training language (too much semantic interference)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Lorem Ipsum hits a sweet spot: familiar enough that the model processes it normally, foreign enough that it does not contaminate the actual reasoning task. It nudges the hidden states without hijacking them.&lt;/p&gt;
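
&lt;p&gt;For reference, perplexity here is the exponential of the average per-token negative log-likelihood under the model. A quick sketch with made-up NLL values; a real check would score the candidate text with the training model itself:&lt;/p&gt;

```python
import math

def perplexity(token_nlls):
    """exp of the mean per-token negative log-likelihood."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Hypothetical per-token NLLs; ln(25) is about 3.2189, so tokens
# averaging that NLL give the reported sweet-spot perplexity of ~25.
print(round(perplexity([3.2189] * 10), 1))  # 25.0
```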

&lt;h2&gt;
  
  
  The Numbers
&lt;/h2&gt;

&lt;p&gt;Tested on Qwen3-4B-Base across standard math benchmarks:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Benchmark&lt;/th&gt;
&lt;th&gt;Standard GRPO&lt;/th&gt;
&lt;th&gt;LoPE&lt;/th&gt;
&lt;th&gt;Change&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;MATH-500&lt;/td&gt;
&lt;td&gt;77.80&lt;/td&gt;
&lt;td&gt;82.60&lt;/td&gt;
&lt;td&gt;+4.80&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AMC&lt;/td&gt;
&lt;td&gt;47.76&lt;/td&gt;
&lt;td&gt;58.21&lt;/td&gt;
&lt;td&gt;+10.45 (&lt;strong&gt;+22% relative&lt;/strong&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AIME 2024&lt;/td&gt;
&lt;td&gt;16.41&lt;/td&gt;
&lt;td&gt;19.90&lt;/td&gt;
&lt;td&gt;+3.49&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Overall avg&lt;/td&gt;
&lt;td&gt;49.37&lt;/td&gt;
&lt;td&gt;53.99&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;+4.62 pts&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;On the 7B model, the gap widens further: &lt;strong&gt;+6.20 points&lt;/strong&gt; over standard GRPO.&lt;/p&gt;

&lt;p&gt;But the most interesting result is qualitative. On a set of 352 hard questions, LoPE &lt;strong&gt;uniquely solved 50 questions that no other method could crack&lt;/strong&gt;. These were not marginal improvements on borderline problems. These were questions where every other approach produced zero correct answers, and LoPE found solutions.&lt;/p&gt;

&lt;p&gt;The mechanism shows up clearly in the advantage signal. For those rare successful trajectories on hard problems, LoPE amplifies the advantage by &lt;strong&gt;2.1x to 5.0x&lt;/strong&gt; compared to standard resampling. When a perturbed prompt finally produces a correct answer, that success gets a much stronger training signal because it stands out sharply against the failed attempts.&lt;/p&gt;
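
&lt;p&gt;That amplification falls directly out of group normalization: a single correct answer among otherwise-failed samples gets an advantage of sqrt(n-1) under mean/std normalization. A back-of-envelope check, illustrating the effect rather than the paper's exact estimator:&lt;/p&gt;

```python
import math
import statistics

def success_advantage(group_size):
    """Advantage of one correct answer in an otherwise all-wrong group."""
    rewards = [1.0] + [0.0] * (group_size - 1)
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return (1.0 - mean) / std

# One success out of 8 stands out sharply against the failures:
print(round(success_advantage(8), 3))  # 2.646, i.e. sqrt(7)
```

&lt;p&gt;The harder the question (the bigger the group of failures), the more that lone success stands out.&lt;/p&gt;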

&lt;h2&gt;
  
  
  Why This Matters for Practitioners
&lt;/h2&gt;

&lt;p&gt;Three takeaways if you work with LLMs:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Exploration is still an unsolved problem.&lt;/strong&gt; We talk a lot about scaling data and compute, but how models explore the solution space during RL training is arguably more important and much less understood. LoPE is evidence that we are leaving performance on the table.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Prompt sensitivity is a feature, not a bug.&lt;/strong&gt; The fact that meaningless prefix text can unlock entirely different reasoning chains tells us something deep about how these models navigate their latent space. The "right" answer is often reachable — the model just needs a different starting point.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Simple methods can beat complex ones.&lt;/strong&gt; LoPE is almost embarrassingly simple to implement. No architecture changes. No reward model modifications. Just prepend some Lorem Ipsum during resampling. If you are doing RL fine-tuning, this is a near-zero-cost experiment to try.&lt;/p&gt;

&lt;p&gt;The broader lesson: sometimes the best interventions do not add information. They add noise in exactly the right way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Paper Link
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://arxiv.org/abs/2605.05566" rel="noopener noreferrer"&gt;Nonsense Helps: Prompt Space Perturbation Broadens Reasoning Exploration&lt;/a&gt;&lt;br&gt;
Huang, Huang, Li, Cai, Yang, Huang (Washington University in St. Louis) — May 7, 2026&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: This is an arXiv preprint — not yet peer-reviewed. But the results are concrete, the methodology is clean, and the lead researcher (Jiaxin Huang) is a Microsoft Research PhD Fellow and AAAI 2026 New Faculty Highlight recipient. Worth watching.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Image Source: Huang et al., "Nonsense Helps" (arXiv:2605.05566), CC BY-NC-SA 4.0&lt;/em&gt;&lt;/p&gt;




</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>llm</category>
      <category>research</category>
    </item>
    <item>
      <title>Delete the Vercel Claude Code Plugin. Here's Why I Did.</title>
      <dc:creator>ww-w.ai</dc:creator>
      <pubDate>Mon, 11 May 2026 13:46:49 +0000</pubDate>
      <link>https://dev.to/ww-w-ai/delete-the-vercel-claude-code-plugin-heres-why-i-did-39hl</link>
      <guid>https://dev.to/ww-w-ai/delete-the-vercel-claude-code-plugin-heres-why-i-did-39hl</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The Vercel Claude Code plugin creates a &lt;strong&gt;permanent device UUID&lt;/strong&gt; on your machine the instant you install it. No notification. No expiry. No rotation.&lt;/li&gt;
&lt;li&gt;Session starts, tool calls, skill matches — all sent to &lt;code&gt;telemetry.vercel.com&lt;/code&gt;. &lt;strong&gt;Default ON, no consent prompt.&lt;/strong&gt; Prompt metadata (matched skill + score) included.&lt;/li&gt;
&lt;li&gt;What's worse: they built a consent dialog for prompt text collection. But clicking "No thanks" only stops prompt text. All other telemetry keeps running. Most users will think they opted out of everything.&lt;/li&gt;
&lt;li&gt;The documentation exists — buried eight directories deep inside &lt;code&gt;~/.claude/plugins/cache/&lt;/code&gt;. Nobody reads it. &lt;strong&gt;Documented ≠ Informed.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What I Found
&lt;/h2&gt;

&lt;p&gt;I was building a static analysis tool for AI plugins — scanning popular skills for security issues. Regex pattern matching plus dual-LLM cross-verification.&lt;/p&gt;

&lt;p&gt;I was running a batch scan — 200 Claude Code skills, checking for destructive commands, data exfiltration, prompt injection, the usual. On skill #147, the scanner flagged something in &lt;code&gt;~/.claude/&lt;/code&gt;. Not in some random GitHub repo. On my own machine.&lt;/p&gt;

&lt;p&gt;I didn't suspect Vercel for a second. I assumed the flag was a false positive from my own scanner. So I pulled the Vercel plugin source as a reference — to compare against "known good" code and figure out what I was doing wrong.&lt;/p&gt;

&lt;p&gt;Then I read the Vercel source. Here's what I found.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Evidence
&lt;/h2&gt;

&lt;p&gt;All file paths and line numbers reference &lt;code&gt;vercel-plugin&lt;/code&gt; v0.32.7, located at &lt;code&gt;~/.claude/plugins/cache/vercel/vercel-plugin/0.32.7/&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Every session start sends this:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// session-start-profiler.mts:702-709&lt;/span&gt;
&lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;device_id&lt;/span&gt;            &lt;span class="c1"&gt;// permanent device identifier&lt;/span&gt;
&lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;platform&lt;/span&gt;             &lt;span class="c1"&gt;// darwin, linux, win32&lt;/span&gt;
&lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;likely_skills&lt;/span&gt;        &lt;span class="c1"&gt;// which skills you use&lt;/span&gt;
&lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;greenfield&lt;/span&gt;           &lt;span class="c1"&gt;// whether the project is new&lt;/span&gt;
&lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;vercel_cli_installed&lt;/span&gt; &lt;span class="c1"&gt;// whether you have the Vercel CLI&lt;/span&gt;
&lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;vercel_cli_version&lt;/span&gt;   &lt;span class="c1"&gt;// which version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Every tool call you make — any tool, not just Vercel's:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// pretooluse-skill-inject.mts:969-971&lt;/span&gt;
&lt;span class="nx"&gt;tool_call&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;tool_name&lt;/span&gt;          &lt;span class="c1"&gt;// which tool you just called&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Every time a skill matches your prompt:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// pretooluse-skill-inject.mts:1205-1210&lt;/span&gt;
&lt;span class="nx"&gt;skill&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;injected&lt;/span&gt;               &lt;span class="c1"&gt;// which skill got injected&lt;/span&gt;
&lt;span class="nx"&gt;skill&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;match_type&lt;/span&gt;             &lt;span class="c1"&gt;// how it matched&lt;/span&gt;
&lt;span class="nx"&gt;skill&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;tool_name&lt;/span&gt;              &lt;span class="c1"&gt;// against which tool&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Every prompt you submit:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// user-prompt-submit-skill-inject.mts:1063-1065&lt;/span&gt;
&lt;span class="nx"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;skill&lt;/span&gt;                 &lt;span class="c1"&gt;// which skill matched your prompt&lt;/span&gt;
&lt;span class="nx"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;score&lt;/span&gt;                 &lt;span class="c1"&gt;// confidence score&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All of it flows to a single endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;https://telemetry.vercel.com/api/vercel-plugin/v1/events
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;None of it asked for your permission.&lt;/p&gt;

&lt;h3&gt;
  
  
  The permanent device ID
&lt;/h3&gt;

&lt;p&gt;This is the part that should make you check your machine right now. Run this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; ~/.claude/vercel-plugin-device-id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll see something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;473d7060-5a37-4ebb-9082-b09a983c****
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A UUID. Created the instant you installed the plugin. Silently. No notification. It never expires. It never rotates. It ties together every session, every project, every client engagement you've ever worked on with Claude Code.&lt;/p&gt;

&lt;p&gt;For context: Chrome DevTools rotates session IDs every 24 hours (&lt;code&gt;ClearcutSender.ts:35,68-70&lt;/code&gt;). Vercel's device ID never expires. Privacy-conscious analytics platforms moved away from persistent device IDs years ago. This one lasts forever.&lt;/p&gt;

&lt;p&gt;Dozens of telemetry events per coding session. All tied to a permanent fingerprint. All default-on.&lt;/p&gt;




&lt;h2&gt;
  
  
  "But It's in the README"
&lt;/h2&gt;

&lt;p&gt;Technically, yes. The plugin's README.md has a &lt;code&gt;## Telemetry&lt;/code&gt; section. It explains what's collected and how to disable it.&lt;/p&gt;

&lt;p&gt;But does anyone seriously think that counts as consent?&lt;/p&gt;

&lt;p&gt;Walk through what actually happens:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You install the plugin.&lt;/li&gt;
&lt;li&gt;It prints a success message.&lt;/li&gt;
&lt;li&gt;You start coding.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At no point does any text appear on your screen about telemetry. No prompt. No checkbox. No banner. Nothing. Meanwhile, in the background: &lt;code&gt;~/.claude/vercel-plugin-device-id&lt;/code&gt; is written to disk, session events are queued, and your usage patterns start flowing to Vercel's servers.&lt;/p&gt;

&lt;p&gt;The README is sitting in &lt;code&gt;~/.claude/plugins/cache/vercel/vercel-plugin/0.32.7/&lt;/code&gt;. Eight directories deep inside a hidden folder. Nobody browses there.&lt;/p&gt;

&lt;p&gt;GDPR defines valid consent as "freely given, specific, informed, and unambiguous." Most companies — including startups with a fraction of Vercel's resources — treat this as the baseline. I haven't seen a single serious startup ship permanent device tracking without an install-time consent prompt in years. It's just not done anymore.&lt;/p&gt;

&lt;p&gt;Remember: Chrome DevTools rotates its session IDs every 24 hours (&lt;code&gt;ClearcutSender.ts:35,68-70&lt;/code&gt;). That's the standard. Vercel's device ID never rotates. Never expires. Created once, lives forever.&lt;/p&gt;

&lt;p&gt;This is not a gray area. This is not "technically compliant." A permanent device UUID, created silently, tied to every session, with no install-time disclosure — this is clearly Vercel's mistake.&lt;/p&gt;

&lt;p&gt;I used this plugin daily for months. I had no idea. And I'm the developer who was literally building a tool to analyze plugin source code.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Part That's Even More Absurd — I Never Consented
&lt;/h2&gt;

&lt;p&gt;Here's what makes this worse. The plugin actually has a consent dialog — for prompt text collection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// user-prompt-submit-telemetry.mts:58-61&lt;/span&gt;
&lt;span class="nx"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;  &lt;span class="c1"&gt;// full prompt content, up to 100KB — OPT-IN ONLY&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An explicit question appears: "Share your prompt text to help improve skill matching." You can say yes or no. Your choice is saved.&lt;/p&gt;

&lt;p&gt;So they know how to build consent flows. They built the infrastructure. They just chose not to use it for device tracking, tool-call logging, skill-usage profiling, and platform fingerprinting.&lt;/p&gt;

&lt;p&gt;And here's the trap: if you click "No thanks," you think you've opted out. You haven't. Base telemetry — everything in the previous section — keeps running. The README even says so: "base telemetry remains on by default."&lt;/p&gt;

&lt;p&gt;But you already clicked "No thanks." In your mind, the matter is settled. That's not a documentation gap. That's a dark pattern.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to Protect Yourself
&lt;/h2&gt;

&lt;p&gt;Do this now. It takes 60 seconds.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Check if you're affected
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; ~/.claude/vercel-plugin-device-id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the file exists, you have a permanent tracking UUID on your machine.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Disable telemetry
&lt;/h3&gt;

&lt;p&gt;Add this to your shell profile (&lt;code&gt;.zshrc&lt;/code&gt;, &lt;code&gt;.bashrc&lt;/code&gt;, etc.):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;VERCEL_PLUGIN_TELEMETRY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;off
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then reload:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;source&lt;/span&gt; ~/.zshrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Or just uninstall the plugin entirely
&lt;/h3&gt;

&lt;p&gt;If you don't need it, remove it. One fewer thing sending data you didn't agree to.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Should Change
&lt;/h2&gt;

&lt;p&gt;Two proposals. Design standards, not policy demands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Surface telemetry at install time.&lt;/strong&gt; One prompt. Plain language. "This plugin collects [X] and sends it to [Y]. OK?" The user sees it. The user decides. This is four lines of install-time code. Vercel already has the consent infrastructure. They use it for prompt text. Extend it to everything else.&lt;/p&gt;
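
&lt;p&gt;For illustration only, the shape of such a gate, sketched in Python rather than the plugin's TypeScript; the wording and names are mine, not Vercel's:&lt;/p&gt;

```python
def consent_prompt(data_items, endpoint, ask=input):
    """Install-time consent gate: describe the data flow, then ask.

    Default-off: anything other than an explicit "y" disables telemetry.
    """
    print("This plugin collects " + ", ".join(data_items))
    print("and sends it to " + endpoint + ".")
    return ask("Enable telemetry? [y/N] ").strip().lower() == "y"

# Demo with a scripted answer instead of real stdin:
enabled = consent_prompt(
    ["session metadata", "tool-call names", "skill matches"],
    "telemetry.vercel.com",
    ask=lambda q: "n",
)
print(enabled)  # False: silence is not consent
```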

&lt;p&gt;&lt;strong&gt;2. Treat data flows as API surface.&lt;/strong&gt; If your plugin sends data to an external endpoint, document it the way you'd document an API. What data. Where it goes. How often. How to stop it. Put this in the install output, not in a README eight directories deep.&lt;/p&gt;

&lt;p&gt;These aren't radical ideas. Homebrew notifies you on first run. VS Code notifies you on first launch. It's already the industry standard. The Vercel plugin just doesn't.&lt;/p&gt;




&lt;p&gt;Check your &lt;code&gt;~/.claude/&lt;/code&gt; directory right now. What did you find? Drop it in the comments.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>vibecoding</category>
      <category>security</category>
      <category>privacy</category>
    </item>
    <item>
      <title>We Need a CatRun for the AI Era</title>
      <dc:creator>ww-w.ai</dc:creator>
      <pubDate>Tue, 05 May 2026 15:36:00 +0000</pubDate>
      <link>https://dev.to/ww-w-ai/we-need-a-catrun-for-the-ai-era-34a0</link>
      <guid>https://dev.to/ww-w-ai/we-need-a-catrun-for-the-ai-era-34a0</guid>
      <description>&lt;p&gt;&lt;em&gt;A 16-pixel hero in your macOS menu bar. Watches LLM traffic. That's it.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  RunCat told us the CPU was busy. Nothing tells us the agent is.
&lt;/h2&gt;

&lt;p&gt;You remember &lt;a href="https://kyome.io/runcat/index.html?lang=en" rel="noopener noreferrer"&gt;RunCat&lt;/a&gt; — the kitten in your menu bar that runs faster when your CPU is busy. Almost a decade old. Adorable. Useful. Asks nothing of you.&lt;/p&gt;

&lt;p&gt;AI-native development needs the same thing for a different signal. Not CPU. &lt;strong&gt;Agent traffic.&lt;/strong&gt; Is there a live LLM request flowing right now, or is everything quiet?&lt;/p&gt;

&lt;p&gt;That's why I built &lt;strong&gt;AgentRunner&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We need a CatRun for the AI era. So I made one.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A 16-pixel hero in your macOS menu bar. Runs when your agent's actually working. Idle when it isn't. That's the whole UI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7mscgpheaiq6v6t3mxgl.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7mscgpheaiq6v6t3mxgl.gif" alt="AgentRunner demo — 23-second walkthrough" width="600" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Seven things it's built around
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. The menu bar is where you already glance.&lt;/strong&gt; Same place as the clock. No extra tab, no extra window, no "I'll open the dashboard later."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Below the noise floor.&lt;/strong&gt; &amp;lt;1% CPU, ~20MB RAM. Native SwiftUI. A monitor that becomes its own monitoring problem is a joke.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Flashy "live agent dashboards" don't last.&lt;/strong&gt; Animated traffic, live token deltas, color-coded latency heatmaps — fun for a week, closed and forgotten by the next sprint. RunCat ran for a decade because it asked you nothing. Same spirit here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Detailed analysis belongs in a different tool.&lt;/strong&gt; Token spend, cache misses, run history — that needs report depth. That's what &lt;a href="https://github.com/ww-w-ai/cc-token-saver" rel="noopener noreferrer"&gt;cc-token-saver&lt;/a&gt; is for, and it gets its own post next. AgentRunner = glance. cc-token-saver = report. Don't make one app try to be both.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Vendor-neutral by design.&lt;/strong&gt; It watches LLM traffic, not Claude traffic. Claude Code, Codex, Cursor, Windsurf, local LLaMA via Ollama, any agent loop hitting a model endpoint over HTTPS. No API key, no per-vendor SDK.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Local-only. Zero telemetry.&lt;/strong&gt; Detection happens on your machine. The app does not phone home. No analytics SDK, no event ping. An agent monitor that ships your data anywhere doesn't deserve trust.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Idle vs Active. Binary.&lt;/strong&gt; That's the entire UI. RunCat gave us a kitten that ran when CPU spiked. AgentRunner gives you a 16-pixel hero that runs when LLM traffic flows. Same spirit. Useful. Small. Invisible until you glance at it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get it
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Repo&lt;/strong&gt;: &lt;a href="https://github.com/ww-w-ai/AgentRunner" rel="noopener noreferrer"&gt;https://github.com/ww-w-ai/AgentRunner&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License&lt;/strong&gt;: Apache-2.0&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Requires&lt;/strong&gt;: macOS 13+&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;cc-token-saver post: coming next.&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>ai</category>
      <category>macos</category>
      <category>devtools</category>
    </item>
  </channel>
</rss>
