<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Joske Vermeulen</title>
    <description>The latest articles on DEV Community by Joske Vermeulen (@ai_made_tools).</description>
    <link>https://dev.to/ai_made_tools</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3826720%2Fae1f6683-395f-4709-ba99-2212323b958e.png</url>
      <title>DEV Community: Joske Vermeulen</title>
      <link>https://dev.to/ai_made_tools</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ai_made_tools"/>
    <language>en</language>
    <item>
      <title>Claude Opus 4 vs GPT-5</title>
      <dc:creator>Joske Vermeulen</dc:creator>
      <pubDate>Sun, 05 Apr 2026 09:45:05 +0000</pubDate>
      <link>https://dev.to/ai_made_tools/claude-opus-4-vs-gpt-5-2g74</link>
      <guid>https://dev.to/ai_made_tools/claude-opus-4-vs-gpt-5-2g74</guid>
      <description>&lt;p&gt;Both Claude Opus 4 and GPT-5 are top-tier AI models, but they excel in different areas. Here's how they compare.&lt;/p&gt;

&lt;h2&gt;
  
  
  At a glance
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Claude Opus 4&lt;/th&gt;
&lt;th&gt;GPT-5&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Provider&lt;/td&gt;
&lt;td&gt;Anthropic&lt;/td&gt;
&lt;td&gt;OpenAI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Context window&lt;/td&gt;
&lt;td&gt;200K tokens&lt;/td&gt;
&lt;td&gt;128K tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Input price&lt;/td&gt;
&lt;td&gt;$15 / 1M tokens&lt;/td&gt;
&lt;td&gt;$10 / 1M tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Output price&lt;/td&gt;
&lt;td&gt;$75 / 1M tokens&lt;/td&gt;
&lt;td&gt;$30 / 1M tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Coding (SWE-bench)&lt;/td&gt;
&lt;td&gt;~76.8%&lt;/td&gt;
&lt;td&gt;~71.8%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multimodal&lt;/td&gt;
&lt;td&gt;Text + images&lt;/td&gt;
&lt;td&gt;Text + images + audio&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Subscription&lt;/td&gt;
&lt;td&gt;$20/mo (Claude Pro)&lt;/td&gt;
&lt;td&gt;$20/mo (ChatGPT Plus)&lt;/td&gt;
&lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Coding
&lt;/h2&gt;

&lt;p&gt;Claude Opus 4 has the edge here. It scores higher on SWE-bench and tends to produce cleaner, more complete code on the first try. Developers working on complex multi-file refactors or architecture decisions generally prefer Opus.&lt;/p&gt;

&lt;p&gt;GPT-5 is no slouch — it's significantly better than GPT-4o and handles most coding tasks well. But for advanced coding work, Opus is the current leader.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Winner: Claude Opus 4&lt;/strong&gt; 🏆&lt;/p&gt;

&lt;h2&gt;
  
  
  Reasoning
&lt;/h2&gt;

&lt;p&gt;GPT-5 excels at multi-step reasoning and math. It posts near-perfect scores on AIME math benchmarks and handles complex logical chains well. Opus 4 is strong too, but GPT-5 has a slight edge on pure reasoning tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Winner: GPT-5&lt;/strong&gt; 🏆&lt;/p&gt;

&lt;h2&gt;
  
  
  Context &amp;amp; long documents
&lt;/h2&gt;

&lt;p&gt;Opus 4 supports 200K tokens vs GPT-5's 128K. If you're working with large codebases, long documents, or need to process a lot of context at once, Opus gives you more room.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Winner: Claude Opus 4&lt;/strong&gt; 🏆&lt;/p&gt;

&lt;h2&gt;
  
  
  Price
&lt;/h2&gt;

&lt;p&gt;GPT-5 is cheaper on both input and output. If you're a heavy API user, the cost difference adds up quickly, especially on output tokens, where Opus is 2.5x more expensive.&lt;/p&gt;
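&lt;p&gt;A quick back-of-envelope check using the prices from the table above (the 10M-input / 2M-output monthly volume is purely illustrative):&lt;/p&gt;

```typescript
// Monthly API bill in dollars, given token volumes and per-1M-token prices.
const monthlyCost = (
  inputTokens: number,
  outputTokens: number,
  inputPrice: number,
  outputPrice: number,
): number => (inputTokens / 1e6) * inputPrice + (outputTokens / 1e6) * outputPrice;

const opus = monthlyCost(10e6, 2e6, 15, 75); // $150 input + $150 output = $300
const gpt5 = monthlyCost(10e6, 2e6, 10, 30); // $100 input + $60 output  = $160
```

&lt;p&gt;At that volume Opus costs nearly twice as much overall, and the gap widens further for output-heavy workloads.&lt;/p&gt;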

&lt;p&gt;&lt;strong&gt;Winner: GPT-5&lt;/strong&gt; 🏆&lt;/p&gt;

&lt;h2&gt;
  
  
  Which should you pick?
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Use case&lt;/th&gt;
&lt;th&gt;Pick&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Complex coding projects&lt;/td&gt;
&lt;td&gt;Claude Opus 4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Math &amp;amp; reasoning tasks&lt;/td&gt;
&lt;td&gt;GPT-5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Large codebase analysis&lt;/td&gt;
&lt;td&gt;Claude Opus 4 (bigger context)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Budget-conscious API use&lt;/td&gt;
&lt;td&gt;GPT-5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;General assistant&lt;/td&gt;
&lt;td&gt;Either — both excellent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multimodal (audio)&lt;/td&gt;
&lt;td&gt;GPT-5&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Bottom line
&lt;/h2&gt;

&lt;p&gt;For coding and long-context work, &lt;strong&gt;Claude Opus 4&lt;/strong&gt; is the better choice. For reasoning, math, and cost efficiency, &lt;strong&gt;GPT-5&lt;/strong&gt; wins. Both are excellent — you can't go wrong with either at the $20/mo subscription tier.&lt;/p&gt;

&lt;p&gt;The real answer: try both. Each offers a free tier or trial.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;See our full &lt;a href="https://www.aimadetools.com/blog/ai-model-comparison/?utm_source=devto" rel="noopener noreferrer"&gt;AI Model Comparison&lt;/a&gt; for all models side by side.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.aimadetools.com/blog/claude-opus-4-vs-gpt-5/?utm_source=devto" rel="noopener noreferrer"&gt;https://www.aimadetools.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>comparison</category>
      <category>claude</category>
      <category>chatgpt</category>
    </item>
    <item>
      <title>I Used Cursor AI for a Week — Here's What Actually Happened</title>
      <dc:creator>Joske Vermeulen</dc:creator>
      <pubDate>Fri, 03 Apr 2026 09:56:44 +0000</pubDate>
      <link>https://dev.to/ai_made_tools/i-used-cursor-ai-for-a-week-heres-what-actually-happened-4fkf</link>
      <guid>https://dev.to/ai_made_tools/i-used-cursor-ai-for-a-week-heres-what-actually-happened-4fkf</guid>
      <description>&lt;p&gt;I've been hearing about Cursor for months. Every dev subreddit, every Twitter thread, every "10x your productivity" post — Cursor was always in the conversation. So I decided to actually use it as my only editor for a full week and see what the hype is about.&lt;/p&gt;

&lt;p&gt;Here's the unfiltered version.&lt;/p&gt;

&lt;h2&gt;
  
  
  Day 1: The Switch
&lt;/h2&gt;

&lt;p&gt;Switching from VS Code to Cursor took about five minutes. It's literally a fork of VS Code, so all my extensions, keybindings, and themes carried over. My muscle memory worked from the first second. That alone puts it ahead of every other "AI editor" I've tried — there's no learning curve for the basics.&lt;/p&gt;

&lt;p&gt;I opened a project, and the first thing Cursor did was index my entire codebase. For my medium-sized project (~2,000 files), this took maybe 30 seconds. I've heard horror stories about large monorepos taking hours, but for a typical project, it was fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Blew Me Away
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Tab completion that reads your mind
&lt;/h3&gt;

&lt;p&gt;This is the feature that sold me within the first hour. Cursor's Tab doesn't just autocomplete the current line — it predicts your &lt;em&gt;next edit&lt;/em&gt;. You accept a suggestion, press Tab again, and it jumps to the next logical place you'd want to change something.&lt;/p&gt;

&lt;p&gt;It's hard to explain until you experience it. You start writing a function, Tab completes it, then Tab jumps you to where you need to add the import, then Tab takes you to the test file. It feels like pair programming with someone who's already read your code.&lt;/p&gt;

&lt;p&gt;According to Cursor, their custom Tab model was trained with reinforcement learning to show 21% fewer suggestions with a 28% higher accept rate. In practice, that means less noise and more "yes, that's exactly what I wanted."&lt;/p&gt;

&lt;h3&gt;
  
  
  Agent mode is the real deal
&lt;/h3&gt;

&lt;p&gt;Cmd+I opens the agent, and this is where Cursor separates itself from Copilot. You can say "refactor this component to use React hooks instead of class components" and it will:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Read the relevant files&lt;/li&gt;
&lt;li&gt;Plan the changes&lt;/li&gt;
&lt;li&gt;Edit multiple files&lt;/li&gt;
&lt;li&gt;Run your linter to check for errors&lt;/li&gt;
&lt;li&gt;Fix any issues it finds&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It doesn't just suggest code — it &lt;em&gt;executes&lt;/em&gt;. With version 2.4's subagents, it can even spin up parallel tasks. Need to update the component AND its tests AND the documentation? It handles all three simultaneously.&lt;/p&gt;

&lt;h3&gt;
  
  
  Codebase awareness
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;@&lt;/code&gt; symbol is incredibly powerful. Type &lt;code&gt;@filename&lt;/code&gt; to reference a specific file, &lt;code&gt;@codebase&lt;/code&gt; to search semantically across your project, or &lt;code&gt;@docs&lt;/code&gt; to pull in documentation. This context management is what makes Cursor's suggestions actually relevant instead of generic.&lt;/p&gt;

&lt;p&gt;I found myself using &lt;code&gt;@codebase&lt;/code&gt; constantly — "find everywhere we handle authentication" or "show me how we format dates across the project." It's like having a senior dev who's memorized every line of your code.&lt;/p&gt;

&lt;h3&gt;
  
  
  .cursorrules changed everything
&lt;/h3&gt;

&lt;p&gt;On day 2, I created a &lt;code&gt;.cursorrules&lt;/code&gt; file in my project root. This is basically a system prompt that tells Cursor how you want it to behave. I added things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Use TypeScript strict mode, never use &lt;code&gt;any&lt;/code&gt;"&lt;/li&gt;
&lt;li&gt;"Prefer functional components with hooks"&lt;/li&gt;
&lt;li&gt;"Always add error handling"&lt;/li&gt;
&lt;li&gt;"Follow the existing naming conventions in this project"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The difference was night and day. Before the rules file, suggestions were generic. After, they matched my project's style perfectly. This is the single biggest tip I can give any new Cursor user: write your rules file on day one.&lt;/p&gt;
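&lt;p&gt;For the curious, it's just a plain-text file in the project root. Mine looked roughly like this (illustrative, not a template you have to follow):&lt;/p&gt;

```
# .cursorrules (illustrative example)
You are working on a TypeScript React project.

- Use TypeScript strict mode; never use `any`.
- Prefer functional components with hooks.
- Always add error handling to async code.
- Follow the existing naming conventions in this project.
```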

&lt;h2&gt;
  
  
  What Frustrated Me
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Performance on larger projects
&lt;/h3&gt;

&lt;p&gt;By day 3, I opened a bigger project at work — around 8,000 files. Cursor started struggling. The indexing took several minutes, and I noticed lag when typing. GPU usage spiked to 90% during code application. Some developers report memory consumption hitting 7GB+ with hourly crashes on large codebases.&lt;/p&gt;

&lt;p&gt;I had to tune things: added folders to &lt;code&gt;.cursorignore&lt;/code&gt;, disabled some extensions, and increased Node.js memory limits. After that it was usable, but it shouldn't require manual tuning to handle a normal enterprise project.&lt;/p&gt;
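&lt;p&gt;For reference, my &lt;code&gt;.cursorignore&lt;/code&gt; additions looked roughly like this (it follows &lt;code&gt;.gitignore&lt;/code&gt; syntax; the exact folders depend on your project):&lt;/p&gt;

```
# .cursorignore: keep the indexer away from generated and vendored code
node_modules/
dist/
build/
coverage/
vendor/
*.min.js
```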

&lt;h3&gt;
  
  
  The constant updates
&lt;/h3&gt;

&lt;p&gt;Cursor pushes updates almost daily, and each one requires a restart. If you're running dev servers in the integrated terminal — which I always am — that means restarting your servers too. It's a small thing, but by day 5 it was genuinely annoying.&lt;/p&gt;

&lt;p&gt;Some updates also moved UI elements around or changed how features worked. The Cursor forum has threads from frustrated users saying the interface changes too frequently. I get that they're iterating fast, but stability matters when this is your daily tool.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI quality is inconsistent
&lt;/h3&gt;

&lt;p&gt;When Cursor is good, it's &lt;em&gt;incredible&lt;/em&gt;. But it has bad days. Sometimes the agent would confidently make changes that broke things in subtle ways — passing tests but introducing logic errors. One afternoon, the suggestions felt noticeably worse than the morning, which makes me think it depends on which model is handling your request and how loaded the servers are.&lt;/p&gt;

&lt;p&gt;The Cursor forum has posts from power users calling the Composer feature "an absolute garbage producing slop machine" during bad periods. That's harsh, but I understand the frustration when you're paying $20/month and the quality fluctuates.&lt;/p&gt;

&lt;h3&gt;
  
  
  It can make you lazy
&lt;/h3&gt;

&lt;p&gt;This is the sneaky one. By day 4, I caught myself accepting suggestions without fully reading them. The Tab completion is so good that you start trusting it blindly. I had to consciously slow down and review what it was generating, especially for business logic.&lt;/p&gt;

&lt;p&gt;One user on Reddit put it perfectly: "It helps a lot if you change how you work. It feels useless if you treat it like a fancy autocomplete." You need to think of it as a junior developer who's very fast but needs code review.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pricing Reality
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Free&lt;/strong&gt;: 2,000 completions (enough to try it, not enough to use it)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pro&lt;/strong&gt;: $20/month — unlimited completions, 500 fast requests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pro+&lt;/strong&gt;: $60/month — more agent usage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ultra&lt;/strong&gt;: $200/month — heavy agent users&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Business&lt;/strong&gt;: $40/user/month — team features, admin controls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For most solo developers, Pro at $20/month is the sweet spot. You'll only feel limited during intense multi-file refactoring sessions. But be aware — heavy agent usage can burn through your allowance fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cursor vs GitHub Copilot
&lt;/h2&gt;

&lt;p&gt;I used Copilot for over a year before this, so here's the honest comparison:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Cursor&lt;/th&gt;
&lt;th&gt;GitHub Copilot&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Inline completions&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Excellent (+ next-edit prediction)&lt;/td&gt;
&lt;td&gt;Excellent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Multi-file editing&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Native, powerful&lt;/td&gt;
&lt;td&gt;⚠️ Limited&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Codebase understanding&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Deep (indexes everything)&lt;/td&gt;
&lt;td&gt;Surface-level&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Agent mode&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Full autonomous agent&lt;/td&gt;
&lt;td&gt;Basic&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;IDE support&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cursor only (VS Code fork)&lt;/td&gt;
&lt;td&gt;VS Code, JetBrains, Neovim, etc.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Price&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$20/month&lt;/td&gt;
&lt;td&gt;$10-19/month&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Model choice&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;GPT-5, Claude, Gemini&lt;/td&gt;
&lt;td&gt;Primarily OpenAI&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Copilot wins&lt;/strong&gt; if you need IDE flexibility, want the cheapest option, or work in JetBrains. &lt;strong&gt;Cursor wins&lt;/strong&gt; if you do complex multi-file work and don't mind being locked to one editor.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Verdict After 7 Days
&lt;/h2&gt;

&lt;p&gt;Cursor made me massively faster at the boring parts of coding — boilerplate, refactoring, test writing, documentation. I'd estimate it saved me 1-2 hours per day on a typical workday. For $20/month, that's absurd ROI.&lt;/p&gt;

&lt;p&gt;But it didn't make me a better programmer. The hard parts — architecture decisions, debugging subtle logic errors, understanding business requirements — are still 100% on me. Cursor is a productivity multiplier, not a replacement for knowing what you're doing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Would I keep paying?&lt;/strong&gt; Yes. Going back to vanilla VS Code after a week of Cursor feels like coding with one hand tied behind your back. That's not marketing — that's what it actually feels like.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who should try it:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Any developer writing code daily (the free tier is enough to decide)&lt;/li&gt;
&lt;li&gt;Teams doing lots of refactoring or working across large codebases&lt;/li&gt;
&lt;li&gt;Solo developers who want to ship faster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Who should skip it:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developers who primarily work in JetBrains IDEs&lt;/li&gt;
&lt;li&gt;Teams with strict security policies that don't allow code to be sent to external APIs&lt;/li&gt;
&lt;li&gt;People who expect AI to write entire applications without guidance&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tips If You're Starting
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Write a &lt;code&gt;.cursorrules&lt;/code&gt; file immediately&lt;/strong&gt; — this is the single biggest quality improvement&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learn the &lt;code&gt;@&lt;/code&gt; references&lt;/strong&gt; — &lt;code&gt;@file&lt;/code&gt;, &lt;code&gt;@codebase&lt;/code&gt;, &lt;code&gt;@docs&lt;/code&gt; make the AI actually useful&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Don't accept suggestions blindly&lt;/strong&gt; — review everything, especially business logic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use agent mode for refactoring, Tab for writing&lt;/strong&gt; — each has its sweet spot&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add large folders to &lt;code&gt;.cursorignore&lt;/code&gt;&lt;/strong&gt; — node_modules, build artifacts, vendor deps&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Treat it like a junior dev&lt;/strong&gt; — fast and eager, but needs supervision&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Related:&lt;/strong&gt; &lt;a href="https://www.aimadetools.com/blog/claude-code-vs-cursor-2026/?utm_source=devto" rel="noopener noreferrer"&gt;Claude Code vs Cursor — Which One Wins in 2026?&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.aimadetools.com/blog/cursor-ai-one-week-review/?utm_source=devto" rel="noopener noreferrer"&gt;https://www.aimadetools.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cursor</category>
      <category>aitools</category>
      <category>review</category>
      <category>coding</category>
    </item>
    <item>
      <title>AI Dev Weekly #4: Anthropic Leaks Everything, OpenAI Raises $122B, and Qwen 3.6 Drops Free</title>
      <dc:creator>Joske Vermeulen</dc:creator>
      <pubDate>Thu, 02 Apr 2026 07:01:53 +0000</pubDate>
      <link>https://dev.to/ai_made_tools/ai-dev-weekly-4-anthropic-leaks-everything-openai-raises-122b-and-qwen-36-drops-free-2ff3</link>
      <guid>https://dev.to/ai_made_tools/ai-dev-weekly-4-anthropic-leaks-everything-openai-raises-122b-and-qwen-36-drops-free-2ff3</guid>
      <description>&lt;p&gt;&lt;em&gt;AI Dev Weekly is a Thursday series where I cover the week's most important AI developer news — with my take as someone who actually uses these tools daily.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Anthropic had a rough week. Two separate leaks, a $122 billion competitor, and a Chinese model that just went free. Let's get into it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Anthropic leaks Claude Code's entire source code to npm
&lt;/h2&gt;

&lt;p&gt;On Monday, security researcher Chaofan Shou discovered that Claude Code's npm package contained a 60MB source map file that mapped the minified production code back to its original TypeScript source. All 512,000 lines of it. Across 1,906 internal files.&lt;/p&gt;

&lt;p&gt;This wasn't a hack. The Bun runtime that Claude Code uses generates source maps by default, and someone forgot to strip them before publishing version 2.1.88 to the public npm registry. By the time Anthropic pulled the package, developers had already mirrored the code to GitHub and started picking it apart.&lt;/p&gt;

&lt;p&gt;What they found: internal APIs, telemetry systems, encryption logic, unreleased agent features, and system prompts. The code revealed how Claude Code handles tool execution, permission management, and the auto mode safety classifier that shipped just last week.&lt;/p&gt;

&lt;p&gt;This is the second time Anthropic has leaked source code in under a year. The first was the Cowork data exfiltration vulnerability in January.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My take:&lt;/strong&gt; The irony is thick. The company that positions itself as the safety-first AI lab keeps shipping code with basic packaging errors. A &lt;code&gt;.npmignore&lt;/code&gt; file or a build step that strips source maps would have prevented this entirely. The leaked code itself is well-written TypeScript — there's nothing embarrassing about the engineering. The embarrassment is that it happened at all, and that it happened twice.&lt;/p&gt;

&lt;p&gt;For developers using Claude Code: nothing changes practically. No customer data was exposed, and the tool still works the same way. But if you're evaluating AI coding tools for enterprise use, "accidentally published our entire codebase to npm" is a hard thing to explain to your security team.&lt;/p&gt;

&lt;h2&gt;
  
  
  Also leaked: Claude "Mythos" — Anthropic's unreleased tier above Opus
&lt;/h2&gt;

&lt;p&gt;The source code leak came days after a separate incident. On March 26, security researchers found nearly 3,000 unpublished files — including draft blog posts and internal memos — publicly accessible due to a misconfigured content management system.&lt;/p&gt;

&lt;p&gt;The files revealed a model internally called "Claude Mythos" (also codenamed "Capybara"). According to the leaked draft blog post, Mythos is "larger and more intelligent than our Opus models — which were, until now, our most powerful." It's currently in early access testing with cybersecurity partners.&lt;/p&gt;

&lt;p&gt;Anthropic confirmed Mythos is real and called it a "step change" in capability. No public release date has been set.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My take:&lt;/strong&gt; Two leaks in one week from the safety-focused AI company. The Mythos leak is actually more significant than the source code one — it confirms Anthropic has a model tier beyond Opus that they haven't announced. For developers planning around Claude's model lineup (Haiku → Sonnet → Opus), knowing there's a "Mythos" tier coming changes the calculus. Especially if you're budgeting API costs — a model above Opus won't be cheap.&lt;/p&gt;

&lt;h2&gt;
  
  
  OpenAI closes $122 billion round at $852 billion valuation
&lt;/h2&gt;

&lt;p&gt;OpenAI announced on Monday that it closed the largest private funding round in history: $122 billion in committed capital, pushing its post-money valuation to $852 billion. The round was led by Amazon, NVIDIA, and SoftBank, with Microsoft continuing its participation.&lt;/p&gt;

&lt;p&gt;The company now has 900 million weekly active users and 50 million paid subscribers. Revenue has crossed $2 billion per month. In the same announcement, OpenAI confirmed it shut down Sora, its video generation tool, citing unsustainable inference costs — reportedly $15 million per day against $2.1 million in lifetime revenue.&lt;/p&gt;

&lt;p&gt;OpenAI described its strategy as building an "AI Super App" that combines ChatGPT, Codex, and other products into a unified platform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My take:&lt;/strong&gt; The Sora shutdown is the real story here. OpenAI killed a product that was burning $15M/day because it couldn't monetize it. That's a company making hard commercial decisions, not a research lab chasing cool demos. The "AI Super App" framing signals that OpenAI sees its future as a platform, not a model provider.&lt;/p&gt;

&lt;p&gt;For developers: Codex keeps getting better. GPT-5.4 became the new core model in March, GPT-5.4 Mini now handles routing for cheaper tasks, and the CLI got parallel agents, worktrees, and skills. If you're on ChatGPT Plus, Codex is included and it's genuinely competitive with Claude Code now. I've been running both headless on a VPS and the output quality is closer than you'd expect.&lt;/p&gt;

&lt;h2&gt;
  
  
  Qwen 3.6 Plus drops for free on OpenRouter
&lt;/h2&gt;

&lt;p&gt;Alibaba released Qwen 3.6 Plus Preview on March 30, and it's available for free on OpenRouter. Zero cost for input and output tokens. The model features a 1 million token context window and what Alibaba calls "improved efficiency, stronger reasoning, and more reliable agentic behavior" compared to the 3.5 series.&lt;/p&gt;

&lt;p&gt;This comes alongside Qwen 3.5 Omni, a multimodal model that processes text, images, audio, and video natively. Researchers noted it learned to write code from spoken instructions and video input without being explicitly trained to do so.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My take:&lt;/strong&gt; The Chinese open-source models keep getting more aggressive on pricing. Qwen 3.6 Plus for free is a direct challenge to every paid API. I've been testing the Qwen 3.5 series this week for a project, and the Flash model at $0.065/$0.26 per million tokens is absurdly cheap — we ran 20 minutes of agentic coding for $0.018. The Plus model at $0.26/$1.56 is still cheaper than Claude Sonnet by a wide margin.&lt;/p&gt;

&lt;p&gt;The 1M context window on the free model is the real hook. If you're building RAG pipelines or processing large codebases, there's no reason not to try it. The quality won't match Claude or GPT on complex reasoning, but for many tasks it's good enough — and free is hard to argue with.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick hits
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Gemini 3.1 Pro is rolling out in Gemini CLI.&lt;/strong&gt; Google's latest model ties GPT-5.4 on the Artificial Analysis Intelligence Index at roughly one-third the API cost. It features dynamic thinking with a new &lt;code&gt;thinking_level&lt;/code&gt; parameter (low/medium/high/max) and a 1M token context window. GitHub Copilot also added Gemini 3.1 Pro support across major IDEs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google launched Gemini API Docs MCP and Agent Skills&lt;/strong&gt; — two tools that give AI coding agents real-time access to current API documentation, fixing the "generates outdated code" problem caused by training data cutoffs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copilot now offers model choice.&lt;/strong&gt; With Gemini 3.1 Pro available across VS Code, JetBrains, and Xcode, you can choose between GPT, Claude, and Gemini models within Copilot.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.aimadetools.com/blog/ai-dev-weekly-004-anthropic-leaks-openai-122b-qwen-free/?utm_source=devto" rel="noopener noreferrer"&gt;https://www.aimadetools.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aidevweekly</category>
      <category>anthropic</category>
      <category>claudecode</category>
      <category>openai</category>
    </item>
    <item>
      <title>Vite vs Webpack — Which Build Tool Should You Use?</title>
      <dc:creator>Joske Vermeulen</dc:creator>
      <pubDate>Tue, 31 Mar 2026 10:09:15 +0000</pubDate>
      <link>https://dev.to/ai_made_tools/vite-vs-webpack-which-build-tool-should-you-use-10f5</link>
      <guid>https://dev.to/ai_made_tools/vite-vs-webpack-which-build-tool-should-you-use-10f5</guid>
      <description>&lt;h2&gt;
  
  
  Quick Comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Vite&lt;/th&gt;
&lt;th&gt;Webpack&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Dev server&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Instant (native ESM)&lt;/td&gt;
&lt;td&gt;Slower (bundling)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;HMR&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Near-instant&lt;/td&gt;
&lt;td&gt;Slower&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Config&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Minimal&lt;/td&gt;
&lt;td&gt;Verbose&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Production build&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Rollup&lt;/td&gt;
&lt;td&gt;Webpack&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Plugin ecosystem&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Growing&lt;/td&gt;
&lt;td&gt;Massive&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Learning curve&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  When to Use Vite
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;New projects (it's the default for React, Vue, Svelte)&lt;/li&gt;
&lt;li&gt;You want fast dev server startup&lt;/li&gt;
&lt;li&gt;You don't need complex Webpack-specific plugins&lt;/li&gt;
&lt;li&gt;You value simplicity&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  When to Use Webpack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Legacy projects already using Webpack&lt;/li&gt;
&lt;li&gt;You need Module Federation (micro-frontends)&lt;/li&gt;
&lt;li&gt;You need a specific Webpack plugin with no Vite equivalent&lt;/li&gt;
&lt;li&gt;Complex custom build pipelines&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Differences
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Dev Server:&lt;/strong&gt; Vite serves files over native ESM — no bundling during development. This means instant server start regardless of app size. Webpack bundles everything before serving, which gets slower as your app grows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuration:&lt;/strong&gt; A Vite config is typically 10-20 lines. A Webpack config can be hundreds of lines with loaders, plugins, and optimization settings.&lt;/p&gt;
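&lt;p&gt;To make that concrete, a complete &lt;code&gt;vite.config.ts&lt;/code&gt; for a typical React app can be this short (a sketch; assumes the &lt;code&gt;@vitejs/plugin-react&lt;/code&gt; package is installed):&lt;/p&gt;

```typescript
// vite.config.ts: the entire config for a typical React app
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()],
  server: { port: 3000 },     // dev server options are plain object fields
  build: { sourcemap: true }, // the production build is delegated to Rollup
});
```

&lt;p&gt;Everything else (ESM dev serving, HMR, TypeScript, asset handling) works out of the box with zero configuration.&lt;/p&gt;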

&lt;p&gt;&lt;strong&gt;Production:&lt;/strong&gt; Vite uses Rollup for production builds, which produces smaller bundles. Webpack uses its own bundler.&lt;/p&gt;

&lt;h2&gt;
  
  
  Verdict
&lt;/h2&gt;

&lt;p&gt;Use Vite for new projects. Migrate from Webpack to Vite when you can. There's very little reason to start a new project with Webpack in 2026.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Related:&lt;/strong&gt; &lt;a href="https://www.aimadetools.com/blog/rspack-vs-webpack/?utm_source=devto" rel="noopener noreferrer"&gt;Rspack vs Webpack — The Rust-Powered Drop-In Replacement&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Related:&lt;/strong&gt; &lt;a href="https://www.aimadetools.com/blog/what-is-vite/?utm_source=devto" rel="noopener noreferrer"&gt;What Is Vite? A Simple Explanation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.aimadetools.com/blog/vite-vs-webpack/?utm_source=devto" rel="noopener noreferrer"&gt;https://www.aimadetools.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>vite</category>
      <category>webpack</category>
      <category>comparison</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Stop Using Redux in 2026. Seriously.</title>
      <dc:creator>Joske Vermeulen</dc:creator>
      <pubDate>Sun, 29 Mar 2026 09:43:35 +0000</pubDate>
      <link>https://dev.to/ai_made_tools/stop-using-redux-in-2026-seriously-58mh</link>
      <guid>https://dev.to/ai_made_tools/stop-using-redux-in-2026-seriously-58mh</guid>
      <description>&lt;p&gt;&lt;em&gt;I spent two years putting everything in Redux. Global loading states, form inputs, toast notifications — all in the store. Then I ripped it all out in a weekend and my codebase got 40% smaller. Here's why you should too.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Redux was the right answer in 2018. It's the wrong answer in 2026 for 90% of projects. Let me explain.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem isn't Redux itself
&lt;/h2&gt;

&lt;p&gt;Redux is well-built, well-documented, and battle-tested. The problem is that developers reach for it by default, like it's a required dependency for React apps. It's not.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Redux actually costs you
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Boilerplate tax.&lt;/strong&gt; Even with Redux Toolkit (which is genuinely good), you're still writing slices, selectors, thunks, and connecting everything. For a todo app? That's absurd. For a dashboard with 5 API calls? Still overkill.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bundle size.&lt;/strong&gt; Redux + Redux Toolkit + React-Redux = ~40KB minified. Zustand is 1.1KB. That's not a typo.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mental overhead.&lt;/strong&gt; New developers on your team need to understand actions, reducers, dispatch, selectors, middleware, thunks, and the store. With Zustand, they need to understand... a hook.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Redux way&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;dispatch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useDispatch&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useSelector&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;RootState&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;counter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nf"&gt;dispatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;increment&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

&lt;span class="c1"&gt;// Zustand way&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;count&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;increment&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useCounterStore&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same result. One requires 4 files and 3 concepts. The other requires 1 file and 0 new concepts.&lt;/p&gt;
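&lt;p&gt;If you're wondering what hides behind &lt;code&gt;useCounterStore&lt;/code&gt;: not much. The core pattern is small enough to sketch in plain TypeScript — state, a setter, and subscribers (illustrative only; the real Zustand adds the React hook binding, middleware, and devtools):&lt;/p&gt;

```typescript
// A stripped-down sketch of the store pattern behind a Zustand-style hook.
// Names here are illustrative, not Zustand's actual internals.
type CounterState = { count: number }

function createStore(initial: CounterState) {
  let state = initial
  let listeners: Array<() => void> = []
  return {
    getState: () => state,
    // Merge a partial update into state, then notify subscribers
    setState: (partial: Partial<CounterState>) => {
      state = { ...state, ...partial }
      listeners.forEach((l) => l())
    },
    // Register a listener; returns an unsubscribe function
    subscribe: (l: () => void) => {
      listeners.push(l)
      return () => { listeners = listeners.filter((x) => x !== l) }
    },
  }
}

const store = createStore({ count: 0 })
store.setState({ count: store.getState().count + 1 })
console.log(store.getState().count) // prints 1
```

No actions, no reducers, no dispatch — just a closure over state and a notify loop.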

&lt;h2&gt;
  
  
  "But Redux DevTools!"
&lt;/h2&gt;

&lt;p&gt;Zustand has devtools middleware. So does Jotai. This isn't a differentiator anymore.&lt;/p&gt;

&lt;h2&gt;
  
  
  "But Redux handles complex state!"
&lt;/h2&gt;

&lt;p&gt;Define complex. If you have deeply nested state with dozens of interdependent slices, sure, Redux's structure helps. But how many apps actually have that? Most apps have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User auth state&lt;/li&gt;
&lt;li&gt;A few API caches&lt;/li&gt;
&lt;li&gt;Some UI state (modals, sidebars)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For API caching, you should be using TanStack Query or SWR anyway — not putting API responses in Redux. That alone eliminates most of what typical Redux stores contain.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Redux is still the right choice
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You have 50+ developers and need enforced patterns&lt;/li&gt;
&lt;li&gt;Your state is genuinely complex with many interdependent slices&lt;/li&gt;
&lt;li&gt;You're already using it and it's working fine (don't rewrite for fun)&lt;/li&gt;
&lt;li&gt;You need time-travel debugging for complex state transitions&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What to use instead
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Zustand&lt;/strong&gt; — For most apps. Simple, tiny, just works. No providers, no boilerplate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Jotai&lt;/strong&gt; — For atomic state. Great when you have lots of independent pieces of state.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TanStack Query&lt;/strong&gt; — For server state. If half your Redux store is API responses, replace that half with TanStack Query and you might not need Redux at all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;React Context&lt;/strong&gt; — For truly simple shared state (theme, locale, auth). Don't use it for frequently updating state though — it re-renders everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real question
&lt;/h2&gt;

&lt;p&gt;Before adding Redux to your next project, ask: "What specific problem does Redux solve here that Zustand doesn't?"&lt;/p&gt;

&lt;p&gt;If you can't answer that clearly, you don't need Redux. You need a smaller tool for a smaller problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Related resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.aimadetools.com/blog/react-complete-guide/?utm_source=devto" rel="noopener noreferrer"&gt;React complete guide&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.aimadetools.com/blog/stop-using-redux/?utm_source=devto" rel="noopener noreferrer"&gt;https://www.aimadetools.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>react</category>
      <category>redux</category>
      <category>opinion</category>
      <category>statemanagement</category>
    </item>
    <item>
      <title>AI Dev Weekly #3: Claude Code Goes Auto, Cursor's Chinese Secret, and GitHub Wants Your Data</title>
      <dc:creator>Joske Vermeulen</dc:creator>
      <pubDate>Thu, 26 Mar 2026 10:14:12 +0000</pubDate>
      <link>https://dev.to/ai_made_tools/ai-dev-weekly-3-claude-code-goes-auto-cursors-chinese-secret-and-github-wants-your-data-25ke</link>
      <guid>https://dev.to/ai_made_tools/ai-dev-weekly-3-claude-code-goes-auto-cursors-chinese-secret-and-github-wants-your-data-25ke</guid>
      <description>&lt;p&gt;&lt;em&gt;AI Dev Weekly is a Thursday series where I cover the week's most important AI developer news — with my take as someone who actually uses these tools daily.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Three stories this week, and they all share a theme: the companies building your coding tools are making big decisions about trust — and not all of them are being upfront about it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Claude Code gets auto mode, Channels, and $2.5B in revenue
&lt;/h2&gt;

&lt;p&gt;Anthropic dropped three features on Monday. The headline is auto mode — a middle ground between the default "approve every file write" and the &lt;code&gt;dangerously-skip-permissions&lt;/code&gt; flag that lets Claude do whatever it wants.&lt;/p&gt;

&lt;p&gt;Auto mode uses an AI safety classifier that evaluates every tool call in real time. Routine actions like writing files and running tests get auto-approved. Destructive operations — mass file deletion, data exfiltration, malicious code execution — get blocked. If Claude keeps trying blocked actions, it escalates to a human prompt. It also screens for prompt injection attacks, which is relevant given that &lt;a href="https://winbuzzer.com/2026/01/17/security-flaw-resurfaces-in-anthropics-new-claude-cowork-tool-days-after-launch-xcxwbn/" rel="noopener noreferrer"&gt;Cowork had a data exfiltration vulnerability&lt;/a&gt; two days after launch in January.&lt;/p&gt;

&lt;p&gt;The second feature is Claude Code Channels — you can now control Claude Code through Discord and Telegram. This is Anthropic's direct response to &lt;a href="https://en.wikipedia.org/wiki/OpenClaw" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt;, the open-source project that hit 100K+ GitHub stars by letting people run AI agents through chat apps. Anthropic sent OpenClaw's creator a cease-and-desist over the original name "Clawd," and he ended up joining OpenAI. Now Anthropic is building the managed alternative. Channels currently only supports Discord and Telegram, while OpenClaw covers five platforms including iMessage, Slack, and WhatsApp.&lt;/p&gt;

&lt;p&gt;The third feature: computer use for Cowork, giving Claude direct keyboard-and-mouse control of macOS desktops. It prioritizes API connectors when available and falls back to screen interaction when they're not.&lt;/p&gt;

&lt;p&gt;The revenue number is the quiet bombshell: Claude Code hit $2.5 billion in annualized revenue, up from $1 billion in early January. That's 2.5x growth in under three months.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My take:&lt;/strong&gt; Auto mode is what Claude Code should have shipped with. The binary choice between "interrupt me for everything" and "YOLO mode" was the biggest friction point for long-running tasks. The AI classifier approach is smart — but it's also a black box. Anthropic hasn't published what the classifier allows or blocks, which means you're trusting an undisclosed ML model to decide what's safe to run on your machine. For side projects, fine. For production codebases, I'd want to see the rules. For a deeper look at how Claude Code compares to the competition, see my &lt;a href="https://www.aimadetools.com/blog/claude-code-vs-cursor-2026/?utm_source=devto" rel="noopener noreferrer"&gt;Claude Code vs Cursor comparison&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Channels is interesting strategically. OpenClaw proved developers want chat-based agent control. Anthropic's response is "we'll build it ourselves, with better security." Whether developers choose the managed option over the open-source one will depend on how fast Anthropic adds platform support.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cursor launches Composer 2 — then gets caught hiding its origins
&lt;/h2&gt;

&lt;p&gt;Cursor shipped Composer 2 this week with impressive benchmarks on SWE-bench Multilingual and Terminal-Bench. The pitch: frontier-level coding intelligence with 200K token context, optimized for multi-file editing and long multi-step tasks.&lt;/p&gt;

&lt;p&gt;What Cursor didn't mention in the announcement: Composer 2 is built on Moonshot AI's Kimi K2.5, a Chinese open-source model.&lt;/p&gt;

&lt;p&gt;A developer named Fynn intercepted Cursor's API traffic and found the model identifier &lt;code&gt;kimi-k2p5-rl-0317-s515-fast&lt;/code&gt;. The internet did the rest. Elon Musk commented. Then Cursor co-founder Aman Sanger confirmed it — Kimi K2.5 was selected after an evaluation, followed by additional pre-training and heavy reinforcement learning. He called the omission from the blog post "an error."&lt;/p&gt;

&lt;p&gt;Kimi K2.5 is a 1 trillion parameter Mixture-of-Experts model with 32 billion active parameters per token, released under a modified MIT license by Moonshot AI (backed by Alibaba and HongShan). Commercial use was authorized through Fireworks AI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My take:&lt;/strong&gt; The technical choice is defensible. Fine-tuning an open-source model and adding your own RL on top is exactly how you build a competitive coding model without training from scratch. DeepSeek proved that open-source base models can compete with closed ones. Cursor picking the best available foundation is smart engineering.&lt;/p&gt;

&lt;p&gt;The transparency failure is the problem. When you're a $29 billion company and developers trust you with their codebases, you don't "forget" to mention the base model. Especially when it's from a Chinese company — not because that's inherently bad, but because developers deserve to make informed decisions about their toolchain. If you're weighing your options, my &lt;a href="https://www.aimadetools.com/blog/github-copilot-vs-cursor-2026/?utm_source=devto" rel="noopener noreferrer"&gt;GitHub Copilot vs Cursor comparison&lt;/a&gt; covers the broader tradeoffs.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitHub will train on your Copilot data by default
&lt;/h2&gt;

&lt;p&gt;GitHub announced that starting April 24, 2026, interactions with Copilot Free, Pro, and Pro+ will be used to train AI models. This is opt-out, not opt-in. If you do nothing, your prompts and code interactions become training data.&lt;/p&gt;

&lt;p&gt;To opt out: go to GitHub Settings → Copilot → Features → disable "Allow GitHub to use my data for AI model training" under Privacy. Enterprise and Business plan users are not affected — their data was already excluded.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My take:&lt;/strong&gt; This was inevitable. GitHub has been watching Claude Code and Cursor grow by training on real developer interactions while Copilot relied mainly on public code. The quality gap showed. Moving to opt-out instead of opt-in is the aggressive play — most developers won't change the default, which means GitHub gets a massive training dataset overnight.&lt;/p&gt;

&lt;p&gt;If you're on Copilot Free or Pro, go change that setting now if you care. If you don't care, at least know it's happening. And if this is the push you needed to evaluate alternatives, my &lt;a href="https://www.aimadetools.com/blog/how-to-replace-github-copilot-free/?utm_source=devto" rel="noopener noreferrer"&gt;how to replace GitHub Copilot&lt;/a&gt; guide covers the options.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick hits
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;VS Code 1.113 shipped&lt;/strong&gt; — Two features worth knowing about. First, a Thinking Effort selector that lets you dial AI reasoning intensity up or down per request — useful when you want a quick answer without burning tokens on deep reasoning. Second, nested subagents: subagents can now call other subagents, enabling multi-step AI workflows that chain together. Both features work with shared MCP server access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apple confirmed WWDC 2026 for June 8-12&lt;/strong&gt; — The big tease: Core AI, a new framework replacing Core ML, designed for running large language models and diffusion models directly on-device. Combined with the &lt;a href="https://www.aimadetools.com/blog/ai-dev-weekly-002-garry-tan-gstack-cursor-composer-anthropic-firefox/?utm_source=devto" rel="noopener noreferrer"&gt;Claude Agent SDK for Xcode&lt;/a&gt; announced two weeks ago, iOS developers are about to get a lot more AI tooling. Also expect a major Siri revamp with iOS 27.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OpenClaw v2026.3.24 released&lt;/strong&gt; — New version adds improved OpenAI API compatibility, sub-agents with OpenWebUI, and native Slack and Teams integration. The open-source agent framework keeps shipping faster than the commercial alternatives can copy it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The pattern this week
&lt;/h2&gt;

&lt;p&gt;Trust is the new battleground. Anthropic is asking you to trust a black-box classifier with your file system. Cursor asked you to trust a model without telling you where it came from. GitHub is asking you to trust that opt-out is good enough.&lt;/p&gt;

&lt;p&gt;The developers who'll navigate this well aren't the ones who blindly trust or blindly reject. They're the ones who read the settings pages, check the API traffic, and make informed choices. This week proved that even the tools you pay for aren't always transparent about what's happening under the hood.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;AI Dev Weekly drops every Thursday.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Related:&lt;/strong&gt; &lt;a href="https://www.aimadetools.com/blog/ai-dev-weekly-002-garry-tan-gstack-cursor-composer-anthropic-firefox/?utm_source=devto" rel="noopener noreferrer"&gt;AI Dev Weekly #2: Garry Tan's 'God Mode', Cursor Composer 1.5, and Anthropic Finds Firefox Bugs&lt;/a&gt; | &lt;a href="https://www.aimadetools.com/blog/ai-dev-weekly-001-claude-code-dominates-musk-poaches-cursor/?utm_source=devto" rel="noopener noreferrer"&gt;AI Dev Weekly #1: Claude Code Takes the Crown, Musk Raids Cursor&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://www.aimadetools.com/blog/ai-dev-weekly-003-claude-code-auto-mode-cursor-kimi-github-data/?utm_source=devto" rel="noopener noreferrer"&gt;https://www.aimadetools.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aidevweekly</category>
      <category>claudecode</category>
      <category>cursor</category>
      <category>githubcopilot</category>
    </item>
    <item>
      <title>Bun vs Node.js — Should You Switch in 2026?</title>
      <dc:creator>Joske Vermeulen</dc:creator>
      <pubDate>Wed, 25 Mar 2026 09:59:12 +0000</pubDate>
      <link>https://dev.to/ai_made_tools/bun-vs-nodejs-should-you-switch-in-2026-42pb</link>
      <guid>https://dev.to/ai_made_tools/bun-vs-nodejs-should-you-switch-in-2026-42pb</guid>
      <description>&lt;h2&gt;
  
  
  Quick Comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Bun&lt;/th&gt;
&lt;th&gt;Node.js&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Speed&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Faster (Zig + JavaScriptCore)&lt;/td&gt;
&lt;td&gt;Standard (V8)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Package manager&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Built-in (fast)&lt;/td&gt;
&lt;td&gt;npm/pnpm/yarn&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;TypeScript&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Native (no build step)&lt;/td&gt;
&lt;td&gt;Needs tsc/tsx&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Test runner&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Built-in&lt;/td&gt;
&lt;td&gt;External (Jest, Vitest)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Compatibility&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Most npm packages work&lt;/td&gt;
&lt;td&gt;100% ecosystem&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Maturity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Young&lt;/td&gt;
&lt;td&gt;Battle-tested&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  When to Use Bun
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;New projects where speed matters&lt;/li&gt;
&lt;li&gt;You want TypeScript without a build step&lt;/li&gt;
&lt;li&gt;You want an all-in-one tool (runtime + bundler + test runner + package manager)&lt;/li&gt;
&lt;li&gt;Scripts and tooling&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  When to Use Node.js
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Production apps that need maximum stability&lt;/li&gt;
&lt;li&gt;You need 100% npm compatibility&lt;/li&gt;
&lt;li&gt;Enterprise environments&lt;/li&gt;
&lt;li&gt;You depend on Node-specific APIs&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Differences
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Speed:&lt;/strong&gt; Bun is genuinely faster — 3-5x for many operations. Package installs, test runs, and server startup are noticeably quicker.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compatibility:&lt;/strong&gt; Most npm packages work with Bun, but some native modules and Node-specific APIs don't. This gap is shrinking but still exists.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;All-in-One:&lt;/strong&gt; Bun replaces Node + npm + tsc + Jest in a single binary. That's compelling for developer experience.&lt;/p&gt;
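&lt;p&gt;Concretely, the consolidation maps like this (typical commands; the Node-side tools shown are common choices, not the only ones):&lt;/p&gt;

```plaintext
npm install         →  bun install    (package manager)
npx tsx index.ts    →  bun index.ts   (runs TypeScript directly)
npx jest            →  bun test       (built-in test runner)
npx esbuild ...     →  bun build      (built-in bundler)
```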

&lt;h2&gt;
  
  
  Verdict
&lt;/h2&gt;

&lt;p&gt;Bun for new projects and tooling. Node.js for production apps where stability is critical. In 2026, Bun is mature enough for most use cases, but Node.js isn't going anywhere.&lt;/p&gt;

&lt;h2&gt;
  
  
  Related resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aimadetools.com/blog/npm-cheat-sheet/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=%0Aweekly" rel="noopener noreferrer"&gt;npm cheat sheet&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aimadetools.com/blog/npm-complete-guide/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=%0Aweekly" rel="noopener noreferrer"&gt;npm complete guide&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;🛠️ &lt;strong&gt;Free tools related to this article:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aimadetools.com/blog/package-json-analyzer/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=%0Aweekly" rel="noopener noreferrer"&gt;package.json Analyzer&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://aimadetools.com/blog/bun-vs-node/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;https://aimadetools.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>bunjs</category>
      <category>node</category>
      <category>comparison</category>
      <category>runtime</category>
    </item>
    <item>
      <title>The Complete MiMo-V2 Family Guide — Pro, Flash, Omni, and TTS (2026)</title>
      <dc:creator>Joske Vermeulen</dc:creator>
      <pubDate>Mon, 23 Mar 2026 09:39:13 +0000</pubDate>
      <link>https://dev.to/ai_made_tools/the-complete-mimo-v2-family-guide-pro-flash-omni-and-tts-2026-47i9</link>
      <guid>https://dev.to/ai_made_tools/the-complete-mimo-v2-family-guide-pro-flash-omni-and-tts-2026-47i9</guid>
      <description>&lt;p&gt;Xiaomi released four AI models in the MiMo-V2 family between December 2025 and March 2026. Together, they form a complete AI agent stack — reasoning, perception, and speech. Here's everything you need to know about each one.&lt;/p&gt;

&lt;h2&gt;
  
  
  The family at a glance
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;th&gt;Params (total/active)&lt;/th&gt;
&lt;th&gt;Context&lt;/th&gt;
&lt;th&gt;Pricing (in/out)&lt;/th&gt;
&lt;th&gt;Open source&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;MiMo-V2-Pro&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Brain&lt;/td&gt;
&lt;td&gt;1T / 42B&lt;/td&gt;
&lt;td&gt;1M tokens&lt;/td&gt;
&lt;td&gt;$1.00/$3.00&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;MiMo-V2-Flash&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Fast worker&lt;/td&gt;
&lt;td&gt;309B / 15B&lt;/td&gt;
&lt;td&gt;56K tokens&lt;/td&gt;
&lt;td&gt;$0.10/$0.30&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;MiMo-V2-Omni&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Eyes &amp;amp; ears&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;TBD&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;MiMo-V2-TTS&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Voice&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;TBD&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  MiMo-V2-Pro — The flagship
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Released:&lt;/strong&gt; March 18, 2026 (previously known as "Hunter Alpha")&lt;/p&gt;

&lt;p&gt;Pro is Xiaomi's frontier model, designed for complex reasoning, coding, and autonomous agent workflows. It spent a week on OpenRouter as an anonymous stealth model before Xiaomi revealed it — and the AI community &lt;a href="https://aimadetools.com/blog/ai-dev-weekly-extra-xiaomi-hunter-alpha-mimo-v2-pro/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;mistook it for DeepSeek V4&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key specs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1 trillion total parameters, 42B active (MoE)&lt;/li&gt;
&lt;li&gt;1 million token context window&lt;/li&gt;
&lt;li&gt;Hybrid attention mechanism&lt;/li&gt;
&lt;li&gt;Multi-Token Prediction for speed&lt;/li&gt;
&lt;li&gt;#3 globally on PinchBench and ClawEval agent benchmarks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Benchmark highlights:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Approaches Claude Opus 4.6 on coding and agent tasks&lt;/li&gt;
&lt;li&gt;Surpasses Claude Sonnet 4.6 on most benchmarks&lt;/li&gt;
&lt;li&gt;5-8x cheaper than Opus for comparable quality&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; AI agents, complex coding, long-context processing, multi-step workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deep dive:&lt;/strong&gt; &lt;a href="https://aimadetools.com/blog/what-is-mimo-v2-pro/" rel="noopener noreferrer"&gt;What Is MiMo-V2-Pro?&lt;/a&gt; | &lt;a href="https://aimadetools.com/blog/how-to-use-mimo-v2-pro-api/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;How to Use the MiMo-V2-Pro API&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  MiMo-V2-Flash — The open-source workhorse
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Released:&lt;/strong&gt; December 17, 2025&lt;/p&gt;

&lt;p&gt;Flash is the model that put Xiaomi on the AI map. Open-source, blazing fast, and absurdly cheap — it became one of the most popular models on OpenRouter within weeks of launch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key specs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;309B total parameters, 15B active (MoE)&lt;/li&gt;
&lt;li&gt;56K token context window&lt;/li&gt;
&lt;li&gt;150 tokens/sec inference speed&lt;/li&gt;
&lt;li&gt;Hybrid sliding-window attention (128-token window, 5:1 ratio)&lt;/li&gt;
&lt;li&gt;Weights available on HuggingFace&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Benchmark highlights:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;73.4% on SWE-Bench Verified (#1 open-source)&lt;/li&gt;
&lt;li&gt;Comparable to Claude Sonnet 4.5 at 3.5% of the cost&lt;/li&gt;
&lt;li&gt;Top 2 among open-source models on agent benchmarks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; High-volume coding tasks, self-hosting, prototyping, cost-sensitive production workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deep dive:&lt;/strong&gt; &lt;a href="https://aimadetools.com/blog/what-is-mimo-v2-flash/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;What Is MiMo-V2-Flash?&lt;/a&gt; | &lt;a href="https://aimadetools.com/blog/mimo-v2-flash-vs-deepseek-v3/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;MiMo-V2-Flash vs DeepSeek V3&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  MiMo-V2-Omni — The multimodal perceiver
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Released:&lt;/strong&gt; March 18, 2026&lt;/p&gt;

&lt;p&gt;Omni is Xiaomi's multimodal model — it natively processes text, images, video, and audio within a unified architecture. While Pro thinks and Flash codes, Omni sees and hears.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key capabilities:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Text, image, video, and audio input in one model&lt;/li&gt;
&lt;li&gt;10+ hours of continuous audio processing&lt;/li&gt;
&lt;li&gt;GUI interaction — can navigate and operate browser interfaces&lt;/li&gt;
&lt;li&gt;Cross-modal reasoning (processes visual and textual information together)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Designed for:&lt;/strong&gt; Browser automation, video analysis, document understanding, voice-controlled agents.&lt;/p&gt;

&lt;p&gt;Xiaomi positions Omni as the "executor" in their agent stack — it perceives the environment and carries out actions that Pro plans.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deep dive:&lt;/strong&gt; &lt;a href="https://aimadetools.com/blog/what-is-mimo-v2-omni/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;What Is MiMo-V2-Omni?&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  MiMo-V2-TTS — The voice
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Released:&lt;/strong&gt; March 18, 2026&lt;/p&gt;

&lt;p&gt;MiMo-V2-TTS is Xiaomi's text-to-speech model, designed to give AI agents a human-like voice. It's not a general-purpose TTS system — it's specifically built for agent interaction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key capabilities:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Emotional nuance in speech output&lt;/li&gt;
&lt;li&gt;Real-time adaptability (adjusts tone based on context)&lt;/li&gt;
&lt;li&gt;Designed for conversational AI, not just reading text aloud&lt;/li&gt;
&lt;li&gt;Integrates with Pro and Omni for end-to-end agent communication&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;TTS completes the agent loop: Pro reasons, Omni perceives, and TTS communicates. For Xiaomi's smart home and automotive products, this means AI assistants that sound natural rather than robotic.&lt;/p&gt;

&lt;h2&gt;
  
  
  How they work together
&lt;/h2&gt;

&lt;p&gt;Xiaomi designed these models as a system, not as standalone products:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User request
    ↓
MiMo-V2-Pro (plans the task, breaks it into steps)
    ↓
MiMo-V2-Omni (perceives environment, executes GUI actions)
    ↓
MiMo-V2-TTS (communicates results to user)
    ↓
MiMo-V2-Flash (handles high-volume subtasks cheaply)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is Xiaomi's play for their "person-vehicle-home" ecosystem. The AI in your Xiaomi phone, car, and smart home devices all run on this stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Which model should you use?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;For coding and development:&lt;/strong&gt; Start with &lt;a href="https://aimadetools.com/blog/what-is-mimo-v2-flash/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;Flash&lt;/a&gt;. It's open source, fast, and cheap. Upgrade to &lt;a href="https://aimadetools.com/blog/what-is-mimo-v2-pro/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;Pro&lt;/a&gt; when you need better quality or longer context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For AI agents:&lt;/strong&gt; &lt;a href="https://aimadetools.com/blog/what-is-mimo-v2-pro/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;Pro&lt;/a&gt; for planning and reasoning. Consider Omni if your agent needs to interact with visual interfaces.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For cost optimization:&lt;/strong&gt; Use Flash for 80% of tasks, Pro for the remaining 20%. See &lt;a href="https://aimadetools.com/blog/mimo-v2-pro-vs-mimo-v2-flash/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;MiMo-V2-Pro vs Flash&lt;/a&gt; for the detailed comparison.&lt;/p&gt;
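&lt;p&gt;Using the prices from the table above, the 80/20 split works out like this (back-of-envelope; assumes token volume is spread evenly across tasks):&lt;/p&gt;

```typescript
// Blended $ per 1M tokens for an 80% Flash / 20% Pro split
// (prices from the comparison table; the 80/20 ratio is illustrative)
const flash = { input: 0.10, output: 0.30 }
const pro = { input: 1.00, output: 3.00 }

const blend = (flashPrice: number, proPrice: number) =>
  0.8 * flashPrice + 0.2 * proPrice

console.log(blend(flash.input, pro.input).toFixed(2))   // "0.28" per 1M input tokens
console.log(blend(flash.output, pro.output).toFixed(2)) // "0.84" per 1M output tokens
```

That's roughly a quarter of Pro's list price while keeping Pro available for the hard 20%.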

&lt;p&gt;&lt;strong&gt;For self-hosting:&lt;/strong&gt; Flash is your only option — it's the only open-source model in the family.&lt;/p&gt;

&lt;h2&gt;
  
  
  The bigger picture
&lt;/h2&gt;

&lt;p&gt;A year ago, Xiaomi was known for phones and rice cookers. Now they have a four-model AI family that competes with Anthropic and OpenAI on specific benchmarks. The lead researcher behind MiMo came from DeepSeek, and the architectural DNA shows.&lt;/p&gt;

&lt;p&gt;What makes the MiMo family interesting isn't any single model — it's the system. Xiaomi is building an integrated AI stack for their hardware ecosystem, and they're making the individual models available to developers along the way. Whether that strategy succeeds depends on execution, but the technical foundation is impressive.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Related:&lt;/strong&gt; &lt;a href="https://aimadetools.com/blog/mimo-v2-pro-vs-claude-opus-4-6/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;MiMo-V2-Pro vs Claude Opus 4.6&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Related:&lt;/strong&gt; &lt;a href="https://aimadetools.com/blog/mimo-v2-pro-vs-claude-vs-gpt/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;MiMo-V2-Pro vs Claude vs GPT&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Related:&lt;/strong&gt; &lt;a href="https://aimadetools.com/blog/ai-model-comparison/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;AI Model Comparison 2026&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://aimadetools.com/blog/mimo-v2-family-guide/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;https://aimadetools.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aimodels</category>
      <category>xiaomi</category>
      <category>mimo</category>
      <category>guide</category>
    </item>
    <item>
      <title>HTMX vs React — Do You Really Need a JavaScript Framework?</title>
      <dc:creator>Joske Vermeulen</dc:creator>
      <pubDate>Sun, 22 Mar 2026 09:38:37 +0000</pubDate>
      <link>https://dev.to/ai_made_tools/htmx-vs-react-do-you-really-need-a-javascript-framework-1p3i</link>
      <guid>https://dev.to/ai_made_tools/htmx-vs-react-do-you-really-need-a-javascript-framework-1p3i</guid>
      <description>&lt;h2&gt;
  
  
  Quick Comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;HTMX&lt;/th&gt;
&lt;th&gt;React&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Approach&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;HTML attributes&lt;/td&gt;
&lt;td&gt;JavaScript components&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Bundle size&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~14KB&lt;/td&gt;
&lt;td&gt;~40KB + your code&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Server&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Returns HTML fragments&lt;/td&gt;
&lt;td&gt;Returns JSON (usually)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Complexity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Very low&lt;/td&gt;
&lt;td&gt;Higher&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Interactivity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Good for most apps&lt;/td&gt;
&lt;td&gt;Unlimited&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Learning curve&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Very low&lt;/td&gt;
&lt;td&gt;Moderate&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  When to Use HTMX
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Server-rendered apps (Django, Rails, Laravel, Go)&lt;/li&gt;
&lt;li&gt;CRUD apps, admin panels, dashboards&lt;/li&gt;
&lt;li&gt;You want simplicity over complexity&lt;/li&gt;
&lt;li&gt;Your team is stronger in backend than frontend&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  When to Use React
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Highly interactive UIs (real-time collaboration, complex forms)&lt;/li&gt;
&lt;li&gt;You need a mobile app too (React Native)&lt;/li&gt;
&lt;li&gt;Rich client-side state management&lt;/li&gt;
&lt;li&gt;Complex component composition&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Differences
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Mental Model:&lt;/strong&gt; HTMX extends HTML — you add attributes like &lt;code&gt;hx-get&lt;/code&gt;, &lt;code&gt;hx-post&lt;/code&gt;, &lt;code&gt;hx-swap&lt;/code&gt; to make elements dynamic. React replaces HTML with a JavaScript component tree.&lt;/p&gt;
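&lt;p&gt;As a sketch (the endpoint name here is made up), a click-to-load interaction in HTMX is just attributes on plain markup:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!-- /contacts is a hypothetical endpoint that returns an HTML fragment --&amp;gt;
&amp;lt;button hx-get="/contacts" hx-target="#contact-list" hx-swap="innerHTML"&amp;gt;
  Load contacts
&amp;lt;/button&amp;gt;
&amp;lt;div id="contact-list"&amp;gt;&amp;lt;/div&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;On click, HTMX issues the GET request and swaps the returned fragment into &lt;code&gt;#contact-list&lt;/code&gt;; there is no client-side rendering step.&lt;/p&gt;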

&lt;p&gt;&lt;strong&gt;Server:&lt;/strong&gt; HTMX servers return HTML fragments. React servers typically return JSON that the client renders. HTMX is simpler but means your server does more rendering work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When HTMX Falls Short:&lt;/strong&gt; Real-time collaborative editing, complex drag-and-drop, offline-first apps, anything that needs heavy client-side state.&lt;/p&gt;

&lt;h2&gt;
  
  
  Verdict
&lt;/h2&gt;

&lt;p&gt;HTMX is perfect for the 90% of web apps that don't need React's complexity. If you're building a content site, admin panel, or CRUD app, HTMX with a server framework is simpler and faster to build. Save React for genuinely complex interactive UIs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Related resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aimadetools.com/blog/react-complete-guide/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;React complete guide&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Related:&lt;/strong&gt; &lt;a href="https://aimadetools.com/blog/shadcn-vs-material-ui/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;shadcn/ui vs Material UI — Which React Component Library?&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://aimadetools.com/blog/htmx-vs-react/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;https://aimadetools.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>htmx</category>
      <category>react</category>
      <category>comparison</category>
      <category>frontend</category>
    </item>
    <item>
      <title>What Is MiMo-V2-Pro? Xiaomi's Trillion-Parameter AI Model Explained</title>
      <dc:creator>Joske Vermeulen</dc:creator>
      <pubDate>Fri, 20 Mar 2026 19:59:50 +0000</pubDate>
      <link>https://dev.to/ai_made_tools/what-is-mimo-v2-pro-xiaomis-trillion-parameter-ai-model-explained-2kg</link>
      <guid>https://dev.to/ai_made_tools/what-is-mimo-v2-pro-xiaomis-trillion-parameter-ai-model-explained-2kg</guid>
      <description>&lt;p&gt;MiMo-V2-Pro is a large language model built by Xiaomi — yes, the phone company. It has over 1 trillion total parameters, a 1 million token context window, and it's specifically designed for autonomous AI agent tasks. It launched on March 18, 2026, after a week of anonymous stealth testing on OpenRouter under the codename "Hunter Alpha."&lt;/p&gt;

&lt;p&gt;If you've been following AI news, you probably heard about the mystery model that everyone thought was DeepSeek V4. It wasn't. It was Xiaomi.&lt;/p&gt;

&lt;h2&gt;
  
  
  The basics
&lt;/h2&gt;

&lt;p&gt;MiMo-V2-Pro is a &lt;strong&gt;mixture-of-experts (MoE)&lt;/strong&gt; model. That means it has 1 trillion total parameters split across many "expert" sub-networks, but only activates 42 billion parameters for any given request. This is the same architectural approach used by DeepSeek V3 and Mixtral — it delivers near-frontier performance at a fraction of the compute cost.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Spec&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total parameters&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~1 trillion&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Active parameters&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;42 billion (MoE)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Context window&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1 million tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Max output&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;32,000 tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Hybrid attention MoE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Built by&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Xiaomi (MiMo AI division)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Led by&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Luo Fuli (ex-DeepSeek)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Released&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;March 18, 2026&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;p&gt;Like all large language models, MiMo-V2-Pro predicts the next token in a sequence. What makes it different is the MoE architecture and its focus on agentic tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mixture of Experts:&lt;/strong&gt; Instead of running every parameter for every request (like GPT-5.4 or Claude Opus), MiMo-V2-Pro routes each token through a subset of specialized "expert" networks. Only 42 billion of the 1 trillion parameters activate per inference. This means you get the knowledge capacity of a trillion-parameter model at the inference cost of a ~40B model.&lt;/p&gt;
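&lt;p&gt;To make "routing" concrete, here is a toy sketch of top-k gating. Real MoE routing happens per layer inside the network with learned gate weights; this only illustrates the selection step:&lt;/p&gt;

```javascript
// Toy top-k expert selection: given gate scores for each expert,
// pick the k highest-scoring experts to process this token.
// Illustrative only; not how any production MoE is implemented.
function routeToken(gateScores, k) {
  var ranked = gateScores.map(function (score, expertId) {
    return { expertId: expertId, score: score };
  });
  ranked.sort(function (a, b) {
    return b.score - a.score;
  });
  return ranked.slice(0, k);
}

// Four experts; the gate strongly prefers experts 3 and 1.
var chosen = routeToken([0.1, 0.7, 0.05, 0.9], 2);
console.log(chosen.map(function (e) { return e.expertId; })); // [ 3, 1 ]
```

&lt;p&gt;The unchosen experts never run, which is why inference cost tracks the active parameter count rather than the total.&lt;/p&gt;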

&lt;p&gt;&lt;strong&gt;Hybrid attention:&lt;/strong&gt; The model uses a mixed attention mechanism optimized for long-context reasoning and tool use. This is what enables the 1 million token context window without the quality degradation that some models show at extreme context lengths.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent-first design:&lt;/strong&gt; MiMo-V2-Pro was built from the ground up for multi-step autonomous tasks — not chat. It's designed to plan, use tools, execute sequences, and recover from errors. Think: coding agents, research pipelines, automated workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hunter Alpha story
&lt;/h2&gt;

&lt;p&gt;On March 11, 2026, a model called "Hunter Alpha" appeared on OpenRouter with no attribution. It was free, had a 1M context window, and performed surprisingly well. The AI community immediately assumed it was DeepSeek V4 doing a stealth test.&lt;/p&gt;

&lt;p&gt;The speculation made sense — the model had a similar "feel" to DeepSeek's architecture, and it was clearly Chinese-built. It processed over 1 trillion tokens during its anonymous run, topping OpenRouter's usage charts.&lt;/p&gt;

&lt;p&gt;On March 18, Xiaomi revealed the truth: Hunter Alpha was an early test build of MiMo-V2-Pro. The DeepSeek connection wasn't wrong — it was just indirect. Luo Fuli, who leads Xiaomi's MiMo AI division, was a core contributor to DeepSeek's R1 and V-series models before joining Xiaomi in late 2025.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where it ranks
&lt;/h2&gt;

&lt;p&gt;MiMo-V2-Pro has posted strong benchmark results across agent-focused evaluations:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Benchmark&lt;/th&gt;
&lt;th&gt;Score&lt;/th&gt;
&lt;th&gt;Global Rank&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Artificial Analysis Intelligence Index&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;49&lt;/td&gt;
&lt;td&gt;#8 worldwide, #2 Chinese&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;PinchBench&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~81–84&lt;/td&gt;
&lt;td&gt;#3 globally&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ClawEval&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;61.5&lt;/td&gt;
&lt;td&gt;#3 globally&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GDPval-AA (agentic Elo)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1434&lt;/td&gt;
&lt;td&gt;#1 Chinese model&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For context, PinchBench and ClawEval are agent-focused benchmarks that test multi-step task completion — not just chat quality. MiMo-V2-Pro ranks right behind Claude Opus 4.6 variants on these tests, and ahead of several GPT-5.x iterations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pricing
&lt;/h2&gt;

&lt;p&gt;This is where it gets interesting. MiMo-V2-Pro is aggressively priced:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Context length&lt;/th&gt;
&lt;th&gt;Input (per 1M tokens)&lt;/th&gt;
&lt;th&gt;Output (per 1M tokens)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;≤ 256K tokens&lt;/td&gt;
&lt;td&gt;$1.00&lt;/td&gt;
&lt;td&gt;$3.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;256K – 1M tokens&lt;/td&gt;
&lt;td&gt;$2.00&lt;/td&gt;
&lt;td&gt;$6.00&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For comparison, Claude Opus 4.6 costs $5/$25 and GPT-5.4 costs $2.50/$15. MiMo-V2-Pro is roughly &lt;strong&gt;5–8x cheaper on output&lt;/strong&gt; than Opus and &lt;strong&gt;3–5x cheaper&lt;/strong&gt; than GPT-5.4.&lt;/p&gt;
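&lt;p&gt;For a quick sanity check on what those rates mean in practice, here is the arithmetic for a hypothetical daily workload (prices as listed above; the token volumes are made up):&lt;/p&gt;

```javascript
// Cost of a workload at per-million-token rates.
function costUSD(inputTokens, outputTokens, inputPerM, outputPerM) {
  return (inputTokens / 1e6) * inputPerM + (outputTokens / 1e6) * outputPerM;
}

// Hypothetical agent pipeline: 10M input + 2M output tokens per day.
var mimo = costUSD(10e6, 2e6, 1.0, 3.0);   // MiMo-V2-Pro, short-context tier
var opus = costUSD(10e6, 2e6, 5.0, 25.0);  // Claude Opus 4.6
console.log(mimo, opus); // 16 100
```

&lt;p&gt;$16/day versus $100/day for the same traffic is the kind of gap that compounds quickly at production volume.&lt;/p&gt;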

&lt;h2&gt;
  
  
  The MiMo-V2 family
&lt;/h2&gt;

&lt;p&gt;MiMo-V2-Pro isn't alone. Xiaomi launched three models simultaneously:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MiMo-V2-Pro&lt;/strong&gt; — The flagship text reasoning model (this article)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MiMo-V2-Omni&lt;/strong&gt; — A multimodal model that processes text, images, video, and audio natively. Priced at $0.40/$2.00 per million tokens.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MiMo-V2-TTS&lt;/strong&gt; — A text-to-speech model with emotion control, singing capability, and Chinese dialect support.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There's also &lt;strong&gt;MiMo-V2-Flash&lt;/strong&gt;, an open-source smaller model (309B total, 15B active) that scores 73.4% on SWE-bench Verified — making it the top open-source coding model.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to access it
&lt;/h2&gt;

&lt;p&gt;MiMo-V2-Pro is available through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Xiaomi's MiMo API platform&lt;/strong&gt; at platform.xiaomimimo.com&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenRouter&lt;/strong&gt; (where it originally appeared as Hunter Alpha)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MiMo Studio&lt;/strong&gt; (Xiaomi's web interface)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The API is OpenAI-compatible, so you can swap it into existing code that uses the OpenAI SDK with minimal changes.&lt;/p&gt;
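&lt;p&gt;Because the API follows the OpenAI chat-completions shape, a plain HTTP call also works. The base URL and model id below are placeholders I've assumed for illustration; check the MiMo platform docs for the real values:&lt;/p&gt;

```javascript
// Minimal Node 18+ sketch against an OpenAI-compatible endpoint.
// MIMO_BASE_URL, MIMO_API_KEY, and the model id are placeholders.
const BASE_URL = process.env.MIMO_BASE_URL || 'https://api.example.invalid/v1';

async function askMiMo(prompt) {
  const res = await fetch(BASE_URL + '/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer ' + (process.env.MIMO_API_KEY || ''),
    },
    body: JSON.stringify({
      model: 'mimo-v2-pro', // assumed model id
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

&lt;p&gt;If you already use the official OpenAI SDKs, pointing them at the same base URL (both the Node and Python packages accept a base-URL option) should be the only change needed.&lt;/p&gt;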

&lt;h2&gt;
  
  
  Why it matters
&lt;/h2&gt;

&lt;p&gt;Three reasons MiMo-V2-Pro is significant:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. A phone company built a frontier model.&lt;/strong&gt; Xiaomi isn't an AI lab. They're a consumer electronics company. The fact that they can build a model that competes with Anthropic and OpenAI on agent benchmarks — while pricing it at a fraction of the cost — says something about how accessible frontier AI development has become.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. The agent era is real.&lt;/strong&gt; MiMo-V2-Pro is explicitly not a chatbot. It's an agent model. Every major lab is now building for autonomous multi-step execution, not conversation. If you're still thinking of AI as "a thing I chat with," you're behind.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Price pressure is accelerating.&lt;/strong&gt; At $1/$3 per million tokens, MiMo-V2-Pro puts serious pressure on Western pricing. Even if the quality is slightly below Opus 4.6, the 5–8x cost difference makes it viable for high-volume production workloads where "good enough" beats "best but expensive."&lt;/p&gt;

&lt;h2&gt;
  
  
  Should you use it?
&lt;/h2&gt;

&lt;p&gt;If you're building AI agents, automated pipelines, or high-volume processing tasks — absolutely worth testing. The price-to-performance ratio is compelling, especially for tasks that don't require the absolute best model.&lt;/p&gt;

&lt;p&gt;For critical coding tasks where accuracy matters most, Claude Opus 4.6 and GPT-5.4 still have the edge. But for everything else, MiMo-V2-Pro just made the conversation a lot more interesting.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;For a detailed head-to-head, see &lt;a href="https://aimadetools.com/blog/mimo-v2-pro-vs-claude-vs-gpt/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;MiMo-V2-Pro vs Claude vs GPT: Where Xiaomi's Model Actually Stands&lt;/a&gt;, &lt;a href="https://aimadetools.com/blog/mimo-v2-pro-vs-claude-opus-4-6/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;MiMo-V2-Pro vs Claude Opus 4.6&lt;/a&gt;, or &lt;a href="https://aimadetools.com/blog/mimo-v2-pro-vs-deepseek-v3/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;MiMo-V2-Pro vs DeepSeek V3&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Related:&lt;/strong&gt; &lt;a href="https://aimadetools.com/blog/ai-model-comparison/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;AI Model Comparison 2026: Claude vs ChatGPT vs Gemini&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Related:&lt;/strong&gt; &lt;a href="https://aimadetools.com/blog/ai-dev-weekly-extra-xiaomi-hunter-alpha-mimo-v2-pro/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;AI Dev Weekly Extra: Xiaomi's Hunter Alpha Was Never DeepSeek V4&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://aimadetools.com/blog/what-is-mimo-v2-pro/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;https://aimadetools.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aimodels</category>
      <category>xiaomi</category>
      <category>mimo</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Build a Discord Bot That Roasts Your Code With AI</title>
      <dc:creator>Joske Vermeulen</dc:creator>
      <pubDate>Fri, 20 Mar 2026 14:43:54 +0000</pubDate>
      <link>https://dev.to/ai_made_tools/build-a-discord-bot-that-roasts-your-code-with-ai-2jjo</link>
      <guid>https://dev.to/ai_made_tools/build-a-discord-bot-that-roasts-your-code-with-ai-2jjo</guid>
      <description>&lt;p&gt;Every dev Discord server needs a code roast bot. Someone pastes a code snippet, the bot tears it apart with the energy of a senior developer who hasn't had coffee yet. It's educational, it's entertaining, and it's surprisingly easy to build.&lt;/p&gt;

&lt;p&gt;In this tutorial, we'll build a Discord bot that watches for code blocks in messages, reviews them with AI in a roast-style tone, and replies with brutally honest feedback. Think code review meets comedy.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we're building
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;User&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;roast&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  function getUser(id) {
    var user = null;
    fetch('/api/users/' + id).then(res =&amp;gt; {
      user = res.json();
    });
    return user;
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Bot:
  🔥 CODE ROAST 🔥

  Oh no. Oh no no no.

  1. You're returning `user` before the fetch completes.
     This function returns `null` every single time. You've
     built a function whose only job is to return null. Impressive.

  2. `var` in 2026? Did you find this code in a time capsule?

  3. `res.json()` returns a Promise too, which you're not
     awaiting. So even if the timing worked, `user` would
     be a Promise object, not actual data.

  Here's what you probably meant:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  async function getUser(id) {
    const res = await fetch(`/api/users/${id}`);
    return res.json();
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
  Rating: 2/10 — The indentation was nice though. 💀
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Node.js 20+&lt;/li&gt;
&lt;li&gt;A Discord account&lt;/li&gt;
&lt;li&gt;An Anthropic API key&lt;/li&gt;
&lt;li&gt;A Discord server where you have admin permissions&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Create the Discord bot
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;a href="https://discord.com/developers/applications" rel="noopener noreferrer"&gt;discord.com/developers/applications&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Click "New Application" → name it "Code Roaster"&lt;/li&gt;
&lt;li&gt;Go to "Bot" tab → click "Add Bot"&lt;/li&gt;
&lt;li&gt;Copy the bot token&lt;/li&gt;
&lt;li&gt;Under "Privileged Gateway Intents," enable &lt;strong&gt;Message Content Intent&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Go to "OAuth2" → "URL Generator" → select &lt;code&gt;bot&lt;/code&gt; scope and &lt;code&gt;Send Messages&lt;/code&gt; + &lt;code&gt;Read Message History&lt;/code&gt; permissions&lt;/li&gt;
&lt;li&gt;Copy the generated URL and open it to invite the bot to your server&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 2: Set up the project
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;roast-bot &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;roast-bot
npm init &lt;span class="nt"&gt;-y&lt;/span&gt;
npm &lt;span class="nb"&gt;install &lt;/span&gt;discord.js @anthropic-ai/sdk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Build the bot
&lt;/h2&gt;

&lt;p&gt;Create &lt;code&gt;index.js&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;GatewayIntentBits&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;discord.js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Anthropic&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@anthropic-ai/sdk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;intents&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nx"&gt;GatewayIntentBits&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Guilds&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;GatewayIntentBits&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;GuildMessages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;GatewayIntentBits&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;MessageContent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;anthropic&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Anthropic&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ROAST_PROMPT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`You are a code review bot with the personality of a brutally honest senior developer who's seen too much bad code. Your job is to roast the code — point out every issue in a funny, sarcastic way. But always be educational: explain WHY something is wrong and show the fix.

Rules:
- Be funny and sarcastic, but never mean-spirited or personal
- Always explain the actual technical issue behind each roast
- End with a corrected version of the code if possible
- Give a rating out of 10
- Keep it under 300 words
- Use emojis sparingly for effect

Roast this code:`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Listen for !roast command&lt;/span&gt;
&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;messageCreate&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;author&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bot&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;startsWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;!roast&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="c1"&gt;// Extract code block&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;codeMatch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;match&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/``&lt;/span&gt;&lt;span class="err"&gt;`
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="nx"&gt;endraw&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="nx"&gt;w&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="nx"&gt;n&lt;/span&gt;&lt;span class="p"&gt;?([&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="nx"&gt;S&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;?)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="nx"&gt;raw&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="s2"&gt;```/);
  if (!codeMatch) {
    message.reply('Paste a code block with your message. Example:\n!roast\n&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="err"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="nx"&gt;js&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="nx"&gt;nyour&lt;/span&gt; &lt;span class="nx"&gt;code&lt;/span&gt; &lt;span class="nx"&gt;here&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="nx"&gt;n&lt;/span&gt;&lt;span class="err"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="err"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;`');
    return;
  }

  const language = codeMatch[1] || 'unknown';
  const code = codeMatch[2].trim();

  if (code.length &amp;lt; 10) {
    message.reply("That's barely code. Give me something to work with. 😤");
    return;
  }

  if (code.length &amp;gt; 3000) {
    message.reply("I'm not reading all that. Keep it under 100 lines. 📏");
    return;
  }

  // Show typing indicator
  message.channel.sendTyping();

  try {
    const response = await anthropic.messages.create({
      model: 'claude-sonnet-4-20250514',
      max_tokens: 800,
      messages: [{
        role: 'user',
        content: `&lt;/span&gt;&lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;ROAST_PROMPT&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="nx"&gt;n&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="nx"&gt;nLanguage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;language&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="nx"&gt;n&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="se"&gt;\`\`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;language&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;\n&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;code&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;\n&lt;/span&gt;&lt;span class="se"&gt;\`\`\`&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
      &lt;span class="p"&gt;}],&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;roast&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c1"&gt;// Discord has a 2000 char limit&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;roast&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;1900&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;parts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;roast&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;match&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="se"&gt;[\s\S]{1,1900}&lt;/span&gt;&lt;span class="sr"&gt;/g&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
      &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;part&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;part&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`🔥 **CODE ROAST** 🔥\n\n&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;roast&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;My roasting circuits overheated. Try again. 🫠&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Also support !review for a nicer tone&lt;/span&gt;
&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;messageCreate&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;author&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bot&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;startsWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;!review&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;codeMatch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;match&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/``&lt;/span&gt;&lt;span class="err"&gt;`
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="nx"&gt;endraw&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="nx"&gt;w&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="nx"&gt;n&lt;/span&gt;&lt;span class="p"&gt;?([&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="nx"&gt;S&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;?)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="nx"&gt;raw&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="s2"&gt;```/);
  if (!codeMatch) {
    message.reply('Paste a code block with your message.');
    return;
  }

  message.channel.sendTyping();

  try {
    const response = await anthropic.messages.create({
      model: 'claude-sonnet-4-20250514',
      max_tokens: 800,
      messages: [{
        role: 'user',
        content: `&lt;/span&gt;&lt;span class="nx"&gt;Review&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt; &lt;span class="nx"&gt;code&lt;/span&gt; &lt;span class="nx"&gt;constructively&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;Point&lt;/span&gt; &lt;span class="nx"&gt;out&lt;/span&gt; &lt;span class="nx"&gt;bugs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;improvements&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;best&lt;/span&gt; &lt;span class="nx"&gt;practices&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;Be&lt;/span&gt; &lt;span class="nx"&gt;helpful&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;professional&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;Keep&lt;/span&gt; &lt;span class="nx"&gt;it&lt;/span&gt; &lt;span class="nx"&gt;concise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="nx"&gt;n&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="nx"&gt;nLanguage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;codeMatch&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;unknown&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="nx"&gt;n&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="se"&gt;\`\`&lt;/span&gt;&lt;span class="s2"&gt;\n&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;codeMatch&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span 
class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;trim&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;\n&lt;/span&gt;&lt;span class="se"&gt;\`\`\`&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
      &lt;span class="p"&gt;}],&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`📝 **Code Review**\n\n&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Something went wrong. Try again.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ready&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`🔥 Roast bot is online as &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tag&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;login&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DISCORD_TOKEN&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Run it
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DISCORD_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-discord-bot-token
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ANTHROPIC_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-anthropic-key
node index.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Go to your Discord server and try:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;roast&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;br&gt;
javascript&lt;br&gt;
for (var i = 0; i &amp;lt; arr.length; i++) {&lt;br&gt;
  setTimeout(function() { console.log(arr[i]); }, 1000);&lt;br&gt;
}&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;br&gt;
plaintext&lt;/p&gt;
&lt;h2&gt;
  
  
  The two modes
&lt;/h2&gt;

&lt;p&gt;The bot has two commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;!roast&lt;/code&gt;&lt;/strong&gt; — brutal, funny, educational. For entertainment and learning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;!review&lt;/code&gt;&lt;/strong&gt; — professional, constructive. For when you actually want help.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Same AI, different personality. The prompt controls everything.&lt;/p&gt;
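&lt;p&gt;Since the two handlers differ only in their prompt, the tone can be factored into a single helper. A minimal sketch (the &lt;code&gt;buildPrompt&lt;/code&gt; helper and its mode names are illustrative, not code from the bot above):&lt;/p&gt;

```javascript
// One prompt builder, two personalities. Only the tone line changes;
// the language tag and fenced code are shared by both commands.
function buildPrompt(mode, language, code) {
  const tones = {
    roast: 'Roast this code. Be brutally honest and funny, but make every joke point at a real problem.',
    review: 'Review this code constructively. Point out bugs, improvements, and best practices. Be helpful and professional.',
  };
  return `${tones[mode]} Keep it concise.\n\nLanguage: ${language || 'unknown'}\n\`\`\`\n${code.trim()}\n\`\`\``;
}
```

&lt;p&gt;Both handlers would then call &lt;code&gt;buildPrompt('roast', ...)&lt;/code&gt; or &lt;code&gt;buildPrompt('review', ...)&lt;/code&gt; and pass the result to the same API call.&lt;/p&gt;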
&lt;h2&gt;
  
  
  Tuning the roast level
&lt;/h2&gt;

&lt;p&gt;Adjust the prompt to control intensity:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mild roast:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Be gently sarcastic, like a patient mentor who's slightly disappointed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Medium roast (default):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Be brutally honest and funny, like a senior dev who's seen too much.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Nuclear roast:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Channel the energy of a developer who just found production code
written by an intern with no code review. Hold nothing back.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

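&lt;p&gt;If you want users to pick the intensity per message (for example &lt;code&gt;!roast nuclear&lt;/code&gt;), one option is to map level names to prompt fragments. A sketch; the level keyword and its parsing are an assumption, not part of the bot above:&lt;/p&gt;

```javascript
// Map roast levels to prompt fragments; anything unrecognized falls
// back to 'medium', so "!roast" with no level still works.
const ROAST_LEVELS = {
  mild: "Be gently sarcastic, like a patient mentor who's slightly disappointed.",
  medium: "Be brutally honest and funny, like a senior dev who's seen too much.",
  nuclear: 'Channel the energy of a developer who just found production code written by an intern with no code review. Hold nothing back.',
};

// Parse "!roast nuclear ..." into a tone fragment for the prompt.
function roastToneFor(content) {
  const level = (content.split(/\s+/)[1] || '').toLowerCase();
  return ROAST_LEVELS[level] || ROAST_LEVELS.medium;
}
```

&lt;p&gt;The returned fragment replaces the hard-coded tone line in the roast prompt.&lt;/p&gt;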
&lt;h2&gt;
  
  
  Rate limiting
&lt;/h2&gt;

&lt;p&gt;Prevent spam by adding a cooldown:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cooldowns&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Map&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;COOLDOWN_MS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// 30 seconds&lt;/span&gt;

&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;messageCreate&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;startsWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;!roast&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;lastUsed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;cooldowns&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;author&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;lastUsed&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;COOLDOWN_MS&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;remaining&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ceil&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;COOLDOWN_MS&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;lastUsed&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Cooldown: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;remaining&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;s. Your code isn't going anywhere. 😏`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;cooldowns&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;author&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

  &lt;span class="c1"&gt;// ... rest of the handler&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploying
&lt;/h2&gt;

&lt;p&gt;Same as any Node.js bot — a VPS with pm2, a Docker container, or Railway/Fly.io:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# pm2&lt;/span&gt;
pm2 start index.js &lt;span class="nt"&gt;--name&lt;/span&gt; roast-bot

&lt;span class="c"&gt;# Docker&lt;/span&gt;
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; roast-bot &lt;span class="nb"&gt;.&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--env-file&lt;/span&gt; .env roast-bot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What you learned
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;How to create a Discord bot with &lt;code&gt;discord.js&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;How to extract code blocks from Discord messages with regex&lt;/li&gt;
&lt;li&gt;How to use different AI prompts to control tone and personality&lt;/li&gt;
&lt;li&gt;How to handle Discord's message length limits&lt;/li&gt;
&lt;li&gt;How to add rate limiting to prevent abuse&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The bot is about 80 lines of core logic. It's the kind of thing that makes a dev Discord server 10x more fun — and people actually learn from the roasts because every joke comes with a real explanation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Related resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aimadetools.com/blog/javascript-complete-guide/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=%0Aweekly" rel="noopener noreferrer"&gt;JavaScript complete guide&lt;/a&gt; — the language behind the bot&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aimadetools.com/blog/npm-complete-guide/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=%0Aweekly" rel="noopener noreferrer"&gt;npm complete guide&lt;/a&gt; — managing dependencies&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aimadetools.com/blog/docker-complete-guide/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=%0Aweekly" rel="noopener noreferrer"&gt;Docker complete guide&lt;/a&gt; — deploying the bot&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;🛠️ &lt;strong&gt;Free tools related to this article:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aimadetools.com/blog/package-json-analyzer/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=%0Aweekly" rel="noopener noreferrer"&gt;package.json Analyzer&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://aimadetools.com/blog/build-discord-code-roast-bot/" rel="noopener noreferrer"&gt;https://aimadetools.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>claudecode</category>
      <category>tutorial</category>
      <category>aitools</category>
      <category>discord</category>
    </item>
    <item>
      <title>React vs Angular — Which Should You Learn in 2026?</title>
      <dc:creator>Joske Vermeulen</dc:creator>
      <pubDate>Wed, 18 Mar 2026 10:43:19 +0000</pubDate>
      <link>https://dev.to/ai_made_tools/react-vs-angular-which-should-you-learn-in-2026-44oc</link>
      <guid>https://dev.to/ai_made_tools/react-vs-angular-which-should-you-learn-in-2026-44oc</guid>
      <description>&lt;h2&gt;
  
  
  Quick Comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;React&lt;/th&gt;
&lt;th&gt;Angular&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Type&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Library&lt;/td&gt;
&lt;td&gt;Full framework&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Language&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;JavaScript/TypeScript&lt;/td&gt;
&lt;td&gt;TypeScript (required)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Created by&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Meta (Facebook)&lt;/td&gt;
&lt;td&gt;Google&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;State management&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;External (Redux, Zustand, etc.)&lt;/td&gt;
&lt;td&gt;Built-in (Signals, RxJS)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Bundle size&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Smaller (pick what you need)&lt;/td&gt;
&lt;td&gt;Larger (batteries included)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Learning curve&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Lower entry, higher ceiling&lt;/td&gt;
&lt;td&gt;Steeper initial curve&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  When to Use React
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You want flexibility to choose your own tools&lt;/li&gt;
&lt;li&gt;You're building a startup or small-to-medium app&lt;/li&gt;
&lt;li&gt;You want the largest job market and ecosystem&lt;/li&gt;
&lt;li&gt;You prefer a gradual learning curve&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  When to Use Angular
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You're building a large enterprise application&lt;/li&gt;
&lt;li&gt;You want everything included out of the box (routing, forms, HTTP, testing)&lt;/li&gt;
&lt;li&gt;Your team prefers opinionated structure&lt;/li&gt;
&lt;li&gt;You're already in a TypeScript-heavy environment&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Differences
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Philosophy:&lt;/strong&gt; React gives you a view layer and lets you pick everything else. Angular gives you the whole kitchen — routing, forms, HTTP client, dependency injection, testing utilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;State:&lt;/strong&gt; React uses hooks (useState, useReducer) plus external libraries. Angular has Signals (new), RxJS observables, and services with dependency injection.&lt;/p&gt;
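&lt;p&gt;The React half of that contrast can be sketched with a plain reducer, the pure function that &lt;code&gt;useReducer&lt;/code&gt; drives (the action names here are just an example):&lt;/p&gt;

```javascript
// A reducer is a pure function: (state, action) -> next state.
// In a component, useReducer(counterReducer, { count: 0 }) would
// return [state, dispatch] and re-render on each dispatch.
function counterReducer(state, action) {
  switch (action.type) {
    case 'increment':
      return { count: state.count + 1 };
    case 'reset':
      return { count: 0 };
    default:
      return state;
  }
}
```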

&lt;p&gt;&lt;strong&gt;Templates:&lt;/strong&gt; React uses JSX (JavaScript + HTML). Angular uses HTML templates with its own syntax (&lt;code&gt;*ngIf&lt;/code&gt;, &lt;code&gt;*ngFor&lt;/code&gt;, and the newer &lt;code&gt;@if&lt;/code&gt;/&lt;code&gt;@for&lt;/code&gt; control flow).&lt;/p&gt;

&lt;h2&gt;
  
  
  Verdict
&lt;/h2&gt;

&lt;p&gt;React for flexibility and ecosystem size. Angular for large teams that want convention over configuration. In 2026, React still has the larger market share, but Angular is solid for enterprise work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Related resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aimadetools.com/blog/react-complete-guide/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=%0Aweekly" rel="noopener noreferrer"&gt;React complete guide&lt;/a&gt;
---&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://aimadetools.com/blog/react-vs-angular/?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=weekly" rel="noopener noreferrer"&gt;https://aimadetools.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>react</category>
      <category>angular</category>
      <category>comparison</category>
      <category>frontend</category>
    </item>
  </channel>
</rss>
