<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mathew Dony</title>
    <description>The latest articles on DEV Community by Mathew Dony (@mdmathewdc).</description>
    <link>https://dev.to/mdmathewdc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3647336%2F51e9ab22-c090-42f7-bc99-4c9b6500e133.jpeg</url>
      <title>DEV Community: Mathew Dony</title>
      <link>https://dev.to/mdmathewdc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mdmathewdc"/>
    <language>en</language>
    <item>
      <title>Is AI Quietly Killing Open Source?</title>
      <dc:creator>Mathew Dony</dc:creator>
      <pubDate>Sun, 11 Jan 2026 04:02:43 +0000</pubDate>
      <link>https://dev.to/mdmathewdc/is-ai-quietly-killing-open-source-2pa3</link>
      <guid>https://dev.to/mdmathewdc/is-ai-quietly-killing-open-source-2pa3</guid>
      <description>&lt;p&gt;Open Source, both the free and the paid premium kind, is the backbone of modern software development. Personally, AI-assisted development has increased my enjoyment of coding by 10x. But I cannot help wondering whether LLMs are slowly pushing Open Source culture toward a quiet decline.&lt;/p&gt;

&lt;h2&gt;The Tailwind Situation&lt;/h2&gt;

&lt;p&gt;If you have been anywhere near tech X/Twitter recently, you have probably seen the Tailwind drama. Tailwind CSS is a hugely popular utility-first CSS framework that lets you style elements with small utility classes written directly in your markup. Since the AI boom, it has arguably become the default way to build quick web apps. Because the classes live right next to the markup, they are especially easy for AI to generate and modify. For vibe-coded apps, Tailwind feels like a perfect match.&lt;/p&gt;

&lt;p&gt;Despite this popularity, Tailwind recently &lt;a href="https://github.com/tailwindlabs/tailwindcss.com/pull/2388#issuecomment-3717222957" rel="noopener noreferrer"&gt;laid off around 75 percent of its team&lt;/a&gt;. This was not because the framework is failing. In fact, it is more popular than ever, with roughly 75 million downloads per month.&lt;/p&gt;

&lt;p&gt;The real issue is that AI tools have become so good at writing Tailwind that developers no longer need to visit the documentation as often. Those docs were the primary channel for promoting Tailwind's paid products. Fewer human eyeballs meant less revenue, even as usage continued to explode.&lt;/p&gt;

&lt;h2&gt;The Joy of Open Source&lt;/h2&gt;

&lt;p&gt;Before AI, many Open Source projects started with a single developer scratching a personal itch. They solved a problem, realized others might benefit, and shared the solution as a GitHub repository. Over time, the library gained attention, contributors joined in, and an ecosystem formed around it.&lt;/p&gt;

&lt;p&gt;Even when multiple libraries solved the same problem, competition existed in a healthy way. Better APIs, better documentation or a more pleasant developer experience could make one project more popular than another. Developers chose their favorites and often contributed back. Sponsorships from individuals and organizations helped sustain projects because those libraries directly solved real pain points. Documentation was the central reference point. If you wanted to use the library, you had to read it.&lt;/p&gt;

&lt;p&gt;Then LLMs entered the picture.&lt;/p&gt;

&lt;h2&gt;When Attention Moves Elsewhere&lt;/h2&gt;

&lt;p&gt;LLM-powered coding agents break this loop. Agents read the docs, not people. They integrate libraries in seconds, without attribution, without brand recognition, and without sending traffic back to the source. No one knows who built the tool. No one browses the docs. The funnel collapses.&lt;/p&gt;

&lt;p&gt;This is not hypothetical. Tailwind is already a concrete example. Massive adoption. Dramatically lower revenue. Documentation traffic down. Engineering teams laid off. Even proposals to optimize docs for LLMs were rejected, because doing so would further accelerate the erosion of the only remaining monetization channel. Optimizing for agents can feel like building the infrastructure for your own obsolescence.&lt;/p&gt;

&lt;p&gt;As someone who has contributed to multiple Open Source projects, this rings true for me. When I first started building with Tailwind, I constantly referred to the documentation just to remember utility class names. Now, I ask my AI tool instead. And when a library offers premium features behind a paywall, I often ask my agent to build something equivalent. I would rather spend money on tokens than pay for the feature itself - I suspect many other developers are doing the same.&lt;/p&gt;

&lt;h2&gt;The Cracks in the OSS Business Model&lt;/h2&gt;

&lt;p&gt;Two of the most common Open Source monetization strategies have been:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Open core&lt;/strong&gt;: Give away the core library, then sell premium features or hosted offerings once you reach critical mass.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Expertise moat&lt;/strong&gt;: Become the recognized expert in your own library, leading to consulting, speaking, or career leverage.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Both models depend on attention. Human attention. People reading your docs, recognizing your name, and associating you with a solution. LLMs change where attention lives. Documentation trained the models, and the models now answer questions directly. Projects that rely on docs-to-premium conversion are especially exposed. Tailwind is not alone here. Prisma, Drizzle, Strapi and many others face the same structural pressure. I also cannot help wondering whether this parallels the Stack Overflow situation: a platform that depended almost entirely on human questions and answers, and which has seen question volume fall to its lowest levels since the site launched in September 2008.&lt;/p&gt;

&lt;h2&gt;What Comes Next&lt;/h2&gt;

&lt;p&gt;When downstream monetization breaks, the only remaining leverage point is access itself. Open Source historically gave away access and hoped to monetize attention later. Agents bypass the later step entirely.&lt;/p&gt;

&lt;p&gt;The likely outcome is not the end of shared code, but a shift in form. Libraries start to look more like APIs. Metered usage. Paid access for agents. Closed or semi-closed systems where the gate is explicit. In this context, Cloudflare's &lt;a href="https://developers.cloudflare.com/ai-crawl-control/features/pay-per-crawl/what-is-pay-per-crawl/" rel="noopener noreferrer"&gt;Pay Per Crawl&lt;/a&gt; looks like a particularly interesting development. It acknowledges the reality that AI crawlers are already consuming vast amounts of content and attempts to rebalance the relationship by giving content owners a way to control access and get compensated for automated usage, rather than letting value be extracted for free.&lt;/p&gt;

&lt;p&gt;The irony is hard to ignore though. Open Source trained the models that now threaten its sustainability. In many ways, it built its own replacement. Whether this leads to a healthier ecosystem or a more fragmented, paywalled future is still unclear. But it does feel like we are standing at the end of one era of Open Source and at the uncomfortable beginning of another.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>llm</category>
      <category>tailwindcss</category>
    </item>
    <item>
      <title>Orchestrating AI Agents to create Memes</title>
      <dc:creator>Mathew Dony</dc:creator>
      <pubDate>Fri, 05 Dec 2025 06:48:15 +0000</pubDate>
      <link>https://dev.to/mdmathewdc/orchestrating-ai-agents-to-create-memes-j38</link>
      <guid>https://dev.to/mdmathewdc/orchestrating-ai-agents-to-create-memes-j38</guid>
      <description>&lt;h2&gt;
  
  
  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxcl4alunccxjqp44gwqt.png" alt="The meme agent in action" width="723" height="716"&gt;
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Link: &lt;a href="https://mathewdony.com/blog/orchestrating-ai-agents-to-create-memes" rel="noopener noreferrer"&gt;https://mathewdony.com/blog/orchestrating-ai-agents-to-create-memes&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I've been diving deep into AI agents and MCP servers over the past few weeks. Practice makes perfect, so I wanted to build an agent orchestrator system where multiple agents can talk to each other based on what they're meant to do. One fun idea that came out of this was a setup that analyzes the user's emotions based on their answer to the question "What's happening with you?" with the help of a specialised agent, which then generates a meme that matches the vibe.&lt;/p&gt;

&lt;h2&gt;What even is an AI agent?&lt;/h2&gt;

&lt;p&gt;At this point, you've probably used some kind of LLM-powered application, either directly or indirectly. If you haven't thought much about it, in simple terms, an AI agent is just an application that uses an LLM as its brain.&lt;/p&gt;

&lt;p&gt;If you compare it to a human, the LLM is the thinking part of the mind. It takes whatever you see, hear, or feel, makes sense of it, and decides what to do next. In a human body, your eyes and ears gather information, your nerves send that information to the brain, the brain interprets it, and then your muscles carry out the action.&lt;/p&gt;

&lt;p&gt;An AI agent follows the same pattern. You have inputs from the user or the environment, some code that cleans and structures that input, the LLM that figures out what's happening, and functions that act on whatever decision it makes. The whole loop ends up feeling surprisingly close to how we operate every day.&lt;/p&gt;
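&lt;p&gt;That perceive-think-act loop is small enough to sketch in code. Here is a toy TypeScript version, with a keyword matcher standing in for the LLM and two plain functions standing in for the tools - every name here is made up for illustration, not taken from any real framework:&lt;/p&gt;

```typescript
// A toy agent loop: input comes in, the "brain" decides, a tool acts.
// `decide` stands in for a real LLM call; here it is a simple keyword match.
type ToolFn = (args: string) => string;

const tools: { [name: string]: ToolFn } = {
  // "Muscles": functions that carry out whatever the brain decides.
  echo: (args) => "You said: " + args,
  weather: (city) => "It is sunny in " + city, // stand-in for a real weather API
};

function decide(input: string): { tool: string; args: string } {
  // Stand-in for the LLM: interpret the input and pick a tool.
  if (input.includes("weather")) {
    return { tool: "weather", args: "Kochi" };
  }
  return { tool: "echo", args: input };
}

function runAgent(input: string): string {
  const decision = decide(input); // think
  const action = tools[decision.tool]; // pick the muscle
  return action(decision.args); // act
}
```

&lt;p&gt;A real agent replaces &lt;code&gt;decide&lt;/code&gt; with an actual LLM call and usually loops, feeding tool results back to the model until it decides it's done.&lt;/p&gt;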

&lt;p&gt;So the brain is the clever part - but just like in a real human body, the brain is basically useless without everything around it. A brain with no senses can't understand the world. A brain with no muscles can't act on anything it decides. Intelligence by itself can only go so far if it has no way to see, no way to move and no way to interact.&lt;/p&gt;

&lt;p&gt;This is exactly the limitation early LLMs ran into. They were smart on their own, but the moment you asked them something grounded in the real world, like the current weather, they had no way to actually go out and get that information. The thinking part was there, but the senses and the muscles weren't.&lt;/p&gt;

&lt;h3&gt;Enter tools.&lt;/h3&gt;

&lt;p&gt;Tools were the missing pieces that enabled the LLM to actually do things. Think of the LLM as the brain and tools as the hands, sensors, and external abilities it never had. Once you connect tools to the brain, it can suddenly reach out, fetch data, take actions and handle tasks it previously couldn't even attempt.&lt;/p&gt;

&lt;p&gt;As tools became more common, it became clear that we needed a standard way for agents and LLMs to communicate with them. Anthropic introduced the &lt;a href="https://modelcontextprotocol.io" rel="noopener noreferrer"&gt;Model Context Protocol (MCP)&lt;/a&gt; to solve this, giving everyone a unified schema for defining and using tools.&lt;/p&gt;
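&lt;p&gt;Under MCP, a tool is advertised to the client as a name, a description, and a JSON Schema for its inputs. A minimal sketch of such a definition as a plain object (the server-side SDK wiring is omitted, and the field values are illustrative):&lt;/p&gt;

```typescript
// A minimal MCP-style tool definition: name, description, and a JSON
// Schema describing the inputs. This is the shape a client discovers;
// the SDK plumbing around it is left out here.
const generateMemeToolDef = {
  name: "generate_meme",
  description: "Create a meme from an ImgFlip template and captions",
  inputSchema: {
    type: "object",
    properties: {
      templateId: { type: "string", description: "ImgFlip template id" },
      captions: { type: "array", items: { type: "string" } },
    },
    required: ["templateId", "captions"],
  },
};
```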

&lt;p&gt;I think that's enough background. Let's build the meme generator.&lt;/p&gt;




&lt;h2&gt;Creating the Meme MCP Server&lt;/h2&gt;

&lt;p&gt;I started by building the MCP server itself. I used ImgFlip's caption_image API under the hood to generate memes. The server exposes a single tool that my Meme agent can call. I published it on npm as &lt;a href="https://www.npmjs.com/package/imgflip-meme-mcp" rel="noopener noreferrer"&gt;imgflip-meme-mcp&lt;/a&gt; - it exposes a tool &lt;code&gt;generate_meme&lt;/code&gt; which takes a template id, captions, and API credentials. Simple and tidy.&lt;/p&gt;
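&lt;p&gt;To make the shape of that tool concrete, here is a hedged sketch of what a &lt;code&gt;generate_meme&lt;/code&gt; handler could do under the hood. ImgFlip's caption_image endpoint takes the template id, credentials, and caption text as form fields; the helper names below are my own and not the actual imgflip-meme-mcp source:&lt;/p&gt;

```typescript
// Sketch of a generate_meme handler built on ImgFlip's caption_image
// endpoint. Helper names are illustrative; see the imgflip-meme-mcp
// package for the real implementation.
interface CaptionOpts {
  templateId: string;
  username: string;
  password: string;
  captions: string[];
}

function buildCaptionParams(opts: CaptionOpts): URLSearchParams {
  const params = new URLSearchParams({
    template_id: opts.templateId,
    username: opts.username,
    password: opts.password,
  });
  // ImgFlip expects captions as text0, text1, ...
  opts.captions.forEach((text, i) => params.set("text" + i, text));
  return params;
}

async function generateMeme(opts: CaptionOpts) {
  const res = await fetch("https://api.imgflip.com/caption_image", {
    method: "POST",
    body: buildCaptionParams(opts),
  });
  const json: any = await res.json();
  if (!json.success) throw new Error(json.error_message);
  return json.data.url; // URL of the generated meme image
}
```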

&lt;h2&gt;The 3-Agent Squad&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41cbvm1hir3xvw75fu3y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41cbvm1hir3xvw75fu3y.png" alt="The 3 agent architecture" width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The end user only talks to the Supervisor agent. They never interact with the Emotion agent or the Meme agent directly, which keeps the experience clean and lets the chaos happen behind the curtain.&lt;/p&gt;

&lt;p&gt;Technically, I could have bundled everything into one giant agent with two tools:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Summarise the user's emotions&lt;/li&gt;
&lt;li&gt;Generate a meme&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;But I split them up for 3 solid reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Giving one agent too many tools actually &lt;a href="https://github.blog/ai-and-ml/github-copilot/how-were-making-github-copilot-smarter-with-fewer-tools/" rel="noopener noreferrer"&gt;makes it worse&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Specialised agents are easier to tune and scale up&lt;/li&gt;
&lt;li&gt;I can mix and match cheap and expensive models depending on the job. For example, an agent that summarises text doesn't need the same model power as one that tries to be funny and creative.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Building the 3 Agents&lt;/h2&gt;

&lt;p&gt;The Meme agent gets access to my remote MCP server so it can call the meme generator tool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;memeAgent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createAgent&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;GPT-5&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;generateMemeTool&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="na"&gt;systemPrompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Create a funny meme&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we have the Emotion agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;emotionAgent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createAgent&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;GPT-3.5&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;systemPrompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Analyze what the user is feeling&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You might have noticed that I used a stronger model for the Meme agent, since creativity needs a bit more juice and benefits from a later knowledge cutoff. The Emotion agent is doing light work, so I gave it something cheaper like GPT-3.5.&lt;/p&gt;

&lt;p&gt;Finally, the Supervisor agent - this one doesn't generate memes or analyse emotions itself. Instead, I wrapped the above agents as tools and gave them to the Supervisor agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;supervisorAgent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createAgent&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Gemini-3&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;summarizeEmotionTool&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;generateMemeTool&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="na"&gt;systemPrompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;You are a Supervisor that is tasked with creating a meme based on the emotions of the user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Supervisor only sees high-level tools like &lt;code&gt;summarizeEmotionTool&lt;/code&gt; and &lt;code&gt;generateMemeTool&lt;/code&gt;. It's not even aware of the low-level &lt;code&gt;generate_meme&lt;/code&gt; tool buried in the MCP server.&lt;/p&gt;
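&lt;p&gt;The agents-as-tools trick is just a small adapter. Here is a sketch with deliberately simplified &lt;code&gt;Agent&lt;/code&gt; and &lt;code&gt;Tool&lt;/code&gt; shapes - not LangChain's real API:&lt;/p&gt;

```typescript
// Sketch of wrapping a whole agent as a tool that another agent can call.
// The Agent and Tool shapes here are hypothetical simplifications,
// not LangChain's actual types.
interface Agent {
  name: string;
  run: (input: string) => string;
}

interface Tool {
  name: string;
  description: string;
  invoke: (input: string) => string;
}

function agentAsTool(agent: Agent, description: string): Tool {
  return {
    name: agent.name,
    description,
    // When the Supervisor calls the tool, the whole sub-agent runs.
    invoke: (input) => agent.run(input),
  };
}

// The Supervisor would then receive high-level tools such as:
//   agentAsTool(emotionAgent, "Summarise the user's emotions")
//   agentAsTool(memeAgent, "Generate a meme matching an emotion")
```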

&lt;p&gt;Everything ends up neat, modular, and way easier to debug when something inevitably goes wrong.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Try out the Meme agent on my blog: &lt;a href="https://mathewdony.com/blog/orchestrating-ai-agents-to-create-memes" rel="noopener noreferrer"&gt;https://mathewdony.com/blog/orchestrating-ai-agents-to-create-memes&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Links&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/mdmathewdc/ai-agents-orchestrator" rel="noopener noreferrer"&gt;GitHub: ai-agents-orchestrator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://npmjs.com/package/imgflip-meme-mcp" rel="noopener noreferrer"&gt;npm: imgflip-meme-mcp&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.langchain.com/" rel="noopener noreferrer"&gt;Langchain&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://modelcontextprotocol.io" rel="noopener noreferrer"&gt;Model Context Protocol&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;If these agents become self-aware, at least they'll have a sense of humor.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>showdev</category>
      <category>ai</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
