<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: BekahHW</title>
    <description>The latest articles on DEV Community by BekahHW (@bekahhw).</description>
    <link>https://dev.to/bekahhw</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F345658%2Fa72b6b8b-b954-47fb-8919-ab380905f26b.jpg</url>
      <title>DEV Community: BekahHW</title>
      <link>https://dev.to/bekahhw</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bekahhw"/>
    <language>en</language>
    <item>
      <title>How AI Tools Talk to Each Other</title>
      <dc:creator>BekahHW</dc:creator>
      <pubDate>Tue, 31 Mar 2026 15:58:07 +0000</pubDate>
      <link>https://dev.to/bekahhw/how-ai-tools-talk-to-each-other-836</link>
      <guid>https://dev.to/bekahhw/how-ai-tools-talk-to-each-other-836</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;For a more interactive version of this post, visit &lt;a href="https://bekahhw.com/how-ai-tools-communicate" rel="noopener noreferrer"&gt;https://bekahhw.com/how-ai-tools-communicate&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This weekend, my daughter ran in her first high school track meet. One of the other girls' relay teams was disqualified for dropping the baton. I don't know much about track, so I was surprised to learn that dropping the baton can result in a DQ (disqualification). The thing that really sucks is that those girls were the fastest team, even after having to recover the dropped baton. But, at the end of the meet, it doesn't matter how fast each runner is if the baton doesn't make it across the finish line without the team getting DQed. The team has to work together, and the baton is the thing that connects them.&lt;/p&gt;

&lt;p&gt;It's kind of like what's happening when AI tools communicate. The intelligence of each individual tool matters less than whether they can pass information to each other cleanly. And most beginners don't realize this until something breaks and they're staring at an error message with no idea where to start.&lt;/p&gt;

&lt;p&gt;Most AI tool communication happens through a small number of patterns. Once you recognize them, debugging stops feeling like magic and starts feeling like plumbing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Everything is a Message
&lt;/h2&gt;

&lt;p&gt;If you've ever wondered why some AI tools feel instant while others make you wait, or why a multi-step AI workflow sometimes just… stops mid-chain, it comes down to three fundamental communication patterns.&lt;/p&gt;

&lt;p&gt;When one piece of an AI system needs to talk to another, it sends a message. That message is almost always structured as JSON, which sounds intimidating but is really just organized text.&lt;/p&gt;

&lt;p&gt;Think about ordering food at a restaurant. You don't just say "I want stuff." You say "I want a burger, medium, no onions, with fries." That structure is what lets the kitchen actually process your order. JSON is the same idea. It organizes information into labeled fields so the receiving tool knows exactly what it's looking at.&lt;/p&gt;

&lt;p&gt;A simple JSON message might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"search"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"query"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"best pizza in New York"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"results"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The API, or Application Programming Interface, is the agreement between two tools about what fields to expect and what format they'll be in.&lt;/p&gt;

&lt;p&gt;Here's what that looks like in practice. Say you're building a workflow where someone submits a form on your site, and you want an AI to draft a personalized response. Your form tool sends a message to the LLM that might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Jordan"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"question"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"How do I get started with open source?"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"experience_level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"beginner"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The LLM knows to look for those fields because your API agreement says they'll be there. It uses &lt;code&gt;name&lt;/code&gt; to personalize the reply, &lt;code&gt;question&lt;/code&gt; to know what to answer, and &lt;code&gt;experience_level&lt;/code&gt; to calibrate how technical to get.&lt;/p&gt;

&lt;p&gt;Now imagine your form tool sends this instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"username"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Jordan"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"inquiry"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"How do I get started with open source?"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"beginner"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfcoj4ov34efelmoplz6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfcoj4ov34efelmoplz6.png" alt="Field Name mismatch" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The LLM is now confused because it was expecting "name," "question," and "experience_level." It goes looking for name and finds nothing. It goes looking for question and finds nothing. The chain breaks, not because anything was wrong with the content, but because the tools weren't speaking the same language.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sqqdz7c66sr2e43fzwk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sqqdz7c66sr2e43fzwk.png" alt="Field Name Fix" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When something breaks in a tool chain, it's almost always because one tool sent a message the next tool didn't understand. Wrong format. Missing field. Unexpected data type. The fix is rarely complicated. But you have to know that's where to look.&lt;/p&gt;
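&lt;p&gt;One cheap defense is to check each message before handing it to the next tool. Here's a minimal Python sketch of that idea (the field names come from the example above; &lt;code&gt;validate_message&lt;/code&gt; is a made-up helper, not part of any library):&lt;/p&gt;

```python
# The fields the next tool expects, per the API agreement in the example above.
REQUIRED_FIELDS = {"name", "question", "experience_level"}

def validate_message(message: dict) -> list:
    """Return a sorted list of missing field names (empty means the handoff is safe)."""
    return sorted(REQUIRED_FIELDS - message.keys())

good = {"name": "Jordan", "question": "How do I get started?", "experience_level": "beginner"}
bad = {"username": "Jordan", "inquiry": "How do I get started?", "level": "beginner"}

print(validate_message(good))  # []
print(validate_message(bad))   # ['experience_level', 'name', 'question']
```

&lt;p&gt;An empty list means the message is safe to pass along; anything else tells you exactly which field to go fix.&lt;/p&gt;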

&lt;h2&gt;
  
  
  Three Ways AI Tools Communicate
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd9nrx05ykicpzd4ucvd1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd9nrx05ykicpzd4ucvd1.png" alt="3 patterns diagram" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Request/Response
&lt;/h3&gt;

&lt;p&gt;One tool asks, the other answers. You send a prompt, you get text back, you pass it to the next step. Think of it like sending a text message and waiting for a reply before doing anything else.&lt;/p&gt;
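&lt;p&gt;In code, request/response is just a blocking call: send, wait for the whole answer, parse it, move on. A sketch, with &lt;code&gt;call_llm&lt;/code&gt; as a stand-in for a real model API:&lt;/p&gt;

```python
import json

def call_llm(prompt: str) -> str:
    """Stand-in for a real model API call; a real one would go over HTTP."""
    return json.dumps({"reply": f"You asked: {prompt}"})

def ask(prompt: str) -> str:
    # Send the request and block until the full response arrives,
    # then parse it before handing it to the next step.
    raw = call_llm(prompt)
    return json.loads(raw)["reply"]

print(ask("best pizza in New York"))  # You asked: best pizza in New York
```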

&lt;h3&gt;
  
  
  Streaming
&lt;/h3&gt;

&lt;p&gt;Instead of waiting for the full response, the output arrives piece by piece. This is why ChatGPT seems to type its answer in real time rather than making you wait for the whole thing to appear at once. It's useful when you're generating long content or building something that needs to feel responsive.&lt;/p&gt;
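&lt;p&gt;Under the hood, streaming means the caller consumes chunks as they arrive instead of waiting for one big reply. A toy Python sketch (the chunking here is simulated, not a real API):&lt;/p&gt;

```python
def stream_reply(prompt: str):
    """Stand-in for a streaming API: yields the reply in small chunks."""
    reply = f"Here are some thoughts on {prompt}."
    for i in range(0, len(reply), 8):
        yield reply[i:i + 8]   # each chunk arrives as soon as it's ready

# The caller can show each chunk immediately instead of waiting for the whole reply.
pieces = []
for chunk in stream_reply("open source"):
    pieces.append(chunk)       # in a UI, you'd render the chunk here

print("".join(pieces))         # Here are some thoughts on open source.
```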

&lt;h3&gt;
  
  
  Events
&lt;/h3&gt;

&lt;p&gt;Instead of asking and waiting, a tool watches for something to happen and then reacts. A new email arrives. A file is uploaded. A timer fires. The agent picks it up and acts without anyone pressing a button. This is how you build things that run in the background autonomously.&lt;/p&gt;
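&lt;p&gt;The core of the event pattern is a dispatcher: handlers register for an event name and run whenever that event fires, no button required. A minimal sketch (the event name and handler here are invented for illustration):&lt;/p&gt;

```python
# A tiny event dispatcher: handlers register for an event name,
# and run whenever that event is emitted.
handlers = {}

def on(event_name):
    def register(fn):
        handlers.setdefault(event_name, []).append(fn)
        return fn
    return register

def emit(event_name, payload):
    results = []
    for fn in handlers.get(event_name, []):
        results.append(fn(payload))
    return results

@on("email_received")
def draft_reply(email):
    return f"Drafting a reply to: {email['subject']}"

print(emit("email_received", {"subject": "Meeting tomorrow"}))
# ['Drafting a reply to: Meeting tomorrow']
```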

&lt;p&gt;Most builders start with request/response and eventually add streaming when their interface feels sluggish, or events when they want something to run without manual triggering. But the real magic happens when you combine them. You can have a tool chain that starts with an event trigger, streams output to the user, and then sends a final request/response message to update a database.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Breaks Multi-Step Chains
&lt;/h2&gt;

&lt;p&gt;Each of those three patterns works fine in isolation. The problems show up when you chain tools together. The good news is that tool chains fail in very predictable ways, so if you know the failure patterns, you know where to look. Here are the four most common ones.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzmdkw89jff49dohdc5w3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzmdkw89jff49dohdc5w3.png" alt="Diagnosing broken chain" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Context window overflow.
&lt;/h3&gt;

&lt;p&gt;Every LLM can only "see" a certain amount of text at once. Imagine trying to read a book but you can only ever see 10 pages at a time. If you keep shoving earlier chapters into the window to maintain "memory," you eventually run out of room for the chapter you're actually trying to read. Builders who chain multiple tools together can accidentally fill the context window with outputs from earlier steps, leaving no room for the actual task. Smart builders decide what to pass forward and what to leave behind.&lt;/p&gt;
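&lt;p&gt;"Deciding what to pass forward" can be as simple as keeping the newest messages that fit a budget. A toy sketch that counts characters for simplicity (real systems count tokens; &lt;code&gt;trim_history&lt;/code&gt; is a hypothetical helper):&lt;/p&gt;

```python
def trim_history(messages, budget):
    """Keep the newest messages whose combined length fits in the budget."""
    kept = []
    used = 0
    for msg in reversed(messages):        # walk newest-first so recent turns win
        if used + len(msg) > budget:
            break                         # window is full: older messages get dropped
        kept.append(msg)
        used += len(msg)
    kept.reverse()                        # restore chronological order
    return kept

history = ["step one output", "step two output", "the actual task"]
print(trim_history(history, 35))          # ['step two output', 'the actual task']
```

&lt;p&gt;The oldest output gets dropped first, leaving room for the task you're actually trying to do.&lt;/p&gt;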

&lt;h3&gt;
  
  
  Malformed outputs.
&lt;/h3&gt;

&lt;p&gt;If step three in your chain expects an organized JSON object and step two returns a casual paragraph of text, step three breaks. It's like asking someone to fill out a form, but instead of using the form fields, they just write you a letter. The information might be there, but the system can't process it. This is why explicitly telling the LLM how to format its output, something like "respond only in JSON with these exact fields," matters more than most people expect.&lt;/p&gt;
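&lt;p&gt;On the receiving side, it helps to fail loudly instead of passing garbage downstream. A small sketch (the function name is hypothetical):&lt;/p&gt;

```python
import json

def parse_step_output(raw: str) -> dict:
    """Parse a step's output, failing loudly (with the raw text attached)
    instead of handing garbage to the next step."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        raise ValueError(f"Step returned non-JSON output: {raw[:80]!r}")
    if not isinstance(data, dict):
        raise ValueError("Step returned JSON, but not an object with fields")
    return data

print(parse_step_output('{"summary": "three key points"}'))
# {'summary': 'three key points'}
```

&lt;p&gt;Now a casual paragraph from step two raises an error that names the problem, instead of silently breaking step three.&lt;/p&gt;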

&lt;h3&gt;
  
  
  Latency compounding.
&lt;/h3&gt;

&lt;p&gt;Each step takes time. Three tools that each take two seconds add up to at least six seconds, plus overhead. If you're building something people interact with in real time, that adds up fast. Builders solve this with caching, which means storing results you've already computed so you don't recalculate them, and parallelism, which means running independent steps at the same time instead of one after another.&lt;/p&gt;
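&lt;p&gt;Both fixes fit in a few lines of Python with the standard library: &lt;code&gt;lru_cache&lt;/code&gt; stores results you've already computed, and a thread pool runs independent steps at the same time. A sketch (the slow call is shortened so the example runs quickly):&lt;/p&gt;

```python
import time
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

@lru_cache(maxsize=None)
def slow_step(query: str) -> str:
    """Stand-in for a slow tool call; lru_cache makes repeat calls free."""
    time.sleep(0.1)   # shortened from a realistic 2 seconds
    return f"result for {query}"

queries = ["a", "b", "c"]

# Independent steps run at the same time instead of one after another.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(slow_step, queries))

print(results)  # ['result for a', 'result for b', 'result for c']
```

&lt;p&gt;Calling &lt;code&gt;slow_step("a")&lt;/code&gt; again after this returns instantly, because the cached result is reused.&lt;/p&gt;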

&lt;h3&gt;
  
  
  Vague instructions at the orchestration level.
&lt;/h3&gt;

&lt;p&gt;The LLM decides which tool to call next based on the instructions you've given it. Vague instructions lead to the wrong tool getting called, or the right tool getting called with the wrong inputs. Think of it like giving someone directions. "Head toward the big building" leaves too much room for interpretation. "Turn left at the red light, go two blocks, turn right at the gas station" gets you where you need to go. The precision of your orchestration prompt determines whether your agent behaves reliably or keeps guessing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Mental Shift That Changes How You Build with AI
&lt;/h2&gt;

&lt;p&gt;When you start thinking in tool chains, you stop asking "what can I get the AI to do?" and start asking "what does each step need to receive, and what does it need to output?"&lt;/p&gt;

&lt;p&gt;That's a systems question. And it's actually a more useful frame than prompt craft alone, because it forces you to get specific about your requirements before you write a single instruction.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>beginners</category>
    </item>
    <item>
      <title>AI Vocab 102</title>
      <dc:creator>BekahHW</dc:creator>
      <pubDate>Tue, 24 Mar 2026 17:41:45 +0000</pubDate>
      <link>https://dev.to/bekahhw/ai-102-4o0</link>
      <guid>https://dev.to/bekahhw/ai-102-4o0</guid>
      <description>&lt;p&gt;If you read &lt;a href="https://dev.to/bekahhw/ai-vocab-101-eh2"&gt;the vocabulary post&lt;/a&gt;, you know what a prompt is. You know the difference between a model and a model family. You've got the words now.&lt;/p&gt;

&lt;p&gt;This post is about what to do with them.&lt;/p&gt;

&lt;p&gt;Having vocabulary for the pieces doesn't automatically tell you how the pieces move. You can know what a prompt is and still write ones that produce wildly inconsistent results. You can understand what an agent is and still not know why yours keeps breaking at step three. The gap between "it kind of works" and "it actually works" isn't usually a vocabulary problem anymore. It's a structure problem.&lt;/p&gt;

&lt;p&gt;That structure comes down to three things and how they talk to each other.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbc2st5wt614h124vezy9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbc2st5wt614h124vezy9.png" alt="Diagram showing the three components of an AI system: the model, the prompt, and the tools" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These three concepts build on each other. You cannot have a workflow without prompts. You cannot have tool chaining without workflows. Understanding them in order is the fastest path to building things that actually behave the way you intended.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Prompt?
&lt;/h2&gt;

&lt;p&gt;A prompt is your instruction to the LLM. It's the text you write before you press send. But it's also a lot more than that, because the LLM doesn't "know" what you mean the way another person would. It pattern-matches on what you've written and generates the most statistically likely useful response.&lt;/p&gt;

&lt;p&gt;That sounds mechanical. And it is. But it's also why how you write the prompt changes the output dramatically.&lt;/p&gt;

&lt;p&gt;Think of it like talking to a contractor. "Build me a kitchen" and "Build me a 12x14 kitchen with white shaker cabinets, quartz countertops, and an island with seating for four" will get you very different results, even if you're talking to the same person.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq8fwk6zplm23kmpu26t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq8fwk6zplm23kmpu26t.png" alt="anatomy of a prompt diagram" width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The LLM fills in whatever you leave blank. Sometimes that's fine. Often it's the source of that feeling when you get a response that's almost what you wanted but weirdly off.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an AI Workflow?
&lt;/h2&gt;

&lt;p&gt;A workflow is what happens when you stop treating the AI like a single-shot answer machine and start treating it like a collaborator on a multi-step process.&lt;/p&gt;

&lt;p&gt;Most real tasks aren't one prompt deep. "Write a blog post for me" sounds like one instruction, but if you actually want a good output, it's more like: research the topic, outline the structure, draft the intro, write the body, edit for tone, format for publishing. That's six distinct steps.&lt;/p&gt;

&lt;p&gt;A workflow is those steps, defined in sequence. The output of one step becomes the input of the next.&lt;/p&gt;
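&lt;p&gt;In code, that's just a pipeline: a list of steps where each function's output becomes the next function's input. A sketch with stand-in functions in place of real prompts:&lt;/p&gt;

```python
# Each function stands in for one prompt to the model.
def research(topic):
    return f"notes on {topic}"

def outline(notes):
    return f"outline built from {notes}"

def draft(outline_text):
    return f"draft following {outline_text}"

def run_workflow(topic, steps):
    result = topic
    for step in steps:
        result = step(result)   # output of one step becomes input of the next
    return result

print(run_workflow("open source", [research, outline, draft]))
# draft following outline built from notes on open source
```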

&lt;p&gt;This is the shift that changes everything for people who are building with AI seriously. You stop asking "what should I prompt?" and start asking "what are the steps this task actually requires?"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4udezkq72yq2t922jqi8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4udezkq72yq2t922jqi8.png" alt="workflow diagram" width="800" height="207"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you've been frustrated that the AI doesn't produce what you actually want in one shot, this is probably why. You're expecting one step to do the work of five.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Tool Chaining?
&lt;/h2&gt;

&lt;p&gt;Tool chaining is what happens when you connect the AI to other tools, and those tools pass information back and forth automatically. The AI isn't just generating text. It's calling a search API, reading the results, feeding those results into the next prompt, then writing output to a database or sending an email.&lt;/p&gt;

&lt;p&gt;Each tool in that chain does one thing. The AI reasons about what tool to use next and what to pass to it.&lt;/p&gt;

&lt;p&gt;Think of it like an assembly line where the AI is the foreman deciding which station does what, and in what order.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4ut8pssrhueqzdw57l5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4ut8pssrhueqzdw57l5.png" alt="tool chaining diagram" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The difference between a workflow and tool chaining is that a workflow can be manual. You can paste outputs from step to step yourself. Tool chaining is when that handoff becomes automatic, which is what people mean when they start talking about "AI agents."&lt;/p&gt;
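&lt;p&gt;A toy version of that automatic handoff: each tool returns a structured message naming the next action, and a loop dispatches it. Everything here (the tool names, the message shape) is invented for illustration:&lt;/p&gt;

```python
# Tool chaining: a registry of tools plus a loop that dispatches each
# structured message to the right tool automatically.
def search_web(args):
    return {"action": "summarize", "args": {"text": f"results for {args['query']}"}}

def summarize(args):
    return {"action": "done", "args": {"summary": f"summary of {args['text']}"}}

TOOLS = {"search_web": search_web, "summarize": summarize}

def run_chain(message):
    while message["action"] != "done":
        tool = TOOLS[message["action"]]   # the message itself names the next tool
        message = tool(message["args"])
    return message["args"]

print(run_chain({"action": "search_web", "args": {"query": "best pizza"}}))
# {'summary': 'summary of results for best pizza'}
```

&lt;p&gt;No one pastes anything between steps; the handoff is the message format itself.&lt;/p&gt;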

&lt;h2&gt;
  
  
  Putting It All Together
&lt;/h2&gt;

&lt;p&gt;Here's what a lot of people miss: these three things aren't separate techniques. They're nested.&lt;/p&gt;

&lt;p&gt;Every tool chain is made of workflows. Every workflow is made of prompts. If your prompts are vague, your workflows produce inconsistent outputs. If your workflows aren't structured, your tool chains break in unpredictable places.&lt;/p&gt;

&lt;p&gt;This is not just about being more technical. It's about building something that actually behaves the same way twice.&lt;/p&gt;

&lt;p&gt;What are you building right now where the output feels inconsistent? That inconsistency probably lives in one of these three layers. &lt;/p&gt;

&lt;p&gt;The people who move forward aren’t smarter. They just start thinking in systems instead of prompts.&lt;/p&gt;

&lt;p&gt;In the next post, we’ll make that concrete by walking through the actual tools and how they pass information between each other.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>beginners</category>
    </item>
    <item>
      <title>AI Vocab 101</title>
      <dc:creator>BekahHW</dc:creator>
      <pubDate>Thu, 19 Mar 2026 22:25:31 +0000</pubDate>
      <link>https://dev.to/bekahhw/ai-vocab-101-eh2</link>
      <guid>https://dev.to/bekahhw/ai-vocab-101-eh2</guid>
      <description>&lt;p&gt;I've been having a lot of conversations with non-tech people recently about AI. What I keep running into is the same pattern: smart, curious people who are genuinely trying to understand what's happening, but who don't have the vocabulary to name what they don't know. And when you can't name it, you can't ask the right question, which means you stay stuck at the surface.&lt;/p&gt;

&lt;p&gt;The car wash test is a perfect example of this.&lt;/p&gt;

&lt;p&gt;A few months ago, screenshots flooded social media of people asking ChatGPT, Claude, and Grok a deceptively simple question: the car wash is 40 meters from my house. Should I walk or drive? The chatbots said walk, missing the obvious catch: you need the car at the car wash.&lt;/p&gt;

&lt;p&gt;What many people in the conversation didn't understand is that the people getting bad results weren't using a bad AI. They were using a &lt;em&gt;lesser model&lt;/em&gt;, probably the free tier of a product, without knowing that's what they were doing. And without vocabulary, there's no way to even articulate that distinction.&lt;/p&gt;

&lt;p&gt;Here's likely what actually happened. "ChatGPT" isn't one thing. It's a product that runs on a &lt;em&gt;family&lt;/em&gt; of models. In ChatGPT, there are three models: GPT-5 Instant, GPT-5 Thinking, and GPT-5 Pro, and a routing layer selects which to use based on your question. On top of that, the current flagship family looks like this:&lt;/p&gt;

&lt;p&gt;Think of GPT-5.4 like a full-service restaurant kitchen. GPT-5.4 mini is the fast-casual version: quicker, cheaper, good enough for most everyday questions. GPT-5.4 nano is even lighter, like a food truck setup. And GPT-5.4 pro is the version that takes extra time to think through the really hard problems, like a chef who slow-cooks instead of microwaving.&lt;/p&gt;

&lt;p&gt;The key difference: free users don't get the full kitchen. They get routed to whichever option is fastest and cheapest at that moment. That version &lt;em&gt;can&lt;/em&gt; answer a car wash question correctly, but it's also more likely to give inconsistent results on anything with nuance. Paying users get reliable access to the better models.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flosfan1c1n1xj719jz63.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flosfan1c1n1xj719jz63.png" alt="GPT 5 model explanation" width="800" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So when someone says "ChatGPT told me X" and someone else says "ChatGPT told me Y," they may have been talking to completely different models, without either of them knowing it. That's not a gotcha. That's just what happens when you don't have the vocabulary to describe what you're actually using.&lt;/p&gt;

&lt;p&gt;This is why vocabulary matters. Not to be pedantic about terminology, but because the words give you handles on things you can actually change.&lt;/p&gt;

&lt;p&gt;Here are the terms that help close that gap.&lt;/p&gt;

&lt;h2&gt;
  
  
  What AI Is
&lt;/h2&gt;

&lt;p&gt;Three words that get used interchangeably. They shouldn't be.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Artificial intelligence&lt;/em&gt; is the broad category: any system performing tasks we'd normally associate with human reasoning, like recognizing images, detecting fraud, or recommending what to watch next. LLMs are one kind of AI. The algorithm shaping your social media feed is another kind entirely. Think of AI as "transportation." It's the whole category. LLMs are like cars specifically, while recommendation algorithms (for example, what shows to watch next) are like trains.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A large language model&lt;/em&gt;, or LLM, is AI trained specifically on enormous amounts of text. It works with words, reading, predicting, generating. GPT-5.4, Claude, Gemini, Llama: all LLMs.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A model&lt;/em&gt; is the specific trained artifact underneath the product. When someone asks "which model are you using," they're not asking about the company. They want the exact version, because different models in the same family behave differently, cost differently, and have different knowledge cutoffs. This is like asking whether you're driving a 2024 Civic or a 2026 Accord. They might be the same manufacturer, but very different capabilities.&lt;/p&gt;

&lt;p&gt;These nest. AI contains LLMs. LLMs come in specific models. They are not synonyms.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6ipdzo86csaw52sckic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6ipdzo86csaw52sckic.png" alt="AI, LLMs, and models as nested categories" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How the Model Thinks
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Token.&lt;/strong&gt; The LLM doesn't read words the way you do. It reads tokens: chunks of text that might be a full word, part of a word, a punctuation mark, or a space. Everything about LLM capacity and pricing is measured in tokens, not words or characters. Think of tokens like syllables in speech. Sometimes they're a whole word ("cat"), sometimes they're a fragment ("un-break-able").&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context window.&lt;/strong&gt; The total amount of text, in tokens, the model can hold in working memory at once. Your prompt, the conversation history, any documents you've passed in, the response being generated: all of it counts. When the window fills, older content gets dropped. This is why long conversations sometimes feel like the AI forgot something from earlier. It didn't forget. It ran out of room. Imagine a whiteboard where you can only write so much before you have to start erasing from the top to make space at the bottom.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0a8g8d563p7naxk94rdl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0a8g8d563p7naxk94rdl.png" alt="Diagram showing context window filling up over a conversation" width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hallucination.&lt;/strong&gt; When the model generates text that is confident, fluent, and wrong. Not lying: it has no concept of truth or intent to deceive. It's pattern-matching on what a plausible response looks like, and sometimes that leads somewhere inaccurate. Hallucinations range from small factual errors to completely fabricated citations. Knowing this term means you can stop calling everything you distrust a "hallucination" and start distinguishing between "the model reasoned badly" versus "the model stated something false with full confidence." It's like when you confidently give someone directions to a restaurant that closed three years ago. It's not malicious, just working from outdated information.&lt;/p&gt;

&lt;h2&gt;
  
  
  How You Work With It
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Prompt.&lt;/strong&gt; Your instruction to the model. Everything it receives before it starts generating. Prompt quality is one of the highest-leverage variables in any AI system. Vague prompts don't just produce vague outputs: they produce unpredictable ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent.&lt;/strong&gt; An AI system that can take actions, not just generate text. It has access to tools (search, email, databases, APIs) and decides which to use and in what order. The defining characteristic is that it can affect the world outside the conversation. If an LLM is like a consultant who gives advice, an agent is like an assistant who can actually book your flight, send the email, and update the spreadsheet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Harness.&lt;/strong&gt; The scaffolding you build around an LLM to make it useful in a specific context. System prompt, retrieval logic, error handling, tool connections: all of it together. The model is the engine. The harness is everything that makes it go where you want. Think of a Formula 1 car: the engine is powerful, but useless without the steering wheel, brakes, suspension, and chassis that let you actually control it.&lt;/p&gt;

&lt;h3&gt;
  
  
  More Advanced Terms If You're Building With AI
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;API (Application Programming Interface).&lt;/strong&gt; The formal connection point between two pieces of software. This isn't AI-specific. It's how all modern software connects, from weather apps to payment processors. But it's essential vocabulary for AI because almost every AI tool you use is either calling an API (to get the model's response) or offering one (so other tools can connect to it). When tools say they "integrate," they almost always mean they share an API connection. Think of it like the electrical outlet in your wall. It's a standardized interface that lets different appliances plug in and get power without rewiring your house each time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MCP (Model Context Protocol).&lt;/strong&gt; A way to let AI access your stuff: files, calendar, email. It's trying to make these connections easier, but it's early days and each company still does it a bit differently. You might see tools advertising MCP support. Just know it means the tool is trying to play nice with AI, even if the setup isn't always smooth yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Lesson from the Car Wash
&lt;/h2&gt;

&lt;p&gt;The conversation around that test wasn't really about whether AI could reason through a simple question. It was about people evaluating something they couldn't fully name.&lt;/p&gt;

&lt;p&gt;If you know the difference between a model and a model family, you ask "which version were they using?" instead of "is AI smart or dumb?" If you understand context windows, you stop blaming the AI when it forgets something from earlier in a long conversation. If you know what hallucination actually means, you stop using it as a catch-all for every output you don't trust.&lt;/p&gt;

&lt;p&gt;That's what vocabulary does. It turns vague frustration into specific, solvable problems.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>beginners</category>
    </item>
    <item>
      <title>AI Has Entered the AI Development Loop</title>
      <dc:creator>BekahHW</dc:creator>
      <pubDate>Wed, 04 Mar 2026 19:12:10 +0000</pubDate>
      <link>https://dev.to/bekahhw/ai-has-entered-the-ai-development-loop-2f5b</link>
      <guid>https://dev.to/bekahhw/ai-has-entered-the-ai-development-loop-2f5b</guid>
      <description>&lt;p&gt;It feels like we crossed a recursive threshold in February and the internet yawned.&lt;/p&gt;

&lt;p&gt;In February 2026, OpenAI published this in &lt;a href="https://openai.com/index/introducing-gpt-5-3-codex/" rel="noopener noreferrer"&gt;their blog&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"GPT-5.3-Codex is our first model that was instrumental in creating itself... our team was blown away by how much Codex was able to accelerate its own development."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That line matters more than most of the benchmarks that followed.&lt;/p&gt;

&lt;p&gt;It doesn’t mean the model designed itself or trained itself. Humans still ran the research program. But it does mean something new: a model helping debug the experiments, analyze the results, and build the internal tools used to develop the next model.&lt;/p&gt;

&lt;p&gt;In other words, AI has started participating in the process that improves AI. Not designing itself. Not training itself. But participating directly in the development loop.&lt;/p&gt;

&lt;p&gt;It’s a subtle shift, but it changes the development loop in ways people haven’t fully processed yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Happened with Codex
&lt;/h2&gt;

&lt;p&gt;Early versions of Codex were used by the team to debug and monitor their own training runs, track patterns, propose fixes, and build custom apps for researchers to compare behaviors against prior models. The model managed deployment work, fixed bugs, handled cache issues, and scaled dynamically during traffic surges. It built data pipelines, visualized thousands of data points, and summarized insights in minutes.&lt;/p&gt;

&lt;p&gt;Humans still set the goals and approved the changes. But the feedback loop was tight enough that the team described themselves as "blown away" by how much it accelerated their workflow.&lt;/p&gt;

&lt;p&gt;The important part isn’t that the model “built itself.” It didn’t.&lt;/p&gt;

&lt;p&gt;The important part is that AI is now participating in the same engineering process that produces the next generation of AI.&lt;/p&gt;

&lt;p&gt;For decades researchers have talked about recursive improvement — systems that help design or improve their successors. Until recently that mostly lived in theory or narrow experiments like AutoML and evolutionary optimization.&lt;/p&gt;

&lt;p&gt;What’s different here is that the loop has moved from theory into the practical mechanics of AI development.&lt;/p&gt;

&lt;p&gt;A model helping run experiments.&lt;br&gt;&lt;br&gt;
A model helping debug infrastructure.&lt;br&gt;&lt;br&gt;
A model helping analyze results that feed into the next model.&lt;/p&gt;

&lt;p&gt;That shortens the distance between building an AI system and improving it.&lt;/p&gt;

&lt;p&gt;And once that loop tightens enough, the limiting factor on progress starts to shift.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Near-Term Is Already Here
&lt;/h2&gt;

&lt;p&gt;Inside major AI labs, development workflows are already changing.&lt;/p&gt;

&lt;p&gt;Leadership comments and internal reports suggest that a large share of internal code is now AI-assisted. Engineers increasingly describe their role less as writing every line of code and more as supervising systems that generate, test, and iterate on it.&lt;/p&gt;

&lt;p&gt;GPT-5.3-Codex is also the first OpenAI model rated "High capability" under their Preparedness Framework specifically for identifying software vulnerabilities. That’s one reason they launched a $10M API credit program aimed at security researchers the same week.&lt;/p&gt;

&lt;p&gt;But the more important shift is development velocity.&lt;/p&gt;

&lt;p&gt;When a model helps build the tools, pipelines, and analyses that support AI research, the iteration cycle compresses. Experiments run faster. Failures get diagnosed quicker. Teams can test more ideas in the same amount of time.&lt;/p&gt;

&lt;p&gt;That's not a new pattern in software engineering. Compilers eventually compile themselves. Build systems generate other build systems. Tooling improves the tooling that follows it.&lt;/p&gt;

&lt;p&gt;What’s new is the intelligence now sitting inside that loop.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Research Velocity Bottleneck
&lt;/h3&gt;

&lt;p&gt;What makes this significant is research velocity. Progress in AI has often been limited less by ideas than by how quickly researchers can run experiments, interpret results, and try again. Training runs take time. Infrastructure breaks. Data pipelines fail. Evaluations produce thousands of signals that humans have to sift through before the next iteration begins.&lt;/p&gt;

&lt;p&gt;When a model starts helping with those steps — debugging experiments, summarizing outcomes, generating analysis tools — the iteration cycle compresses. More experiments can run in the same amount of time. More hypotheses get tested. The frontier moves not because any single model is dramatically smarter, but because the feedback loop around improvement gets faster.&lt;/p&gt;

&lt;p&gt;AI development has historically been limited by compute, data, and human research time. If part of that research loop becomes automated, the bottleneck shifts again.&lt;/p&gt;

&lt;p&gt;This pattern shows up repeatedly in technological progress. Semiconductor advances accelerated when fabrication and testing cycles became automated. Software development accelerated when continuous integration systems started running builds and tests automatically. In both cases, the breakthrough was both better ideas and shortening the loop between trying something and learning whether it worked.&lt;/p&gt;

&lt;p&gt;AI entering its own development loop looks similar. When the systems being improved start helping run the improvement process, iteration speeds up. And when iteration speeds up, progress compounds.&lt;/p&gt;

&lt;p&gt;The question now isn’t whether a single model is dramatically smarter than the last one. It's how quickly the next iteration can happen.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Medium-Term Is Where It Gets Uncomfortable
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://x.com/sama/status/1983584366547829073?lang=en" rel="noopener noreferrer"&gt;Sam Altman has publicly said they have a goal of an "AI research intern" capability by September 2026 and "true automated AI researcher" by March 2028&lt;/a&gt;. As the loop tightens,  the cost of pushing the frontier drops. This will mean that either more companies can compete or the leaders pull further ahead because their iteration cycles compound faster. Meanwhile, parts of the engineering stack are already shifting.&lt;/p&gt;

&lt;p&gt;That transition isn’t happening slowly. As we’ve seen repeatedly with technological shifts, organizations often adapt under competitive pressure rather than through careful planning, which tends to produce messy transitions and uneven outcomes. And the ripple effects won’t stop at tech. Any field built around complex, repeatable knowledge work will feel some version of the same pressure.&lt;/p&gt;

&lt;p&gt;What felt like a 5–10 year horizon for broad disruption is now measured in 1–3 years for many industries. This is why the anxiety feels bigger than “just devs.” It’s not isolated; it’s systemic acceleration.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Long-Term Is the Part Nobody Wants to Say Out Loud
&lt;/h2&gt;

&lt;p&gt;If AI systems eventually assist with the full research loop, the feedback cycle tightens further: hypothesis generation, experiment design, training runs, evaluation, all of it. That doesn’t automatically mean runaway intelligence, but capabilities could compound in ways that are genuinely hard to reason about in advance. And it does mean the systems advancing AI become partially automated themselves.&lt;/p&gt;

&lt;p&gt;That has implications people don’t fully understand yet.&lt;/p&gt;

&lt;p&gt;None of this is guaranteed. What's not up for debate is that when AI writes the code that trains the next AI, auditing gets harder. Tiny undetected biases, optimization pressures, and specification gaming can propagate across iterations.&lt;/p&gt;

&lt;p&gt;OpenAI and others have safeguards in place. The real question is whether those safeguards scale as quickly as the systems themselves.&lt;/p&gt;

&lt;p&gt;That’s not a rhetorical question. It’s an open one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why You Should Care Now
&lt;/h2&gt;

&lt;p&gt;My instinct is usually to frame these shifts in ways that feel manageable, maybe even exciting. And some of it is exciting. But preparing people for what's actually coming means being honest that the timeline is compressed, the impacts are uneven, and the people least prepared for the disruption will feel it most.&lt;/p&gt;

&lt;p&gt;The anxiety you might feel reading this isn't irrational. It's information. The question is what you do with it.&lt;/p&gt;

&lt;p&gt;The roles that will matter most aren't necessarily the ones that write the most code. They're the ones that can evaluate what AI produces critically, catch what automated systems miss, and understand enough about the systems they're building on to ask the right questions. That's worth investing in now, not when the next wave lands.&lt;/p&gt;

&lt;p&gt;Machines are now helping build the machines that come after them.&lt;/p&gt;

&lt;p&gt;That’s not the future. That’s February 2026.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>AI Ate the Homework: What Communities Are Actually For Now</title>
      <dc:creator>BekahHW</dc:creator>
      <pubDate>Fri, 27 Feb 2026 22:05:16 +0000</pubDate>
      <link>https://dev.to/bekahhw/ai-ate-the-homework-what-communities-are-actually-for-now-11hi</link>
      <guid>https://dev.to/bekahhw/ai-ate-the-homework-what-communities-are-actually-for-now-11hi</guid>
      <description>&lt;p&gt;When I was learning to code, one of the things that motivated me most was the sense of community. I found a ton of value in the Twitter community, where people answered questions, shared resources, and celebrated each other's wins. I also found incredible support in online coding communities. A huge part of this was the ability to ask questions and get help from others who had been where I was. They brought empathy and experience in a way that documentation and tutorials couldn't, and made me feel like I could do it even when I didn't believe that.&lt;/p&gt;

&lt;p&gt;A huge part of Virtual Coffee's early growth was people finding each other to ask questions, get help, and learn together. It was a safe space to say "I don't know how to do this" or "Is this interview experience 'normal'?" and have someone patiently walk you through it.&lt;/p&gt;

&lt;p&gt;Not only did having your question answered give you the information you needed, it gave you validation. You weren't alone. You were struggling with something that other people struggled with too. But it also felt good to help. Your own growth felt tangible when you were able to answer someone else's question. Successful communities ran on collective knowledge sharing, mutual aid, and opportunities to learn together.&lt;/p&gt;

&lt;p&gt;By 2024, something had fundamentally shifted.&lt;/p&gt;

&lt;p&gt;ChatGPT could answer your JavaScript question in three seconds. Claude could debug your code and explain why. The questions that used to fill Discord and Slack, "how do I center a div?" or "what's the difference between let and const?" or "why isn't my API call working?" suddenly had a faster, always-available answer. And now, you prompt your LLM and get code that works, explanations that make sense, and debugging help without needing to wait for someone to see your question and respond.&lt;/p&gt;

&lt;p&gt;And with that shift came a new tension nobody quite knew how to name: the growing frustration when someone asks a question that AI could have answered, and the growing anxiety about asking questions when you're not sure if you've "done enough work first."&lt;/p&gt;

&lt;p&gt;The bar rose.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers Tell the Story
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://developers.slashdot.org/story/25/01/10/1729248/stackoverflow-usage-plummets-as-ai-chatbots-rise" rel="noopener noreferrer"&gt;Stack Overflow traffic dropped 14% month-over-month from March to April 2023, right after GPT-4 launched. By December 2024, new questions had dropped 60% year-over-year. The volume of questions is down 75% from its 2017 peak and 76% since ChatGPT's launch in November 2022.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Developers weren't being difficult. They were being rational.&lt;/p&gt;

&lt;p&gt;Why post a question on Stack Overflow and wait for someone to answer when ChatGPT gives you working code in seconds? Why search through Discord message history when Claude can explain the concept in plain English, tailored to your specific context? Why ask a community and risk judgment and assholes on the internet when AI is always available, non-judgmental, and fast?&lt;/p&gt;

&lt;p&gt;AI could now handle most of the questions communities used to answer. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Unspoken Contract Changed
&lt;/h2&gt;

&lt;p&gt;Here's what this shift did to the implicit contract of online communities:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In 2020-2021:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You asked questions, even basic ones, and people were happy to help&lt;/li&gt;
&lt;li&gt;The community was the primary resource for learning and problem-solving&lt;/li&gt;
&lt;li&gt;At Virtual Coffee, we embraced horizontal mentorship—everyone could ask and everyone could answer&lt;/li&gt;
&lt;li&gt;Asking for help was normal and expected&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;In 2025-2026:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You're expected to try AI first before "wasting" people's time&lt;/li&gt;
&lt;li&gt;The community is for questions AI &lt;em&gt;can't&lt;/em&gt; answer&lt;/li&gt;
&lt;li&gt;There's an unspoken frustration at questions ChatGPT could handle&lt;/li&gt;
&lt;li&gt;Asking for help requires demonstrating you've done your homework&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We started to see community members who were tired of answering the same basic questions when AI could do it faster.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Communities Are Actually For Now
&lt;/h2&gt;

&lt;p&gt;So if AI handles basic questions, what are communities actually &lt;em&gt;for&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;The answer should be: judgment, experience, connection, and the questions AI can't answer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Should I take this job or stay at my current role?"&lt;/li&gt;
&lt;li&gt;"How do you actually work with this technology in production?"&lt;/li&gt;
&lt;li&gt;"What's the culture like at {company}?"&lt;/li&gt;
&lt;li&gt;"I'm burned out. How did you work through it?"&lt;/li&gt;
&lt;li&gt;"Here's this cool thing I built and I think it could help others. What do you think?"&lt;/li&gt;
&lt;li&gt;"How do you navigate sick kids and a feature launch???"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are inherently human questions requiring human judgment, lived experience, and contextual understanding. They're the questions that make communities valuable. They're the questions that foster connection and belonging. They're the questions that create shared understanding and collective wisdom.&lt;/p&gt;

&lt;p&gt;But here's the problem: many communities haven't consciously made this shift. They're still structured around Q&amp;amp;A patterns that AI now handles better. They're still trying to be "the place developers get answers" when that race is lost.&lt;/p&gt;

&lt;p&gt;Product communities are particularly stuck. They're trying to serve two populations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Drive-by users&lt;/strong&gt; who just need their build to work and will never engage beyond that&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Community seekers&lt;/strong&gt; who want connection, depth, and belonging&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These need different things. The drive-by user benefits from AI-first + good docs. The community seeker needs human connection. Trying to serve both with the same strategy doesn't work.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Sustainability Crisis
&lt;/h2&gt;

&lt;p&gt;This creates a sustainability problem that's quietly breaking communities:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For community builders:&lt;/strong&gt;&lt;br&gt;
You're caught between welcoming everyone and managing finite volunteer energy. When someone asks a question ChatGPT could answer in 3 seconds, do you answer it (and enable learned helplessness) or redirect them (and risk seeming unwelcoming)? There's no good answer, and the constant navigation is exhausting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For community members:&lt;/strong&gt;&lt;br&gt;
You're navigating unwritten rules about what's "appropriate" to ask. You feel guilty asking for help because maybe you didn't try hard enough. You see others get redirected to AI and worry you'll be next. The psychological safety that made communities work is eroding.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Uncomfortable Questions
&lt;/h2&gt;

&lt;p&gt;Where does this leave us? With some hard questions we need to actually ask:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;About AI expectations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How do we honor that AI makes many questions obsolete without making people feel unwelcome?&lt;/li&gt;
&lt;li&gt;What's our responsibility when not everyone has the same AI access?&lt;/li&gt;
&lt;li&gt;How do we shift from "Q&amp;amp;A community" to "judgment and experience community"?&lt;/li&gt;
&lt;li&gt;What questions actually need humans now?&lt;/li&gt;
&lt;li&gt;Is "try ChatGPT first" gatekeeping or reasonable boundary?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;About community purpose:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are we trying to be everything when we should be something specific?&lt;/li&gt;
&lt;li&gt;Can drive-by Q&amp;amp;A and deep connection coexist in one space?&lt;/li&gt;
&lt;li&gt;What happens when 80% of your community just wants fast answers?&lt;/li&gt;
&lt;li&gt;How do we serve people who need basic help without burning out the helpers?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;About sustainability:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can volunteer-run communities survive when the "easy" questions (that felt good to answer) are gone?&lt;/li&gt;
&lt;li&gt;How do we make helping feel rewarding again when all that's left are hard questions?&lt;/li&gt;
&lt;li&gt;What's the minimum viable community when AI handles the basics?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What Actually Works Now
&lt;/h2&gt;

&lt;p&gt;The communities thriving in 2026 aren't the ones fighting AI or pretending it doesn't exist. They're the ones that:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accepted the shift in purpose.&lt;/strong&gt; They're not trying to be Stack Overflow. They're spaces for nuanced discussion, career advice, lived experience, and human judgment calls. They've made peace with AI handling the basics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stayed welcoming while having boundaries.&lt;/strong&gt; "Hey, ChatGPT might be faster for this!" is fine. "Why are you wasting our time?" is not. There's a way to redirect to AI tools while maintaining psychological safety.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Separated transaction from connection.&lt;/strong&gt; Some spaces are for quick help (and that's fine). Some spaces are for deeper belonging (and that's different). Trying to be both creates friction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accepted different participation levels.&lt;/strong&gt; Drive-by questions are fine. People who only show up when they need something are fine. The always-engaged ideal is dead, and that's okay.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Built for the people who actually need them now.&lt;/strong&gt; People navigating complex career decisions. People working with niche technologies where AI training is thin. People who need human judgment, not just answers. People without AI access. Not &lt;em&gt;everyone&lt;/em&gt;, because not everyone needs human community for Q&amp;amp;A anymore.&lt;/p&gt;

&lt;p&gt;The bar that nobody asked for—AI capability—did change what communities are for. But it didn't eliminate the need for community. It just clarified it.&lt;/p&gt;

&lt;p&gt;We don't need communities to answer "how do I center a div?" anymore. We need them for "should I take this job?" and "how do I not burn out?" and "what's it actually like to work there?" &lt;/p&gt;

&lt;p&gt;And honestly? Those are better questions. They just require us to be more human, not less.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>community</category>
    </item>
    <item>
      <title>Why Capable AI Keeps Getting Blocked</title>
      <dc:creator>BekahHW</dc:creator>
      <pubDate>Thu, 26 Feb 2026 00:00:00 +0000</pubDate>
      <link>https://dev.to/bekahhw/why-capable-ai-keeps-getting-blocked-m7e</link>
      <guid>https://dev.to/bekahhw/why-capable-ai-keeps-getting-blocked-m7e</guid>
      <description>&lt;p&gt;Amazon bans Claude Code internally. Enterprises quietly block Copilot. Security teams flag agentic workflows before they ever make it to production. SDK usage restrictions start showing up in internal policy docs that nobody announced out loud.&lt;/p&gt;

&lt;p&gt;Different companies and reasons, but the same underlying instinct.&lt;/p&gt;

&lt;p&gt;When something feels uncontrollable, the first response is rarely “let’s understand it better.” It’s “let’s shut it down.” When elevators were first introduced, people refused to ride them alone. Building operators had to hire elevator attendants because people needed a human present to feel safe (keeping the human in the loop). The technology worked. The trust infrastructure didn’t exist yet.&lt;/p&gt;

&lt;p&gt;These bans aren’t a failure of vision, at least not yet. They’re a pretty rational response to a real problem. But it’s where things go from here that matters. The question isn’t “why are companies banning AI tools?” The question is “what would have to be true for those tools to not need banning in the first place?”&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem isn’t the Tools
&lt;/h2&gt;

&lt;p&gt;The last two years focused almost entirely on capability.&lt;/p&gt;

&lt;p&gt;Bigger models.&lt;br&gt;&lt;br&gt;
Autonomous agents.&lt;br&gt;&lt;br&gt;
Sophisticated chaining.&lt;/p&gt;

&lt;p&gt;Those bets paid off, and the systems are genuinely powerful.&lt;/p&gt;

&lt;p&gt;But capability without visibility is risk with a good PR story.&lt;/p&gt;

&lt;p&gt;We already learned this lesson in distributed systems. You don’t deploy a microservice without logs. You don’t scale a database without monitoring. You don’t run Kubernetes without observability. Those systems became trusted not because they were powerful, but because operators could see what they were doing.&lt;/p&gt;

&lt;p&gt;AI agents haven’t reached that level of maturity.&lt;/p&gt;

&lt;p&gt;An agent can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Modify dozens of files&lt;/li&gt;
&lt;li&gt;Call external APIs&lt;/li&gt;
&lt;li&gt;Chain multiple model decisions&lt;/li&gt;
&lt;li&gt;Execute tools across a session&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And when the session ends, most of that reasoning disappears.&lt;/p&gt;

&lt;p&gt;If something goes wrong:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can you replay the exact decision path?&lt;/li&gt;
&lt;li&gt;Can you inspect intermediate model outputs?&lt;/li&gt;
&lt;li&gt;Can you produce a structured audit trail for security?&lt;/li&gt;
&lt;li&gt;Can you deterministically reproduce the outcome?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In many environments, the answer is no.&lt;/p&gt;

&lt;p&gt;So institutions respond the way institutions always do when power outruns accountability: they restrict access.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Next Phase of AI Maturity
&lt;/h2&gt;

&lt;p&gt;The bans aren’t the story. They’re a signal that we’ve entered a new phase of AI maturity, one where the capability questions are largely settled and the infrastructure questions are just getting started. Brian Douglas wrote more about this shift in his post &lt;a href="https://papercompute.com/blog/push-the-code-era-is-over/" rel="noopener noreferrer"&gt;The Push Code Era is Over&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;What needs to exist isn’t another wrapper or another interface. It’s the same thing every distributed system eventually needed: operator-grade tooling. Full request and response recording. Durable execution trails. Deterministic replay. The primitives that let you run powerful systems with confidence instead of just running them with hope.&lt;/p&gt;

&lt;p&gt;It’s not just about more capable agents. It’s about agents that are actually safe to operate at scale, ones that security teams can audit, that legal teams can defend, and that developers can trust with real work.&lt;/p&gt;

&lt;p&gt;The question worth asking right now isn’t which tools are going to get banned next. It’s what would have to be true for those tools to not need banning in the first place.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>When Cloud Agents Are the Right Tool (And When They Aren’t)</title>
      <dc:creator>BekahHW</dc:creator>
      <pubDate>Fri, 30 Jan 2026 16:25:26 +0000</pubDate>
      <link>https://dev.to/bekahhw/when-cloud-agents-are-the-right-tool-and-when-they-arent-42dg</link>
      <guid>https://dev.to/bekahhw/when-cloud-agents-are-the-right-tool-and-when-they-arent-42dg</guid>
      <description>&lt;p&gt;In a recent episode of &lt;em&gt;&lt;a href="https://sequoiacap.com/podcast/making-the-case-for-the-terminal-as-ais-workbench-warps-zach-lloyd/" rel="noopener noreferrer"&gt;Training Data, Making the Case for the Terminal as AI’s Workbench&lt;/a&gt;&lt;/em&gt;, one of the key takeaways highlights the impact of cloud agents on the software industry.&lt;/p&gt;

&lt;p&gt;That framing matters, because it marks a shift many teams are already feeling but haven’t named yet. Increasingly, useful AI work happens &lt;strong&gt;after a deploy&lt;/strong&gt;, when an alert fires, when a dependency update lands, or when a backlog quietly grows.&lt;/p&gt;

&lt;p&gt;This work doesn’t belong to a single developer session — it belongs to the system. And once AI work moves into the background like this, a new problem appears:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;How do you run, observe, control, and trust AI that’s operating continuously?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s the real job of cloud agents, and it’s also where teams tend to misuse them.&lt;/p&gt;

&lt;p&gt;They promise automation, scale, and relief from the endless stream of alerts, security issues, and operational cleanup work that shows up after code ships. But like most powerful tools, they’re easy to misuse — and when that happens, teams either over-automate or swear them off entirely.&lt;/p&gt;

&lt;p&gt;The problem isn’t cloud agents themselves. It’s knowing when they’re actually the right tool. This post is a practical guide for software teams deciding where cloud agents help, where they don’t, and how to start without creating new risks. &lt;/p&gt;

&lt;h2&gt;
  
  
  First: What We Mean by “Cloud Agents”
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;cloud agent&lt;/strong&gt; is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;an AI-driven process that runs on remote infrastructure,&lt;/li&gt;
&lt;li&gt;that can be triggered by tasks, schedules, or external events, and&lt;/li&gt;
&lt;li&gt;that uses reasoning over changing inputs to produce reviewable outcomes across shared engineering systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Unlike local or IDE-based agents, cloud agents can operate &lt;strong&gt;continuously and reactively&lt;/strong&gt;, even long after a PR has merged. They're most useful for repetitive work that isn’t tied to a single coding session and affects a team. (You can learn more about them in our &lt;a href="https://docs.continue.dev/guides/cloud-agents/cloud-agents-taxonomy" rel="noopener noreferrer"&gt;Cloud Agent Taxonomy&lt;/a&gt; or watch our &lt;a href="https://youtu.be/bV6Cendry6c" rel="noopener noreferrer"&gt;What is a Cloud Agent? video&lt;/a&gt;.)&lt;/p&gt;

&lt;h2&gt;
  
  
  When Cloud Agents Are the Right Tool
&lt;/h2&gt;

&lt;p&gt;Cloud agents are most effective when work meets three conditions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It keeps coming back&lt;/li&gt;
&lt;li&gt;It follows known rules&lt;/li&gt;
&lt;li&gt;It already has human review built in&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here are the clearest signs you should be using one:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Check out our &lt;a href="https://docs.continue.dev/guides/cloud-agents/when-to-use-cloud-agents" rel="noopener noreferrer"&gt;When to Use Cloud Agents Guide&lt;/a&gt; for a checklist to help you decide if it's the right fit for your team.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  1. The Same Problem Keeps Reappearing
&lt;/h3&gt;

&lt;p&gt;If you’ve fixed the same issue more than once, it’s no longer a bug — it’s a pattern.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The same class of Sentry errors showing up every week
&lt;/li&gt;
&lt;li&gt;Repeated dependency or vulnerability fixes
&lt;/li&gt;
&lt;li&gt;CI failures caused by known, predictable issues
&lt;/li&gt;
&lt;li&gt;Analytics anomalies that require the same investigation steps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cloud agents are good for work that keeps coming back. They help resolve the issues that are &lt;a href="https://blog.continue.dev/continue-cloud-agents-automate-dev-tasks/" rel="noopener noreferrer"&gt;backlogged on your to-do list but still need to be done&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Cloud agents can end the repetition. An external trigger (a Snyk alert, a GitHub PR, etc.) is often a good indication that a cloud agent can support or handle the work.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The Work Is Reviewable
&lt;/h3&gt;

&lt;p&gt;A good rule of thumb:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;If you’d be comfortable reviewing this work in a PR, a cloud agent can probably help.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Cloud agents work best when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;outputs are diffs, comments, or structured changes
&lt;/li&gt;
&lt;li&gt;a human can review the result before it ships
&lt;/li&gt;
&lt;li&gt;the blast radius is clearly scoped&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Documentation: "Update the README based on PR changes"&lt;/li&gt;
&lt;li&gt;Migration: "Generate TypeScript interfaces for any new API schemas"&lt;/li&gt;
&lt;li&gt;Triage: "Label new issues based on their content"&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.continue.dev/security-chores-cloud-agents/" rel="noopener noreferrer"&gt;Security fixes: "Fix new issues with known remediation paths"&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Review is the safety rail. Without it, automation becomes a risk.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. The Work Doesn’t Require Product Judgment
&lt;/h3&gt;

&lt;p&gt;Cloud agents are &lt;strong&gt;not product managers&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;They fit well for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;applying known rules
&lt;/li&gt;
&lt;li&gt;following established patterns
&lt;/li&gt;
&lt;li&gt;enforcing consistency&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They’re a poor fit for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;deciding what features to build
&lt;/li&gt;
&lt;li&gt;interpreting ambiguous user intent
&lt;/li&gt;
&lt;li&gt;making trade-offs that require deep business context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the question is “&lt;strong&gt;What should we do?&lt;/strong&gt;” → a human should answer it. &lt;/p&gt;

&lt;p&gt;If the question is “&lt;strong&gt;Can we apply a known fix again?&lt;/strong&gt;” → a cloud agent likely can.  &lt;/p&gt;

&lt;h3&gt;
  
  
  4. The Cost of Delay Is Higher Than the Cost of Review
&lt;/h3&gt;

&lt;p&gt;Some work is painful not because it’s hard, but because it lingers. Security backlogs, error queues, and operational debt tend to grow quietly. Cloud agents help when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;delays increase risk&lt;/li&gt;
&lt;li&gt;issues pile up faster than teams can address them&lt;/li&gt;
&lt;li&gt;the work isn’t urgent enough to block feature development, but still matters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In these cases, cloud agents act as a pressure release valve, not a replacement for engineering judgment.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Cloud Agents Are Not the Right Tool
&lt;/h2&gt;

&lt;p&gt;Just as important: knowing when &lt;strong&gt;not&lt;/strong&gt; to use them. &lt;/p&gt;

&lt;h3&gt;
  
  
  1. One-Off, Exploratory Work
&lt;/h3&gt;

&lt;p&gt;If a task is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;brand new
&lt;/li&gt;
&lt;li&gt;poorly understood
&lt;/li&gt;
&lt;li&gt;unlikely to repeat&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…then automation is premature. &lt;/p&gt;

&lt;p&gt;Cloud agents add value when they can amortize effort over time. For truly one-off investigations or experiments, a local or interactive workflow is usually better.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Highly Coupled, High-Blast-Radius Changes
&lt;/h3&gt;

&lt;p&gt;Cloud agents should &lt;strong&gt;not&lt;/strong&gt; be the first line of defense for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;major architectural changes
&lt;/li&gt;
&lt;li&gt;cross-cutting refactors
&lt;/li&gt;
&lt;li&gt;anything where small mistakes have large consequences&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These changes need deep human context, deliberate sequencing, and explicit ownership first. Automation can follow later after the pattern is proven.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Work Without Clear Ownership or Review
&lt;/h3&gt;

&lt;p&gt;If no one is responsible for reviewing outcomes, cloud agents will create friction over time.&lt;/p&gt;

&lt;p&gt;Before introducing automation, a team should ask:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Who reviews this?&lt;/li&gt;
&lt;li&gt;Where does the output live?&lt;/li&gt;
&lt;li&gt;What happens if it goes wrong?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cloud agents work best where ownership and visibility are explicit. &lt;/p&gt;

&lt;h2&gt;
  
  
  A Safer Way to Start
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmellbt15utrvwzx06ok5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmellbt15utrvwzx06ok5.png" alt="the four steps" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most teams succeed with cloud agents by following a progression:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start with one narrow problem: A single error class. One security rule. &lt;a href="https://blog.continue.dev/task-decomposition/" rel="noopener noreferrer"&gt;One repetitive task&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Run the agent manually at first: Observe outputs. Tune prompts. Build trust.&lt;/li&gt;
&lt;li&gt;Require review for every run: Treat outputs like any other code change.&lt;/li&gt;
&lt;li&gt;Automate only after repetition is proven: Automation is a milestone, not a default.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why Teams Centralize Cloud Agents
&lt;/h2&gt;

&lt;p&gt;As usage grows, teams discover cloud agents need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;visibility
&lt;/li&gt;
&lt;li&gt;history
&lt;/li&gt;
&lt;li&gt;coordination
&lt;/li&gt;
&lt;li&gt;a shared place to review outcomes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without a central hub, agents become hard to track, tough to trust, and easy to forget about.&lt;/p&gt;

&lt;p&gt;This is why managing cloud agents through a shared control layer, where runs, reviews, schedules, and adjustments live together, helps teams create a more effective cloud agent experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Agent "Sweet Spot": Deterministic &amp;amp; Event-Driven
&lt;/h2&gt;

&lt;p&gt;Use cloud agents when work repeats, is reviewable, and benefits from consistency. Avoid them when judgment, novelty, or high-risk changes are involved. If you get that boundary right, cloud agents stop feeling risky and start feeling like they're alleviating pressure on your team.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp8scovzs4cimdeqzzne5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp8scovzs4cimdeqzzne5.png" alt="Automation pipeline" width="800" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cloud agents in Continue live in &lt;a href="https://hub.continue.dev" rel="noopener noreferrer"&gt;Mission Control&lt;/a&gt;. They are designed for automated execution without human interaction while still keeping a human in the loop. Now you can monitor and manage cloud agent activity so your team can ship as fast as they can code.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>5 Security Chores You Should Offload to Cloud Agents (Before They Burn You Out)</title>
      <dc:creator>BekahHW</dc:creator>
      <pubDate>Thu, 15 Jan 2026 21:31:59 +0000</pubDate>
      <link>https://dev.to/bekahhw/5-security-chores-you-should-offload-to-cloud-agents-before-they-burn-you-out-566j</link>
      <guid>https://dev.to/bekahhw/5-security-chores-you-should-offload-to-cloud-agents-before-they-burn-you-out-566j</guid>
      <description>&lt;p&gt;Let's talk about the "Security Sandwich."&lt;/p&gt;

&lt;p&gt;On one side, you have excellent detection tools like Snyk and PostHog telling you exactly what’s wrong. On the other side, you have... you. You manually reading a JSON payload, finding the file, checking if the patch breaks the build, and writing a PR description.&lt;/p&gt;

&lt;p&gt;The bottleneck isn't finding vulnerabilities anymore; it’s the sheer manual labor of fixing them.&lt;/p&gt;

&lt;p&gt;This is where &lt;a href="https://docs.continue.dev/guides/cloud-agents/cloud-agents-taxonomy" rel="noopener noreferrer"&gt;Cloud Agents&lt;/a&gt; come in. Unlike a simple script or a CI job (see the &lt;a href="https://docs.continue.dev/guides/cloud-agents/when-to-use-cloud-agents?ref=blog.continue.dev#cloud-agents-vs-alternatives" rel="noopener noreferrer"&gt;Cloud Agents Comparison Matrix&lt;/a&gt; to learn more), Cloud Agents can adapt their behavior based on code context, make judgment calls, and explain their decisions in human-reviewable outputs. It can read your code, understand your rules, and make decisions.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡&lt;em&gt;Definition&lt;/em&gt;: Cloud Agents &lt;br&gt;
Cloud Agents are AI-driven processes that run on remote infrastructure. They are triggered by tasks, schedules, or external events, and use reasoning over changing inputs to produce reviewable outcomes (such as pull requests, reports, or summaries) across shared engineering systems.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here are five security chores you can stop doing yourself today.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. The "Smart" Vulnerability Patch
&lt;/h2&gt;

&lt;p&gt;Standard auto-fixers are often too aggressive. They bump a version in package.json and walk away, leaving you to deal with the breaking changes.&lt;/p&gt;

&lt;p&gt;A Cloud Agent approaches a vulnerability like a senior engineer would. When we use the &lt;a href="https://hub.continue.dev/integrations/snyk" rel="noopener noreferrer"&gt;Snyk Integration Agent&lt;/a&gt;, we don't just tell it to "fix it." We give it a strict 3-step protocol:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Investigate: Understand the CVE and the consequences.&lt;/li&gt;
&lt;li&gt;Implement: Fix the immediate issue without "over-cleaning" or making breaking changes.&lt;/li&gt;
&lt;/li&gt;Report: Open a PR with a structured summary.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The result: instead of a generic "Bump v1.2 to v1.3" message, you get a PR that looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PR Title: [Snyk] Fix prototype pollution in minimist

Issue Type: Security Vulnerability

Priority: High

Summary: Updated minimist to v1.2.6 to resolve CVE-2021-44906. Verified that no breaking changes were introduced to command-line argument parsing logic.

Snyk Issue Details: (Hidden in collapsible toggle)
The agent does the grunt work of formatting and context-gathering, so you just have to review the logic. This isn’t just automation. It’s contextual remediation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡 Learn More: &lt;a href="https://docs.continue.dev/guides/cloud-agents/when-to-use-cloud-agents?ref=blog.continue.dev" rel="noopener noreferrer"&gt;When to Use Cloud Agents&lt;/a&gt; | &lt;a href="https://docs.continue.dev/guides/cloud-agents/automated-security-remediation-with-snyk?ref=blog.continue.dev" rel="noopener noreferrer"&gt;Automated Security Remediation with Snyk&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  2. Dependency Hygiene (The "Quiet" Update)
&lt;/h2&gt;

&lt;p&gt;Waiting for a critical alert to update dependencies is like waiting for your car to break down before changing the oil.&lt;/p&gt;

&lt;p&gt;You can schedule a Cloud Agent to run weekly on a "Cron" trigger. Its job?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scan for deprecated (but not yet vulnerable) packages.&lt;/li&gt;
&lt;li&gt;Read the changelogs.&lt;/li&gt;
&lt;li&gt;Attempt the upgrade in a PR.&lt;/li&gt;
&lt;li&gt;Crucial Step: the agent investigates the dependency, what it's being used for, what other packages will be impacted, and advises on the best path forward with context.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The agent does the work to avoid breaking changes with dependency updates.&lt;/p&gt;
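&lt;p&gt;To make the "quiet update" concrete, here's a minimal sketch of the scan step, assuming the JSON shape that &lt;code&gt;npm outdated --json&lt;/code&gt; prints (each package name mapped to its current/wanted/latest versions). An agent layers changelog reading and impact analysis on top of a signal like this:&lt;/p&gt;

```python
import json

def majors_behind(npm_outdated_json):
    """Given the JSON text printed by `npm outdated --json`, return the
    packages that are a full major version behind (the upgrades most
    likely to carry breaking changes and deserve investigation)."""
    data = json.loads(npm_outdated_json)
    flagged = []
    for name, info in data.items():
        current_major = int(info["current"].split(".")[0])
        latest_major = int(info["latest"].split(".")[0])
        if latest_major > current_major:
            flagged.append(f"{name}: {info['current']} -> {info['latest']}")
    return flagged
```

&lt;p&gt;A weekly cron-triggered agent could start from exactly this kind of list and open one reviewable PR per flagged package.&lt;/p&gt;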

&lt;h2&gt;
  
  
  3. UI Hardening (The “Forgotten Input” Sweep)
&lt;/h2&gt;

&lt;p&gt;Cross-Site Scripting (XSS) isn’t usually caused by one big mistake. It’s caused by small inconsistencies over time. Reviewing every form field by hand in a mature codebase is the definition of a chore. Instead of manual spot-checks, you can deploy a Cloud Agent to enforce secure UI patterns automatically by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scanning src/components for raw form input elements&lt;/li&gt;
&lt;li&gt;Verifying they use your sanctioned wrapper component&lt;/li&gt;
&lt;li&gt;Refactoring any raw HTML inputs to the safe version&lt;/li&gt;
&lt;li&gt;Opening a reviewable PR with a full diff and summary&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This doesn’t eliminate XSS by itself. It enforces consistency so unsafe UI patterns don’t quietly re-enter the codebase over time. This kind of sweep is especially valuable in legacy codebases, where the real risk is drift. Again, this is contextual remediation, not just automation.&lt;/p&gt;
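&lt;p&gt;The scanning step can be sketched in a few lines. This is a hypothetical illustration: &lt;code&gt;SafeInput&lt;/code&gt; stands in for whatever your sanctioned wrapper is called, and a real agent would use codebase context rather than a bare regex:&lt;/p&gt;

```python
import re
from pathlib import Path

# "SafeInput" is a hypothetical stand-in for your sanctioned wrapper component.
# \x3c is the regex escape for the tag-opening angle bracket, so this matches
# raw input and textarea tags in JSX.
RAW_INPUT = re.compile(r"\x3c(input|textarea)\b", re.IGNORECASE)

def find_raw_inputs(root):
    """Walk a component tree and report (file, line number, line) for every
    raw input/textarea that bypasses the sanctioned wrapper."""
    findings = []
    for path in sorted(Path(root).rglob("*.tsx")):
        for number, line in enumerate(path.read_text().splitlines(), start=1):
            if RAW_INPUT.search(line) and "SafeInput" not in line:
                findings.append((str(path), number, line.strip()))
    return findings
```

&lt;p&gt;An agent run would take each finding, rewrite the element to the safe wrapper, and bundle the diffs into one reviewable PR.&lt;/p&gt;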

&lt;h2&gt;
  
  
  4. The "Monday Morning" Triage
&lt;/h2&gt;

&lt;p&gt;If you come back from the weekend to 50 new alerts, you usually just skim them. That’s dangerous.&lt;/p&gt;

&lt;p&gt;Instead of drowning in notifications, use an agent to summarize and group them. You can prompt an agent to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pull all open Snyk issues.&lt;/li&gt;
&lt;li&gt;Group them by "affected service" or "vulnerability type" (e.g., XSS, SQLi).&lt;/li&gt;
&lt;li&gt;Generate a summary for review.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You start your week reading a one-page executive summary, not 50 raw logs.&lt;/p&gt;
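&lt;p&gt;The grouping logic is simple enough to sketch. Assume the agent has already pulled alerts into a list of dicts (the &lt;code&gt;vuln_type&lt;/code&gt; and &lt;code&gt;service&lt;/code&gt; field names here are illustrative, not Snyk's actual schema):&lt;/p&gt;

```python
from collections import defaultdict

def summarize_alerts(alerts):
    """Group raw alerts by vulnerability type and produce a short digest."""
    groups = defaultdict(list)
    for alert in alerts:
        groups[alert["vuln_type"]].append(alert)

    lines = []
    # Highest-volume groups first, so the digest leads with the biggest problems.
    for vuln_type, items in sorted(groups.items(), key=lambda kv: -len(kv[1])):
        services = sorted({item["service"] for item in items})
        lines.append(f"{vuln_type}: {len(items)} open, affecting {', '.join(services)}")
    return "\n".join(lines)

alerts = [
    {"vuln_type": "XSS", "service": "web"},
    {"vuln_type": "SQLi", "service": "api"},
    {"vuln_type": "XSS", "service": "admin"},
]
print(summarize_alerts(alerts))
```

&lt;p&gt;The agent's real value is wrapping a digest like this in context: which groups are new, which are growing, and which already have a known fix.&lt;/p&gt;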

&lt;h2&gt;
  
  
  5. Audit &amp;amp; Compliance Prep
&lt;/h2&gt;

&lt;p&gt;"Audit" is a scary word because it usually implies a frantic scramble to document who accessed what and when.&lt;/p&gt;

&lt;p&gt;Because Cloud Agents run on infrastructure you control and log every step they take, they generate their own audit trail. You can create a specialized "Audit Agent" that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Checks if all recent PRs have a linked issue.&lt;/li&gt;
&lt;li&gt;Verifies that all new API endpoints include proper error handling and input validation.&lt;/li&gt;
&lt;li&gt;Generates a markdown report of your current security posture.&lt;/li&gt;
&lt;/ul&gt;
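&lt;p&gt;The first check is easy to picture. Here's a minimal sketch that flags PRs with no detectable issue reference, assuming hypothetical PR dicts with &lt;code&gt;title&lt;/code&gt; and &lt;code&gt;body&lt;/code&gt; fields:&lt;/p&gt;

```python
import re

# Matches common issue references like "CON-5031" or "#123".
ISSUE_REF = re.compile(r"([A-Z][A-Z0-9]+-\d+|#\d+)")

def unlinked_prs(prs):
    """Return the titles of PRs with no detectable issue reference
    in either the title or the body."""
    flagged = []
    for pr in prs:
        text = pr["title"] + "\n" + (pr.get("body") or "")
        if not ISSUE_REF.search(text):
            flagged.append(pr["title"])
    return flagged

prs = [
    {"title": "Fix inbox default tab", "body": "Closes CON-5031"},
    {"title": "Tweak copy", "body": ""},
]
print(unlinked_prs(prs))  # ['Tweak copy']
```

&lt;p&gt;An audit agent would run checks like this on a schedule and roll the results into its markdown report, so the evidence exists before anyone asks for it.&lt;/p&gt;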

&lt;h2&gt;
  
  
  How to Start
&lt;/h2&gt;

&lt;p&gt;You don't need to build these from scratch. Here are some ways you can get started:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://hub.continue.dev/integrations/snyk" rel="noopener noreferrer"&gt;Connect the Snyk Integration in Continue Mission Control&lt;/a&gt; to immediately remediate high and critical issues.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://hub.continue.dev/agents/new" rel="noopener noreferrer"&gt;Create a Custom Agent&lt;/a&gt;: Create a prompt that tells the agent what to do, set your trigger and repository, and create guardrails with rules (check out the &lt;a href="https://hub.continue.dev/snyk/snyk-mcp" rel="noopener noreferrer"&gt;Snyk MCP&lt;/a&gt; and &lt;a href="https://hub.continue.dev/snyk/secure-at-inception" rel="noopener noreferrer"&gt;Snyk Secure at Inception Rules&lt;/a&gt; if you're using Snyk).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Stop being the bottleneck. Let the agent handle the chores so you can handle the architecture. Cloud Agents aren’t ideal for simple, deterministic checks; those still belong in CI or linters.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
    </item>
    <item>
      <title>The Platform Gap: How to Scale Your Engineering Without Scaling Headcount (Yet)</title>
      <dc:creator>BekahHW</dc:creator>
      <pubDate>Tue, 23 Dec 2025 20:49:27 +0000</pubDate>
      <link>https://dev.to/bekahhw/the-platform-gap-how-to-scale-your-engineering-without-scaling-headcount-yet-2d9l</link>
      <guid>https://dev.to/bekahhw/the-platform-gap-how-to-scale-your-engineering-without-scaling-headcount-yet-2d9l</guid>
      <description>&lt;p&gt;In 2006, Amazon CTO Werner Vogels gave an interview that would define a generation of engineering culture. He famously said, &lt;em&gt;"You build it, you run it."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It became the rallying cry for the DevOps movement, promising to tear down the wall between developers and operations. And for a long time, we accepted it as gospel.&lt;/p&gt;

&lt;p&gt;But as &lt;strong&gt;Humanitec&lt;/strong&gt; points out in their excellent analysis, &lt;em&gt;&lt;a href="https://humanitec.com/newsletter/vol-55-you-build-it-you-run-it-or-why-you-should-check-your-sources" rel="noopener noreferrer"&gt;"You build it, you run it" comes with a warning label&lt;/a&gt;&lt;/em&gt;. When Vogels said that, Amazon was a fraction of its current size. There were no microservices. The cloud was in its infancy. The cognitive load required to "run it" was manageable.&lt;/p&gt;

&lt;p&gt;Fast forward to today. "Running it" now means managing Kubernetes manifests, IAM roles, security compliance, database migrations, and observability pipelines.&lt;/p&gt;

&lt;p&gt;If you are a full-stack team without a dedicated Platform Engineering group, you aren't just "running it"—you are drowning in it. You are living in the &lt;strong&gt;Platform Gap&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This is the awkward growth phase where you have real infrastructure headaches but not enough budget to hire the team to solve them. You build the features, but you also fight the fires. You are the architect &lt;em&gt;and&lt;/em&gt; the janitor.&lt;/p&gt;

&lt;p&gt;At Continue, we believe &lt;strong&gt;&lt;a href="https://docs.continue.dev/mission-control" rel="noopener noreferrer"&gt;Mission Control&lt;/a&gt;&lt;/strong&gt; is the answer to this gap. It helps small teams survive the maintenance tax by automating the "run" so they can focus on the "build."&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Automate Triage Until You Can Hire (Sentry + GitHub)
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;The Reality:&lt;/em&gt; You don't have an SRE on-call rotation yet.&lt;br&gt;
&lt;em&gt;The Gap:&lt;/em&gt; When production breaks, your lead developer stops coding to fix it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Bridge:&lt;/strong&gt;&lt;br&gt;
Connect &lt;strong&gt;&lt;a href="https://docs.continue.dev/mission-control/integrations/sentry" rel="noopener noreferrer"&gt;Sentry&lt;/a&gt;&lt;/strong&gt; to Mission Control to handle the "noise" of production. Instead of alerting a human for every issue, you create a "First Responder" Agent.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Trigger:&lt;/strong&gt; A new exception in Sentry.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Action:&lt;/strong&gt; The agent analyzes the stack trace and the codebase.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Result:&lt;/strong&gt; It opens a PR with a proposed fix and links it to the issue.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This doesn't replace deep architectural debugging. It just clears the low-hanging fruit so your team isn't dying a death by a thousand cuts.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Automate Compliance Without the Bottleneck (Snyk + GitHub)
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;The Reality:&lt;/em&gt; You don't have a DevSecOps lead.&lt;br&gt;
&lt;em&gt;The Gap:&lt;/em&gt; Security patches pile up because nobody has time to prioritize dependency upgrades.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Bridge:&lt;/strong&gt;&lt;br&gt;
Use the &lt;strong&gt;&lt;a href="https://docs.continue.dev/mission-control/integrations/snyk" rel="noopener noreferrer"&gt;Snyk integration&lt;/a&gt;&lt;/strong&gt; to automate your security baseline.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Trigger:&lt;/strong&gt; Snyk detects a vulnerability in an npm package.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Action:&lt;/strong&gt; A "Security Agent" runs the upgrade and verifies the build.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Result:&lt;/strong&gt; A PR appears with the fix and context.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensures you don't get blocked by security audits when you are trying to close a partnership or raise your next round. It keeps you compliant while you focus on growth.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Maintain Data Hygiene Automatically (Supabase &amp;amp; PostHog)
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;The Reality:&lt;/em&gt; You don't have a Data Engineer.&lt;br&gt;
&lt;em&gt;The Gap:&lt;/em&gt; Analytics tracking breaks, and RLS policies get outdated, creating tech debt that hurts later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Bridge:&lt;/strong&gt;&lt;br&gt;
Use &lt;strong&gt;&lt;a href="https://docs.continue.dev/mission-control/workflows" rel="noopener noreferrer"&gt;Mission Control Workflows&lt;/a&gt;&lt;/strong&gt; to keep your house clean automatically.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://docs.continue.dev/guides/supabase-mcp-database-workflow" rel="noopener noreferrer"&gt;Supabase&lt;/a&gt;:&lt;/strong&gt; An agent periodically audits your Row Level Security (RLS) to ensure new tables aren't left exposed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://docs.continue.dev/mission-control/integrations/posthog" rel="noopener noreferrer"&gt;PostHog&lt;/a&gt;:&lt;/strong&gt; An agent watches user sessions for friction points and logs tickets for the frontend team.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion: Making "You Build It" Possible Again
&lt;/h3&gt;

&lt;p&gt;Werner Vogels wasn't wrong, but he was speaking to a different world. In 2025, "You build it, you run it" is only sustainable if you have a platform that handles the heavy lifting.&lt;/p&gt;

&lt;p&gt;For enterprise giants, that platform is a team of 50 engineers building internal developer portals.&lt;br&gt;
For the lean full-stack team, that platform is &lt;strong&gt;Mission Control&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;By using Agents to &lt;a href="https://blog.continue.dev/introducing-tasks-and-workflows/" rel="noopener noreferrer"&gt;standardize workflows&lt;/a&gt; and automate maintenance, you bridge the Platform Gap. You get the autonomy Vogels promised without the burnout he didn't foresee.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bridge the gap.&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://hub.continue.dev/integrations" rel="noopener noreferrer"&gt;Connect your tools in Mission Control&lt;/a&gt; and start automating your maintenance tax today.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
    </item>
    <item>
      <title>Bug Reports Should Fix Themselves: Dogfooding Our Slack Cloud Agent with GitHub and Linear</title>
      <dc:creator>BekahHW</dc:creator>
      <pubDate>Tue, 16 Dec 2025 16:42:47 +0000</pubDate>
      <link>https://dev.to/bekahhw/bug-reports-should-fix-themselves-dogfooding-our-slack-cloud-agent-with-github-and-linear-icj</link>
      <guid>https://dev.to/bekahhw/bug-reports-should-fix-themselves-dogfooding-our-slack-cloud-agent-with-github-and-linear-icj</guid>
      <description>&lt;p&gt;It's a painful experience when you’re in the zone, and a notification pops up in Slack that you need to fix. Continue's cloud agents turn Slack conversations into GitHub pull requests.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;By connecting Slack and GitHub via &lt;a href="https://hub.continue.dev/integrations/slack?ref=blog.continue.dev" rel="noopener noreferrer"&gt;Continue's Mission Control Integrations&lt;/a&gt;, developers can fix bugs, address security issues, and ship changes without leaving the tools where they already work.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Maybe it’s a bug report, a 404 on a new page, or a logic error in an endpoint. Sure, it’s important, but it’s an interruption. Usually, the workflow looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read the Slack message&lt;/li&gt;
&lt;li&gt;Sigh&lt;/li&gt;
&lt;li&gt;Open Jira or Linear and create a ticket&lt;/li&gt;
&lt;li&gt;Open your IDE, stash your current work, and check out a new branch&lt;/li&gt;
&lt;li&gt;Reproduce the bug&lt;/li&gt;
&lt;li&gt;Fix it&lt;/li&gt;
&lt;li&gt;Push a commit, open a PR, and switch context back&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That “quick fix” just cost you 45 minutes of flow state.&lt;/p&gt;

&lt;p&gt;At Continue, we are building a world where your tools talk to Continue, and Continue takes action. We believe that if you can describe the fix in Slack, you shouldn't have to leave Slack to implement it.&lt;/p&gt;

&lt;p&gt;Here is how we use our own Slack integration to turn bug reports into Pull Requests without leaving the conversation.&lt;/p&gt;

&lt;h2&gt;
  
  
  The "Drive-By" Fix: Turning Slack Bug Reports into Pull Requests
&lt;/h2&gt;

&lt;p&gt;Sometimes, the fix is obvious, but the friction to implement it is high. The same Slack agent that opens Pull Requests can also create and update Linear issues, assign owners, change states, and link work across systems.&lt;/p&gt;

&lt;p&gt;Recently, our team noticed a bug where the inbox list view was loading as empty because it defaulted to the wrong tab. I posted a screen recording of the issue.&lt;/p&gt;

&lt;p&gt;Nate knew exactly what the problem was. In a traditional workflow, Nate would have to stop what he was doing to go fix a default state in &lt;code&gt;InboxPageClient.tsx&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Instead, the conversation went like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrvsv2nvqr9ei3lck5cq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrvsv2nvqr9ei3lck5cq.png" alt="Slack screenshot" width="398" height="880"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Continue:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Looked at the selected repo: continuedev/remote-config-server&lt;/li&gt;
&lt;li&gt;Action: Created ticket CON-5031.&lt;/li&gt;
&lt;li&gt;Action: Changed default preset from "review" to "all".&lt;/li&gt;
&lt;li&gt;Result: GitHub PR created.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nate didn't open his IDE. He didn't stash his changes. He delegated the implementation to the cloud agent and moved on. Minutes later, the fix was deployed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Slack as a Control Plane for Linear
&lt;/h2&gt;

&lt;p&gt;The same Slack agent that creates Pull Requests can also work directly with Linear. In one Slack message, I asked Continue to update a Linear issue assigned to me, change the status, and leave a comment. I didn’t open Linear, search for the issue, or change context.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyts8q4w51k48wy0om9t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyts8q4w51k48wy0om9t.png" alt="Slack screenshot. update linear ticket" width="800" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Behind the scenes, the Slack agent executed the request.&lt;/p&gt;

&lt;h3&gt;
  
  
  What the Slack agent did via the Linear MCP:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Identified the correct Linear issue based on name and assignment&lt;/li&gt;
&lt;li&gt;Updated the issue status to In Progress&lt;/li&gt;
&lt;li&gt;Added a structured comment with actionable next steps&lt;/li&gt;
&lt;li&gt;Linked related internal tickets for traceability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is a simple example, but it’s foundational. You can update status, assign owners, change priority, and link issues directly from Slack. Slack becomes the interface. Linear becomes programmable. The agent handles &lt;a href="https://blog.continue.dev/ai-is-glue/" rel="noopener noreferrer"&gt;the glue&lt;/a&gt;. If you want to take it a step further, you can call &lt;code&gt;@Continue&lt;/code&gt; in the Linear issue and ask it to draft a PR as well. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pfseoukgytdpmh85gwb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pfseoukgytdpmh85gwb.png" alt="The Old Way: Context Switch Tax v. The Continue Way: Flow State" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From context switching to flow state. Continue turns Slack conversations into actions across your tools, so bug reports don’t pull you out of the zone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Handling Logic &amp;amp; Security Flaws with Cloud Agents
&lt;/h2&gt;

&lt;p&gt;It’s easy to assume AI agents are only good for simple one-liners. But we use them for architectural fixes, too.&lt;/p&gt;

&lt;p&gt;Dallin, another engineer on the team, spotted a flaw in how we were handling permissions on an edit agent endpoint. The endpoint was checking the currently selected organization on the client side, rather than verifying the user's rights to the specific agent file on the backend.&lt;/p&gt;

&lt;p&gt;This is a nuance that requires understanding the codebase. Dallin tagged &lt;code&gt;@Continue&lt;/code&gt; with the context:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"The edit agent file trpc endpoint has a flaw. It checks the currently selected org and sends that, it should just check if user has rights to edit that agent file on the backend..."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The agent didn't just hallucinate a patch. If we look at the Mission Control view for this session, we can see what the cloud agent did:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Search: It searched for editAgentFile and updateAgentFile to locate the relevant router.&lt;/li&gt;
&lt;li&gt;Read: It read src/trpc/routers/agent/agentRouter.ts to understand the current implementation.&lt;/li&gt;
&lt;li&gt;Analyze: It found the updateVisibility mutation and saw it was accepting orgOwnerId as an input parameter (the security flaw).&lt;/li&gt;
&lt;li&gt;Fix: It removed the parameter from the mutation and verified that packageProcedure was already handling the authorization correctly.&lt;/li&gt;
&lt;li&gt;Clean up: It even updated the NewAgentFileForm on the frontend to stop passing the now-removed parameter.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The agent removed the insecure parameter, cleaned up the TypeScript errors caused by the change, and opened a PR. Dallin reviewed the code, gave it a thumbs up, and the security hole was patched.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this Matters for Developer Workflow and Flow State
&lt;/h2&gt;

&lt;p&gt;The goal of Continuous AI isn't to replace developers; it's to replace friction.&lt;/p&gt;

&lt;p&gt;When you &lt;a href="https://docs.continue.dev/mission-control/integrations/?ref=blog.continue.dev" rel="noopener noreferrer"&gt;connect your tools—like Slack, Linear, and GitHub—to Mission Control&lt;/a&gt;, you aren't just creating a chatbot. You are creating a programmable layer of automation that has context of your codebase with a cloud agent.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Thread Context:&lt;/strong&gt; The agent reads the thread. If you discuss the bug before tagging &lt;a class="mentioned-user" href="https://dev.to/continue"&gt;@continue&lt;/a&gt;, it uses that conversation as context.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mission Control:&lt;/strong&gt; You can watch the agent work in real time. If it gets stuck, you can jump in. If it succeeds, you get a PR link.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Flow State:&lt;/strong&gt; You stay where you are productive.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We built this integration because &lt;a href="https://blog.continue.dev/the-hidden-cost-of-tool-switching/" rel="noopener noreferrer"&gt;we were tired of the "context switch tax."&lt;/a&gt; If you want to stop trading flow state for bug fixes, &lt;a href="https://hub.continue.dev/integrations/slack?ref=blog.continue.dev" rel="noopener noreferrer"&gt;try connecting Slack to Continue today&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>The Invisible Load: How AI Workflows Can Replace Your Team's 'Glue Person'</title>
      <dc:creator>BekahHW</dc:creator>
      <pubDate>Thu, 06 Nov 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/bekahhw/the-invisible-load-how-ai-workflows-can-replace-your-teams-glue-person-1a5k</link>
      <guid>https://dev.to/bekahhw/the-invisible-load-how-ai-workflows-can-replace-your-teams-glue-person-1a5k</guid>
      <description>&lt;p&gt;Every team has one, and, tbh, a lot of relationships have one. The person who remembers. For teams, it might be the person who remembers to update the changelog. Who posts the weekly standup summary. Who follows up on that GitHub issue from three weeks ago. Who makes sure the docs match the latest release. Who bridges the gap between what engineering ships and what the rest of the company knows about.&lt;/p&gt;

&lt;p&gt;We call this “glue work,” and research shows it disproportionately falls on women in teams and on mothers in relationships. It’s the invisible labor that keeps everything running smoothly, but it rarely shows up in performance reviews or promotion discussions.&lt;/p&gt;

&lt;p&gt;The problem is that glue work is essential, but it shouldn’t be someone’s primary job. And it definitely shouldn’t be the reason talented people get passed over for advancement.&lt;/p&gt;

&lt;p&gt;But what if AI workflows could be the glue instead?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Glue Work Problem
&lt;/h2&gt;

&lt;p&gt;Tanya Reilly’s essay &lt;a href="https://www.noidea.dog/glue/" rel="noopener noreferrer"&gt;“Being Glue”&lt;/a&gt; describes how critical but undervalued work, like coordinating between teams, translating technical concepts, maintaining documentation, and ensuring follow-through, often becomes invisible. The people who do it make everything seem easy. But there’s a cost, and it’s compounding in the background.&lt;/p&gt;

&lt;p&gt;If you have one teammate who’s spending hours each week:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Updating tickets after standups&lt;/li&gt;
&lt;li&gt;Posting release summaries to Slack&lt;/li&gt;
&lt;li&gt;Making sure customer support knows about new features&lt;/li&gt;
&lt;li&gt;Chasing down PR reviews&lt;/li&gt;
&lt;li&gt;Updating documentation after merges&lt;/li&gt;
&lt;li&gt;Following up on blocked issues&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and none of this is their actual job, then they may be your “glue.” All of it is necessary. But when they stop doing it, things fall apart.&lt;/p&gt;

&lt;p&gt;Maybe it doesn’t seem that bad right now, but coordination work compounds as teams grow.&lt;/p&gt;

&lt;h2&gt;
  
  
  What If Workflows Could Be the Glue?
&lt;/h2&gt;

&lt;p&gt;This is where composed AI workflows change everything. Not because they eliminate coordination work, but because they automate it at the seams between tasks.&lt;/p&gt;

&lt;p&gt;In his post on &lt;a href="https://www.linkedin.com/pulse/individual-workflows-scale-linearly-composed-compound-chad-metcalf-x7fyc" rel="noopener noreferrer"&gt;composed workflows&lt;/a&gt;, Continue CEO Chad Metcalf makes an important distinction: a single workflow saves time, but composed workflows compound. When the output of one workflow can trigger the next, you’re doing more than just automating individual tasks. You’re automating the entire coordination system.&lt;/p&gt;

&lt;p&gt;Let’s look at what this actually means.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 1: The Release Communication Burden
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Current state:&lt;/strong&gt; After every release, someone (usually the same someone) needs to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Draft changelog entries&lt;/li&gt;
&lt;li&gt;Update product documentation&lt;/li&gt;
&lt;li&gt;Post to company Slack&lt;/li&gt;
&lt;li&gt;Update customer-facing docs&lt;/li&gt;
&lt;li&gt;Create support team briefing&lt;/li&gt;
&lt;li&gt;Close related Linear issues&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That “someone” spends 2-3 hours per release on coordination work. Not working on more impactful work. Just making sure everyone knows what shipped.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With composed workflows:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Merge to main 
  → Extract changes from PR descriptions 
  → Draft changelog entry 
  → Update technical docs 
  → Generate customer-facing summary 
  → Post to internal Slack 
  → Update support knowledge base 
  → Close related issues
  → Post completion summary

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The entire coordination chain runs automatically. The output of one workflow triggers the next. No one spends their afternoon being glue. Instead, they become the human in the loop who reviews and improves the process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 2: The Follow-Up Tax
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Current state:&lt;/strong&gt; You’re in standup. Someone mentions they’re blocked on a security review. Everyone nods. The meeting ends.&lt;/p&gt;

&lt;p&gt;Three days later, nothing’s happened because nobody remembered to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a ticket&lt;/li&gt;
&lt;li&gt;Tag the security team&lt;/li&gt;
&lt;li&gt;Follow up when there’s no response&lt;/li&gt;
&lt;li&gt;Update the original ticket&lt;/li&gt;
&lt;li&gt;Let the blocked person know it’s unblocked&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Guess who remembers? The glue person, who now spends their time tracking other people’s blockers instead of writing code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With composed workflows:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Blocked on X" detected in standup notes 
  → Create Linear issue with context 
  → Tag relevant team 
  → Set reminder for 2 days 
  → Follow up if no response 
  → Update original ticket when resolved 
  → Notify blocked person

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The follow-through happens automatically. Nobody has to be the memory keeper.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 3: The Support Handoff
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Current state:&lt;/strong&gt; Customer reports a bug in Slack. Engineering fixes it. But somehow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Support never gets notified the fix is live&lt;/li&gt;
&lt;li&gt;Customer still thinks it’s broken&lt;/li&gt;
&lt;li&gt;Docs don’t reflect the fix&lt;/li&gt;
&lt;li&gt;The original Slack thread is forgotten&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Someone (the glue person) ends up manually connecting all these dots. Every. Single. Time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With composed workflows:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Bug report in Slack 
  → Create Linear issue 
  → PR created and merged 
  → Update documentation 
  → Generate fix summary 
  → Post to support channel 
  → Reply to original Slack thread 
  → Close Linear issue with summary

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The entire loop closes itself. From report to fix to notification to documentation to closure. No human glue required.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Works Where Other Solutions Don’t
&lt;/h2&gt;

&lt;h3&gt;
  
  
  It’s Not About Replacing People
&lt;/h3&gt;

&lt;p&gt;AI workflows aren’t replacing the glue person’s judgment or expertise. They’re replacing the repetitive coordination tasks that shouldn’t require judgment.&lt;/p&gt;

&lt;p&gt;When a PR merges, updating the changelog doesn’t need human insight. Following up on a blocked issue after two days doesn’t require strategic thinking. Posting a deployment summary to Slack isn’t creative work.&lt;/p&gt;

&lt;p&gt;These are mechanical transitions between tasks. Perfect candidates for automation.&lt;/p&gt;

&lt;h3&gt;
  
  
  It Compounds Over Time
&lt;/h3&gt;

&lt;p&gt;Here’s what makes this different from other automation attempts: composition creates compounding value.&lt;/p&gt;

&lt;p&gt;Each workflow you add doesn’t just save time on that specific task. It amplifies the value of workflows already running. The deployment workflow outputs information the changelog workflow needs. The Linear sync workflow triggers the documentation update workflow. The customer support workflow feeds the product insights workflow.&lt;/p&gt;

&lt;p&gt;You’re not just automating individual tasks. You’re automating the entire coordination system.&lt;/p&gt;

&lt;h3&gt;
  
  
  It Preserves the Actual Value of “Glue”
&lt;/h3&gt;

&lt;p&gt;The valuable part of glue work isn’t the mechanical coordination. It’s the strategic connection-making, the pattern recognition, the ability to see across teams and translate context.&lt;/p&gt;

&lt;p&gt;When AI workflows handle the mechanical parts, the humans who were doing glue work can focus on the strategic parts.&lt;/p&gt;

&lt;p&gt;They can spend time:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identifying patterns across customer feedback&lt;/li&gt;
&lt;li&gt;Improving team processes&lt;/li&gt;
&lt;li&gt;Mentoring junior developers&lt;/li&gt;
&lt;li&gt;Solving genuinely complex problems&lt;/li&gt;
&lt;li&gt;Advancing their technical skills&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The things that actually deserve their expertise.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Composition Advantage
&lt;/h2&gt;

&lt;p&gt;Individual workflows are helpful. Composed workflows are transformative.&lt;/p&gt;

&lt;p&gt;As Chad Metcalf writes in his post on composed workflows: “Individual workflows give linear improvements. Composed workflows compound.” This distinction is crucial when thinking about how to replace organizational glue work.&lt;/p&gt;

&lt;p&gt;Because here’s what happens when workflows compose: the coordination work stops being a burden on any individual. Instead, it becomes part of the system itself.&lt;/p&gt;

&lt;p&gt;You’re not asking someone to remember to update the docs after deployment. The deployment workflow triggers the documentation workflow automatically. You’re not relying on someone to follow up on blocked issues. The standup workflow creates follow-up workflows with built-in reminders.&lt;/p&gt;

&lt;p&gt;This is infrastructure that scales with your team. As you grow, the glue work compounds. But with composed workflows, the automation compounds instead.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real Team Impact: Beyond Time Savings
&lt;/h2&gt;

&lt;p&gt;Let’s talk about what this actually means for teams.&lt;/p&gt;

&lt;h3&gt;
  
  
  No More “That Person”
&lt;/h3&gt;

&lt;p&gt;When coordination work is automated, it stops being concentrated on one or two people. The person who used to spend 10 hours a week on glue work can spend those hours on actual engineering. Or product work. Or whatever they were actually hired to do.&lt;/p&gt;

&lt;p&gt;More importantly: they stop being the bottleneck. When knowledge and coordination flow through automated systems rather than through specific people, teams become more resilient. People can take vacation without everything falling apart. New team members can onboard without needing to know who to ask for everything.&lt;/p&gt;

&lt;h3&gt;
  
  
  Career Equity
&lt;/h3&gt;

&lt;p&gt;Glue work is often invisible during performance reviews. “Sarah keeps everything running smoothly” doesn’t translate to promotion decisions the way “shipped three major features” does.&lt;/p&gt;

&lt;p&gt;When AI workflows handle coordination, engineers can focus on work that’s more visible and valued. This has disproportionate impact on people—often women and mothers—who tend to take on more glue work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Team Velocity at Scale
&lt;/h3&gt;

&lt;p&gt;Small teams can coordinate informally. “Hey, remember to update the docs” works when you’re five people.&lt;/p&gt;

&lt;p&gt;At 20 people? 50? Informal coordination breaks down. You need systems. The question is: will those systems be human-dependent, automated, or composed with a human in the loop?&lt;/p&gt;

&lt;p&gt;Composed workflows scale. Human coordination doesn’t.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Actually Do This
&lt;/h2&gt;

&lt;p&gt;Don’t start by trying to automate everything. Start with the most repetitive, predictable coordination work on your team.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Identify Your Glue Work
&lt;/h3&gt;

&lt;p&gt;Have an honest conversation. Ask:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What coordination tasks happen after every release?&lt;/li&gt;
&lt;li&gt;What follow-ups consistently fall through the cracks?&lt;/li&gt;
&lt;li&gt;Where does information get stuck between teams?&lt;/li&gt;
&lt;li&gt;Who’s spending time on mechanical updates vs. strategic work?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Write it down. You’re looking for patterns, not one-off tasks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Build One Workflow
&lt;/h3&gt;

&lt;p&gt;Pick the most painful coordination task. The one that happens constantly and requires zero judgment.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Changelog updates after merges&lt;/li&gt;
&lt;li&gt;Standup note distribution&lt;/li&gt;
&lt;li&gt;PR review reminders&lt;/li&gt;
&lt;li&gt;Linear ticket status syncing&lt;/li&gt;
&lt;li&gt;Documentation updates post-deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Build a workflow that handles it end-to-end. Run it for a week. Measure intervention rate—how often does it need human correction?&lt;/p&gt;

&lt;p&gt;Get that intervention rate below 5%. Then move to step 3.&lt;/p&gt;
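&lt;p&gt;If it helps to make “intervention rate” concrete: it’s just human-corrected runs divided by total runs. A hypothetical sketch, where the types and numbers are illustrative rather than a prescribed implementation:&lt;/p&gt;

```typescript
// Hypothetical sketch: tracking how often a workflow run needed a human fix.
type WorkflowRun = { id: string; humanCorrected: boolean };

function interventionRate(runs: WorkflowRun[]): number {
  if (runs.length === 0) return 0;
  const corrected = runs.filter((r) => r.humanCorrected).length;
  return corrected / runs.length;
}

// A week of 40 runs where one needed a human fix: 2.5%, under the 5% bar.
const week: WorkflowRun[] = Array.from({ length: 40 }, (_, i) => ({
  id: `run-${i}`,
  humanCorrected: i === 7,
}));
```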

&lt;h3&gt;
  
  
  Step 3: Compose
&lt;/h3&gt;

&lt;p&gt;Look at the output of your first workflow. What happens next in the coordination chain?&lt;/p&gt;

&lt;p&gt;If your workflow posts deployment notes to Slack, what normally happens after that? Maybe documentation gets updated. Maybe customer support gets briefed. Maybe Linear tickets get closed.&lt;/p&gt;

&lt;p&gt;Build workflows for those next steps. Connect them to your first workflow’s output.&lt;/p&gt;

&lt;p&gt;Now you have composition. One workflow’s completion triggers the next. The coordination chain runs automatically.&lt;/p&gt;
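&lt;p&gt;In code terms, composition is nothing more exotic than feeding one workflow’s output into the next. A hypothetical sketch of the idea, not a real Continue API:&lt;/p&gt;

```typescript
// Hypothetical sketch: a workflow is a function, and composing two workflows
// yields a new workflow whose input flows through both.
type Workflow<In, Out> = (input: In) => Out;

function compose<A, B, C>(
  first: Workflow<A, B>,
  next: Workflow<B, C>
): Workflow<A, C> {
  return (input) => next(first(input));
}

// Illustrative steps from the release example above.
const draftChangelog: Workflow<string[], string> = (prTitles) =>
  prTitles.map((t) => `- ${t}`).join("\n");

const postToSlack: Workflow<string, string> = (entry) =>
  `Posted to #releases:\n${entry}`;

// One workflow's completion triggers the next.
const releaseChain = compose(draftChangelog, postToSlack);
```

&lt;p&gt;Calling &lt;code&gt;releaseChain&lt;/code&gt; with a list of PR titles drafts the changelog entry and hands it straight to the posting step; adding a third step to the chain is just another &lt;code&gt;compose&lt;/code&gt;.&lt;/p&gt;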

&lt;h3&gt;
  
  
  Step 4: Keep Going
&lt;/h3&gt;

&lt;p&gt;Each new workflow amplifies the ones before it. The deployment workflow feeds the documentation workflow. The documentation workflow triggers the changelog workflow. The changelog workflow posts to Slack, which triggers the customer communication workflow.&lt;/p&gt;

&lt;p&gt;You’re not just saving time on individual tasks. You’re building a system where coordination happens automatically at every transition point.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;This isn’t just about efficiency. It’s about equity.&lt;/p&gt;

&lt;p&gt;When essential coordination work is visible and automated, nobody gets stuck being the invisible glue. Everyone can focus on the work they’re actually good at, the work that gets recognized and rewarded.&lt;/p&gt;

&lt;p&gt;Teams that build composed AI workflows aren’t just shipping faster. They’re redistributing the coordination burden from people to systems. They’re making space for everyone to do their best work.&lt;/p&gt;

&lt;p&gt;Look at your team. Who’s the glue person? What would they be working on if they weren’t constantly coordinating?&lt;/p&gt;

&lt;p&gt;That’s the real value of composition.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>career</category>
    </item>
    <item>
      <title>When We Became What We Do: The Identity Crisis That Companies Mistook for Opportunity</title>
      <dc:creator>BekahHW</dc:creator>
      <pubDate>Sun, 02 Nov 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/bekahhw/when-we-became-what-we-do-the-identity-crisis-that-companies-mistook-for-opportunity-hmb</link>
      <guid>https://dev.to/bekahhw/when-we-became-what-we-do-the-identity-crisis-that-companies-mistook-for-opportunity-hmb</guid>
      <description>&lt;p&gt;In March 2020, a &lt;em&gt;weird&lt;/em&gt; thing happened to how we answered the question "Who are you?"&lt;/p&gt;

&lt;p&gt;Before the pandemic, most of us had multifaceted identities. I was the person who went to the gym and lifted weights 6x a week, who took my kids to the park all the time, who volunteered at the school, attended the occasional book club, and who took a break by going grocery shopping on Saturday mornings. For many of us, our sense of self was distributed across multiple domains, including social, familial, recreational, and professional.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Great Identity Consolidation
&lt;/h2&gt;

&lt;p&gt;Lockdown stripped most of those identities away in a matter of weeks or even days. Even family gatherings and Sunday dinners and holiday traditions moved to Zoom calls or stopped entirely.&lt;/p&gt;

&lt;p&gt;Every space where we'd embody our various identities closed. I couldn't be "the person who deadlifted on Tuesdays" when there was no weight room open. You can't be "the parent who knows everyone at school" when school is a URL. You can't be "the person who always hosts game night" when gathering is prohibited.&lt;/p&gt;

&lt;p&gt;Our professional identity didn't just persist. It intensified, because work didn't stop. For a lot of us, it became the one thing we could control. I lost my job, but I could look for another. For many people, work-from-home meant &lt;em&gt;more&lt;/em&gt; hours. Slack channels kept pinging. Standups kept happening. Deadlines didn't pause for a pandemic.&lt;/p&gt;

&lt;p&gt;Suddenly, "developer" or "designer" or "product manager" wasn't just what we &lt;em&gt;did&lt;/em&gt;. It became almost the entirety of who we &lt;em&gt;were&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Not by choice. By elimination. &lt;/p&gt;

&lt;p&gt;Remote work intensified what researchers called "identity blur": the breakdown of boundaries between personal and professional life. Meanwhile, an entire generation of early-career professionals was doing its identity formation almost exclusively through the professional lens.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Tech Communities Exploded
&lt;/h2&gt;

&lt;p&gt;This identity shift created fertile ground for professional communities to thrive in unprecedented ways.&lt;/p&gt;

&lt;p&gt;Developer communities, in particular, saw explosive growth. The community infrastructure was already there: Slack, Discord, forums, virtual meetups. But suddenly, these weren't just places to ask technical questions or network for jobs. They became primary social spaces.&lt;/p&gt;

&lt;p&gt;When I started Virtual Coffee in March of 2020, I was &lt;em&gt;desperately&lt;/em&gt; looking for &lt;em&gt;people&lt;/em&gt; after I lost my first developer job. It wasn't supplementing my social life; it &lt;em&gt;was&lt;/em&gt; my social life. The Tuesday meetups became my weekly social anchor.&lt;/p&gt;

&lt;p&gt;And I wasn't alone. Across the tech ecosystem, professional communities became all-purpose gathering spaces. The hallway track at conferences had always been valuable, but now people were craving the hallway track without the conference. They wanted the coffee shop where everyone was a developer. They wanted to belong somewhere, and professional identity was one of the few identities still accessible.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers That Told a Story
&lt;/h2&gt;

&lt;p&gt;By 2021, the indicators were everywhere:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Community engagement increased&lt;/li&gt;
&lt;li&gt;Community programs saw increased recognition of their value&lt;/li&gt;
&lt;li&gt;Having an online community became more important in 2020&lt;/li&gt;
&lt;li&gt;Time on social and community platforms jumped dramatically&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But here's what the numbers &lt;em&gt;weren't&lt;/em&gt; saying: they weren't showing that people had discovered a permanent need for professional community. They were showing that professional community was filling a vacuum left by every &lt;em&gt;other&lt;/em&gt; kind of community disappearing.&lt;/p&gt;

&lt;p&gt;Professional community wasn't competing with other professional communities. It was competing with—and replacing—gyms, coffee shops, happy hours, hobby groups, neighborhood connections, and casual friendships.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Company Gold Rush
&lt;/h2&gt;

&lt;p&gt;Companies looked at those engagement numbers and saw opportunity.&lt;/p&gt;

&lt;p&gt;If you were a tech company in 2020-2021 watching developers flock to online communities, the conclusion seemed obvious: we need a community. If you were hiring and saw that developer engagement was at all-time highs, you might think: we need a community manager. I became a technical community manager because tech was primed to invest in community roles.&lt;/p&gt;

&lt;p&gt;The role that was previously a nice-to-have became a strategic imperative.&lt;/p&gt;

&lt;p&gt;Companies launched Slacks, Discord servers, forums, and user groups. They hired community managers, developer advocates, and DevRel teams. They built content strategies around community engagement. They measured success by the metrics they were seeing everywhere: member counts, daily active users, message volume.&lt;/p&gt;

&lt;p&gt;The logic seemed sound:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;People are engaging with online communities at unprecedented levels&lt;/li&gt;
&lt;li&gt;Our competitors are building communities&lt;/li&gt;
&lt;li&gt;Community drives product adoption and loyalty&lt;/li&gt;
&lt;li&gt;Therefore, we need to invest in community&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;But this logic missed &lt;em&gt;context&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Substitution Effect Nobody Acknowledged
&lt;/h2&gt;

&lt;p&gt;What companies were observing wasn't a permanent shift in how people want to engage with brands or products. It was a temporary substitution driven by extraordinary circumstances.&lt;/p&gt;

&lt;p&gt;In 2020-2021, professional communities were standing in for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Local social connections (because we couldn't gather)&lt;/li&gt;
&lt;li&gt;Recreational identities (because hobbies were paused)&lt;/li&gt;
&lt;li&gt;Casual professional networking (because conferences were canceled)&lt;/li&gt;
&lt;li&gt;Workspace camaraderie (because offices were closed)&lt;/li&gt;
&lt;li&gt;General human connection (because isolation was mandatory)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The engagement wasn't high because people suddenly valued brand communities more. It was high because professional community was the &lt;em&gt;only&lt;/em&gt; community many people had access to.&lt;/p&gt;

&lt;p&gt;Think about it this way: if you're working from home, can't go to the gym, can't meet friends for coffee, and can't attend local meetups, where do you find people? If your identity as a weight lifter is on hold because gyms are closed, and your identity as a parent is strained because you're homeschooling, what's left? Your professional identity. And where do professionals gather? Online, in communities organized around their work.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Success Really Looked Like
&lt;/h2&gt;

&lt;p&gt;Companies celebrated metrics that looked like success:&lt;/p&gt;

&lt;p&gt;"Our Discord hit 10,000 members!"&lt;br&gt;
"Daily active users up 300%!"&lt;br&gt;
"Community engagement is our highest ever!"&lt;/p&gt;

&lt;p&gt;But a lot of them were ignoring the fact that people weren't necessarily there because your product was so compelling that they wanted to be part of your brand community. They were there because they were lonely, isolated, and craving connection, and your professional community was one of the only options available.&lt;/p&gt;

&lt;p&gt;This isn't to diminish the genuine connections people made or the real value these communities provided. Virtual Coffee meant, and still means, the world to me. The developer communities that emerged during this time saved people's mental health and careers.&lt;/p&gt;

&lt;p&gt;But the &lt;em&gt;reason&lt;/em&gt; for that value was specific to the moment. We were all in crisis, and professional community became a lifeline not because it was uniquely better than before, but because everything else was gone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Belonging Doesn't Scale the Way Companies Hoped
&lt;/h2&gt;

&lt;p&gt;But you can't manufacture the conditions that created pandemic-era engagement.&lt;/p&gt;

&lt;p&gt;You can't make someone lonely enough to spend three hours a day in your Discord.&lt;/p&gt;

&lt;p&gt;You can't make someone isolated enough to treat your Slack like their primary social group.&lt;/p&gt;

&lt;p&gt;You can't make someone identity-starved enough to build their sense of self around your brand community.&lt;/p&gt;

&lt;p&gt;And you most definitely should not want to.&lt;/p&gt;

&lt;p&gt;As the world reopened, people's identities re-diversified. The parent identity came back with in-person school. The athlete identity returned with reopened gyms. The friend identity flourished with in-person coffee dates. The hobbyist identity resumed with craft groups and sports leagues.&lt;/p&gt;

&lt;p&gt;Professional identity didn't disappear, but it was &lt;em&gt;one of many&lt;/em&gt; identities, not the primary or only identity.&lt;/p&gt;

&lt;p&gt;But here's what companies didn't anticipate: people wanted to keep both.&lt;/p&gt;

&lt;p&gt;The connections made during the pandemic weren't casual networking. They were trauma bonds. We didn't just share technical knowledge in those Slacks and Zoom rooms. We &lt;em&gt;survived&lt;/em&gt; isolation together, processed fear together, figured out how to keep going together. We saw each other at our worst: unmuted kids screaming, laundry piles stacked and thrown across the floor, tears over lost jobs. We celebrated new babies through screens. We mourned miscarriages together in the #heavy channel. We watched marriages fall apart in real time as people processed isolation and stress.&lt;/p&gt;

&lt;p&gt;And we lost people.&lt;/p&gt;

&lt;p&gt;And when that happened, we couldn't gather. We couldn't hug each other. We couldn't sit together at a funeral. We watched memorial services on Zoom, crying alone in our homes, typing condolences in chat windows, wishing desperately we could just &lt;em&gt;be there&lt;/em&gt; together in person.&lt;/p&gt;

&lt;p&gt;The grief was compounded by the medium. These were people who'd kept us sane, who we'd talked to nearly every day, who felt like lifelines. And when they were gone, we had to process that loss the same way we'd built the relationship, through screens, at a distance, without the physical comfort that humans need in grief.&lt;/p&gt;

&lt;p&gt;These weren't casual professional connections. These were people who'd seen us completely, who'd held space for our fear and loneliness, who'd shared their own vulnerability in ways that rarely happen in professional contexts. The bonds formed under those conditions (shared trauma, radical vulnerability, mutual survival) don't just dissolve when offices reopen.&lt;/p&gt;

&lt;p&gt;This wasn't like moving to a new city and losing touch with old friends, or kids graduating so you stop seeing other parents at pickup. Those transitions are natural drift. This was different. People wanted to maintain these deep connections while returning to physical community.&lt;/p&gt;

&lt;p&gt;But time is finite. You can't spend three hours a day in the Virtual Coffee co-working room when you're back at an actual office. You can't attend every virtual event when you're back at the gym, at your kid's school, at in-person meetups.&lt;/p&gt;

&lt;p&gt;The decline in engagement wasn't people abandoning communities they didn't care about. It was people being pulled between communities they cared about deeply and being forced to choose.&lt;/p&gt;

&lt;p&gt;Some people stayed in their pandemic communities because those bonds meant everything. Some drifted away despite wanting to stay connected. Many tried to maintain both and felt guilty about not being as present online.&lt;/p&gt;

&lt;p&gt;Professional identity didn't disappear, but it returned to being one of many identities, not the primary or only identity. And with finite time and attention, something had to give.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Misread That Shaped a Strategy
&lt;/h2&gt;

&lt;p&gt;Companies saw the engagement decline and treated it like a problem to solve rather than a natural consequence of changed circumstances.&lt;/p&gt;

&lt;p&gt;The mistake wasn't that companies built communities. Many of those communities genuinely serve people who need them, and many of those pandemic-era bonds are real and lasting.&lt;/p&gt;

&lt;p&gt;The mistake was thinking they could recreate or sustain trauma-bond-level engagement without the trauma.&lt;/p&gt;

&lt;p&gt;They thought the pandemic-era engagement levels were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sustainable (Actually, many required conditions no one should want to recreate)&lt;/li&gt;
&lt;li&gt;Universal (Actually, many were born of collective crisis)&lt;/li&gt;
&lt;li&gt;Reproducible (Actually, many were trauma bonds formed under specific circumstances)&lt;/li&gt;
&lt;li&gt;Transferable (Actually, you can't manufacture that depth of connection on command)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Companies built community strategies around peak crisis participation and crisis-level bonding. They staffed for engagement that made sense when professional community was substituting for all community and people were processing collective trauma together. They created expectations for involvement that assumed people's entire social lives would continue to flow through professional channels with that same intensity.&lt;/p&gt;

&lt;p&gt;When engagement inevitably declined, it wasn't because the communities failed; it was because people's lives and needs changed. And then companies panicked. They hired more community managers. They added more features. They pushed harder for participation.&lt;/p&gt;

&lt;p&gt;But you can't solve for the absence of shared crisis with better Slack integrations.&lt;/p&gt;

&lt;p&gt;And with that shift, professional communities changed in character, not just size.&lt;/p&gt;

&lt;p&gt;Many became transactional. People drop in, ask a question, get an answer, leave. They need the &lt;em&gt;resource&lt;/em&gt;, the collective knowledge, the troubleshooting help, the documentation links. But they don't want or need the &lt;em&gt;connection&lt;/em&gt;. They're not looking for the coffee shop vibe anymore. They're looking for Stack Overflow with a pulse.&lt;/p&gt;

&lt;p&gt;This isn't everyone. Remote workers still exist. Niche specialists still need their people. Global collaborators still benefit from async communication across time zones. There are still people for whom online professional community isn't a substitute for anything, because it's genuinely where their professional community exists.&lt;/p&gt;

&lt;p&gt;But even within those groups, the engagement looks different. It's purposeful, not ambient. It's "I'm here when I need something" rather than "this is where I hang out."&lt;/p&gt;

&lt;p&gt;And then there are the community builders caught in the middle, trying to recreate that pandemic-era connection and intimacy in product-focused communities where most people just want help with their API integration. They're hosting office hours that get three attendees. They're planning social events that get polite "maybe" responses. They're fighting to build connection in spaces where the majority of members are perfectly content with drive-by interactions.&lt;/p&gt;

&lt;p&gt;It's not that these community builders are doing it wrong. &lt;em&gt;They're trying to create depth in spaces where most people want efficiency.&lt;/em&gt; They're optimizing for belonging when most users are optimizing for answers.&lt;/p&gt;

&lt;p&gt;The communities that thrived on trauma bonds and shared vulnerability can't be replicated in a product community where people just need their build to work. And expecting them to is setting community builders up to burn out trying to recreate something that only existed under very specific, unrepeatable conditions.&lt;/p&gt;

&lt;h2&gt;What job is this community actually doing?&lt;/h2&gt;

&lt;p&gt;In 2020-2021, professional communities were doing the job of: general social connection, professional development, casual friendship, identity formation, and mental health support.&lt;/p&gt;

&lt;p&gt;In 2025, they should be doing the job of: professional knowledge sharing, specific technical support, collaboration, and professional networking.&lt;/p&gt;

&lt;p&gt;Those are different jobs requiring different strategies, different staffing, and different success metrics.&lt;/p&gt;

&lt;p&gt;Communities that thrive post-pandemic aren't trying to be everything to everyone. They're specific about who they serve and why. They're intentional about the value they provide. They recognize that lower engagement isn't failure. It's actually right-sizing.&lt;/p&gt;

&lt;h2&gt;The Uncomfortable Truth&lt;/h2&gt;

&lt;p&gt;Not every company needs a community.&lt;/p&gt;

&lt;p&gt;Even if you needed one in 2020, you might not need one now.&lt;/p&gt;

&lt;p&gt;Even if your competitors have one, that doesn't mean you should too.&lt;/p&gt;

&lt;p&gt;Even if engagement was high during the pandemic, that doesn't mean it should be high now.&lt;/p&gt;

&lt;p&gt;The companies that will succeed with community in 2025 and beyond are the ones willing to ask: "Is this actually serving a real need, or are we chasing ghosts of pandemic-era metrics?"&lt;/p&gt;

&lt;p&gt;Because belonging doesn't scale the way we hoped. Identity doesn't consolidate because we want it to. And crisis-driven engagement isn't a sustainable business strategy.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is the second in a series exploring how our understanding of community has changed from 2020 to 2025. Next: How AI and changing expectations are breaking the communities that remain.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>community</category>
    </item>
  </channel>
</rss>
