<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hoorain</title>
    <description>The latest articles on DEV Community by Hoorain (@hoorain_mahtab17).</description>
    <link>https://dev.to/hoorain_mahtab17</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3362413%2F4180ff20-3e96-417a-b324-d173c1b8c0ee.png</url>
      <title>DEV Community: Hoorain</title>
      <link>https://dev.to/hoorain_mahtab17</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hoorain_mahtab17"/>
    <language>en</language>
    <item>
      <title>Would love hearing views from all developers!</title>
      <dc:creator>Hoorain</dc:creator>
      <pubDate>Tue, 05 May 2026 12:08:37 +0000</pubDate>
      <link>https://dev.to/hoorain_mahtab17/would-love-hearing-views-from-all-developers-1doo</link>
      <guid>https://dev.to/hoorain_mahtab17/would-love-hearing-views-from-all-developers-1doo</guid>
      <description>&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
        &lt;div class="c-embed__cover"&gt;
          &lt;a href="https://dev.to/hoorain_mahtab17/i-built-a-multi-agent-web-monitoring-system-on-a-no-code-platform-heres-the-architecture-4n4d" class="c-link align-middle" rel="noopener noreferrer"&gt;
            &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyb72euay0ltpgx4jyonj.png" height="auto" class="m-0"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="c-embed__body"&gt;
        &lt;h2 class="fs-xl lh-tight"&gt;
          &lt;a href="https://dev.to/hoorain_mahtab17/i-built-a-multi-agent-web-monitoring-system-on-a-no-code-platform-heres-the-architecture-4n4d" rel="noopener noreferrer" class="c-link"&gt;
            I Built a Multi-Agent Web Monitoring System on a No-Code Platform — Here's the Architecture - DEV Community
          &lt;/a&gt;
        &lt;/h2&gt;
          &lt;p class="truncate-at-3"&gt;
            How four specialized AI agents collaborate inside MeDo to filter the noise out of web monitoring....
          &lt;/p&gt;
        &lt;div class="color-secondary fs-s flex items-center"&gt;
            &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8j7kvp660rqzt99zui8e.png"&gt;
          dev.to
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


</description>
    </item>
    <item>
      <title>I Built a Multi-Agent Web Monitoring System on a No-Code Platform — Here's the Architecture</title>
      <dc:creator>Hoorain</dc:creator>
      <pubDate>Tue, 05 May 2026 12:07:54 +0000</pubDate>
      <link>https://dev.to/hoorain_mahtab17/i-built-a-multi-agent-web-monitoring-system-on-a-no-code-platform-heres-the-architecture-4n4d</link>
      <guid>https://dev.to/hoorain_mahtab17/i-built-a-multi-agent-web-monitoring-system-on-a-no-code-platform-heres-the-architecture-4n4d</guid>
      <description>&lt;p&gt;&lt;em&gt;How four specialized AI agents collaborate inside MeDo to filter the &lt;br&gt;
noise out of web monitoring.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;The Problem Nobody Has Solved Well&lt;/h2&gt;

&lt;p&gt;Every knowledge worker, founder, recruiter, and investor I know does &lt;br&gt;
the same exhausting ritual: manually checking websites to spot &lt;br&gt;
changes that matter. Competitor pricing pages. Job boards. GitHub &lt;br&gt;
trending. News sites. Regulatory filings.&lt;/p&gt;

&lt;p&gt;The tools meant to fix this (Google Alerts, Visualping, RSS readers) are either too noisy (alerting on every change) or too dumb &lt;br&gt;
(missing the meaningful ones). They detect &lt;em&gt;change&lt;/em&gt;, but they don't &lt;br&gt;
understand &lt;em&gt;relevance&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I wanted something different: a tool I could describe in plain &lt;br&gt;
English, that would understand what I actually care about, watch the &lt;br&gt;
web for me, and &lt;strong&gt;only&lt;/strong&gt; ping me when something genuinely matters.&lt;/p&gt;

&lt;p&gt;So I built &lt;strong&gt;&lt;a href="https://app-beuddiiusxs1.appmedo.com" rel="noopener noreferrer"&gt;Pulse&lt;/a&gt;&lt;/strong&gt;, and I want to share the architecture, because it solved the &lt;br&gt;
problem in a way I didn't expect a no-code platform to allow.&lt;/p&gt;
&lt;h2&gt;The Core Insight: One Agent Can't Do This Job&lt;/h2&gt;

&lt;p&gt;My first instinct was the obvious one: a single LLM call that takes &lt;br&gt;
in the user's request, fetches the page, decides what's relevant, &lt;br&gt;
and writes the alert. One prompt to rule them all.&lt;/p&gt;

&lt;p&gt;It didn't work. The LLM would hallucinate relevance. It would forget &lt;br&gt;
what the user originally cared about by the time it was scoring &lt;br&gt;
content. It would write narrative summaries instead of structured &lt;br&gt;
results. And it cost more tokens per call than I could afford on a &lt;br&gt;
hackathon credit budget.&lt;/p&gt;

&lt;p&gt;The fix was the same fix that's quietly powering most production AI &lt;br&gt;
systems right now: &lt;strong&gt;separate the work into specialized agents, each &lt;br&gt;
with one job, communicating through structured contracts.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;The Four-Agent Pipeline&lt;/h2&gt;

&lt;p&gt;Pulse runs every check through four agents in sequence:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[USER INPUT] → [INTERPRETER] → [SCOUT] → [ANALYST] → [REPORTER] → [ALERT]&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Each agent has a single responsibility, a hardened system prompt, and &lt;br&gt;
a strict JSON output schema. Here's what each one does and why it &lt;br&gt;
exists.&lt;/p&gt;
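&lt;p&gt;The chain above is easy to picture as ordinary code. This is a minimal Python sketch of the orchestration, not MeDo's runtime; the four agent functions are hypothetical stand-ins passed in as callables:&lt;/p&gt;

```python
# Minimal sketch of the four-agent pipeline. The agent callables are
# hypothetical stand-ins, not MeDo's actual runtime API.

def run_pipeline(user_request, baseline, interpret, scout, analyze, report):
    """Run one monitoring check: interpret, fetch, score, then alert."""
    spec = interpret(user_request)          # LLM: natural language to spec
    content = scout(spec["source_value"])   # deterministic fetch, no LLM
    verdict = analyze(spec, baseline, content)  # LLM: relevance scoring
    if verdict["relevance_score"] >= spec["relevance_threshold"]:
        return report(spec, verdict)        # LLM: write the alert
    return None                             # below threshold: stay silent
```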
&lt;h3&gt;1. The Interpreter Agent — Understanding Intent&lt;/h3&gt;

&lt;p&gt;The Interpreter takes the user's natural language request and turns &lt;br&gt;
it into a structured monitoring spec. When a user types:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Watch Hacker News for AI agent posts that hit 300+ points"&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The Interpreter outputs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"pulse_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"HN AI Agent Buzz"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"source_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"source_value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://news.ycombinator.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"what_matters"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Front-page posts about AI agents with 300+ points"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"what_doesnt_matter"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"General programming, hardware, crypto"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"relevance_threshold"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"check_frequency"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"hourly"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"summary_for_user"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"I'll watch HN for high-scoring AI agent posts."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This spec is the &lt;strong&gt;contract&lt;/strong&gt; every other agent reads. The user &lt;br&gt;
confirms (or refines) it through multi-turn chat — &lt;em&gt;"actually, only &lt;br&gt;
alert me on posts above 500 points"&lt;/em&gt; — and the Interpreter rewrites &lt;br&gt;
the spec accordingly.&lt;/p&gt;

&lt;p&gt;The trick that makes this reliable: a JSON-parse-and-retry loop. If &lt;br&gt;
the LLM returns malformed JSON, the system sends it back with the &lt;br&gt;
exact parse error and asks for valid JSON only. Two retries, then &lt;br&gt;
graceful fallback. This single piece of robustness is the difference &lt;br&gt;
between a demo that crashes live and a system that ships.&lt;/p&gt;
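&lt;p&gt;That loop takes only a few lines. A sketch, assuming a &lt;code&gt;call_llm&lt;/code&gt; helper that maps a prompt to text (a stand-in, not MeDo's real API):&lt;/p&gt;

```python
import json

def parse_json_with_retry(call_llm, prompt, max_retries=2):
    """Ask the LLM for JSON; on a parse error, feed the exact error back
    and ask again. call_llm is a hypothetical prompt-to-text helper."""
    reply = call_llm(prompt)
    for attempt in range(max_retries + 1):
        try:
            return json.loads(reply)
        except json.JSONDecodeError as err:
            if attempt == max_retries:
                return None  # graceful fallback after exhausting retries
            reply = call_llm(
                f"Your last reply was not valid JSON ({err}). "
                "Respond with valid JSON only, no prose."
            )
```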
&lt;h3&gt;2. The Scout Agent — Fetching, No LLM Needed&lt;/h3&gt;

&lt;p&gt;This is the most important architectural decision in Pulse: &lt;strong&gt;the &lt;br&gt;
Scout doesn't use an LLM at all.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It just fetches the URL (or runs a search query) using a web fetch &lt;br&gt;
plugin, strips HTML to text, and truncates to 2,500 characters. Pure &lt;br&gt;
deterministic logic.&lt;/p&gt;
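&lt;p&gt;A deterministic Scout is a handful of standard-library calls. This sketch uses Python's &lt;code&gt;urllib&lt;/code&gt; and &lt;code&gt;html.parser&lt;/code&gt; as stand-ins for MeDo's web fetch plugin; the 2,500-character cap is the one described above:&lt;/p&gt;

```python
# Sketch of a deterministic Scout: fetch, strip tags, truncate. No LLM.
# urllib and html.parser stand in for MeDo's web fetch plugin.
import urllib.request
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collects only the text nodes of an HTML document."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def strip_html(html, max_chars=2500):
    """Strip tags, collapse whitespace, then truncate."""
    extractor = _TextExtractor()
    extractor.feed(html)
    text = " ".join(" ".join(extractor.chunks).split())
    return text[:max_chars]

def scout(url, max_chars=2500):
    """Fetch the URL and return its plain text, capped at max_chars."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return strip_html(resp.read().decode("utf-8", errors="replace"), max_chars)
```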

&lt;p&gt;Why this matters: every LLM call costs tokens, latency, and &lt;br&gt;
reliability. The cheapest, fastest, most reliable agent is the one &lt;br&gt;
that doesn't need an LLM at all. If you can solve the problem with &lt;br&gt;
code, solve it with code. Save the LLM for the parts that actually &lt;br&gt;
require reasoning.&lt;/p&gt;

&lt;p&gt;Most "agent frameworks" miss this. They treat every step as an LLM &lt;br&gt;
call. That's expensive and brittle. The cleaner architecture mixes &lt;br&gt;
LLM and deterministic nodes wherever each one is appropriate.&lt;/p&gt;
&lt;h3&gt;3. The Analyst Agent — Scoring Relevance&lt;/h3&gt;

&lt;p&gt;The Analyst is where the real intelligence lives. It receives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The user's &lt;code&gt;what_matters&lt;/code&gt; and &lt;code&gt;what_doesnt_matter&lt;/code&gt; from the spec&lt;/li&gt;
&lt;li&gt;The previous baseline snapshot&lt;/li&gt;
&lt;li&gt;The new fetched content&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And it outputs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"relevance_score"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"what_changed"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Three new front-page posts about agentic frameworks 
                   with 400+ points"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"matches_intent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"reasoning"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Posts directly match user's interest in AI agents 
                with high engagement"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Analyst is instructed to be &lt;strong&gt;strict&lt;/strong&gt;. False positives erode &lt;br&gt;
trust faster than false negatives. The system prompt explicitly says &lt;br&gt;
"when in doubt, score lower."&lt;/p&gt;

&lt;p&gt;The score is then compared to the user's threshold. &lt;strong&gt;Below &lt;br&gt;
threshold → no alert.&lt;/strong&gt; This is what separates Pulse from dumb &lt;br&gt;
change-detection tools — most changes don't matter, and Pulse knows &lt;br&gt;
not to bother you with them.&lt;/p&gt;
&lt;h3&gt;4. The Reporter Agent — Writing the Alert&lt;/h3&gt;

&lt;p&gt;Only when the threshold is crossed does the Reporter run. It writes &lt;br&gt;
the actual alert content with two output modes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Narrative mode&lt;/strong&gt; — for single events:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"alert_title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Anthropic announces $450M Series C"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"alert_body"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Anthropic raised $450M led by Spark Capital. Funds 
                 will accelerate constitutional AI research. Worth 
                 watching for downstream API pricing implications."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"urgency"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"medium"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"output_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"narrative"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;List mode&lt;/strong&gt; — for multi-item results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"alert_title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"10 trending agentic AI projects this week"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"output_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"list"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"items"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"LangChain"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"metadata"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"⭐ 95K"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AutoGPT"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"metadata"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"⭐ 168K"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The list mode was a late addition that made Pulse dramatically more &lt;br&gt;
useful. Most monitoring tools tell you &lt;em&gt;something changed&lt;/em&gt;. Pulse &lt;br&gt;
hands you the actual structured answer with working links.&lt;/p&gt;
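&lt;p&gt;Downstream, rendering the two modes is a small dispatch on &lt;code&gt;output_type&lt;/code&gt;. A sketch using the field names from the JSON payloads above:&lt;/p&gt;

```python
def render_alert(alert):
    """Render a Reporter payload to plain text, dispatching on output_type.
    Field names follow the narrative and list payloads shown above."""
    lines = [alert["alert_title"]]
    if alert["output_type"] == "list":
        for item in alert["items"]:
            lines.append(f'- {item["title"]} ({item["metadata"]}) {item["url"]}')
    else:  # narrative mode: a single prose body
        lines.append(alert["alert_body"])
    return "\n".join(lines)
```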

&lt;h2&gt;The n8n-Style Pipeline View&lt;/h2&gt;

&lt;p&gt;The other thing that made Pulse feel real instead of magical: I made &lt;br&gt;
the multi-agent pipeline &lt;strong&gt;visible&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;There's a Pipeline View tab on every pulse that renders the four &lt;br&gt;
agents as connected nodes in an n8n-style graph. When you hit Run &lt;br&gt;
Pipeline, you watch:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A glowing dot travels from User Input → Interpreter&lt;/li&gt;
&lt;li&gt;The Interpreter node pulses, then turns green with execution time&lt;/li&gt;
&lt;li&gt;The dot travels to Scout, which fetches&lt;/li&gt;
&lt;li&gt;Then to Analyst, which scores&lt;/li&gt;
&lt;li&gt;If threshold crossed → dot continues to Reporter → Alert pops out&lt;/li&gt;
&lt;li&gt;If not → the line to Reporter dims with "Threshold not met"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Click any node, and a side panel slides out showing the actual JSON &lt;br&gt;
input, JSON output, and system prompt for that agent.&lt;/p&gt;

&lt;p&gt;This wasn't just for show. It made debugging dramatically easier &lt;br&gt;
during development (I could see exactly where a 400 error was &lt;br&gt;
hitting), and it turns the abstract concept of "multi-agent &lt;br&gt;
collaboration" into something you can literally watch happen.&lt;/p&gt;

&lt;h2&gt;What I Learned Building on MeDo&lt;/h2&gt;

&lt;p&gt;A few honest observations from doing this on a no-code platform &lt;br&gt;
instead of writing it in code:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The good:&lt;/strong&gt; MeDo's multi-turn chat refinement is genuinely the &lt;br&gt;
best way to iterate on system prompts. I'd describe what I wanted, &lt;br&gt;
test, then tell MeDo "the Analyst is being too lenient — make it &lt;br&gt;
stricter" and the system prompt would update intelligently. That &lt;br&gt;
loop is hard to replicate in code editors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The hard:&lt;/strong&gt; Structured output reliability is everything. Without &lt;br&gt;
the JSON-parse-and-retry loop, the system breaks weekly. Build that &lt;br&gt;
defensive layer first, before any other feature.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The surprising:&lt;/strong&gt; Building this on a no-code platform forced me to &lt;br&gt;
think more carefully about agent boundaries than I would have in &lt;br&gt;
code. When you can't easily refactor, you have to design the &lt;br&gt;
contracts right the first time. That constraint produced cleaner &lt;br&gt;
architecture, not worse.&lt;/p&gt;

&lt;h2&gt;The Result&lt;/h2&gt;

&lt;p&gt;Pulse is live as my &lt;a href="https://app-beuddiiusxs1.appmedo.com" rel="noopener noreferrer"&gt;Build with MeDo&lt;/a&gt; &lt;br&gt;
hackathon submission. You can describe a monitoring agent in plain &lt;br&gt;
English, watch four AI agents collaborate to fulfill it, and get &lt;br&gt;
filtered alerts that actually matter.&lt;/p&gt;

&lt;p&gt;The multi-agent pipeline pattern isn't unique to my project — it's &lt;br&gt;
the architecture quietly powering most serious LLM systems right &lt;br&gt;
now. What's worth sharing is that you can ship this pattern on a &lt;br&gt;
no-code platform, and that the visible pipeline view turns the &lt;br&gt;
abstract idea of "agent orchestration" into something users can see &lt;br&gt;
and trust.&lt;/p&gt;

&lt;p&gt;Try it, fork the architecture, build your own version. The &lt;br&gt;
agent-factory pattern is going to define the next year of AI tooling, and &lt;br&gt;
the more developers who internalize it, the better.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built with &lt;a href="https://medo.dev" rel="noopener noreferrer"&gt;MeDo&lt;/a&gt;. #BuiltWithMeDo&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you enjoyed this breakdown, drop a ❤️ — it helps surface this &lt;br&gt;
to other AI engineers thinking about agent architectures.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>nocode</category>
      <category>builtwithmedo</category>
    </item>
    <item>
      <title>Agentic AI vs. Generative AI in 2025</title>
      <dc:creator>Hoorain</dc:creator>
      <pubDate>Thu, 17 Jul 2025 06:28:43 +0000</pubDate>
      <link>https://dev.to/hoorain_mahtab17/agentic-ai-vs-generative-ai-in-2025-5fo7</link>
      <guid>https://dev.to/hoorain_mahtab17/agentic-ai-vs-generative-ai-in-2025-5fo7</guid>
      <description>&lt;p&gt;Most of us are familiar with &lt;strong&gt;Generative AI&lt;/strong&gt;: it powers tools like ChatGPT, DALL·E, and Copilot. These models are great at &lt;strong&gt;producing content&lt;/strong&gt;: generating code, writing blog posts, designing images, etc. But they’re mostly &lt;strong&gt;reactive&lt;/strong&gt;: they respond when prompted.&lt;/p&gt;

&lt;p&gt;Enter &lt;strong&gt;Agentic AI&lt;/strong&gt;, a new wave of systems that don't just &lt;em&gt;generate&lt;/em&gt;, they &lt;em&gt;act&lt;/em&gt;. Agentic AI can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Set goals&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Plan steps&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Make decisions&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Adapt to changes&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Execute tasks independently&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Generative AI gives you a response.&lt;br&gt;
&lt;strong&gt;Agentic AI gets the job done.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Imagine an AI that not only writes code but also tests it, deploys it, and monitors the logs, all without being told each step.&lt;/p&gt;

&lt;p&gt;As AI evolves, &lt;strong&gt;Agentic systems&lt;/strong&gt; are emerging as key players in automating workflows, optimizing decisions, and building truly autonomous tools.&lt;/p&gt;

&lt;p&gt;Check my blog: &lt;a href="https://softsolplus.com/agentic-vs-generative-ai/" rel="noopener noreferrer"&gt;https://softsolplus.com/agentic-vs-generative-ai/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Want to explore real-world examples or tools using agentic frameworks? Let me know and I’ll drop a follow-up! 👇&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>agentaichallenge</category>
      <category>genai</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
