Hoorain
I Built a Multi-Agent Web Monitoring System on a No-Code Platform — Here's the Architecture

*How four specialized AI agents collaborate inside MeDo to filter the noise out of web monitoring.*

The Problem Nobody Has Solved Well

Every knowledge worker, founder, recruiter, and investor I know does
the same exhausting ritual: manually checking websites to spot
changes that matter. Competitor pricing pages. Job boards. GitHub
trending. News sites. Regulatory filings.

The tools meant to fix this (Google Alerts, Visualping, RSS readers) are either too noisy (alerting on every change) or too dumb (missing the meaningful ones). They detect change, but they don't understand relevance.

I wanted something different: a tool I could describe in plain
English, that would understand what I actually care about, watch the
web for me, and only ping me when something genuinely matters.

So I built Pulse, and I want to share the architecture, because it solved the problem in a way I didn't expect a no-code platform to allow.

The Core Insight: One Agent Can't Do This Job

My first instinct was the obvious one: a single LLM call that takes
in the user's request, fetches the page, decides what's relevant,
and writes the alert. One prompt to rule them all.

It didn't work. The LLM would hallucinate relevance. It would forget
what the user originally cared about by the time it was scoring
content. It would write narrative summaries instead of structured
results. And it cost more tokens per call than I could afford on a
hackathon credit budget.

The fix was the same fix that's quietly powering most production AI
systems right now: separate the work into specialized agents, each
with one job, communicating through structured contracts.

The Four-Agent Pipeline

Pulse runs every check through four agents in sequence:

[USER INPUT] → [INTERPRETER] → [SCOUT] → [ANALYST] → [REPORTER] → [ALERT]

Each agent has a single responsibility, a hardened system prompt, and
a strict JSON output schema. Here's what each one does and why it
exists.

1. The Interpreter Agent — Understanding Intent

The Interpreter takes the user's natural language request and turns
it into a structured monitoring spec. When a user types:

"Watch Hacker News for AI agent posts that hit 300+ points"

The Interpreter outputs:

```json
{
  "pulse_name": "HN AI Agent Buzz",
  "source_type": "url",
  "source_value": "https://news.ycombinator.com",
  "what_matters": "Front-page posts about AI agents with 300+ points",
  "what_doesnt_matter": "General programming, hardware, crypto",
  "relevance_threshold": 7,
  "check_frequency": "hourly",
  "summary_for_user": "I'll watch HN for high-scoring AI agent posts."
}
```

This spec is the contract every other agent reads. The user
confirms (or refines) it through multi-turn chat — "actually, only
alert me on posts above 500 points"
— and the Interpreter rewrites
the spec accordingly.

The trick that makes this reliable: a JSON-parse-and-retry loop. If
the LLM returns malformed JSON, the system sends it back with the
exact parse error and asks for valid JSON only. Two retries, then
graceful fallback. This single piece of robustness is the difference
between a demo that crashes live and a system that ships.

2. The Scout Agent — Fetching, No LLM Needed

This is the most important architectural decision in Pulse: the
Scout doesn't use an LLM at all.

It just fetches the URL (or runs a search query) using a web fetch
plugin, strips HTML to text, and truncates to 2,500 characters. Pure
deterministic logic.
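A sketch of that deterministic step using only the standard library — Pulse itself uses a web-fetch plugin, so the helper names and the regex-based stripping below are my own illustration:

```python
import re
from urllib.request import Request, urlopen

MAX_CHARS = 2_500  # truncation limit, matching the Scout's behavior

def strip_html(html: str) -> str:
    """Crude but deterministic HTML-to-text: drop scripts/styles, then tags."""
    text = re.sub(r"(?is)<(script|style).*?</\1>", " ", html)
    text = re.sub(r"(?s)<[^>]+>", " ", text)
    text = re.sub(r"\s+", " ", text).strip()
    return text[:MAX_CHARS]

def scout(url: str) -> str:
    """Fetch a URL and return stripped, truncated text. No LLM involved."""
    req = Request(url, headers={"User-Agent": "pulse-scout/0.1"})
    with urlopen(req, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return strip_html(html)
```

A real implementation would likely swap the regex stripping for a proper HTML parser, but the point stands: this node is plain code with predictable cost and latency.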

Why this matters: every LLM call costs tokens, latency, and
reliability. The cheapest, fastest, most reliable agent is the one
that doesn't need an LLM at all. If you can solve the problem with
code, solve it with code. Save the LLM for the parts that actually
require reasoning.

Most "agent frameworks" miss this. They treat every step as an LLM
call. That's expensive and brittle. The cleaner architecture mixes
LLM and deterministic nodes wherever each one is appropriate.

3. The Analyst Agent — Scoring Relevance

The Analyst is where the real intelligence lives. It receives:

  • The user's what_matters and what_doesnt_matter from the spec
  • The previous baseline snapshot
  • The new fetched content

And it outputs:

```json
{
  "relevance_score": 8,
  "what_changed": "Three new front-page posts about agentic frameworks with 400+ points",
  "matches_intent": true,
  "reasoning": "Posts directly match user's interest in AI agents with high engagement"
}
```

The Analyst is instructed to be strict. False positives erode
trust faster than false negatives. The system prompt explicitly says
"when in doubt, score lower."

The score is then compared to the user's threshold. Below threshold → no alert. This is what separates Pulse from dumb change-detection tools — most changes don't matter, and Pulse knows not to bother you with them.
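The gate itself is deterministic code, not another LLM call — a sketch using the field names from the JSON above:

```python
def should_alert(analysis: dict, spec: dict) -> bool:
    """Run the Reporter only when the Analyst's score clears the
    user's threshold AND the content matches the stated intent."""
    return (analysis["matches_intent"]
            and analysis["relevance_score"] >= spec["relevance_threshold"])

# A score of 8 against a threshold of 7 triggers the Reporter...
fires = should_alert({"relevance_score": 8, "matches_intent": True},
                     {"relevance_threshold": 7})
# ...while a 6 stays silent, even if it matches intent.
silent = should_alert({"relevance_score": 6, "matches_intent": True},
                      {"relevance_threshold": 7})
```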

4. The Reporter Agent — Writing The Alert

Only when the threshold is crossed does the Reporter run. It writes
the actual alert content with two output modes:

Narrative mode — for single events:

```json
{
  "alert_title": "Anthropic announces $450M Series C",
  "alert_body": "Anthropic raised $450M led by Spark Capital. Funds will accelerate constitutional AI research. Worth watching for downstream API pricing implications.",
  "urgency": "medium",
  "output_type": "narrative"
}
```

List mode — for multi-item results:

```json
{
  "alert_title": "10 trending agentic AI projects this week",
  "output_type": "list",
  "items": [
    { "title": "LangChain", "url": "...", "metadata": "⭐ 95K" },
    { "title": "AutoGPT",  "url": "...", "metadata": "⭐ 168K" },
    ...
  ]
}
```

The list mode was a late addition that made Pulse dramatically more
useful. Most monitoring tools tell you something changed. Pulse
hands you the actual structured answer with working links.
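Having two schemas also keeps consumers simple: anything downstream can dispatch on `output_type`. A sketch of how a consumer might render both modes as plain text — the rendering function and the sample URL are my own illustration, not part of Pulse:

```python
def render_alert(alert: dict) -> str:
    """Render a Reporter payload as plain text, by output mode."""
    lines = [alert["alert_title"]]
    if alert["output_type"] == "narrative":
        lines.append(alert["alert_body"])
    elif alert["output_type"] == "list":
        # One line per item, with its metadata and link preserved.
        lines += [f"- {item['title']} ({item['metadata']}) {item['url']}"
                  for item in alert["items"]]
    return "\n".join(lines)

text = render_alert({
    "alert_title": "10 trending agentic AI projects this week",
    "output_type": "list",
    "items": [{"title": "LangChain",
               "url": "https://example.com",
               "metadata": "⭐ 95K"}],
})
```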

The N8n-Style Pipeline View

The other thing that made Pulse feel real instead of magical: I made
the multi-agent pipeline visible.

There's a Pipeline View tab on every pulse that renders the four
agents as connected nodes in an n8n-style graph. When you hit Run
Pipeline, you watch:

  • A glowing dot travels from User Input → Interpreter
  • The Interpreter node pulses, then turns green with execution time
  • The dot travels to Scout, which fetches
  • Then to Analyst, which scores
  • If threshold crossed → dot continues to Reporter → Alert pops out
  • If not → the line to Reporter dims with "Threshold not met"

Click any node, and a side panel slides out showing the actual JSON
input, JSON output, and system prompt for that agent.

This wasn't just for show. It made debugging dramatically easier
during development (I could see exactly where a 400 error was
hitting), and it turns the abstract concept of "multi-agent
collaboration" into something you can literally watch happen.

What I Learned Building On MeDo

A few honest observations from doing this on a no-code platform
instead of writing it in code:

The good: MeDo's multi-turn chat refinement is genuinely the
best way to iterate on system prompts. I'd describe what I wanted,
test, then tell MeDo "the Analyst is being too lenient — make it
stricter" and the system prompt would update intelligently. That
loop is hard to replicate in code editors.

The hard: Structured output reliability is everything. Without
the JSON-parse-and-retry loop, the system breaks weekly. Build that
defensive layer first, before any other feature.

The surprising: Building this on a no-code platform forced me to
think more carefully about agent boundaries than I would have in
code. When you can't easily refactor, you have to design the
contracts right the first time. That constraint produced cleaner
architecture, not worse.

The Result

Pulse is live as my Build with MeDo
hackathon submission. You can describe a monitoring agent in plain
English, watch four AI agents collaborate to fulfill it, and get
filtered alerts that actually matter.

The multi-agent pipeline pattern isn't unique to my project — it's
the architecture quietly powering most serious LLM systems right
now. What's worth sharing is that you can ship this pattern on a
no-code platform, and that the visible pipeline view turns the
abstract idea of "agent orchestration" into something users can see
and trust.

Try it, fork the architecture, build your own version. The agent-
factory pattern is going to define the next year of AI tooling, and
the more developers who internalize it, the better.


Built with MeDo. #BuiltWithMeDo

If you enjoyed this breakdown, drop a ❤️ — it helps surface this
to other AI engineers thinking about agent architectures.
