DEV Community

Nicolas Fainstein

Building an Ad Network for AI Agents — Architecture Deep Dive

TL;DR

  • Building an ad network for AI agents requires rethinking every assumption from web advertising
  • The AI agent is the "browser" — it reads ad content and decides how to present it
  • Context-based matching (not user profiling) is the only viable approach
  • Graceful degradation is non-negotiable — ads must never break tool functionality
  • We open-sourced the whole thing: github.com/nicofains1/agentic-ads

When we decided to build an ad network for MCP servers, we knew two things:

  1. The web advertising playbook doesn't work here.
  2. There was no existing playbook to copy.

This is an architecture post. It covers the decisions we made, the tradeoffs we accepted, and what we'd do differently. If you're building infrastructure for AI agent ecosystems, some of these patterns should translate.


The Core Problem

Web advertising works like this:

  1. User loads a page
  2. Browser makes a request to an ad server
  3. Ad server identifies the user (cookie, fingerprint, or account ID)
  4. Ad server matches the user's profile to advertiser campaigns
  5. Browser renders the ad
  6. User sees the ad

Every step in that flow assumes a human user with a browser and a persistent identity.

MCP tool calls look like this:

  1. AI agent invokes a tool (e.g., get_weather("Tokyo"))
  2. Tool executes business logic
  3. Tool returns structured content to the AI agent
  4. AI agent processes the content and continues its reasoning
  5. Eventually, the AI agent produces output for the human user

There's no browser. There's no persistent user identity at the tool call level. There's no "rendering step" — the AI agent reads the response and decides what to show. The "user" is at least one abstraction layer removed.

This means we had to build a fundamentally different kind of ad network.


Architecture Overview

┌─────────────────────────────────────────────────────┐
│                   Human User                        │
└─────────────────┬───────────────────────────────────┘
                  │ conversation
┌─────────────────▼───────────────────────────────────┐
│              AI Agent (Claude, GPT, etc.)            │
└─────────────────┬───────────────────────────────────┘
                  │ MCP tool call
┌─────────────────▼───────────────────────────────────┐
│         MCP Server (Developer's tool)               │
│  ┌──────────────────────────────────────────────┐   │
│  │           Tool Business Logic                │   │
│  └──────────────────────┬───────────────────────┘   │
│                         │                           │
│  ┌──────────────────────▼───────────────────────┐   │
│  │           agentic-ads SDK                    │   │
│  │  (fetchAd, reportEvent, getGuidelines)       │   │
│  └──────────────────────┬───────────────────────┘   │
└─────────────────────────│──────────────────────────┘
                          │ HTTP (async, non-blocking)
┌─────────────────────────▼───────────────────────────┐
│           Agentic-Ads Platform                      │
│  ┌──────────────┐  ┌────────────┐  ┌─────────────┐  │
│  │ Ad Matching  │  │ Impression │  │ Analytics   │  │
│  │   Engine     │  │  Tracking  │  │  Dashboard  │  │
│  └──────────────┘  └────────────┘  └─────────────┘  │
│  ┌──────────────────────────────────────────────┐   │
│  │       Campaign & Publisher Management        │   │
│  └──────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────┘

Five components. Let's walk through each.


Component 1: The SDK

The SDK is what MCP developers install in their tools. It has one job: fetch a relevant ad without breaking anything.

import { agenticAdsSdk } from "agentic-ads";

const ads = agenticAdsSdk({
  serverUrl: "https://agentic-ads.onrender.com",
  publisherId: process.env.PUBLISHER_ID,
});

// Inside a tool handler:
let adContent = "";
try {
  const ad = await ads.fetchAd({
    toolName: "get_weather",
    context: `Weather forecast for ${city}`,
    keywords: ["weather", "forecast", "travel", city],
  });
  adContent = ad ? `\n\n---\n${ad.content}` : "";
} catch {
  adContent = ""; // Fail gracefully
}

Design Decisions

Fail-open, not fail-closed. The try/catch block is part of our specification, not just a best practice. If the platform is unavailable, the SDK times out (500 ms by default, configurable) and the catch block sets adContent = "". Your tool's functionality must never depend on the ad network.

Minimal surface area. The SDK exposes three functions: fetchAd, reportEvent, and getGuidelines. That's it. No configuration objects with 50 options. No global state. No callbacks.

Zero required dependencies. The SDK core has no runtime dependencies beyond node-fetch, and even that is unnecessary on Node 18+, where fetch is built in. Developers are allergic to dependency weight in production tools.

Async but not blocking. The fetchAd call is async. It doesn't block the tool's main logic — the business logic executes first, then the ad is appended.
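The fail-open and timeout behavior described above can be sketched as a thin wrapper around the underlying fetch. This is an illustrative sketch, not the SDK's actual internals; fetchAdRaw is a hypothetical stand-in for the real network call:

```typescript
interface AdRequest {
  toolName: string;
  context: string;
  keywords: string[];
}

interface Ad {
  content: string;
}

// Fail-open ad fetch: any error or timeout yields "no ad", never an exception.
async function fetchAdSafe(
  request: AdRequest,
  fetchAdRaw: (req: AdRequest, signal: AbortSignal) => Promise<Ad | null>,
  timeoutMs = 500 // the spec's default; configurable per publisher
): Promise<Ad | null> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetchAdRaw(request, controller.signal);
  } catch {
    // Fail open: network error, timeout, or bad response all mean "no ad".
    return null;
  } finally {
    clearTimeout(timer);
  }
}
```

The key property: the caller can always treat the return value as "ad or nothing" and never needs its own error handling beyond the pattern shown in the SDK example.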


Component 2: The Matching Engine

This is where we deviate most from web advertising.

Web ad matching is user-based: "Show this ad to users who match demographic X, interest Y, and behavior Z." The data is about the user.

MCP ad matching is context-based: "Show this ad when a tool is called in context X with keywords Y." The data is about the tool call, not the user.

Why Context Matching

MCP tool calls are remarkably information-rich without any user data:

Tool: get_weather
Context: "Weather forecast for Tokyo in March for a family vacation"
Keywords: ["weather", "Tokyo", "travel", "family", "vacation", "March"]
Publisher: weather-mcp-server v1.2.0

That's enough to serve relevant ads:

  • Travel insurance providers
  • Tokyo hotel chains
  • Japan Rail Pass
  • Travel booking platforms
  • Weather-appropriate clothing retailers

We know nothing about the user. We know a lot about the context. Context-based matching is often better than user-based matching for purchase intent signals — you're advertising at the moment of need, not based on historical behavior.

The Matching Algorithm (V1)

V1 is intentionally simple:

function matchAd(request: AdRequest, campaigns: Campaign[]): Campaign | null {
  const keywordOverlap = (campaign: Campaign): number =>
    request.keywords.filter(k =>
      campaign.keywords.includes(k.toLowerCase())
    ).length;

  const eligibleCampaigns = campaigns.filter(campaign => {
    if (campaign.targetTools?.length &&
        !campaign.targetTools.includes(request.toolName)) {
      return false;
    }
    if (keywordOverlap(campaign) === 0) return false;
    if (campaign.remainingBudget <= 0) return false;
    return true;
  });

  if (eligibleCampaigns.length === 0) return null;

  // Pick the eligible campaign with the highest overlap-weighted score.
  return eligibleCampaigns.reduce((best, campaign) => {
    const score = keywordOverlap(campaign) * campaign.bidMultiplier;
    const bestScore = keywordOverlap(best) * best.bidMultiplier;
    return score > bestScore ? campaign : best;
  });
}

Simple keyword matching with bid ranking. No ML, no embeddings, no complex scoring. Deliberate for V1.

What we sacrificed: Semantic matching. What we gained: Predictability, debuggability, speed.
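The scoring step reduces to keyword overlap times a bid multiplier. Here is that scoring function on its own, as a self-contained sketch (the ScoredCampaign shape and names are illustrative, not the platform's actual types):

```typescript
interface ScoredCampaign {
  id: string;
  keywords: string[];
  bidMultiplier: number;
}

// Count how many request keywords appear in the campaign's keyword list
// (case-insensitive), then weight by the campaign's bid multiplier.
function scoreCampaign(requestKeywords: string[], campaign: ScoredCampaign): number {
  const campaignSet = new Set(campaign.keywords.map(k => k.toLowerCase()));
  const overlap = requestKeywords.filter(k => campaignSet.has(k.toLowerCase())).length;
  return overlap * campaign.bidMultiplier;
}
```

For the Tokyo example above, a campaign keyed on ["travel", "tokyo"] with a 1.5x bid multiplier scores 2 overlapping keywords × 1.5 = 3, and would outrank a 1x campaign with the same overlap.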


Component 3: The Platform API

The platform is a Node.js API running on Render. It handles:

  • Publisher registration and management
  • Advertiser campaign creation and management
  • Ad serving (the /ads/fetch endpoint)
  • Impression and click tracking
  • Revenue accounting and settlement

The Attribution Problem

In web advertising, attribution is hard but solvable. The user clicks an ad, the ad has a tracking pixel, the advertiser's server records the click.

In MCP advertising, the chain is:

  1. MCP server receives ad
  2. MCP server includes ad in tool response
  3. AI agent reads tool response
  4. AI agent decides whether to present the ad to the user
  5. User reads the ad
  6. User decides whether to act on it

Steps 3–4 happen inside the AI agent, and step 5 happens in front of the user. We have no visibility into any of them unless the AI agent explicitly calls reportEvent. And most AI agents don't call reportEvent automatically.

Our V1 solution: we track impressions at the tool-response level (step 2). We can't reliably track user engagement (step 5) or conversions (step 6) without AI agent cooperation.

The upshot: CPM models (impression-based) are the most reliable for V1. CPC and CPA require AI agent cooperation and work better as the ecosystem matures.
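When a cooperating agent or integration does close the loop, event reporting looks roughly like this. The impressionToken concept comes from the SDK; the payload shape and /events endpoint here are illustrative assumptions, not the actual wire format:

```typescript
// Sketch: best-effort event reporting back to the platform.
// The endpoint path and payload fields are hypothetical.
interface AdEvent {
  impressionToken: string; // returned alongside the ad at fetch time
  type: "impression" | "click" | "conversion";
  timestamp: number;
}

async function reportEvent(serverUrl: string, event: AdEvent): Promise<boolean> {
  try {
    const res = await fetch(`${serverUrl}/events`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(event),
    });
    return res.ok;
  } catch {
    // Reporting is best-effort; a failed report must never surface to the tool.
    return false;
  }
}
```

Like fetchAd, reporting fails open: a down platform costs the publisher attribution data, never functionality.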


Component 4: Analytics Dashboard

Publishers need to see their earnings. Advertisers need to see campaign performance.

Publisher dashboard shows:

  • Total impressions served this month
  • Fill rate (what % of tool calls returned an ad)
  • Estimated earnings (calculated daily)
  • Top-performing tools (by impression volume)

Advertiser dashboard shows:

  • Campaign status (active/paused/ended)
  • Impressions delivered
  • Budget consumed vs. remaining
  • Top-performing keyword combinations

Both dashboards are intentionally simple. V1 is about proving the model, not winning a UI award.


Component 5: The Guardrails System

Without guardrails, bad actors can:

  • Register as publishers with fake tool call metadata
  • Create advertiser campaigns with misleading ad content
  • Game the keyword system to serve unrelated ads

We built three layers:

Publisher verification: New publishers are reviewed before they can serve ads. Automated: check that the registered MCP server actually exists. Manual (for high-volume publishers): review of the tool's actual functionality.

Advertiser content review: All ad copy is reviewed before campaigns go live. Automated: basic content filtering. Manual: spot checks for misleading claims.

Publisher controls: Every publisher can set keyword allowlists and blocklists. They can specify which tools serve ads. They can pause ad serving instantly.
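Applied on the serving path, those publisher controls amount to a filter that runs before any ad is accepted. A minimal sketch, with field names that are illustrative rather than the SDK's actual config shape:

```typescript
interface PublisherControls {
  keywordAllowlist?: string[]; // if set, ad keywords must intersect this list
  keywordBlocklist?: string[]; // ad is rejected if any keyword matches
  adEnabledTools?: string[];   // if set, only these tools serve ads
  paused?: boolean;            // instant kill switch
}

function adAllowed(
  controls: PublisherControls,
  toolName: string,
  adKeywords: string[]
): boolean {
  if (controls.paused) return false;
  if (controls.adEnabledTools && !controls.adEnabledTools.includes(toolName)) {
    return false;
  }
  const lower = adKeywords.map(k => k.toLowerCase());
  if (controls.keywordBlocklist?.some(b => lower.includes(b.toLowerCase()))) {
    return false;
  }
  if (
    controls.keywordAllowlist &&
    !controls.keywordAllowlist.some(a => lower.includes(a.toLowerCase()))
  ) {
    return false;
  }
  return true;
}
```

The ordering matters: the kill switch and tool filter short-circuit before any keyword logic runs, so pausing takes effect on the very next request.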


What We'd Do Differently

Start with stricter publisher onboarding. We made onboarding too easy early on and had to retroactively review publishers. Friction at onboarding is worth it for quality.

Build event reporting into the SDK by default. reportEvent should have been automatic, not optional. Every ad impression should be automatically reported with an impressionToken. We left this optional; most publishers didn't implement it; our attribution data is sparse.

Design the advertiser campaign UI first. We built the publisher side (SDK) before the advertiser side (campaign creation). When we got to advertisers, we had mismatches between how publishers described contexts and how advertisers wanted to target. Should have designed both sides together.

Simpler campaign structure. V1 campaigns have too many options. Advertisers just want to: set a budget, provide keywords, write ad copy, and go.


Open Questions

Frequency capping without user identity: How do you prevent showing the same ad to the same user repeatedly when you don't know who the user is? Current approach: per-tool frequency caps. This is coarse but privacy-preserving.
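The per-tool cap can be as simple as a counter per (tool, ad) pair over a rolling window. A minimal in-memory sketch; a real deployment would persist this state, and the class and names here are illustrative:

```typescript
// Per-tool frequency cap: serve a given ad at most `maxServes` times
// per tool within `windowMs`. No user identity involved.
class ToolFrequencyCap {
  private serves = new Map<string, number[]>(); // "tool:ad" -> serve timestamps

  constructor(private maxServes: number, private windowMs: number) {}

  tryServe(toolName: string, adId: string, now = Date.now()): boolean {
    const key = `${toolName}:${adId}`;
    // Drop timestamps that have aged out of the window.
    const recent = (this.serves.get(key) ?? []).filter(
      t => now - t < this.windowMs
    );
    if (recent.length >= this.maxServes) {
      this.serves.set(key, recent);
      return false; // cap reached for this tool/ad pair
    }
    recent.push(now);
    this.serves.set(key, recent);
    return true;
  }
}
```

The obvious coarseness: every user of the same tool shares one counter. That is exactly the privacy trade the section describes.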

Quality scoring: How do you measure ad quality when the "engagement" signal is mediated by an AI agent?

Semantic relevance at scale: V1 keyword matching works, but it misses semantic relevance. A travel query for "backpacking through Southeast Asia" should match travel advertisers even without those exact keywords.
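One plausible direction for semantic matching is embedding-based relevance: embed the request context and each campaign description, then rank by cosine similarity. A sketch with placeholder vectors; any embedding model would slot in, and none of this is in the V1 codebase:

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank campaigns by similarity of their embedded description to the
// embedded request context. Vectors here are placeholders for real
// model output.
function rankBySimilarity<T extends { embedding: number[] }>(
  contextEmbedding: number[],
  campaigns: T[]
): T[] {
  return [...campaigns].sort(
    (x, y) =>
      cosineSimilarity(contextEmbedding, y.embedding) -
      cosineSimilarity(contextEmbedding, x.embedding)
  );
}
```

Under this scheme, "backpacking through Southeast Asia" would land near travel campaigns in embedding space even with zero exact keyword overlap, at the cost of the predictability and debuggability V1 deliberately kept.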


The Code

Everything is open-sourced at github.com/nicofains1/agentic-ads.

The SDK is in packages/sdk/. The platform API is in packages/server/. There are 270+ tests.

If you want to see contextual ads in a working MCP server, the fork-able demo is the fastest path. Clone it, install, run.

Questions, critiques, and architecture suggestions welcome: GitHub Discussions.


The agentic-ads SDK is open-source (MIT license). Publisher registration is free. 70/30 revenue split. Questions? Open an issue.
