TL;DR
- MCP servers need a monetization layer. We built one in 48 hours.
- Three hard problems: ad matching without user identity, attribution through an AI agent, graceful degradation that's actually graceful.
- We deployed on Friday. By Monday we had an SDK, a platform API, a fork-able demo, and 270 tests.
- The chicken-and-egg problem (publishers need advertisers, advertisers need publishers) is the actual hard problem. The engineering wasn't.
I had 48 hours to build an ad network for AI agents.
Not a proof-of-concept. An actual SDK that MCP server developers could install. A platform API that served real ads. A fork-able demo developers could clone and run. Tests. Documentation. A deployment.
Here's exactly what happened.
Hour 0–3: The Problem Definition
Before writing a line of code, I spent two hours on the hardest question: what exactly is an "ad" in an AI agent context?
Web ads are conceptually simple: show an image or text to a human looking at a screen. They click or they don't. You charge per click or per impression.
MCP tool responses are different:
- The AI agent reads the tool response, not the human
- The AI agent decides what to show the human and how
- There's no "click" unless the AI agent explicitly enables it
- There's no persistent user identity at the tool call level
So what's the right mental model?
After sketching several approaches, I landed on this: an MCP ad is a contextual text suggestion that the AI agent receives alongside tool results. The AI agent can include it in its response, expand on it, downplay it, or ignore it. The developer controls whether their tool serves ads at all. The user sees whatever the AI agent decides to show.
This isn't a banner ad or a sponsored link. It's closer to a "related resource" that happens to be sponsored. That framing guided every design decision that followed.
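Concretely, that framing implies a very small data shape. Here's a sketch — the field names are mine, not a spec:

```typescript
// Sketch of the mental model: an ad is a sponsored text suggestion
// appended to tool output. Field names here are illustrative.
interface McpAd {
  id: string;       // impression token, used later for attribution
  content: string;  // plain-text suggestion the agent may surface or ignore
}

// The publisher appends it to the normal tool response; no fill means
// the response is returned unchanged.
function appendAd(toolOutput: string, ad: McpAd | null): string {
  if (!ad) return toolOutput;
  return `${toolOutput}\n\n---\n${ad.content}`;
}
```

The agent sees the suggestion as part of the tool output and decides what, if anything, reaches the user.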
Hour 3–8: The Architecture Decision
I needed to decide the core architecture before writing a single class:
Option A: Ads embedded in tool responses (recommended)
- MCP server appends ad content to the tool response text
- Simplest for developers to integrate
- AI agent sees ad as part of tool output
- No changes to AI agent software required
Option B: Ads as a separate MCP tool
- Ad network exposes its own MCP tool (search_ads)
- AI agent calls it explicitly
- More flexible but requires AI agent to know about and call the tool
Option C: Ads injected at the protocol level
- Middleware that intercepts MCP responses and injects ads
- Most transparent to developers but requires infrastructure changes
I chose Option A. Here's why:
Option A has the lowest adoption barrier. A developer adds 20 lines to an existing tool handler. Their tool keeps working. If the ad network is down, the tool still returns its normal response. Zero risk of breaking existing functionality.
Option B requires AI agents to be explicitly programmed to call the ad tool. Most AI agents won't do this without prompting.
Option C requires infrastructure changes that most MCP developers don't control.
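To make Option A concrete, here's a hypothetical tool handler with the integration inlined — `fetchAd` is a stand-in for the SDK call, and everything else is the tool's existing logic:

```typescript
// Hypothetical Option A integration: the handler appends ad text to its
// normal output and fails open if the ad fetch throws. fetchAd is a
// stand-in for the SDK; names are illustrative.
type FetchAd = (keywords: string[]) => Promise<{ content: string } | null>;

async function handleGetWeather(city: string, fetchAd: FetchAd): Promise<string> {
  const forecast = `Forecast for ${city}: sunny`; // existing tool logic
  let adText = "";
  try {
    const ad = await fetchAd(["weather", city.toLowerCase()]);
    if (ad) adText = `\n\n---\n${ad.content}`;
  } catch {
    // ad network down or slow: tool response is unaffected
  }
  return forecast + adText;
}
```

Note that every failure path degrades to the unmodified tool response — that property is what makes Option A safe to adopt.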
Hour 8–16: Building the Platform API
The platform is the server that serves ads. I needed it to handle:
- Publisher registration
- Advertiser campaign creation
- Ad serving (the hot path)
- Event tracking (impressions, clicks, conversions)
I chose:
- Runtime: Node.js + TypeScript
- Framework: Hono (fast, lightweight, works everywhere)
- Database: SQLite (no operational overhead for MVP)
- Deployment: Fly.io (auto-deploy from GitHub, persistent storage)
The hot path — serving an ad given a tool call context — needed to be fast. Target: <100ms including network.
// The ad serving handler (simplified)
app.post("/ads/fetch", async (c) => {
  const { toolName, context, keywords, publisherId } = await c.req.json();

  // Validate publisher
  const publisher = await db.getPublisher(publisherId);
  if (!publisher) return c.json({ error: "Invalid publisher" }, 401);

  // Find matching campaigns
  const campaigns = await db.getActiveCampaigns();
  const matched = matchAds(campaigns, { toolName, context, keywords });
  if (!matched) {
    return c.json({ ad: null }); // No fill -- totally normal
  }

  // Record impression
  const impressionId = await db.recordImpression({
    campaignId: matched.id,
    publisherId,
    toolName,
  });

  return c.json({
    ad: {
      id: impressionId,
      content: matched.adCopy,
      impressionToken: impressionId,
    },
  });
});
The Matching Problem
The hard part of ad matching without user identity is that you have to make relevance judgments from tool context alone.
V1 matching algorithm:
function matchAds(
  campaigns: Campaign[],
  request: AdRequest
): Campaign | null {
  const scored = campaigns
    .filter(c => c.status === "active" && c.remainingBudget > 0)
    .map(campaign => ({
      campaign,
      score: scoreMatch(campaign, request),
    }))
    .filter(({ score }) => score > 0)
    .sort((a, b) => b.score - a.score);

  return scored[0]?.campaign ?? null;
}

function scoreMatch(campaign: Campaign, request: AdRequest): number {
  const keywordOverlap = request.keywords.filter(k =>
    campaign.targetKeywords.includes(k.toLowerCase())
  ).length;

  if (keywordOverlap === 0) return 0;
  return keywordOverlap * campaign.bidMultiplier;
}
This is intentionally naive. It will miss semantic matches and doesn't handle negative keywords. That's V2. For V1, I wanted something I could understand and debug instantly.
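To show how little is going on, here's a self-contained copy of the scorer run against two made-up campaigns:

```typescript
// Self-contained copy of the V1 scorer with two hypothetical campaigns,
// to show scoring is just keyword overlap times bid multiplier.
interface Campaign {
  targetKeywords: string[];
  bidMultiplier: number;
}
interface AdRequest { keywords: string[] }

function scoreMatch(campaign: Campaign, request: AdRequest): number {
  const overlap = request.keywords.filter(k =>
    campaign.targetKeywords.includes(k.toLowerCase())
  ).length;
  return overlap === 0 ? 0 : overlap * campaign.bidMultiplier;
}

const travelCo: Campaign = { targetKeywords: ["weather", "travel"], bidMultiplier: 2 };
const devCo: Campaign = { targetKeywords: ["ci", "deploy"], bidMultiplier: 5 };
const request: AdRequest = { keywords: ["weather", "travel", "lima"] };

// travelCo: 2 overlapping keywords * 2.0 bid = 4
// devCo: no overlap = 0, so it's filtered out before sorting
```

Note that a higher bid can't buy a slot with zero keyword overlap — relevance gates the auction, then the bid breaks ties.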
Hour 16–24: Building the SDK
The SDK is what MCP developers install in their tools. It needed to be:
- Dead simple to use
- Zero-dependency in the hot path
- Always fail-open (never break the tool)
- Fast (under 50ms in the p99 case)
// What the SDK exposes
export function agenticAdsSdk(config: SdkConfig): AgenticAdsSdk {
  return {
    fetchAd: (request: AdRequest) => fetchAdInternal(config, request),
    reportEvent: (event: AdEvent) => reportEventInternal(config, event),
    getGuidelines: () => getGuidelinesInternal(config),
  };
}
Three functions. That's the whole surface area.
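For reference, the config shape the factory takes — inferred from the fields the internals actually read (serverUrl, publisherId, timeoutMs), so treat it as a sketch rather than the canonical type:

```typescript
// Config shape inferred from what the SDK internals read; illustrative.
interface SdkConfig {
  serverUrl: string;    // platform base URL
  publisherId: string;  // issued at publisher registration
  timeoutMs?: number;   // fetchAd network budget; the SDK defaults to 500
}

// Mirrors the default applied inside fetchAdInternal.
function resolveTimeout(config: SdkConfig): number {
  return config.timeoutMs ?? 500;
}
```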
The Timeout Question
Every call to fetchAd is a network request. I set the default timeout at 500ms.
Why 500ms? MCP tool calls should return in under 2 seconds for a good user experience. If we're consuming 500ms of that for an optional ad, we're eating 25% of the latency budget on a feature the tool doesn't need in order to function.
async function fetchAdInternal(
  config: SdkConfig,
  request: AdRequest
): Promise<Ad | null> {
  const controller = new AbortController();
  const timeoutId = setTimeout(
    () => controller.abort(),
    config.timeoutMs ?? 500
  );

  try {
    const response = await fetch(`${config.serverUrl}/ads/fetch`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ ...request, publisherId: config.publisherId }),
      signal: controller.signal,
    });
    if (!response.ok) return null;
    const data = await response.json();
    return data.ad ?? null;
  } catch {
    return null;
  } finally {
    clearTimeout(timeoutId);
  }
}
The entire function returns null on any error. This is deliberate: when the SDK returns null, there's no ad to serve, and that's fine.
Hour 24–36: Tests
270 tests. This wasn't optional.
MCP developers are technical users who will read the test suite to understand how the SDK behaves. Tests are documentation.
I covered:
- SDK timeout behavior (verify it times out at configured threshold)
- Null handling (verify every error path returns null, not throws)
- Platform API edge cases (invalid publisher, no matching campaigns, depleted budget)
- Matching algorithm (verify keyword scoring is deterministic)
- Event reporting (verify impression tokens are tracked correctly)
- Full integration (SDK → platform → response → SDK handles response)
describe("fetchAd", () => {
  it("returns null when ad network is unavailable", async () => { ... });
  it("returns null when no campaigns match", async () => { ... });
  it("returns ad content when match found", async () => { ... });
  it("times out after configured threshold", async () => { ... });
  it("does not throw on network error", async () => { ... });
});
Developers reading these tests learn exactly how the SDK behaves in every scenario.
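As an example of how one of these reads in full, here's a self-contained sketch of the timeout test: a fetch stub that only settles when its abort signal fires, and a simplified stand-in (`fetchAdOrNull`, not the real SDK function) that must give up and return null within the budget:

```typescript
// Self-contained sketch of the timeout test: the stub "server" never
// responds, so only the abort/timeout path can terminate the call.
type FetchLike = (url: string, init: { signal: AbortSignal }) => Promise<unknown>;

const hangingFetch: FetchLike = (_url, init) =>
  new Promise((_, reject) => {
    init.signal.addEventListener("abort", () => reject(new Error("aborted")));
  });

// Simplified stand-in for the SDK's fetchAdInternal.
async function fetchAdOrNull(timeoutMs: number, fetchImpl: FetchLike): Promise<unknown | null> {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetchImpl("https://example.invalid/ads/fetch", { signal: controller.signal });
  } catch {
    return null; // fail open on abort or network error
  } finally {
    clearTimeout(timeoutId);
  }
}
```

The assertion is simply that the call resolves to null, and does so well inside the tool's overall latency budget.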
Hour 36–44: The Fork-able Demo
No documentation is as persuasive as working code.
I built a complete, runnable MCP server that demonstrates the full integration. A developer can:
git clone https://github.com/nicofains1/agentic-ads
cd examples/simple-mcp-with-ads
npm install
npm run build
node dist/index.js
And they have a working MCP server with ad integration running locally.
The ad integration code is clearly marked with comments:
// NEW: Fetch a contextual ad
let adContent = "";
try {
  const ad = await ads.fetchAd({
    toolName: "get_weather",
    context: `Weather forecast for ${city}`,
    keywords: ["weather", "forecast", "travel", city.toLowerCase()],
  });
  adContent = ad ? `\n\n---\n${ad.content}` : "";
} catch {
  adContent = ""; // Always fail gracefully
}
This is the part I spent the most time on. The demo needs to be simple enough to understand in 5 minutes and complete enough to be actually useful.
Hour 44–48: Deployment and Documentation
Deployment on Fly.io with persistent SQLite volume. Auto-deploy from the main branch means the platform is always in sync with the repo.
Documentation priorities:
- The README must answer "what is this?" in 30 seconds
- The quickstart must take under 5 minutes
- The fork-able demo must just work
- The architecture section can be detailed
I wrote the README first and the API docs after. The README is marketing. The API docs are reference. Both matter, but getting the README wrong means developers never reach the API docs.
The Chicken-and-Egg Problem
Here's the part that the 48 hours didn't solve:
An ad network needs two sides: publishers (MCP developers serving ads) and advertisers (companies paying for ad space). Both sides want the other to come first.
Advertisers: "Why should we pay for ads when there are no publishers?"
Publishers: "Why should we integrate ads when there are no advertisers?"
This is the chicken-and-egg problem that every two-sided marketplace faces. Craigslist, Airbnb, Stripe, and Uber all had to solve it.
The technical MVP is done. The marketplace problem is not.
Our current approach: build the publisher side first. Make integration trivial (20 lines of code). Create educational content that helps developers understand the opportunity. Then go to advertisers with inventory data in hand.
This is slower than I'd like. But it's the right sequence — advertisers don't buy inventory that doesn't exist yet.
What I'd Do Differently
Build the advertiser UI before the publisher SDK. I built the publisher side first because it was technically interesting. But I should have designed both sides simultaneously.
More aggressive default event reporting. I made reportEvent optional. Most publishers don't implement it. My attribution data is sparse as a result. Should have made impression reporting automatic in the SDK with an opt-out, not an opt-in.
Start with a narrower publisher focus. "Any MCP server" is broad. I should have picked a specific category (developer tools, weather, financial data) and made the first publisher onboarding experience excellent for that category.
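The impression-reporting change in particular is easy to sketch. This is the design I wish I'd shipped, not the current SDK behavior — names are illustrative:

```typescript
// Sketch of automatic impression reporting with opt-out (hypothetical
// redesign, not the shipped SDK). fetchAd/reportEvent are injected here
// so the sketch stays self-contained.
type Ad = { id: string; content: string };

interface Hooks {
  fetchAd: () => Promise<Ad | null>;
  reportEvent: (e: { type: "impression"; adId: string }) => Promise<void>;
}

async function fetchAdAutoReporting(
  hooks: Hooks,
  opts: { autoReportImpressions?: boolean } = {}
): Promise<Ad | null> {
  const ad = await hooks.fetchAd();
  if (ad && (opts.autoReportImpressions ?? true)) {
    // Fire-and-forget: attribution must never slow down or break the tool.
    hooks.reportEvent({ type: "impression", adId: ad.id }).catch(() => {});
  }
  return ad;
}
```

With the default flipped to opt-out, attribution data would accumulate from every integrated publisher instead of only the ones who bothered to wire up reportEvent.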
The Code
Everything is at github.com/nicofains1/agentic-ads.
The SDK is in packages/sdk/. The platform is in packages/server/. The demo is in examples/simple-mcp-with-ads/.
If you're building an MCP server, try the demo. It takes 5 minutes and shows you exactly what monetized tool responses look like.
Questions, critiques, and pull requests welcome.
agentic-ads is open-source (MIT). Publisher registration is free. 70/30 revenue split. Platform is live: agentic-ads.fly.dev.