Session zero
From REST API to MCP Server: How I Gave AI Agents Native Access to Korean Web Data

I spent February building 13 Korean web scrapers on Apify. REST endpoints, pay-per-event pricing, the usual.

In March, I added one more layer: an MCP server that wraps the whole portfolio.

Here's what changed — and what didn't.


The Problem with REST for AI Agents

When a developer calls my Apify scraper, the flow is:

  1. Send HTTP request with query params
  2. Wait for run to complete
  3. Parse JSON response
  4. Use the data
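That flow boils down to one synchronous HTTP call against Apify's run-sync-get-dataset-items route (a real Apify API endpoint; the actor ID and token below are placeholders). A minimal sketch in TypeScript:

```typescript
// Sketch of the pre-MCP integration: build the URL for Apify's
// synchronous "run actor and return dataset items" endpoint.
// The actor ID and token are placeholders, not my real ones.
function buildRunSyncUrl(actorId: string, token: string): string {
  return (
    "https://api.apify.com/v2/acts/" +
    encodeURIComponent(actorId) +
    "/run-sync-get-dataset-items?token=" +
    encodeURIComponent(token)
  );
}

// The caller still has to POST the actor input and parse the JSON array:
// await fetch(buildRunSyncUrl("user~naver-place-scraper", TOKEN), {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify({ query: "카페", location: "홍대" }),
// });
```

Each of the four friction points below is code the developer writes around this call.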

When an AI agent (Claude, Cursor, etc.) needs Korean data, that same flow requires:

  • The developer to write tool-calling code
  • The AI to understand the API schema
  • Session management for async runs
  • Error handling for Apify's run lifecycle

It works. But it's friction.


What MCP Changes

MCP (Model Context Protocol) is an open standard, introduced by Anthropic, for connecting AI agents to external tools. Instead of an HTTP endpoint, you define a tool with a name, description, and input schema.

{
  "name": "search_naver_places",
  "description": "Search Korean businesses and places on Naver Maps",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": { "type": "string", "description": "Business name or category in Korean" },
      "location": { "type": "string", "description": "City or district in Korean" }
    }
  }
}
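On the server side, handling a call against that schema reduces to a small validate-and-dispatch step. A minimal sketch in plain TypeScript, without the MCP SDK — the function name, error messages, and validation rules are mine, not korean-data-mcp's actual code:

```typescript
// Hedged sketch: validate a tools/call request against the
// search_naver_places schema above and route it. Illustrative only.
type ToolArgs = Record<string, unknown>;

function handleToolCall(name: string, args: ToolArgs): string {
  if (name !== "search_naver_places") {
    throw new Error(`Unknown tool: ${name}`);
  }
  // Both properties are plain strings in the schema; a search without
  // a query is useless, so reject empty calls before hitting Apify.
  if (typeof args.query !== "string" || args.query.length === 0) {
    throw new Error("search_naver_places requires a non-empty 'query'");
  }
  const location = typeof args.location === "string" ? args.location : "";
  // A real server would forward (query, location) to the Apify actor here.
  return `searching "${args.query}"${location ? " in " + location : ""}`;
}
```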

Claude reads this schema. It knows when to call the tool. It passes the right arguments. It interprets the results.

No boilerplate. No endpoint documentation. The AI just... uses it.


The Architecture

Claude Desktop / Cursor / Any MCP client
        │
        ▼
  korean-data-mcp server
  (local Node.js process)
        │
        ├── naver_place_search()
        ├── naver_news_search()
        ├── naver_blog_search()
        ├── melon_chart()
        └── ... (13 tools total)
        │
        ▼
  Apify Actor API
  (actual scraping happens here)

The MCP server is a thin wrapper. It handles:

  • Tool schema definitions (what the AI sees)
  • Apify run lifecycle (async → sync via run-sync-get-dataset-items)
  • Result formatting for AI consumption

The actual scraping logic stays in Apify. I didn't rebuild anything.
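The "result formatting" piece can be sketched as a pure function: take the JSON items Apify returns, trim them, and wrap them in the text content block MCP clients expect. The names here are illustrative, not the server's real internals:

```typescript
// Sketch of formatting an Apify dataset for AI consumption.
// MCP tool results carry an array of typed content blocks.
interface ToolResult {
  content: { type: "text"; text: string }[];
}

function formatForAgent(items: unknown[], maxItems = 20): ToolResult {
  // Trim to keep the agent's context window small; scrapers can
  // return hundreds of items, but the model rarely needs them all.
  const trimmed = items.slice(0, maxItems);
  return {
    content: [{ type: "text", text: JSON.stringify(trimmed, null, 2) }],
  };
}
```

The trim matters more than it looks: dumping a full dataset into a model's context is the MCP equivalent of an unpaginated API response.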


Real Usage: AI Agent + Korean Market Research

Before MCP, getting Korean business data into an AI workflow looked like:

User → AI → "I'll need you to call this API endpoint..."
→ Developer writes adapter code
→ AI gets data

With MCP, a Claude Desktop session can do:

User: "Find coffee shops near Hongdae that have over 500 reviews"
Claude: [calls search_naver_places("카페", "홍대")]
       [filters results by review count]
       "Here are 8 coffee shops matching your criteria..."

No code. No API calls in the user's workflow. The AI does it directly.
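For reference, wiring a local MCP server into Claude Desktop is one entry in claude_desktop_config.json. The server name, path, and env var below are placeholders, not the real install:

```json
{
  "mcpServers": {
    "korean-data": {
      "command": "node",
      "args": ["/path/to/korean-data-mcp/build/index.js"],
      "env": { "APIFY_TOKEN": "your-apify-token" }
    }
  }
}
```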


Distribution: A New Channel

Here's what I didn't expect: MCP registries are a legitimate discovery channel.

I listed korean-data-mcp on Glama and it got picked up. Developers searching for "Korean" or "Naver" in MCP catalogs find it.

This is different from Apify Store (data users), Dev.to (developers reading about scraping), or Reddit (developers sharing).

MCP registries reach people who are specifically building AI workflows and actively looking for data connectors. Different intent, different conversion.


What Didn't Change

Revenue. MCP users still hit Apify actors under the hood. The billing model is identical: $0.50 per 1,000 items extracted.

I can't see MCP vs. direct API usage in Apify's stats. It all looks the same from the platform's perspective.


The Honest Numbers

  • MCP server: listed on Glama (March 21)
  • naver-place-mcp actor on Apify: 2 runs, 1 user
  • Direct impact on revenue: probably zero so far

The MCP channel is slow. It's also free to maintain. My hypothesis: as AI agent tooling matures (more developers building with Claude, Cursor, Windsurf), the MCP discovery channel becomes more valuable.

For now it's infrastructure. The 100 users and $47/month net come from Apify's internal discovery.


Should You Add MCP to Your API?

If you already have a working REST API, adding MCP is low cost. The schema definition is the hard part — you're essentially writing documentation that an AI can act on.

The concrete reasons to do it now:

  1. MCP registries are still sparse. First-mover advantage is real in niche categories.
  2. AI agent workflows will grow. The tooling exists today; the user base is coming.
  3. It doesn't replace your REST API. It's an additional surface.

The one reason not to: if your API doesn't map cleanly to discrete tools. MCP works best for well-defined operations ("search X", "get Y"), not general-purpose endpoints.


My actors: apify.com/oxygenated_quagmire
MCP server: github.com/leadbrain/korean-data-mcp
