Wojciech Wentland

Posted on • Originally published at blog.wentland.io

I built a read-only MCP server for Akamai

I had 200+ CDN properties in Akamai and an agent that couldn't find any of them. Akamai's Property Manager API lists properties by group and contract, but there's no fuzzy search endpoint. If the agent doesn't know the exact property name or ID, it's stuck. The conversation dead-ends with "I couldn't find that property" and the user goes back to the Akamai control panel.

So I built an MCP server that wraps Akamai's APIs. 16 tools for searching properties, browsing EdgeWorker code, querying DNS zones, inspecting network lists, and translating error codes. All read-only. I wrote about why I only build read-only MCP servers separately.

Property search with a preloaded index

Akamai organizes properties under groups and contracts. To search across all of them through the API, you'd iterate every group-contract pair and list properties one by one. Slow, and no fuzzy matching.

The server preloads every property into an in-memory index at startup. It fans out API calls across all group-contract pairs in parallel, deduplicates, and builds a list of names. rapidfuzz handles the matching with WRatio as the scorer. WRatio tries multiple comparison strategies (ratio, partial ratio, token sort, token set) and picks the best one, weighted by string length differences. Slower than a simple ratio, but it means "checkout config" matches "checkout.example.com - Production" without the agent needing to know the exact naming convention.

On a real account with 95 groups and 263 properties, the index loads in about 3 seconds. After that, searches hit memory with zero API calls.

One thing I hit early: fanning out 95 concurrent requests without any throttling. Akamai's PAPI has rate limits, and a burst that size at startup can trigger 429s. The server caps concurrency with a semaphore, 10 requests at a time. Still fast enough, no rejected requests.
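The throttled fan-out boils down to an `asyncio.Semaphore` around each request. A sketch, with the PAPI call replaced by a simulated fetch:

```python
import asyncio

async def fetch_properties(pair: str, sem: asyncio.Semaphore) -> list[str]:
    # Stand-in for a PAPI "list properties" call for one group-contract pair.
    async with sem:  # at most 10 requests in flight at once
        await asyncio.sleep(0.01)  # simulate network I/O
        return [f"property-of-{pair}"]

async def load_index(pairs: list[str], max_concurrency: int = 10) -> list[str]:
    # Cap concurrency so the startup burst stays under PAPI rate limits.
    sem = asyncio.Semaphore(max_concurrency)
    batches = await asyncio.gather(*(fetch_properties(p, sem) for p in pairs))
    # Flatten and deduplicate across all pairs.
    return sorted({name for batch in batches for name in batch})

index = asyncio.run(load_index([f"g{i}-c1" for i in range(95)]))
print(len(index))  # 95
```

With 95 pairs and a cap of 10, the fan-out completes in roughly ten waves instead of one burst, which is the difference between clean responses and 429s.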

The index refreshes every 5 minutes in a background task. I described this pattern in Your MCP server is not an API adapter.
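The refresh is roughly a long-lived background task started once at server startup (a sketch; the real server's error handling may differ):

```python
import asyncio

async def refresh_loop(reload_index, interval: float = 300) -> None:
    # Rebuild the property index every `interval` seconds (5 minutes).
    # A failed refresh is swallowed so one bad cycle doesn't kill the
    # task; the stale index keeps serving until the next attempt.
    while True:
        await asyncio.sleep(interval)
        try:
            await reload_index()
        except Exception:
            pass  # log this in a real server

# At startup: asyncio.create_task(refresh_loop(build_index))
```

Sleeping first means the loop never races the initial load that already ran at startup.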

EdgeWorker code browsing

Akamai EdgeWorkers are serverless functions that run on CDN edge nodes. The code is stored as tgz archives containing main.js, bundle.json, and supporting files. To read a file, you download the archive, extract it, and find what you need. Doing that on every tool call would be slow.

The server downloads the bundle once, extracts all files into memory, and caches them with cachetools.TTLCache. 1-hour TTL, max 50 entries. After the first download, the agent can list files, read by line range, and search with regex. No repeat downloads.
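The caching layer is a small wrapper around `cachetools.TTLCache`. A sketch, where `download` stands in for the EdgeWorkers API call that fetches the tgz bytes:

```python
import io
import tarfile

from cachetools import TTLCache

# Extracted bundles keyed by (edgeworker_id, version): max 50 entries, 1-hour TTL.
_bundles = TTLCache(maxsize=50, ttl=3600)

def get_bundle(edgeworker_id: int, version: int, download) -> dict[str, str]:
    """Return {filename: contents} for an EdgeWorker code bundle."""
    key = (edgeworker_id, version)
    if key not in _bundles:
        raw = download(edgeworker_id, version)  # tgz bytes from the API
        files = {}
        with tarfile.open(fileobj=io.BytesIO(raw), mode="r:gz") as tgz:
            for member in tgz.getmembers():
                if member.isfile():
                    files[member.name] = tgz.extractfile(member).read().decode()
        _bundles[key] = files  # every file kept in memory
    return _bundles[key]
```

List, read-by-range, and regex-search tools all operate on the returned dict, so only the first call per bundle touches the network.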

When the agent asks "what does the main.js of EdgeWorker X look like?", the first call takes a second or two. Follow-up questions like "search for the routing logic" or "show me lines 50-80" are instant.

I considered caching to disk, but these bundles are small (usually under 100KB). Keeping them in memory avoids filesystem management, and the cache evicts entries automatically, either when the TTL expires or when the entry count hits the cap (LRU eviction). The tradeoff is that bundles disappear on restart, but the reload is cheap enough that it doesn't matter.

Response shaping

Akamai property rule trees can be hundreds of KB. A typical production property has nested rules with behaviors, criteria, and options. Sending the full JSON wastes context.

The server strips the rule tree before returning it. Keeps rule names, match criteria, behavior configs, and the recursive structure. Removes template UUIDs, format versions, and other internal metadata the agent doesn't need. Property details, activations, and DNS records get the same treatment.

This is more aggressive than just dropping null fields. The raw rule tree has UUIDs on every node, template links, criteria satisfaction mode flags, locked indicators. None of that helps an agent answer "what caching rules are set for this property?" Stripping it cuts the response to maybe a third of the original size.
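The shaping is a recursive allowlist pass over the rule tree. A sketch with hypothetical field names (the real tree follows Akamai's PAPI schema):

```python
# Fields an agent actually needs to answer questions about a property.
KEEP = {"name", "criteria", "behaviors", "options", "comments"}

def strip_rule(rule: dict) -> dict:
    # Keep allowlisted, non-empty fields; recurse into child rules.
    out = {k: v for k, v in rule.items() if k in KEEP and v not in (None, [], {})}
    children = [strip_rule(c) for c in rule.get("children", [])]
    if children:
        out["children"] = children
    return out

raw = {
    "name": "default",
    "uuid": "a1b2c3",              # internal metadata: dropped
    "templateUuid": None,          # template link: dropped
    "criteriaMustSatisfy": "all",  # satisfaction mode flag: dropped
    "behaviors": [{"name": "caching", "options": {"ttl": "1d"}}],
    "children": [{"name": "images", "uuid": "d4e5", "behaviors": []}],
}
print(strip_rule(raw))
```

The allowlist approach means new internal fields Akamai adds later are dropped by default rather than leaking into the agent's context.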

EdgeGrid auth from scratch

Akamai uses EdgeGrid for API authentication. There's an official edgegrid-python library, but it wraps requests (sync). I wanted httpx (async) with connection pooling, so the server implements EdgeGrid signing directly: HMAC-SHA256 over a canonical request string, base64-encoded, attached as an Authorization header. About 40 lines.

The signing is straightforward from the public spec. The annoying part is that the query string must be included in the signed data, so you have to build the full URL with parameters before signing, then make the request with that same URL.
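A sketch of the EG1-HMAC-SHA256 scheme as described in the public EdgeGrid spec, for a GET with no signed headers (the function name and parameters are mine):

```python
import base64
import hashlib
import hmac
import uuid
from datetime import datetime, timezone

def edgegrid_auth(method: str, path_with_query: str, host: str,
                  client_token: str, client_secret: str,
                  access_token: str, body: bytes = b"") -> str:
    # Timestamp format required by the spec: 20240101T12:00:00+0000
    timestamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H:%M:%S+0000")
    nonce = str(uuid.uuid4())
    auth = (f"EG1-HMAC-SHA256 client_token={client_token};"
            f"access_token={access_token};timestamp={timestamp};nonce={nonce};")

    # Content hash: base64(SHA-256(body)), only for POST bodies.
    content_hash = ""
    if method == "POST" and body:
        content_hash = base64.b64encode(hashlib.sha256(body).digest()).decode()

    # Canonical request: tab-separated, query string included in the path.
    data_to_sign = "\t".join([method, "https", host, path_with_query,
                              "", content_hash, auth])

    def sign(key: bytes, msg: str) -> str:
        return base64.b64encode(
            hmac.new(key, msg.encode(), hashlib.sha256).digest()).decode()

    # Signing key = HMAC(secret, timestamp); signature = HMAC(key, canonical request).
    signing_key = sign(client_secret.encode(), timestamp)
    return auth + "signature=" + sign(signing_key.encode(), data_to_sign)
```

Because `path_with_query` is part of the signed data, the URL must be fully assembled, parameters included, before signing, and the request must then use exactly that URL.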

What the agent can do

With 16 read-only tools, an agent can answer:

  • "Which CDN property handles checkout.example.com?"
  • "What caching rules are configured for the API property?"
  • "Show me the main.js from the latest EdgeWorker version"
  • "Search the EdgeWorker code for references to the auth header"
  • "What DNS records exist for example.com?"
  • "Which IPs are in the production allowlist?"
  • "What does Akamai error code 9.6f64d440.1318965461.2f2b078 mean?"

Without the server, answering any of these means clicking through the Akamai control panel.

Setup

Add to Claude Code:

```shell
claude mcp add akamai \
  -e AKAMAI_HOST=your-host.akamaiapis.net \
  -e AKAMAI_CLIENT_TOKEN=akab-xxx \
  -e AKAMAI_CLIENT_SECRET=xxx \
  -e AKAMAI_ACCESS_TOKEN=akab-xxx \
  -- uvx readonly-mcp-akamai
```

Create a read-only API credential in Akamai's Identity and Access Management panel. Source and docs: readonly-mcp-akamai on GitHub.
