DEV Community

Danilo Jamaal
Turn Any REST API into an MCP Server in 25 Minutes

โฑ๏ธ Time: 25 minutes | ๐Ÿ“Š Level: Beginner to Intermediate | ๐Ÿ› ๏ธ Stack: TypeScript + LunarCrush API

TL;DR: Learn how to wrap any REST API into an MCP (Model Context Protocol) server so any LLM can use it natively. We'll use LunarCrush API as our example, but this pattern works for ANY REST API. Plus: combine multiple APIs, add custom tools, and make it work with any MCP client (Claude Desktop, VS Code, custom agents, and more).

  • Build time: 25 minutes
  • Lines of code: ~150 lines
  • Key tech: TypeScript + @modelcontextprotocol/sdk
  • Language agnostic: One server works for ALL clients
  • ROI potential: $500-2000/integration as a service

What You'll Build

By the end of this tutorial, you'll have:

  1. A working MCP server wrapping the LunarCrush API
  2. Custom tools beyond what the official MCP offers
  3. The pattern to wrap ANY REST API into MCP
  4. Multi-API combination (optional: add Fear & Greed, DEX data)
  5. LLM integration for conversational queries (Claude, agents, IDE extensions)
  6. Language agnostic: Your TypeScript server works with clients in Python, Go, Rust, or any language with JSON-RPC support. Write once, use everywhere.

Want the full code? Skip to The LLM Shortcut to generate it with any LLM, or follow along step-by-step below.

Example conversations after setup:

You: "What's the Galaxy Score for Bitcoin?"
AI:  "Bitcoin's Galaxy Score is 72.5, indicating strong social health.
      It's ranked #1 by AltRank with 24h price change of +3.2%."

You: "Compare Bitcoin, Ethereum, and Solana"
AI:  [calls compare_topics] "Here's the comparison:
      BTC: Galaxy 72.5, Rank #1, +3.2%
      ETH: Galaxy 68.2, Rank #3, +2.1%
      SOL: Galaxy 74.1, Rank #2, +5.4%
      Solana is showing the strongest social momentum today."

You: "Show me whale activity on Solana"
AI:  [calls get_whale_posts] "3 accounts with 100K+ followers posted about SOL:
      @whale1 (500K): 'SOL looking strong here'
      @whale2 (250K): 'Accumulating on this dip'
      Smart money sentiment: Bullish"

Table of Contents

Understanding MCP

  1. What is REST? What is MCP?
  2. REST vs MCP: When to Use Which
  3. Why Build a Custom MCP Server?
  4. Transport Types: stdio vs Streamable HTTP
  5. Can I Use MCP Without an LLM?

Building Your Server

  1. Prerequisites
  2. Step 1: Project Setup
  3. Step 2: Define Your Tools
  4. Step 3: Implement Handlers
  5. Step 4: Wire Up the Server
  6. Step 5: Test with Claude Desktop

Advanced Topics

  1. Step 6: Add Custom Tools
  2. Step 7: Combine Multiple APIs
  3. Custom Metrics: Crowd, Whales, Conviction
  4. Can I Wrap Another MCP Server?
  5. The LLM Shortcut
  6. Compatibility: Other LLMs

Reference

  1. ROI & Monetization
  2. Troubleshooting
  3. FAQ
  4. Glossary

What is a REST API? What is MCP? {#what-is-rest-what-is-mcp}

Before we build, let's understand what we're working with.

REST API (Representational State Transfer)

A REST API is how most web services expose their data. You make HTTP requests to URLs (endpoints) and get data back.

# Example: Get Bitcoin data from LunarCrush REST API
curl -H "Authorization: Bearer YOUR_KEY" \
  "https://lunarcrush.com/api4/public/topic/bitcoin/v1"

# Response: JSON data
{
  "topic": "bitcoin",
  "galaxy_score": 72.5,
  "alt_rank": 1,
  "price": 104250.00,
  ...
}

REST is the backbone of the internet. Every major service has one: Twitter, Stripe, GitHub, LunarCrush, DeFiLlama, etc.

MCP (Model Context Protocol)

MCP is the standard for connecting AI to external data and tools. Think of it as a universal adapter between LLMs and your data. It uses JSON-RPC 2.0 over various transports (stdio for local, Streamable HTTP for production).

The key insight: MCP is a protocol, not a language - your server can be written in any language, and any client that speaks the protocol can use it.

Your MCP server doesn't store data; it's a translator layer that receives requests from AI clients, calls the REST API for fresh data, and transforms it for LLM consumption.

Instead of writing custom code for each app, you build ONE MCP server and it works everywhere.

┌──────────────────────────────────────────────────────────┐
│                    YOUR MCP SERVER                       │
│  ┌───────────────────────────────────────────────────┐   │
│  │  Tools:                                           │   │
│  │  - get_topic(topic) → "What's Bitcoin's score?"   │   │
│  │  - list_coins(sort) → "Top 10 by Galaxy Score"    │   │
│  │  - compare(a, b)    → "Compare BTC vs ETH"        │   │
│  └───────────────────────────────────────────────────┘   │
└──────────────────────────────────────────────────────────┘
         ↑                    ↑                    ↑
    Desktop Apps         CLI Tools           IDE Extensions
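Under the hood, every arrow in that diagram carries a JSON-RPC 2.0 message. A sketch of the two core message shapes per the MCP spec — the `id` values and the `get_topic` arguments here are just examples:

```typescript
// The two core MCP messages, as plain JSON-RPC 2.0 envelopes.
// A client first discovers tools, then calls one by name.
const listRequest = { jsonrpc: '2.0', id: 1, method: 'tools/list' };

const callRequest = {
  jsonrpc: '2.0',
  id: 2,
  method: 'tools/call',
  params: { name: 'get_topic', arguments: { topic: 'bitcoin' } }
};

// Over the stdio transport, each message is written as one line of JSON
console.log(JSON.stringify(listRequest));
console.log(JSON.stringify(callRequest));
```

Everything else MCP adds (schemas, capabilities, transports) is layered on top of these envelopes, which is why any language that can read and write JSON can participate.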

REST vs MCP: When to Use Which {#rest-vs-mcp}

| Aspect | REST API | MCP Server |
|--------|----------|------------|
| Discovery | Must read docs | Self-describing (tools advertise capabilities) |
| Integration | Reimplement per app | Build once, works everywhere |
| AI/LLM Support | None (manual prompting) | Native |
| Schema | Manual definition | Type-safe, auto-validated |
| Updates | Change every app | Update one file |
| Future-Proof | API changes break all clients | Abstraction layer absorbs changes |
| Workflow | Re-read docs for every use case | Read docs once, then just ask questions |
| Accessibility | Requires coding for every use | Non-devs use existing servers, vibe coders let LLMs generate code |
| Best For | Web/mobile apps, public APIs | AI assistants, agents, IDE extensions |

Why Build a Custom MCP Server? {#why-build-custom-mcp}

LunarCrush already has an official MCP server. So why build your own?

Reasons to Use the Official MCP Server

✅ Quick start - works immediately

✅ Maintained by LunarCrush team

✅ All endpoints available

✅ Proper error handling built-in

Official LunarCrush MCP:

  • https://lunarcrush.ai/mcp?key=${LUNARCRUSH_API_KEY} (MCP Connector)
Streamable HTTP:
{
  "mcpServers": {
    "LunarCrush": {
      "type": "http",
      "url": "https://lunarcrush.ai/mcp",
      "headers": {
        "Authorization": "Bearer LUNARCRUSH_API_KEY"
      }
    }
  }
}
Enter fullscreen mode Exit fullscreen mode
Server-Sent Events (SSE):
{
  "mcpServers": {
    "LunarCrush": {
      "type": "sse",
      "url": "https://lunarcrush.ai/sse",
      "headers": {
        "Authorization": "Bearer 3vvy05ke69voekr3cc492l5lijm8ys4f7mw14lzd5"
      }
    }
  }
}

Reasons to Build Your Own

| Reason | Example |
|--------|---------|
| Combine multiple APIs | LunarCrush + another API in one server. Consider adding Kalshi, Fear & Greed Index, Hyperliquid, or DEX data |
| Filter to only what you need | 3 tools instead of 11 = less token usage, simpler prompts |
| Add custom calculations | get_whale_posts - filter to high-follower accounts |
| Custom return format | Official returns markdown only; yours can offer JSON + markdown |
| Add caching | Reduce API calls, save money |
| Custom naming | Your brand, your tool names |
| Add tools that don't exist | whale_filter, alert_threshold, sector_analysis |
| Business logic | Pre-process data, add your own analysis |
| Multi-tenant | Add your own auth layer |
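The "Add caching" row is only a dozen lines of code. A minimal sketch of an in-memory TTL cache you could wrap around the fetcher we build in Step 3 — the `cachedFetch` name and the 5-minute default are my own choices, not part of the SDK:

```typescript
// Minimal in-memory TTL cache for API responses, keyed by endpoint.
// Entries expire after ttlMs; expired entries trigger a fresh fetch.
type CacheEntry = { data: unknown; expires: number };
const cache = new Map<string, CacheEntry>();

export async function cachedFetch<T>(
  endpoint: string,
  fetcher: (endpoint: string) => Promise<T>,
  ttlMs = 5 * 60 * 1000
): Promise<T> {
  const hit = cache.get(endpoint);
  if (hit && hit.expires > Date.now()) return hit.data as T;

  const data = await fetcher(endpoint);
  cache.set(endpoint, { data, expires: Date.now() + ttlMs });
  return data;
}
```

In a handler you'd call `cachedFetch('/topic/bitcoin/v1', fetchLunarCrush)` instead of `fetchLunarCrush` directly; repeated questions about the same topic then cost no extra API credits until the entry expires.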

Custom Tools We'll Build (Not in Official MCP)

The official LunarCrush MCP has 11 tools. In this tutorial, we'll add 2 custom tools that don't exist:

| Tool | What It Does |
|------|--------------|
| compare_topics | Compare 2-5 topics side-by-side in one call |
| get_whale_posts | Filter posts to only high-follower accounts |

Full implementation in Step 2 and Step 3.


Transport Types: stdio vs Streamable HTTP {#transport-types}

MCP supports multiple transport types. Choose based on your deployment:

Transport Comparison

| Transport | How It Works | Best For | Difficulty |
|-----------|--------------|----------|------------|
| stdio | Process communication via stdin/stdout | Local MCP clients | Easiest |
| Streamable HTTP | HTTP + optional SSE streaming in one | Production servers, remote access | Medium |

Note: The older "HTTP+SSE" transport (separate endpoints for HTTP and SSE) was replaced by Streamable HTTP in 2025. Streamable HTTP combines both: SSE is still used for streaming, just within the same transport.

Which Should You Use?

Local Development / Desktop Apps → stdio (start here)
Deployed Server / Production → Streamable HTTP (handles both sync + streaming)

Difficulty Levels by Transport

stdio (Easiest - Start Here)

  • No networking code needed
  • Works immediately with local MCP clients
  • Perfect for local development and personal use
  • Limitation: Only works on same machine

Streamable HTTP (Medium - For Production)

  • Requires server hosting (Vercel, Railway, Fly.io)
  • Need to handle authentication
  • Can be accessed from anywhere
  • Supports both sync responses AND streaming (one transport does both)
  • Good for team/public MCP servers

In this tutorial, we'll use stdio (simplest, works with local MCP clients). The pattern is identical for Streamable HTTP; just swap the transport class.

When Do You Need Streaming?

Streamable HTTP handles this automatically. It uses regular HTTP for fast responses and upgrades to SSE when needed:

| Scenario | What Happens |
|----------|--------------|
| Tool returns in <30s | Normal HTTP response |
| Tool takes >30s | Auto-upgrades to SSE for progress |
| Real-time data feeds | SSE streaming |

For most MCP servers: Just use Streamable HTTP. It does the right thing automatically.


Can I Use MCP Without an LLM? {#mcp-without-llm}

Yes. MCP is just JSON-RPC 2.0. You can call MCP servers from regular code:

// Call an MCP server directly from Node.js (no LLM needed)
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

const client = new Client({ name: 'my-app', version: '1.0.0' });

// Spawn the server process and connect over stdio
await client.connect(new StdioClientTransport({
  command: 'node',
  args: ['dist/index.js']
}));

// Call a tool directly
const result = await client.callTool({
  name: 'get_topic',
  arguments: { topic: 'bitcoin' }
});

console.log(result); // JSON data, no LLM involved

Use cases without LLM:

  • Automated scripts that need consistent tool interfaces
  • Testing MCP servers before connecting to an LLM
  • Building dashboards that consume MCP data
  • Scheduled data collection

Prerequisites {#prerequisites}

  • Node.js 18+ (download)
  • Basic TypeScript knowledge (or JavaScript - TypeScript is optional)
  • Claude Desktop installed (download)
  • LunarCrush API key (get one here - use code JAMAALBUILDS for 15% off)

Optional (for combining APIs):

  • Fear & Greed Index (free, no key needed)
  • DeFiLlama API (free, no key needed)
  • DexScreener API (free, no key needed)
  • Hyperliquid API (free, no key needed)

Step 1: Project Setup {#step-1-project-setup}

Create a new project with the MCP SDK:

📄 Terminal

# Create project directory
mkdir lunarcrush-mcp-custom
cd lunarcrush-mcp-custom

# Initialize Node.js project
npm init -y

# Install dependencies
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node

# Initialize TypeScript
npx tsc --init

📄 tsconfig.json (replace contents)

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "declaration": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist"]
}

📄 package.json (add/update these fields)

{
  "name": "lunarcrush-mcp-custom",
  "version": "1.0.0",
  "type": "module",
  "main": "dist/index.js",
  "bin": {
    "lunarcrush-mcp": "./dist/index.js"
  },
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js",
    "dev": "tsc --watch"
  },
  "files": ["dist"]
}

Create the project structure:

lunarcrush-mcp-custom/
├── src/
│   ├── index.ts          # MCP server entry point
│   ├── tools.ts          # Tool definitions (what LLM can do)
│   ├── handlers.ts       # API call handlers (actual logic)
│   └── api-client.ts     # REST API wrapper (reusable)
├── package.json
├── tsconfig.json
└── .env                  # API keys (don't commit!)

📄 .env (create this file)

LUNARCRUSH_API_KEY=your_api_key_here
# Get your key: https://lunarcrush.com/developers/api/authentication (use JAMAALBUILDS for 15% off)

📄 .gitignore (important: don't commit secrets!)

node_modules/
dist/
.env

Step 2: Define Your Tools {#step-2-define-your-tools}

Tools are the capabilities your MCP server exposes. Each tool has:

  • name: What the LLM calls it (e.g., get_topic)
  • description: Helps the LLM understand WHEN to use it (critical for good AI behavior)
  • inputSchema: JSON Schema defining parameters

Let's define tools - including custom ones that go beyond the official MCP:

📄 src/tools.ts

import { z } from 'zod';

/**
 * Tool Definitions
 *
 * These tell the LLM what capabilities are available.
 * Good descriptions = better AI tool selection.
 *
 * We're including:
 * 1. Standard tools (similar to official MCP)
 * 2. Custom tools (our value-add)
 */

export const tools = [
  // ============================================
  // STANDARD TOOLS (mirror official MCP basics)
  // ============================================

  {
    name: 'get_topic',
    description: `Get comprehensive social metrics for any topic (crypto, stock, brand, person, etc.).

Returns: Galaxy Score (0-100 health), AltRank (1-4000, lower=better),
sentiment (% positive), social volume, price data if available.

Use this when user asks about a specific topic's current state.
Examples: "How is Bitcoin doing?", "What's the sentiment on Solana?", "Tell me about NVIDIA"`,
    inputSchema: {
      type: 'object' as const,
      properties: {
        topic: {
          type: 'string',
          description: 'Topic name: "bitcoin", "ethereum", "nvidia", "elon musk", etc.'
        },
        format: {
          type: 'string',
          enum: ['json', 'markdown'],
          default: 'markdown',
          description: 'Output format: markdown (readable) or json (for processing)'
        }
      },
      required: ['topic']
    }
  },

  {
    name: 'list_cryptocurrencies',
    description: `Get a sorted list of cryptocurrencies by various metrics.

Sort options: galaxy_score, alt_rank, percent_change_24h, market_cap, social_dominance

Use for discovery: "What are the top coins?", "Which cryptos are trending?",
"Best performing coins today"`,
    inputSchema: {
      type: 'object' as const,
      properties: {
        sort: {
          type: 'string',
          enum: ['galaxy_score', 'alt_rank', 'percent_change_24h', 'market_cap', 'social_dominance'],
          description: 'How to sort the list'
        },
        limit: {
          type: 'number',
          description: 'Number of results (default: 10, max: 100)'
        },
        sector: {
          type: 'string',
          description: 'Filter by sector: "defi", "meme", "ai", "layer-1", "layer-2", etc.'
        },
        format: {
          type: 'string',
          enum: ['json', 'markdown'],
          default: 'json',
          description: 'Output format (json default for lists)'
        }
      },
      required: []
    }
  },

  // ============================================
  // CUSTOM TOOLS (our value-add - not in official MCP!)
  // ============================================

  {
    name: 'compare_topics',
    description: `Compare 2-5 topics side by side. Returns key metrics for each in a comparison table.

Use when user wants to compare assets: "Compare BTC vs ETH",
"Which is better: Solana or Avalanche?", "Bitcoin vs Ethereum vs Solana comparison"

This tool is NOT in the official LunarCrush MCP - it's our custom addition!`,
    inputSchema: {
      type: 'object' as const,
      properties: {
        topics: {
          type: 'array',
          items: { type: 'string' },
          minItems: 2,
          maxItems: 5,
          description: 'List of topics to compare'
        },
        format: {
          type: 'string',
          enum: ['json', 'markdown'],
          default: 'markdown',
          description: 'Output format'
        }
      },
      required: ['topics']
    }
  },

  {
    name: 'get_whale_posts',
    description: `Get posts only from high-follower accounts (whales/influencers).

Filters out noise to show only what influential accounts are saying.
Default: 100k+ followers. Adjust min_followers as needed.

Use when user wants influencer opinions: "What are whales saying about Bitcoin?",
"Influential accounts on Solana"`,
    inputSchema: {
      type: 'object' as const,
      properties: {
        topic: {
          type: 'string',
          description: 'Topic name'
        },
        min_followers: {
          type: 'number',
          description: 'Minimum follower count (default: 100000)'
        },
        limit: {
          type: 'number',
          description: 'Number of posts (default: 10)'
        },
        format: {
          type: 'string',
          enum: ['json', 'markdown'],
          default: 'markdown',
          description: 'Output format'
        }
      },
      required: ['topic']
    }
  },

];

// ============================================
// Zod Schemas for Runtime Validation
// ============================================

// Format enum used across all schemas
const FormatSchema = z.enum(['json', 'markdown']);

export const GetTopicSchema = z.object({
  topic: z.string().min(1),
  format: FormatSchema.default('markdown')
});

export const ListCryptocurrenciesSchema = z.object({
  sort: z.enum(['galaxy_score', 'alt_rank', 'percent_change_24h', 'market_cap', 'social_dominance']).default('galaxy_score'),
  limit: z.number().min(1).max(100).default(10),
  sector: z.string().optional(),
  format: FormatSchema.default('json')  // Lists default to JSON
});

export const CompareTopicsSchema = z.object({
  topics: z.array(z.string()).min(2).max(5),
  format: FormatSchema.default('markdown')
});

export const GetWhalePostsSchema = z.object({
  topic: z.string().min(1),
  min_followers: z.number().default(100000),
  limit: z.number().min(1).max(50).default(10),
  format: FormatSchema.default('markdown')
});

Why detailed descriptions matter: The LLM reads these descriptions to decide WHICH tool to use. Better descriptions = smarter tool selection.


Step 3: Implement Handlers {#step-3-implement-handlers}

Handlers call the REST API and transform responses. Two files: api-client.ts (reusable fetcher) and handlers.ts (tool logic).

📄 src/api-client.ts

const API_KEY = process.env.LUNARCRUSH_API_KEY;
const BASE = 'https://lunarcrush.com/api4/public';

export async function fetchLunarCrush<T = any>(endpoint: string): Promise<T> {
  const res = await fetch(`${BASE}${endpoint}`, { headers: { 'Authorization': `Bearer ${API_KEY}` } });
  if (!res.ok) throw new Error(`API error ${res.status}`);
  return res.json();
}

export const fmt = {
  num: (n: number) => !n ? '0' : n >= 1e9 ? `${(n/1e9).toFixed(1)}B` : n >= 1e6 ? `${(n/1e6).toFixed(1)}M` : n >= 1e3 ? `${(n/1e3).toFixed(1)}K` : String(n),
  pct: (n: number) => n != null ? `${n > 0 ? '+' : ''}${n.toFixed(2)}%` : 'N/A',
  sentiment: (s: number) => s > 60 ? '🟢 Bullish' : s < 40 ? '🔴 Bearish' : '🟡 Neutral'
};

📄 src/handlers.ts - Pattern shown for one handler (all 4 follow same structure):

import { fetchLunarCrush, fmt } from './api-client.js';
import { GetTopicSchema, CompareTopicsSchema, /* ...other schemas */ } from './tools.js';

export async function handleGetTopic(args: unknown) {
  const { topic, format } = GetTopicSchema.parse(args);
  const d = await fetchLunarCrush(`/topic/${encodeURIComponent(topic)}/v1`);

  return format === 'json'
    ? { topic: d.topic, galaxy_score: d.galaxy_score, alt_rank: d.alt_rank, sentiment: d.sentiment, price: d.close, change_24h: d.percent_change_24h }
    : `## ${d.topic?.toUpperCase()}\n**Galaxy Score:** ${d.galaxy_score}/100 | **AltRank:** #${d.alt_rank} | ${fmt.sentiment(d.sentiment)}\n**Price:** $${d.close?.toLocaleString()} (${fmt.pct(d.percent_change_24h)})`;
}

// Custom tool example: compare multiple topics in parallel
export async function handleCompareTopics(args: unknown) {
  const { topics, format } = CompareTopicsSchema.parse(args);
  const results = await Promise.all(topics.map(t => fetchLunarCrush(`/topic/${encodeURIComponent(t)}/v1`).catch(() => ({ topic: t, error: true }))));

  return format === 'json'
    ? results.map((d: any) => ({ topic: d.topic, galaxy_score: d.galaxy_score, alt_rank: d.alt_rank, price: d.close }))
    : `## Comparison: ${topics.join(' vs ')}\n| Metric | ${topics.join(' | ')} |\n|--------|${topics.map(() => '------').join('|')}|\n| Galaxy | ${results.map((d: any) => d.galaxy_score?.toFixed(1) || 'N/A').join(' | ')} |`;
}

// Router maps tool names to handlers
// (handleListCryptocurrencies and handleGetWhalePosts follow the same
// parse -> fetch -> format pattern and are omitted here for brevity)
export async function handleToolCall(name: string, args: unknown) {
  const handlers: Record<string, Function> = {
    get_topic: handleGetTopic, list_cryptocurrencies: handleListCryptocurrencies,
    compare_topics: handleCompareTopics, get_whale_posts: handleGetWhalePosts,
  };
  if (!handlers[name]) throw new Error(`Unknown tool: ${name}`);
  return handlers[name](args);
}

All 4 Handlers Summary

| Handler | Endpoint | Pattern |
|---------|----------|---------|
| handleGetTopic | /topic/{topic}/v1 | Parse → Fetch → JSON or Markdown |
| handleListCryptocurrencies | /coins/list/v2 | Parse → Fetch with sort/filter → table |
| handleCompareTopics | Multiple parallel fetches | Promise.all() → comparison table |
| handleGetWhalePosts | Posts + filter | Fetch → filter by followers → list |

💡 Full code: Use The LLM Shortcut to generate complete handlers for your API.
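For reference, here's a sketch of the get_whale_posts core logic. The Post field names (creator_followers, etc.) and the posts endpoint path are assumptions to verify against the LunarCrush docs; the handler wiring follows the same parse → fetch → format shape as handleGetTopic:

```typescript
// Hypothetical post shape -- verify field names against the LunarCrush API docs.
interface Post { creator_name: string; creator_followers: number; body: string; }

// Pure filter: keep only posts from accounts above the follower threshold,
// highest-follower first, capped at `limit` results.
function filterWhales(posts: Post[], minFollowers: number, limit: number): Post[] {
  return posts
    .filter(p => (p.creator_followers ?? 0) >= minFollowers)
    .sort((a, b) => b.creator_followers - a.creator_followers)
    .slice(0, limit);
}

// In handlers.ts, the wiring mirrors handleGetTopic:
//   const { topic, min_followers, limit, format } = GetWhalePostsSchema.parse(args);
//   const res = await fetchLunarCrush(`/topic/${encodeURIComponent(topic)}/posts/v1`); // path is an assumption
//   const whales = filterWhales(res.data ?? [], min_followers, limit);
//   ...then format as JSON or markdown
```

Keeping the filter pure (no fetch inside) makes it trivial to unit-test without hitting the API.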


Step 4: Wire Up the Server {#step-4-wire-up-the-server}

Now let's create the MCP server that connects everything:

📄 src/index.ts

#!/usr/bin/env node

/**
 * Custom LunarCrush MCP Server
 *
 * Features:
 * - Standard tools (get_topic, list_coins, etc.)
 * - Custom tools (compare, whales)
 * - Optimized responses for LLM consumption
 * - Works with any MCP client (desktop apps, CLI tools, IDE extensions)
 */

import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
  ErrorCode,
  McpError
} from '@modelcontextprotocol/sdk/types.js';

// For Streamable HTTP instead of Stdio:

// import express from 'express';
//import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';


import { tools } from './tools.js';
import { handleToolCall } from './handlers.js';

// Server metadata
const SERVER_INFO = {
  name: 'lunarcrush-mcp-custom',
  version: '1.0.0',
  description: 'Custom LunarCrush MCP server with extended tools'
};

// Create MCP server
const server = new Server(
  SERVER_INFO,
  {
    capabilities: {
      tools: {},
      // Can also add:
      // resources: {},  // For exposing data files
      // prompts: {},    // For prompt templates
    }
  }
);

// ============================================
// REQUEST HANDLERS
// ============================================

/**
 * List available tools
 *
 * Called by MCP client to discover what tools are available.
 * The LLM uses this to understand what it can do.
 */
server.setRequestHandler(ListToolsRequestSchema, async () => {
  console.error(`[MCP] Listing ${tools.length} tools`);
  return { tools };
});

/**
 * Execute a tool
 *
 * Called when the LLM decides to use a tool.
 * We route to the appropriate handler and return results.
 */
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  console.error(`[MCP] Tool call: ${name}`);
  console.error(`[MCP] Arguments: ${JSON.stringify(args)}`);

  try {
    // Execute the tool
    const result = await handleToolCall(name, args);

    console.error(`[MCP] Success: ${name}`);

    // Return result as text content
    // Handlers return string (markdown) or object (JSON) - handle both
    return {
      content: [
        {
          type: 'text',
          text: typeof result === 'string' ? result : JSON.stringify(result, null, 2)
        }
      ]
    };

  } catch (error) {
    // Handle errors gracefully
    const message = error instanceof Error ? error.message : 'Unknown error';
    console.error(`[MCP] Error in ${name}: ${message}`);

    // Return error to LLM (it can try to recover or inform user)
    return {
      content: [
        {
          type: 'text',
          text: JSON.stringify({
            error: true,
            message,
            tool: name
          }, null, 2)
        }
      ],
      isError: true
    };
  }
});

// ============================================
// SERVER STARTUP
// ============================================

async function main() {
  // OPTION 1: stdio transport (local MCP clients)
  const transport = new StdioServerTransport();
  await server.connect(transport);

  // OPTION 2: Streamable HTTP (for web deployment)

  // const PORT = process.env.PORT || 3000;
  // const app = express();
  // app.use(express.json());
  //
  // const transport = new StreamableHTTPServerTransport({
  //   sessionIdGenerator: () => crypto.randomUUID(),
  // });
  // await server.connect(transport);  // Same pattern - connect server to transport

  // app.get('/health', (req, res) => res.send('ok'));
  // app.post('/mcp', async (req, res) => {
  //   await transport.handleRequest(req, res, req.body);
  // });
  //
  // app.listen(PORT, () => console.error(`MCP server on http://localhost:${PORT}/mcp`));

  // Log to stderr (stdout is reserved for MCP protocol)
  console.error('โ•'.repeat(50));
  console.error(`${SERVER_INFO.name} v${SERVER_INFO.version}`);
  console.error(`Running on stdio transport`);
  console.error(`Tools available: ${tools.length}`);
  console.error('โ•'.repeat(50));
}

// Start server
main().catch((error) => {
  console.error('Fatal error:', error);
  process.exit(1);
});

Build and test:

# Build TypeScript
npm run build

# Test that it runs (should show startup message then wait for input)
LUNARCRUSH_API_KEY=your_key node dist/index.js

# You should see:
# ══════════════════════════════════════════════════
# lunarcrush-mcp-custom v1.0.0
# Running on stdio transport
# Tools available: 4
# ══════════════════════════════════════════════════

Press Ctrl+C to stop.


Step 5: Test with Claude Desktop {#step-5-test-with-claude-desktop}

Add your server to Claude Desktop:

📄 macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
📄 Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "lunarcrush-custom": {
      "command": "node",
      "args": ["/FULL/PATH/TO/lunarcrush-mcp-custom/dist/index.js"],
      "env": {
        "LUNARCRUSH_API_KEY": "your_api_key_here"
      }
    }
  }
}

Important: Use the FULL absolute path to your dist/index.js file.

Restart Claude Desktop (quit and reopen).

Test these prompts:

"What's the Galaxy Score for Bitcoin?"

"Compare Bitcoin, Ethereum, and Solana"

"What's the sentiment on ETH right now?"

"What are whales saying about Solana?"

"Top 5 memecoins by social dominance"

Step 6: Add Custom Tools (Beyond Official MCP) {#step-6-custom-tools}

We already added 2 custom tools in Steps 2-3. To add more:

  1. tools.ts: Add tool definition with name, description, inputSchema
  2. tools.ts: Add Zod schema for validation
  3. handlers.ts: Add handler function (parse → fetch → return JSON or markdown)
  4. handlers.ts: Add to router object

Ideas for more custom tools:

| Tool | What It Does |
|------|--------------|
| check_alert_conditions | Watchlist items exceeding thresholds |
| sentiment_shift | Rapid sentiment changes (trend reversal) |
| analyze_sector | Aggregated metrics for DeFi, AI, Memes, etc. |
| top_creators | Most influential creators for a topic |
| get_news | Top news articles for a topic |
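As a worked example of the four-step recipe, the pure logic behind a hypothetical sentiment_shift tool might look like this. The 15-point threshold is an illustrative default, not a tuned value; wrap it in a tool definition and handler exactly as in Steps 2-3:

```typescript
// Detect a rapid sentiment change between two readings on a 0-100 scale.
// A shift smaller than `threshold` points is treated as noise.
function detectSentimentShift(
  previous: number,
  current: number,
  threshold = 15
): { shifted: boolean; direction: 'bullish' | 'bearish' | 'none'; delta: number } {
  const delta = current - previous;
  if (Math.abs(delta) < threshold) return { shifted: false, direction: 'none', delta };
  return { shifted: true, direction: delta > 0 ? 'bullish' : 'bearish', delta };
}
```

The handler would fetch two sentiment readings (e.g. current vs. 24h-ago from a time-series endpoint), run them through this function, and return the result as JSON or markdown.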

Step 7: Combine Multiple APIs {#step-7-combine-apis}

The real power of custom MCP: combine data from multiple sources.

Example: Add Fear & Greed Index

📄 src/api-client.ts (add this function)

/**
 * Fear & Greed Index API (free, no key needed)
 * https://alternative.me/crypto/fear-and-greed-index/
 */
export async function fetchFearGreed(): Promise<{
  value: number;
  classification: string;
  timestamp: string;
}> {
  const response = await fetch('https://api.alternative.me/fng/?limit=1');
  const data = await response.json();

  return {
    value: parseInt(data.data[0].value),
    classification: data.data[0].value_classification,
    timestamp: new Date(data.data[0].timestamp * 1000).toISOString()
  };
}

📄 src/tools.ts (add tool definition)

{
  name: 'get_market_sentiment',
  description: `Get overall market sentiment combining Fear & Greed Index with LunarCrush data.

Returns: Fear & Greed score (0-100), classification (Extreme Fear to Extreme Greed),
plus Bitcoin Galaxy Score for correlation.

Use for: "What's the market sentiment?", "Is the market fearful?", "Market overview"`,
  inputSchema: {
    type: 'object' as const,
    properties: {},
    required: []
  }
}

📄 src/handlers.ts (add handler)

import { fetchLunarCrush, fetchFearGreed } from './api-client.js';

export async function handleMarketSentiment(args: unknown) {
  // Fetch both in parallel
  const [fearGreed, btcData] = await Promise.all([
    fetchFearGreed(),
    fetchLunarCrush('/topic/bitcoin/v1')
  ]);

  return {
    fear_greed: {
      value: fearGreed.value,
      classification: fearGreed.classification,
      interpretation: fearGreed.value <= 25 ? 'Extreme Fear - potential buying opportunity'
        : fearGreed.value <= 45 ? 'Fear - caution warranted'
        : fearGreed.value <= 55 ? 'Neutral'
        : fearGreed.value <= 75 ? 'Greed - caution warranted'
        : 'Extreme Greed - potential selling opportunity'
    },
    bitcoin_social: {
      galaxy_score: btcData.galaxy_score?.toFixed(1),
      sentiment: btcData.sentiment,
      social_dominance: btcData.social_dominance?.toFixed(2) + '%'
    },
    combined_signal: getCombinedSignal(fearGreed.value, btcData.galaxy_score, btcData.sentiment),
    timestamp: fearGreed.timestamp
  };
}

function getCombinedSignal(fearGreed: number, galaxyScore: number, sentiment: number): string {
  // Custom logic combining multiple data sources
  const fearGreedBullish = fearGreed <= 30;
  const socialBullish = galaxyScore >= 60 && sentiment >= 60;

  if (fearGreedBullish && socialBullish) {
    return 'STRONG_OPPORTUNITY: Extreme fear but social metrics strong';
  }
  if (fearGreedBullish && !socialBullish) {
    return 'CAUTION: Fear present but social metrics weak';
  }
  if (!fearGreedBullish && socialBullish) {
    return 'TRENDING: Greed rising with strong social backing';
  }
  return 'NEUTRAL: Mixed signals';
}
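The handler above imports `fetchFearGreed` from `./api-client.js` without showing it. Here's one possible sketch of that helper, assuming the free alternative.me Fear & Greed endpoint; the URL and response field names are assumptions, so verify them against whichever index provider you actually use:

```typescript
// Possible shape of fetchFearGreed in src/api-client.ts.
// ASSUMPTION: the free alternative.me index, which wraps results in a
// `data` array with string-typed values. Verify before relying on it.
export interface FearGreedResult {
  value: number;          // 0-100
  classification: string; // e.g. "Fear", "Extreme Greed"
  timestamp: string;
}

// Pure parser, separated from the network call so it's easy to unit-test.
export function parseFearGreed(payload: {
  data: Array<{ value: string; value_classification: string; timestamp: string }>;
}): FearGreedResult {
  const latest = payload.data[0];
  return {
    value: Number(latest.value),
    classification: latest.value_classification,
    timestamp: latest.timestamp,
  };
}

export async function fetchFearGreed(): Promise<FearGreedResult> {
  const res = await fetch('https://api.alternative.me/fng/?limit=1');
  if (!res.ok) throw new Error(`Fear & Greed API error: ${res.status}`);
  return parseFearGreed(await res.json());
}
```

Keeping the parser pure means you can test it with a canned payload and never hit the network in CI.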

Other APIs You Could Combine

| API | What It Adds | Example Tool |
| --- | --- | --- |
| DeFiLlama | DEX volume, TVL, yields | `get_dex_volume`, `get_tvl` |
| DexScreener | Token pairs, liquidity, new listings | `get_token_pairs`, `get_new_listings` |
| Hyperliquid | Perps funding rates, open interest | `get_funding_rates`, `get_positions` |
| Glassnode | On-chain metrics | `get_onchain_metrics` |
| NewsAPI | News headlines | `get_crypto_news` |
| Your own DB | Portfolio, alerts, history | `get_my_portfolio` |

Custom Metrics: Crowd, Whales, Conviction {#custom-metrics}

Beyond standard metrics, you can add custom tools for derived analytics:

Example Custom Tools

// Crowd sentiment (aggregate small accounts)
{
  name: 'get_crowd_sentiment',
  description: 'Get sentiment from retail/crowd (accounts <10K followers)',
  // Filter posts by follower count, aggregate sentiment
}

// Whale consensus (what big accounts think)
{
  name: 'get_whale_consensus',
  description: 'Get sentiment from whale accounts (100K+ followers)',
  // Filter for high-follower accounts, weight by influence
}

// Conviction score (custom calculation)
{
  name: 'calculate_conviction',
  description: 'Calculate conviction score combining multiple signals',
  // Combine: Galaxy Score trend + AltRank movement + sentiment
}

These tools don't exist in the official MCP - they're YOUR value-add.
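As a concrete example, the crowd/whale split can be implemented as a pure aggregation step over fetched posts. The post fields here (`followers`, `sentiment`) are placeholders; map them to whatever your posts endpoint actually returns:

```typescript
// Sketch: aggregate sentiment for "crowd" (small accounts) vs "whales".
// Field names are hypothetical; adapt them to your actual post payload.
interface SocialPost {
  followers: number;
  sentiment: number; // per-post sentiment, e.g. 0-100
}

function averageSentiment(posts: SocialPost[]): number | null {
  if (posts.length === 0) return null;
  const sum = posts.reduce((acc, p) => acc + p.sentiment, 0);
  return Number((sum / posts.length).toFixed(1));
}

export function crowdVsWhales(posts: SocialPost[]) {
  const crowd = posts.filter(p => p.followers < 10_000);
  const whales = posts.filter(p => p.followers >= 100_000);
  return {
    crowd: { posts: crowd.length, avg_sentiment: averageSentiment(crowd) },
    whales: { posts: whales.length, avg_sentiment: averageSentiment(whales) },
  };
}
```

A handler for `get_crowd_sentiment` would fetch recent posts for a topic, run them through this function, and return only the bucket the tool promises.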


Can I Wrap Another MCP Server? {#wrap-mcp}

Yes, and it's easier than wrapping REST. If an official MCP server exists, wrap it instead of the REST API:

| Approach | Work Required |
| --- | --- |
| Wrap REST API | Write handlers for each endpoint |
| Wrap MCP server | Just proxy + add custom tools |

Example: Wrap the official LunarCrush MCP (11 tools) and add your own custom tools like get_whale_posts:

import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { SSEClientTransport } from '@modelcontextprotocol/sdk/client/sse.js';
import { CallToolRequestSchema, ErrorCode, McpError } from '@modelcontextprotocol/sdk/types.js';

// Connect to the official LunarCrush MCP as a client
const lunarClient = new Client({ name: 'custom-lunar', version: '1.0.0' });
const transport = new SSEClientTransport(
  new URL('https://lunarcrush.ai/sse'),
  // Auth header goes in requestInit, which is applied to outgoing HTTP requests
  { requestInit: { headers: { Authorization: `Bearer ${process.env.LUNARCRUSH_API_KEY}` } } }
);
await lunarClient.connect(transport);

// Proxy tools through your server
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  // YOUR CUSTOM TOOLS
  if (name === 'get_whale_posts') {
    return handleGetWhalePosts(args);
  }

  // OPTIONAL: FILTER OUT TOOLS YOU DON'T WANT OR NEED (reduces token usage)
  if (name === 'Post') {
    // Don't need post lookups - block to save tokens
    throw new McpError(ErrorCode.InvalidRequest, 'Post tool disabled');
  }
  if (name === 'List') {
    // Don't need list lookups - creating own categories
    throw new McpError(ErrorCode.InvalidRequest, 'List tool disabled');
  }

  // PROXY EVERYTHING ELSE to official MCP
  return lunarClient.callTool({ name, arguments: args });
});

Why filter tools?

  • Each tool definition consumes tokens in the LLM's context
  • 11 tools โ†’ ~500 tokens just for tool descriptions
  • Block tools you don't need โ†’ smaller context โ†’ cheaper + faster

Result: 11 official tools - 2 blocked + 1 custom = 10 tools total, optimized for your use case.

Best practice: If an MCP server exists for your API, wrap it. Only build from REST if no MCP exists or you need complete control.
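One caveat worth coding around: blocking a tool in the CallTool handler stops execution, but the blocked tools still appear in the `tools/list` response, so their descriptions still consume context tokens. To actually get the savings, filter the list too. A sketch of the pure filtering step (the tool names and `whalePostsTool` wiring are from this tutorial's example, not the SDK):

```typescript
// Tools we proxy-block should also disappear from tools/list,
// otherwise their descriptions still consume context tokens.
const BLOCKED_TOOLS = new Set(['Post', 'List']);

interface ToolDef {
  name: string;
  description?: string;
}

// Drop blocked upstream tools, then append our own definitions.
export function mergeToolLists(upstream: ToolDef[], custom: ToolDef[]): ToolDef[] {
  const filtered = upstream.filter(t => !BLOCKED_TOOLS.has(t.name));
  return [...filtered, ...custom];
}
```

In the server from the previous snippet you would register this with `server.setRequestHandler(ListToolsRequestSchema, async () => ({ tools: mergeToolLists((await lunarClient.listTools()).tools, [whalePostsTool]) }))`, where `whalePostsTool` is your custom tool definition.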


The LLM Shortcut: Generate MCP Servers Automatically {#the-llm-shortcut}

Here's the secret: use an LLM to BUILD the MCP server for you.

The Prompt That Works

I want to create an MCP server that wraps this REST API:

[Paste your API documentation here - OpenAPI spec is best]

Create a TypeScript MCP server using @modelcontextprotocol/sdk that:

1. Has these specific tools: [list the endpoints you want]
2. Uses Zod for input validation
3. Transforms responses to be LLM-friendly (remove noise, highlight key data)
4. Includes proper error handling
5. Uses this file structure:
   - src/index.ts (server setup)
   - src/tools.ts (tool definitions)
   - src/handlers.ts (API call handlers)
   - src/api-client.ts (REST API wrapper)

Include package.json and tsconfig.json.

For LunarCrush Specifically

LunarCrush offers LLM-friendly docs at https://lunarcrush.com/api4?format=markdown. Paste them directly into an LLM together with the prompt above.

Time savings: 25 minutes manual → 5 minutes with LLM assistance.

Extend This With AI

Paste these prompts into Claude or ChatGPT to add features:

Add Database Caching:

"Modify my MCP handlers to cache API responses in SQLite for 60 seconds. Add a get_cached_stats tool that shows cache hit rate and last update times"

Add Sector Analysis:

"Create a new analyze_sector tool that fetches the top 10 coins in a sector (DeFi, AI, Memes) and returns aggregated metrics: average Galaxy Score, total social volume, best/worst performers"


ROI & Monetization {#roi-monetization}

For yourself: Save 5-10 hours/week if you regularly work with APIs + AI. "What's trending in crypto?" → instant answer from any MCP-enabled AI.

As a service:

| Complexity | Scope | Price Range |
| --- | --- | --- |
| Simple | 1 API, 3-5 tools | $500-1,000 |
| Medium | 1 API, 10+ tools, custom logic | $1,000-2,000 |
| Complex | Multiple APIs, auth, caching | $2,000-5,000 |
| Enterprise | Multi-tenant, monitoring, SLA | $5,000-10,000+ |

Where to find clients: Reddit (r/ClaudeAI, r/LocalLLaMA), Twitter #BuildInPublic, Anthropic Discord, Upwork/Fiverr, direct outreach to companies with APIs but no MCP.

Pitch angles:

  • "Your API is great, but it's invisible to AI. Let me make it AI-native."
  • "Your competitors have MCP servers. Your users are asking Claude about them, not you."
  • "One MCP server = every AI client. Claude, VS Code, custom agentsโ€”all from one build."
  • "Stop writing integration docs. Let AI discover your API automatically."
  • "Your API + MCP = developers can build with you 10x faster using AI assistants."

Distribution: Package your server for your language's ecosystem (npm for TypeScript, PyPI for Python, etc.) so users can install with one command and add to their MCP config.
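For the npm route, a minimal `package.json` sketch is shown below; the package name is a placeholder, and the version ranges are assumptions to check against current releases:

```json
{
  "name": "my-lunarcrush-mcp",
  "version": "1.0.0",
  "type": "module",
  "bin": {
    "my-lunarcrush-mcp": "dist/index.js"
  },
  "files": ["dist"],
  "scripts": {
    "build": "tsc",
    "prepublishOnly": "npm run build"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.0.0"
  }
}
```

Users can then point their MCP client config at `npx my-lunarcrush-mcp` instead of an absolute file path. Make sure `dist/index.js` starts with a `#!/usr/bin/env node` shebang so the `bin` entry is executable.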

Break-even math (if reselling with LunarCrush Builder plan at $300/mo):

  • At $25/user โ†’ 12 users = break-even
  • At $50/user โ†’ 6 users = break-even
  • At $100/user โ†’ 3 users = break-even

Comparison

| Approach | Build Time | Reusability | LLM Native | Flexibility |
| --- | --- | --- | --- | --- |
| Manual prompts with API calls | 0 | ❌ None | ❌ No | ❌ Limited |
| Function calling per app | 30 min/app | ❌ Per app | ⚠️ Partial | ⚠️ App-specific |
| Official MCP server | 5 min | ✅ All clients | ✅ Full | ❌ Fixed tools |
| Custom MCP server | 25 min once | ✅ All clients | ✅ Full | ✅ Unlimited |

Troubleshooting {#troubleshooting}

| Error | Cause | Solution |
| --- | --- | --- |
| `spawn ENOENT` | Wrong path in Claude config | Use absolute path to `dist/index.js` |
| `401 Unauthorized` | Invalid or missing API key | Check `LUNARCRUSH_API_KEY` in env |
| `429 Too Many Requests` | Rate limit exceeded | Add caching, upgrade plan with JAMAALBUILDS |
| Server not showing in Claude | Config syntax error | Validate JSON, restart Claude Desktop |
| `Cannot find module` | TypeScript not compiled | Run `npm run build` |
| Tools not appearing | Missing capabilities | Ensure `tools: {}` in server capabilities |
| `ECONNREFUSED` | API server down | Check LunarCrush status, try later |
| Zod validation error | Invalid input from LLM | Check tool descriptions are clear |

Pro tip: Check Claude Desktop logs:

  • macOS: ~/Library/Logs/Claude/
  • Windows: %APPDATA%\Claude\logs\

FAQ {#faq}

Q: Can I use JavaScript instead of TypeScript?
A: Yes! Remove type annotations and change file extensions to .js. TypeScript just adds safety.

Q: Does this work with ChatGPT?
A: Not directly - MCP is Anthropic's protocol. But your handler code can be adapted for OpenAI function calling with minimal changes.

Q: Is one server enough for all my AI tools?
A: Yes! MCP is protocol-based. One server running on stdio or HTTP serves any MCP clientโ€”desktop apps, CLI tools, IDE extensions, and custom agents.

Q: How do I add authentication to my MCP server?
A: For local use (stdio), credentials come from environment variables. For HTTP deployment, add auth middleware before the MCP handlers.

Q: Can I deploy this to a server instead of running locally?
A: Yes! Use StreamableHTTPServerTransport instead of StdioServerTransport and deploy to any Node.js hosting (Vercel, Railway, Fly.io).

Q: Do I need to write MCP servers in TypeScript?
A: No! MCP has official SDKs for Python and TypeScript. The protocol is language-agnostic - any language that can do JSON-RPC works.


Glossary {#glossary}

  • MCP (Model Context Protocol): Anthropic's open standard for connecting AI to external data and tools. Uses JSON-RPC 2.0.
  • REST API: Standard web API architecture using HTTP methods (GET, POST, etc.) to interact with resources.
  • Tool: A capability exposed by an MCP server that LLMs can call (like a function).
  • Handler: The code that executes when a tool is called - makes API requests, processes data, returns results.
  • Transport: How the MCP server communicates (stdio for local, Streamable HTTP for production).
  • JSON-RPC: A stateless, lightweight remote procedure call protocol using JSON.
  • Galaxy Score: LunarCrush's 0-100 metric measuring social engagement health for any topic.
  • AltRank: LunarCrush's performance ranking (1 = best) combining social and market metrics.
  • Zod: TypeScript-first schema validation library used for input validation.

Resources

🚀 Ready for real-time data? Use code JAMAALBUILDS for 15% off the LunarCrush Builder plan when you need higher rate limits.


About the Author

Built something cool with this tutorial? Share it!

#LunarCrushBuilder #MCP - Show off what you built!
