
Aaryan Tajanpure


How to Build a Hacker News MCP Server with FastMCP 3.0 (Tools, Resources, Prompts & Lifespan)

I've been building MCP servers for a few weeks now. A calendar tool, a document parser — useful, but neither of them felt like something I'd actually open every day. Then I thought about Hacker News. I check it constantly, but half the time I just want someone to tell me what actually matters today without me having to dig through 30 tabs.

That's when it clicked: instead of browsing HN, I could give Claude the ability to do it for me — fetch stories, search by topic, pull comments, and generate a curated digest on demand. This is that project.

Along the way it became the best introduction to FastMCP 3.0 I've written — covering tools, resources, prompts, lifespan management, async HTTP, and dual transport in one real-world build. If you've been looking for a project that exercises all of FastMCP's major features without artificial complexity, this is it.


What We Are Creating

We are building an MCP server that gives any AI client the ability to read, search, and analyze Hacker News. Your AI assistant will be able to fetch the top stories, read user discussions, and even generate a curated daily tech digest on demand.

Why We Are Creating It

This is the third project in a FastMCP learning series. The first two covered the basics — @mcp.tool, local storage, resources, and PyPI packaging. But several core FastMCP features were still untouched:

  • @mcp.prompt — reusable templates that pre-wire Claude into a specific workflow
  • Context injection — ctx.info(), ctx.report_progress(), and the FastMCP 3.0 CurrentContext() pattern
  • Lifespan management — sharing an httpx.AsyncClient across all requests instead of recreating it per call
  • Async external API calls — hitting a real REST API with httpx inside tool and resource handlers
  • streamable-http transport — running the server over HTTP alongside the default stdio mode

Hacker News is the right vehicle for all of this. The public API requires no authentication, the data is interesting enough to make the output worth looking at, and the use case naturally demands batch fetching (which makes progress reporting meaningful) and search (which makes async HTTP unavoidable).
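To get a feel for what the Firebase API returns before we write any server code, here is an abridged copy of the example story item from the official HN API docs (GET /v0/item/8863.json), parsed with nothing but the standard library. The kids list is truncated here; the field names are exactly the ones our tools will read later.

```python
import json

# Abridged example story item from the official Hacker News API docs
# (https://github.com/HackerNews/API); kids list truncated.
raw = """{
  "id": 8863,
  "type": "story",
  "by": "dhouston",
  "time": 1175714200,
  "title": "My YC app: Dropbox - Throw away your USB drive",
  "url": "http://www.getdropbox.com/u/2/screencast.html",
  "score": 111,
  "descendants": 71,
  "kids": [8952, 9224, 8917]
}"""

item = json.loads(raw)
# The tools later in this post read exactly these fields.
print(item["title"], "-", item["score"], "points,", item["descendants"], "comments")
```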

What We Will Be Using

  • Python 3.11+
  • uv: The blazing‑fast Python package manager.
  • FastMCP (3.0+): The premier framework for building MCP servers in Python.
  • Hacker News Public APIs: Firebase (for real‑time data) and Algolia (for full‑text search).
  • No API keys needed — the HN Algolia API is fully public.

The Tech Stack: MCP and FastMCP 3.0

What is MCP?
The Model Context Protocol (MCP) is an open standard created by Anthropic. Think of it like a USB‑C port for AI models. Instead of writing custom plugins for every LLM or chat application, you write one MCP server, and any MCP‑compatible client (Claude Desktop, Cursor, etc.) can instantly use it.

What is FastMCP?
FastMCP is to MCP what FastAPI is to web development. It's a lightweight, Pythonic framework that lets you turn regular Python functions into MCP tools, resources, and prompts using simple decorators.


The Tutorial

Installation & Project Structure

# 1. Create and enter project directory
mkdir hacker-news-mcp
cd hacker-news-mcp

# 2. Initialize with uv
uv init

# 3. Set Python version
echo 3.11 > .python-version

# 4. Add dependencies (fastmcp 3.0+)
uv add "fastmcp>=3.0.0" httpx

# 5. Verify installation
uv run python -c "import fastmcp; print(fastmcp.__version__)"

Explanation:

  • Sets up a fresh project via uv init.
  • Pins Python 3.11 for environment consistency.
  • Installs fastmcp (3.0+) and httpx.

Create the folder layout:

hacker-news-mcp/
├── pyproject.toml           # project metadata + dependencies
├── .python-version          # 3.11
├── README.md
├── src/
│   └── hacker_news_mcp/
│       ├── __init__.py      # package marker
│       ├── server.py        # FastMCP app + entry point
│       ├── lifespan.py      # httpx.AsyncClient lifecycle
│       ├── tools.py         # @mcp.tool — 4 tools
│       ├── resources.py     # @mcp.resource — 2 resources
│       └── prompts.py       # @mcp.prompt — hn_digest
└── .gitignore

Explanation:

  • Isolates application code in src/hacker_news_mcp/ to prevent accidental imports.
  • Keeps configuration files cleanly at the project root.
  • Separates concerns into dedicated files (tools, resources, lifespan, prompts) for scalability.
mkdir -p src/hacker_news_mcp

src/hacker_news_mcp/__init__.py

"""hacker_news_mcp package – FastMCP server for Hacker News.

Provides tools, resources, prompts, and server entry point.
"""

Explanation:

  • Marks the directory as a standard Python package.
  • Uses a module docstring for self-documentation (visible in IDEs and help()).
  • Kept intentionally empty to ensure logic stays in dedicated modules.

Lifespan Management: Sharing HTTP Clients

src/hacker_news_mcp/lifespan.py

import httpx
from fastmcp.server.lifespan import lifespan

HN_ALGOLIA_BASE = "https://hn.algolia.com/api/v1"
HN_FIREBASE_BASE = "https://hacker-news.firebaseio.com/v0"

@lifespan
async def hn_lifespan(server):
    async with httpx.AsyncClient(
        base_url=HN_ALGOLIA_BASE,
        timeout=httpx.Timeout(30.0),
        headers={"User-Agent": "hacker-news-mcp/0.1.0"},
    ) as algolia_client:
        async with httpx.AsyncClient(
            base_url=HN_FIREBASE_BASE,
            timeout=httpx.Timeout(30.0),
            headers={"User-Agent": "hacker-news-mcp/0.1.0"},
        ) as firebase_client:
            yield {
                "algolia_client": algolia_client,
                "firebase_client": firebase_client,
            }

Explanation:

  • @lifespan initializes HTTP clients once on startup.
  • Yields a dictionary of persistent httpx.AsyncClient instances.
  • Guarantees clean resource shutdown via async with context managers.
  • Eliminates connection overhead across tool and resource calls.
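If the decorator feels magical, the underlying shape is just an async context manager that sets up shared state, yields it, and tears it down once. A stdlib-only sketch of that pattern (placeholder strings stand in for the two httpx clients; demo_lifespan is an illustration, not FastMCP's API):

```python
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def demo_lifespan(server_name: str):
    # Startup: build shared resources once, when the server boots.
    state = {
        "algolia_client": f"<client for {server_name}/algolia>",
        "firebase_client": f"<client for {server_name}/firebase>",
    }
    try:
        yield state  # handlers later read this as ctx.lifespan_context
    finally:
        # Shutdown: release resources exactly once.
        state.clear()

async def main() -> dict:
    async with demo_lifespan("hacker-news-mcp") as ctx_state:
        # Every "request" handled inside this block sees the same dict.
        return dict(ctx_state)

shared = asyncio.run(main())
print(sorted(shared))
```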

Tools Module

src/hacker_news_mcp/tools.py

from __future__ import annotations

import json
from datetime import datetime, timedelta, timezone

import httpx
from fastmcp import FastMCP
from fastmcp.server.context import Context
from fastmcp.dependencies import CurrentContext


def register_tools(mcp: FastMCP) -> None:

    @mcp.tool(
        name="get_top_stories",
        description=(
            "Fetch the top N stories from Hacker News with titles, scores, URLs, authors, and comment counts."
        ),
        tags={"stories", "top"},
    )
    async def get_top_stories(
        limit: int = 10,
        ctx: Context = CurrentContext(),
    ) -> str:
        limit = min(max(1, limit), 30)
        firebase: httpx.AsyncClient = ctx.lifespan_context["firebase_client"]
        await ctx.info(f"Fetching top {limit} story IDs from HN Firebase API")
        resp = await firebase.get("/topstories.json")
        resp.raise_for_status()
        story_ids: list[int] = resp.json()[:limit]
        stories = []
        total = len(story_ids)
        for i, story_id in enumerate(story_ids):
            await ctx.report_progress(progress=i, total=total)
            await ctx.debug(f"Fetching story {story_id} ({i+1}/{total})")
            item_resp = await firebase.get(f"/item/{story_id}.json")
            item_resp.raise_for_status()
            item = item_resp.json()
            if item:
                stories.append({
                    "id": item.get("id"),
                    "title": item.get("title", "Untitled"),
                    "url": item.get("url", f"https://news.ycombinator.com/item?id={story_id}"),
                    "score": item.get("score", 0),
                    "by": item.get("by", "unknown"),
                    "descendants": item.get("descendants", 0),
                    "time": datetime.fromtimestamp(
                        item.get("time", 0), tz=timezone.utc
                    ).isoformat(),
                })
        await ctx.report_progress(progress=total, total=total)
        await ctx.info(f"Successfully fetched {len(stories)} top stories")
        return json.dumps(stories, indent=2)

    @mcp.tool(
        name="get_story_details",
        description=(
            "Fetch a single HN story by ID, including its metadata and top-level comments (up to 10)."
        ),
        tags={"stories", "details"},
    )
    async def get_story_details(
        story_id: int,
        ctx: Context = CurrentContext(),
    ) -> str:
        firebase: httpx.AsyncClient = ctx.lifespan_context["firebase_client"]
        await ctx.info(f"Fetching story details for ID {story_id}")
        resp = await firebase.get(f"/item/{story_id}.json")
        resp.raise_for_status()
        story = resp.json()
        if not story:
            return json.dumps({"error": f"Story {story_id} not found"})
        comment_ids = story.get("kids", [])[:10]
        comments = []
        total_comments = len(comment_ids)
        for i, cid in enumerate(comment_ids):
            await ctx.report_progress(progress=i, total=total_comments)
            comment_resp = await firebase.get(f"/item/{cid}.json")
            comment_resp.raise_for_status()
            comment = comment_resp.json()
            if comment and comment.get("type") == "comment":
                comments.append({
                    "id": comment.get("id"),
                    "by": comment.get("by", "[deleted]"),
                    "text": comment.get("text", ""),
                    "time": datetime.fromtimestamp(
                        comment.get("time", 0), tz=timezone.utc
                    ).isoformat(),
                })
        await ctx.report_progress(progress=total_comments, total=total_comments)
        result = {
            "id": story.get("id"),
            "title": story.get("title", "Untitled"),
            "url": story.get("url", f"https://news.ycombinator.com/item?id={story_id}"),
            "text": story.get("text", ""),
            "score": story.get("score", 0),
            "by": story.get("by", "unknown"),
            "descendants": story.get("descendants", 0),
            "time": datetime.fromtimestamp(
                story.get("time", 0), tz=timezone.utc
            ).isoformat(),
            "type": story.get("type", "story"),
            "comments": comments,
        }
        await ctx.info(
            f"Fetched story '{result['title']}' with {len(comments)} comments"
        )
        return json.dumps(result, indent=2)

    @mcp.tool(
        name="search_stories",
        description=(
            "Search Hacker News via the Algolia API. Supports full-text query and optional date range filtering."
        ),
        tags={"search", "algolia"},
    )
    async def search_stories(
        query: str,
        days_back: int = 7,
        limit: int = 10,
        ctx: Context = CurrentContext(),
    ) -> str:
        limit = min(max(1, limit), 30)
        algolia: httpx.AsyncClient = ctx.lifespan_context["algolia_client"]
        cutoff = datetime.now(timezone.utc) - timedelta(days=days_back)
        numeric_filters = f"created_at_i>{int(cutoff.timestamp())}"
        await ctx.info(
            f"Searching Algolia for '{query}' (last {days_back} days, limit {limit})"
        )
        resp = await algolia.get(
            "/search",
            params={
                "query": query,
                "tags": "story",
                "numericFilters": numeric_filters,
                "hitsPerPage": limit,
            },
        )
        resp.raise_for_status()
        data = resp.json()
        results = []
        for hit in data.get("hits", []):
            results.append({
                "id": hit.get("objectID"),
                "title": hit.get("title", "Untitled"),
                "url": hit.get("url", ""),
                "author": hit.get("author", "unknown"),
                "points": hit.get("points", 0),
                "num_comments": hit.get("num_comments", 0),
                "created_at": hit.get("created_at", ""),
                "story_url": f"https://news.ycombinator.com/item?id={hit.get('objectID')}",
            })
        await ctx.info(f"Found {len(results)} results for '{query}'")
        return json.dumps(results, indent=2)

    @mcp.tool(
        name="get_user",
        description="Fetch a Hacker News user profile by username.",
        tags={"user", "profile"},
    )
    async def get_user(
        username: str,
        ctx: Context = CurrentContext(),
    ) -> str:
        firebase: httpx.AsyncClient = ctx.lifespan_context["firebase_client"]
        await ctx.info(f"Fetching user profile for '{username}'")
        resp = await firebase.get(f"/user/{username}.json")
        resp.raise_for_status()
        user = resp.json()
        if not user:
            return json.dumps({"error": f"User '{username}' not found"})
        result = {
            "id": user.get("id"),
            "created": datetime.fromtimestamp(
                user.get("created", 0), tz=timezone.utc
            ).isoformat(),
            "karma": user.get("karma", 0),
            "about": user.get("about", ""),
            "submitted_count": len(user.get("submitted", [])),
        }
        await ctx.info(f"Found user '{username}' with {result['karma']} karma")
        return json.dumps(result, indent=2)

Explanation:

  • Exposes 4 tools: fetching top stories, story details with comments, Algolia search, and user profiles.
  • Employs ctx.report_progress to keep the client updated during loops.
  • Applies date range numericFilters in Algolia to keep search results relevant.
  • Uses CurrentContext() to inject shared HTTP clients without threading arguments.

Resources Module

src/hacker_news_mcp/resources.py

from __future__ import annotations

import json
from datetime import datetime, timezone

import httpx
from fastmcp import FastMCP
from fastmcp.server.context import Context
from fastmcp.dependencies import CurrentContext


def register_resources(mcp: FastMCP) -> None:

    @mcp.resource(
        uri="hn://stories/top",
        name="TopStories",
        description="Static snapshot of the current top 20 Hacker News stories.",
        mime_type="application/json",
        tags={"stories", "top"},
    )
    async def top_stories_resource(ctx: Context = CurrentContext()) -> str:
        firebase = ctx.lifespan_context["firebase_client"]
        await ctx.info("Resource read: hn://stories/top")
        resp = await firebase.get("/topstories.json")
        resp.raise_for_status()
        story_ids = resp.json()[:20]
        stories = []
        for i, sid in enumerate(story_ids):
            await ctx.report_progress(progress=i, total=20)
            item_resp = await firebase.get(f"/item/{sid}.json")
            item_resp.raise_for_status()
            item = item_resp.json()
            if item:
                stories.append({
                    "id": item.get("id"),
                    "title": item.get("title", "Untitled"),
                    "url": item.get("url", ""),
                    "score": item.get("score", 0),
                    "by": item.get("by", "unknown"),
                    "descendants": item.get("descendants", 0),
                    "time": datetime.fromtimestamp(
                        item.get("time", 0), tz=timezone.utc
                    ).isoformat(),
                })
        await ctx.report_progress(progress=20, total=20)
        return json.dumps(stories, indent=2)

    @mcp.resource(
        uri="hn://item/{item_id}",
        name="HNItem",
        description="Fetch any Hacker News item (story, comment, poll, job) by its ID.",
        mime_type="application/json",
        tags={"item", "detail"},
    )
    async def item_resource(item_id: int, ctx: Context = CurrentContext()) -> str:
        firebase = ctx.lifespan_context["firebase_client"]
        await ctx.info(f"Resource read: hn://item/{item_id}")
        resp = await firebase.get(f"/item/{item_id}.json")
        resp.raise_for_status()
        item = resp.json()
        if not item:
            return json.dumps({"error": f"Item {item_id} not found"})
        result = {
            "id": item.get("id"),
            "type": item.get("type", "unknown"),
            "title": item.get("title", ""),
            "text": item.get("text", ""),
            "url": item.get("url", ""),
            "score": item.get("score", 0),
            "by": item.get("by", "[deleted]"),
            "time": datetime.fromtimestamp(
                item.get("time", 0), tz=timezone.utc
            ).isoformat(),
            "descendants": item.get("descendants", 0),
            "kids_count": len(item.get("kids", [])),
        }
        return json.dumps(result, indent=2)

Explanation:

  • Defines read-only data endpoints, comparable to REST GET endpoints.
  • hn://stories/top provides a point-in-time snapshot of the current top 20 stories.
  • hn://item/{item_id} resolves specific items dynamically using URI templates.
  • Reuses the shared Firebase client and reports progress back to the AI client.

Prompts Module

src/hacker_news_mcp/prompts.py

from __future__ import annotations

from fastmcp import FastMCP
from fastmcp.prompts import Message


def register_prompts(mcp: FastMCP) -> None:

    @mcp.prompt(
        name="hn_digest",
        description=(
            "Generates a daily Hacker News briefing prompt. Claude will use the available tools to fetch stories, "
            "summarize them, and present a curated digest."
        ),
        tags={"digest", "briefing"},
    )
    def hn_digest(
        num_stories: int = 10,
        include_comments: bool = True,
    ) -> list[Message]:
        comment_instruction = ""
        if include_comments:
            comment_instruction = (
                "\n- For the top 3 most-discussed stories, also fetch their "
                "comments using `get_story_details` and include a brief "
                "summary of the community discussion."
            )
        return [
            Message(
                role="user",
                content=f"""Please create a **Daily Hacker News Digest** briefing. Follow these steps:

1. **Fetch Stories**: Use the `get_top_stories` tool with limit={num_stories} to get today's top stories.

2. **Organize by Category**: Group the stories into these categories:
   - 🚀 **Tech & Engineering** — programming, infrastructure, tools
   - 🤖 **AI & ML** — artificial intelligence, machine learning, LLMs
   - 💼 **Business & Startups** — funding, launches, acquisitions
   - 🔬 **Science & Research** — papers, discoveries, space
   - 📱 **Product & Design** — UX, apps, consumer tech
   - 💬 **Community & Culture** — HN meta, tech culture, essays

3. **Format Each Story**:
   - **Title** (linked to URL)
   - ⬆ Score | 💬 Comment count | 👤 Author
   - One-sentence summary of why it matters{comment_instruction}

4. **Add a TL;DR Section** at the top with 3 bullet points capturing the day's biggest themes.

5. **Closing**: End with a "🔍 Worth Watching" section highlighting 1-2 stories that could have significant future implications.

**Formatting**: Use clean markdown with emojis for visual scanning. Keep summaries crisp and insightful.""",
            ),
            Message(
                role="assistant",
                content=(
                    "I'll create your Daily Hacker News Digest now. "
                    "Let me start by fetching the top stories..."
                ),
            ),
        ]

Explanation:

  • Returns a list of Message objects (FastMCP 3.0 requirement).
  • Accepts parameters (num_stories, include_comments) for conditional logic.
  • Delivers robust step-by-step formatting instructions in the user message.
  • "Prefills" the assistant reply to lock the AI into the desired output format immediately.
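The conditional-assembly pattern is worth internalizing on its own. A condensed, stdlib-only sketch of the same structure, with plain dicts standing in for fastmcp.prompts.Message (build_digest_messages and its shortened instructions are illustrative, not the real prompt):

```python
def build_digest_messages(num_stories: int = 10, include_comments: bool = True) -> list[dict]:
    """Sketch of hn_digest's shape: parameters toggle instructions,
    and a prefilled assistant turn closes the message list."""
    comment_instruction = ""
    if include_comments:
        comment_instruction = "\n- Also summarize the comments on the top 3 stories."
    user = (
        f"Create a Daily HN Digest from the top {num_stories} stories."
        f"{comment_instruction}"
    )
    return [
        {"role": "user", "content": user},
        # Prefilled assistant reply nudges the model straight into the workflow.
        {"role": "assistant", "content": "I'll create your digest now..."},
    ]

msgs = build_digest_messages(5, include_comments=False)
print(len(msgs), msgs[0]["content"])
```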

Server Entry Point

src/hacker_news_mcp/server.py

from __future__ import annotations

import sys

from fastmcp import FastMCP

from hacker_news_mcp.lifespan import hn_lifespan
from hacker_news_mcp.tools import register_tools
from hacker_news_mcp.resources import register_resources
from hacker_news_mcp.prompts import register_prompts

mcp = FastMCP(
    name="hacker-news-mcp",
    instructions=(
        "A Hacker News MCP server. Use the provided tools to fetch stories, "
        "search HN, and get user profiles. Use the hn_digest prompt for a "
        "curated daily briefing."
    ),
    lifespan=hn_lifespan,
)

register_tools(mcp)
register_resources(mcp)
register_prompts(mcp)

def main():
    if "--http" in sys.argv:
        mcp.run(
            transport="streamable-http",
            host="127.0.0.1",
            port=8000,
            path="/mcp",
        )
    else:
        mcp.run(transport="stdio")

if __name__ == "__main__":
    main()

Explanation:

  • Acts as the project's central composition root and CLI entry point.
  • Registers lifespan, tools, resources, and prompts directly onto the FastMCP instance.
  • Supports dual transports dynamically driven by command-line arguments.
  • Defaults to stdio for local AI agents, with an --http flag for web and remote clients.
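One detail the snippets above don't show: running the server as uv run hacker-news-mcp assumes a console-script entry in pyproject.toml pointing at main(). Something along these lines (adjust to your own project metadata):

```toml
[project.scripts]
hacker-news-mcp = "hacker_news_mcp.server:main"
```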

Running and Testing

# Option A: FastMCP Dev Inspector (recommended)
cd hacker-news-mcp
fastmcp dev src/hacker_news_mcp/server.py

Explanation:

  • Validates the server dynamically via an interactive web UI (http://localhost:6274).
  • Allows testing tools, resources, and prompts without Claude.
  • Surfaces useful tracebacks locally for rapid debugging.
# Option B: Connect to Claude Desktop (stdio)
uv run hacker-news-mcp

Explanation:

  • Default production mode for local assistants (like Claude Desktop).
  • Communicates JSON-RPC via stdin and stdout.
  • Managed directly as a subprocess by the host client.
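For Claude Desktop specifically, the server is registered in claude_desktop_config.json. A typical entry looks like this (the path is a placeholder; replace it with your project's absolute path):

```json
{
  "mcpServers": {
    "hacker-news-mcp": {
      "command": "uv",
      "args": ["run", "--directory", "/absolute/path/to/hacker-news-mcp", "hacker-news-mcp"]
    }
  }
}
```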
# Option C: HTTP server for any client
uv run hacker-news-mcp --http

Explanation:

  • Binds the MCP server to http://127.0.0.1:8000/mcp.
  • Suited for web integrations, custom frontends, or remote cloud deployment.
  • Supports streaming for real-time progress and log delivery.

FastMCP 2.x → 3.0 Migration Reference

If you're coming from FastMCP 2.x, here's a quick reference for what changed. All the 3.0 patterns are used throughout this project.

| Area | FastMCP 2.x | FastMCP 3.0 |
|---|---|---|
| Context injection | ctx: Context = Context sentinel | ctx: Context = CurrentContext() from fastmcp.dependencies |
| Server config | FastMCP("name", host=..., port=...) | FastMCP("name") + mcp.run(transport=..., host=..., port=...) |
| Duplicate handling | on_duplicate_tools=... etc. | Single on_duplicate=... parameter |
| Prompts | Accepted raw dicts | Must use Message from fastmcp.prompts |
| State methods | ctx.set_state() sync | await ctx.set_state() async |

Further Ahead 🚀

You now have a fully functional FastMCP 3.0 server. Here are three directions worth exploring next:

  • Keyword digest workflow. The search_stories tool is already wired to the Algolia API. Build a hn_keyword_digest prompt that accepts a topic (e.g. "Rust", "LLMs", "YC") and tells Claude to search, filter by score, and summarize. Essentially a personalized HN newsletter, on demand.

  • Cloud deployment. Because we included the streamable-http transport, this server can run anywhere — Render, Fly.io, Railway. Deploy it, point Claude Desktop at the remote URL, and you have your HN assistant available from any machine without running a local process.

  • Specialized prompts. HN has distinct content types that deserve their own prompts. A hn_job_hunt prompt that filters "Who is Hiring?" threads, or an hn_papers prompt that focuses on research and arXiv links — both are a register_prompts addition away. Once you've built one prompt you'll find the pattern comes naturally.


Conclusion

This started as a learning project to cover FastMCP features I hadn't used yet — lifespan, resources, prompts, async HTTP. It ended up being something I actually use. That's the best outcome for a side project.

The full code is on GitHub. If you build something on top of it — a different data source, a new prompt, a cloud deployment — I'd genuinely like to see it.
