MCP (Model Context Protocol) gives you three server types to work with: tools, resources, and prompts. They look similar on the surface but solve fundamentally different problems.
Get this wrong and your AI integration feels awkward. Get it right and it feels like the AI actually understands what it's doing.
## The Mental Model
Think of it like building an API:
| MCP Type | Analogy | Purpose |
|---|---|---|
| Tool | POST /action | Do something, return result |
| Resource | GET /data | Read something, return content |
| Prompt | Template library | Structure a request |
Tools are for actions. Resources are for data. Prompts are for intent.
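That three-way split can be sketched as a discriminated union. This is a hypothetical model to make the analogy concrete, not actual SDK types:

```typescript
// Hypothetical model of the three MCP primitives (not SDK code):
// a tool performs an action, a resource is addressed by URI,
// a prompt renders a request template.
type Capability =
  | { kind: "tool"; name: string; run: (args: Record<string, unknown>) => unknown }
  | { kind: "resource"; uri: string; read: () => string }
  | { kind: "prompt"; name: string; render: (args: Record<string, string>) => string };

function describeCapability(c: Capability): string {
  switch (c.kind) {
    case "tool":     return `POST-like: ${c.name}`; // do something
    case "resource": return `GET-like: ${c.uri}`;   // read something
    case "prompt":   return `template: ${c.name}`;  // structure a request
  }
}

console.log(describeCapability({ kind: "resource", uri: "user://123/profile", read: () => "{}" }));
// GET-like: user://123/profile
```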
## Tools: When AI Needs to Do Things
Use tools when you want the AI to take an action with side effects or call external services.
Good tool use cases:
- Sending an email or Slack message
- Writing a file to disk
- Making an API call (POST to your backend, webhooks)
- Running a database query that modifies data
- Executing a shell command
- Fetching live data from an external API
**The key trait:** Tools are called, they do something, they return a result. The AI decides when and how to call them.
```typescript
// Assumes `server` is an McpServer instance and `z` is zod.
server.tool(
  "send_notification",
  "Send a push notification to a user",
  {
    userId: z.string(),
    message: z.string().max(200),
    priority: z.enum(["low", "normal", "high"]).default("normal"),
  },
  async ({ userId, message, priority }) => {
    await notificationService.send(userId, message, priority);
    // Tool handlers return MCP content blocks, not bare objects
    return {
      content: [{
        type: "text",
        text: JSON.stringify({ sent: true, timestamp: new Date().toISOString() }),
      }],
    };
  }
);
```
The AI sees this tool, decides when to call it, fills in the parameters, gets back the result.
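Since every handler has to return its result in that content-block shape, a small helper keeps things tidy. A minimal sketch, assuming the standard MCP text-content format:

```typescript
// Wraps any JSON-serializable result in the MCP text-content shape
// that tool handlers are expected to return.
function toTextResult(data: unknown): { content: { type: "text"; text: string }[] } {
  return { content: [{ type: "text", text: JSON.stringify(data) }] };
}

// e.g. inside a handler:
// return toTextResult({ sent: true, timestamp: new Date().toISOString() });
console.log(toTextResult({ sent: true }).content[0].text); // {"sent":true}
```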
## Resources: When AI Needs to Know Things
Use resources when you want to expose data the AI can read and reason about.
Good resource use cases:
- Your codebase or specific files
- Database records (read-only)
- Configuration and environment info
- Documentation or knowledge bases
- API responses you want indexed
**The key trait:** Resources have stable URIs. The AI (or client) addresses them by URI and reads their content.
```typescript
// Assumes `server` is an McpServer instance and `ResourceTemplate`
// is imported from the MCP SDK.
server.resource(
  "user-profile",
  new ResourceTemplate("user://{userId}/profile", { list: undefined }),
  async (uri, { userId }) => ({
    contents: [{
      uri: uri.href,
      mimeType: "application/json",
      text: JSON.stringify(await db.users.findById(userId)),
    }]
  })
);
```
When you ask Claude "what do you know about user 123?", it fetches user://123/profile and reasons about the data.
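Under the hood, a template like `user://{userId}/profile` is matched against concrete URIs to extract parameters. A minimal matcher might look like this; it is an illustration only, not the SDK's implementation (which uses RFC 6570 URI templates):

```typescript
// Minimal URI-template matcher: extracts {placeholder} values from a
// concrete URI. Ignores regex-escaping of other characters for brevity.
function matchTemplate(template: string, uri: string): Record<string, string> | null {
  const names: string[] = [];
  const pattern = template.replace(/\{(\w+)\}/g, (_, name) => {
    names.push(name);
    return "([^/]+)";
  });
  const m = uri.match(new RegExp(`^${pattern}$`));
  if (!m) return null;
  return Object.fromEntries(names.map((n, i) => [n, m[i + 1]]));
}

console.log(matchTemplate("user://{userId}/profile", "user://123/profile"));
// { userId: "123" }
```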
## Prompts: When You Want Structured Intent
Use prompts when you have reusable patterns for how to approach a task.
Good prompt use cases:
- Code review checklists
- Security audit frameworks
- Documentation templates
- Debugging workflows
- Domain-specific analysis patterns
**The key trait:** Prompts are templates with parameters. They shape how the AI approaches something, not what data it has access to.
```typescript
// Assumes `server` is an McpServer instance and `z` is zod.
server.prompt(
  "code-review",
  "Comprehensive code review with severity ratings",
  {
    code: z.string().describe("The code to review"),
    language: z.string().optional(),
    focus: z.enum(["security", "performance", "style", "all"]).default("all"),
  },
  ({ code, language, focus }) => ({
    messages: [{
      role: "user",
      content: {
        type: "text",
        text: `Review this ${language || ""} code with focus on ${focus}:\n\n\`\`\`\n${code}\n\`\`\`\n\nFor each issue:\n- Severity: [critical/major/minor]\n- Location: [line or function]\n- Issue: [what's wrong]\n- Fix: [how to fix it]`,
      },
    }],
  })
);
```
The prompt structures the AI's approach. Consistent, thorough reviews instead of variable freeform responses.
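Stripped of the SDK, a prompt is just a pure function from arguments to a message. A sketch of the same code-review template as a hypothetical helper (not SDK code), with the same `"all"` default:

```typescript
// A prompt reduced to its essence: arguments in, structured request out.
function renderCodeReview(args: { code: string; language?: string; focus?: string }): string {
  const focus = args.focus ?? "all"; // same default as the schema
  return `Review this ${args.language ?? ""} code with focus on ${focus}:\n\n` +
    "```\n" + args.code + "\n```\n\n" +
    "For each issue:\n- Severity: [critical/major/minor]\n" +
    "- Location: [line or function]\n- Issue: [what's wrong]\n- Fix: [how to fix it]";
}

console.log(renderCodeReview({ code: "eval(x)", focus: "security" }).includes("focus on security"));
// true
```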
## Combining All Three
The real power is when they work together. A practical example — internal developer tool:
**Resources:**
- `codebase://src/**` → expose your source files
- `docs://api/**` → expose API documentation

**Tools:**
- `run_tests` → execute test suite
- `deploy_to_staging` → trigger deployment
- `create_pr` → open pull request

**Prompts:**
- `pre-deploy-check` → structured deployment readiness review
- `incident-runbook` → guided incident response
Now when you ask "Is the auth module ready to deploy?", Claude can:
- Read the relevant source files (resource)
- Run the test suite (tool)
- Apply the pre-deploy-check prompt (prompt)
- Give you a structured answer
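That flow can be sketched end to end with stubs standing in for the real resource read, tool call, and prompt. All names and data here are hypothetical:

```typescript
// Hypothetical stubs for the three capabilities involved in
// "Is the auth module ready to deploy?"
const readResource = (uri: string) => `// contents of ${uri}`;       // resource
const runTests = () => ({ passed: 42, failed: 0 });                  // tool
const preDeployCheck = (ctx: { source: string; tests: string }) =>   // prompt
  `Assess deploy readiness.\nSource:\n${ctx.source}\nTests: ${ctx.tests}`;

const source = readResource("codebase://src/auth.ts");
const tests = runTests();
const promptText = preDeployCheck({
  source,
  tests: `${tests.passed} passed, ${tests.failed} failed`,
});
console.log(promptText.includes("42 passed")); // true
```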
That's not just AI — that's AI with actual context and capability.
## Common Mistakes
**Using tools when resources are better:** If you're just reading data that doesn't change often, make it a resource. Tools imply side effects. Resources are lighter and cacheable.

**Making prompts too rigid:** A prompt that's too prescriptive produces robotic output. Give structure for what to analyze; leave how to the AI.

**Not using resources for context:** The single biggest win in MCP integrations is giving the AI access to relevant data. If Claude doesn't know your schema or codebase — it's guessing. Resources fix that.
## Quick Reference
| Situation | Use |
|---|---|
| AI needs to call an API | Tool |
| AI needs to write/modify data | Tool |
| AI needs to read a file | Resource |
| AI needs to query a database (read) | Resource |
| AI needs consistent analysis structure | Prompt |
| AI needs reusable task templates | Prompt |
| AI needs live external data | Tool (fetch + return) |
| AI needs your internal docs/context | Resource |
If you're just getting started, the fastest path to a working MCP server is `npx @webbywisp/create-mcp-server` — it scaffolds all three types with real working implementations.
Build the right type from the start and your integration feels natural. Build the wrong one and you'll spend days wondering why it feels off.