MCP servers are growing very fast; it feels like the dotcom bubble, except everything happens 10x faster. Not having your own MCP server may soon feel like not having a website back in the day. In this tutorial I’ll show you how to create your own MCP server that can also manage payment flows.
Let’s say you want to create an MCP server that generates images. I know, most AI chatbots already have image generation built in, but I don’t want to show another "a+b" example. I want a real use case that you can easily swap for something like video generation, which is in very high demand right now.
We’ll use OpenAI for image generation and Walleot as the payment provider (it lets you charge any amount, unlike Stripe, which has a minimum charge).
What you need
- Python 3.10+
- OpenAI API key
- Walleot API key
- uv to manage Python projects
1) Project setup
We start the project the same way the official MCP Python SDK suggests:
uv init mcp-server-demo
cd mcp-server-demo
# Add MCP SDK and CLI
uv add "mcp[cli]"
# Image generation and payments
uv add openai paymcp
# Optional: fetch image bytes, env vars, local resize
uv add requests python-dotenv Pillow
Create a .env file if you want to keep your keys local during development:
OPENAI_API_KEY=sk-...
WALLEOT_API_KEY=...
ENV=development
2) Minimal MCP server
Create server.py the same way the official MCP Python SDK recommends:
from mcp.server.fastmcp import FastMCP, Context, Image

# Create an MCP server
mcp = FastMCP("Image generator", capabilities={"elicitation": {}})

# Define your AI tool
@mcp.tool()
async def generate(prompt: str, ctx: Context):  # important to have ctx: Context here
    """Generates an image and returns it as an MCP resource"""
    # Your image generation logic will appear here
    return None  # later we’ll return an image here

# Run the server
if __name__ == "__main__":
    mcp.run(transport="streamable-http")
As you can see, we added the elicitation capability, a new feature that lets the MCP server ask the user for additional data during execution. Not all clients support it yet, and for older clients you can omit this capability.
3) OpenAI image generation helper
Next, let’s add the image generation logic. Since we use the OpenAI API, create a file named openai_client.py with this code:
from typing import Optional
import os
import base64

import requests
from openai import AsyncOpenAI

_client: Optional[AsyncOpenAI] = None

def _get_client() -> AsyncOpenAI:
    global _client
    if _client is None:
        api_key = os.environ.get("OPENAI_API_KEY")
        if not api_key:
            raise RuntimeError(
                "Missing OPENAI_API_KEY. Set it in your environment before calling generate_image()."
            )
        _client = AsyncOpenAI(api_key=api_key)
    return _client

async def generate_image(prompt: str) -> str:
    """Generate an image and return base64 (PNG by default)."""
    client = _get_client()
    res = await client.images.generate(
        model="dall-e-2",  # use any image model you have access to
        prompt=prompt
    )
    b64 = getattr(res.data[0], "b64_json", None) if res.data else None
    if not b64:
        url = getattr(res.data[0], "url", None) if res.data else None
        if not url:
            raise RuntimeError("No image returned (neither b64_json nor url)")
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        b64 = base64.b64encode(resp.content).decode("ascii")
    return b64
This function returns a base64-encoded image instead of a link, which improves compatibility across different clients.
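If you want to sanity-check the base64 value before handing it to the client, one quick test is the PNG signature. This small helper (my own addition, not part of the tutorial’s code) assumes the model returned PNG bytes:

```python
import base64

PNG_MAGIC = b"\x89PNG\r\n\x1a\n"  # 8-byte signature every PNG file starts with

def is_png_b64(b64: str) -> bool:
    """Decode a base64 string and check for the PNG file signature."""
    try:
        raw = base64.b64decode(b64, validate=True)
    except Exception:
        return False
    return raw.startswith(PNG_MAGIC)

sample = base64.b64encode(PNG_MAGIC + b"...rest of file").decode("ascii")
print(is_png_b64(sample))         # True
print(is_png_b64("not base64!"))  # False
```

Checking the signature catches the common failure mode where an API error page or JSON body gets encoded instead of actual image data.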
4) Use the helper in the tool and return an MCP resource
Now we use this function in our main code. To keep things working smoothly with clients like Claude Desktop, I also resize the image to fit within 100x100. You can skip this if your client has no size limits.
Update server.py:
from mcp.server.fastmcp import FastMCP, Context, Image
from openai_client import generate_image
from io import BytesIO
from PIL import Image as PILImage
import base64

mcp = FastMCP("Image generator", capabilities={"elicitation": {}})

@mcp.tool()
async def generate(prompt: str, ctx: Context):  # important to have ctx: Context here
    """Generates a high quality image and returns it as an MCP resource"""
    b64 = await generate_image(prompt)
    # Decode base64 and resize locally
    raw = base64.b64decode(b64)
    img = PILImage.open(BytesIO(raw))
    img.thumbnail((100, 100))
    buffer = BytesIO()
    img.save(buffer, format="PNG")
    buffer.seek(0)
    return Image(data=buffer.getvalue(), format="png")

if __name__ == "__main__":
    mcp.run(transport="streamable-http")
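Note that thumbnail fits the image inside the box while preserving the aspect ratio (and never upscales), unlike resize, which can distort. The underlying math can be sketched in plain Python (fit_within is an illustrative helper of mine, not a PIL API, and only approximates PIL’s rounding):

```python
def fit_within(width: int, height: int, max_w: int, max_h: int) -> tuple[int, int]:
    """Scale (width, height) to fit inside (max_w, max_h) while preserving
    the aspect ratio, never upscaling - roughly what PIL's thumbnail does."""
    scale = min(max_w / width, max_h / height, 1.0)
    return round(width * scale), round(height * scale)

print(fit_within(1024, 1024, 100, 100))  # (100, 100)
print(fit_within(1024, 512, 100, 100))   # (100, 50) - ratio preserved
print(fit_within(64, 64, 100, 100))      # (64, 64) - small images are left alone
```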
5) Run and test
This runs the MCP server together with MCP Inspector, where you can test your tool:
uv run mcp dev server.py
Alternatively, install for Claude Desktop:
uv run mcp install server.py --with openai --with paymcp --with requests --with Pillow
6) Add payments
Now for the easy but important step: adding payments.
First, import and initialize PayMCP in server.py, then price your tool:
import os

from paymcp import PayMCP, PaymentFlow, price
# ...
# Initialize payments with Walleot
PayMCP(
    mcp,
    providers={"walleot": {"apiKey": os.getenv("WALLEOT_API_KEY")}},
    payment_flow=PaymentFlow.ELICITATION  # change to PaymentFlow.TWO_STEP for Claude Desktop
)

@mcp.tool()
@price(0.2, "USD")
async def generate(prompt: str, ctx: Context):  # important to have ctx: Context here
    # ...
And that’s it! Once you add WALLEOT_API_KEY to your environment, your MCP server will start asking for payment before running the image generation.
7) Run as a server or install in a client
Run as a server:
uv run server.py
Install to Claude Desktop the same way as before (but don’t forget to change payment_flow to PaymentFlow.TWO_STEP for clients that do not support elicitation yet, e.g. Claude Desktop):
uv run mcp install server.py --with openai --with paymcp --with requests --with Pillow
Full code
If you want this as a single-file example, here is server.py with everything wired together. You will still need openai_client.py as shown above.
# server.py
from mcp.server.fastmcp import FastMCP, Image, Context
from paymcp import PayMCP, PaymentFlow, price
from openai_client import generate_image
import base64
from io import BytesIO
import os
from PIL import Image as PILImage

# Load .env in development
env = os.getenv("ENV", "development")
if env == "development":
    from dotenv import load_dotenv
    load_dotenv()

mcp = FastMCP("Image generator", capabilities={"elicitation": {}})

# Payments
PayMCP(
    mcp,
    providers={"walleot": {"apiKey": os.getenv("WALLEOT_API_KEY")}},
    payment_flow=PaymentFlow.ELICITATION
)

@mcp.tool()
@price(0.2, "USD")
async def generate(prompt: str, ctx: Context):  # important to have ctx: Context here
    """Generates a high quality image and returns it as an MCP resource"""
    b64 = await generate_image(prompt)
    # Decode base64 and resize locally
    raw = base64.b64decode(b64)
    img = PILImage.open(BytesIO(raw))
    img.thumbnail((100, 100))
    buffer = BytesIO()
    img.save(buffer, format="PNG")
    buffer.seek(0)
    return Image(data=buffer.getvalue(), format="png")

if __name__ == "__main__":
    mcp.run(transport="streamable-http")
Troubleshooting
- Missing OPENAI_API_KEY: check your env vars or .env file.
- No image returned: verify your OpenAI model access or try another model name.
- Payment not triggered: check WALLEOT_API_KEY, the PayMCP init, and the @price(...) decorator.
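A quick way to catch the first and third issues early is to validate the environment at startup. This helper (check_env is my own name, not part of any library here) simply reports which required variables are missing or empty:

```python
import os

REQUIRED_VARS = ("OPENAI_API_KEY", "WALLEOT_API_KEY")

def check_env(env=None):
    """Return the names of required variables that are missing or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Example with an explicit dict instead of the real environment:
print(check_env({"OPENAI_API_KEY": "sk-test"}))  # ['WALLEOT_API_KEY']
```

Calling check_env() near the top of server.py and raising on a non-empty result turns a confusing mid-request failure into an obvious startup error.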
Links
Full code: python-paymcp-server-demo
PayMCP library: this is a very early version. If you run into issues or want to contribute, feel free to open PRs at https://github.com/blustAI/paymcp