Model Context Protocol (MCP) is changing how we connect LLMs to external data. While running MCP locally via stdio is great for testing, the real power comes when you deploy these servers to the cloud, making your tools available to any agent, anywhere.
In this guide, I'm going to walk you through building a Google Cloud Storage (GCS) MCP Server, containerizing it, and deploying it to Google Cloud Run.
We will cover:
- The Server Logic: Writing our own custom MCP server.
- Containerization: Docker setup for Cloud Run.
- Deployment: Permissions and Cloud Build commands.
- Verification: Testing with the MCP Inspector.
So let's get started.
The Server Logic (server.py)
The biggest challenge with deploying MCP servers is the transport. Locally, we usually use stdio (standard input/output). In the cloud, we need SSE (Server-Sent Events) over HTTP.
I didn't want to maintain two separate scripts, so I wrote a single server.py that detects where it's running. If it sees a PORT environment variable (which Cloud Run injects), it spins up a web server.
Here is the implementation using FastMCP:
# server.py
import os

from mcp.server.fastmcp import FastMCP
from google.cloud import storage

# Initialize FastMCP
mcp = FastMCP("gcs-mcp-server")

# Initialize GCS Client
storage_client = storage.Client()

@mcp.tool()
def list_buckets() -> str:
    """Lists all Google Cloud Storage buckets in the project."""
    buckets = list(storage_client.list_buckets())
    return "\n".join([bucket.name for bucket in buckets])

@mcp.tool()
def create_bucket(bucket_name: str, location: str = "US") -> str:
    """Creates a new GCS bucket."""
    bucket = storage_client.bucket(bucket_name)
    bucket.storage_class = "STANDARD"
    new_bucket = storage_client.create_bucket(bucket, location=location)
    return f"Created bucket {new_bucket.name} in {location}"

@mcp.tool()
def delete_bucket(bucket_name: str) -> str:
    """
    Deletes a GCS bucket.
    WARNING: This forces deletion of all objects and versions inside.
    """
    bucket = storage_client.bucket(bucket_name)
    # Robust cleanup: list and delete every blob version to avoid
    # "Bucket not empty" errors, even with Object Versioning enabled.
    blobs = list(storage_client.list_blobs(bucket_name, versions=True))
    for blob in blobs:
        blob.delete()
    bucket.delete(force=True)
    return f"Deleted bucket {bucket_name}"

if __name__ == "__main__":
    # DUAL MODE LOGIC
    if "PORT" in os.environ:
        # We are in Cloud Run -> start an SSE server
        import uvicorn

        port = int(os.environ["PORT"])
        print(f"Starting SSE server on port {port}...")
        # Critical for Cloud Run: allow proxy headers and bind to 0.0.0.0
        uvicorn.run(
            mcp.sse_app(),
            host="0.0.0.0",
            port=port,
            proxy_headers=True,
            forwarded_allow_ips="*",
            timeout_keep_alive=300,  # Prevent SSE connection drops
        )
    else:
        # We are local -> start a stdio server
        mcp.run()
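Stripped of the server details, the dual-mode branch boils down to a single environment check. A minimal sketch of that decision (the helper name `choose_transport` is mine, not part of the SDK):

```python
def choose_transport(env: dict) -> str:
    """Pick the MCP transport the way server.py does: Cloud Run injects PORT."""
    return "sse" if "PORT" in env else "stdio"

print(choose_transport({"PORT": "8080"}))  # sse   (Cloud Run)
print(choose_transport({}))                # stdio (local dev)
```

Cloud Run always injects `PORT` into the container, so its presence is a reliable signal that we should serve SSE over HTTP rather than speak stdio.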
You'll notice the extra logic in delete_bucket. GCS is very protective: it won't let you delete a bucket with anything inside it. If Object Versioning is turned on, a bucket can look empty while still holding archived object versions. My implementation explicitly hunts down every version and deletes it first.
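To see why the version sweep matters, here's a toy in-memory model (FakeBucket and its methods are illustrative, not GCS APIs): deleting only the live objects still leaves archived versions behind, which is exactly what makes a plain bucket delete fail.

```python
class FakeBucket:
    """Toy model of a versioned GCS bucket."""

    def __init__(self):
        # Each object name maps to a list of versions (archived + live).
        self.versions = {"report.csv": ["v1", "v2"], "logo.png": ["v1"]}

    def list_blobs(self, versions=False):
        if versions:  # every generation, like list_blobs(..., versions=True)
            return [(n, v) for n, vs in self.versions.items() for v in vs]
        return [(n, vs[-1]) for n, vs in self.versions.items()]  # live only

    def delete_blob(self, name, version):
        self.versions[name].remove(version)
        if not self.versions[name]:
            del self.versions[name]

    def is_empty(self):
        return not self.versions

bucket = FakeBucket()
# Deleting only the "live" listing leaves archived versions behind:
for name, version in bucket.list_blobs():
    bucket.delete_blob(name, version)
print(bucket.is_empty())  # False -> a plain bucket delete would be refused

# Sweeping with versions=True, as server.py does, truly empties it:
for name, version in bucket.list_blobs(versions=True):
    bucket.delete_blob(name, version)
print(bucket.is_empty())  # True
```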
Dockerizing (Keep It Simple)
We don't need a complex multi-stage build here. A simple Python slim image works perfectly.
The requirements.txt is minimal:
mcp[cli]
google-cloud-storage
uvicorn
And the Dockerfile:
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "server.py"]
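Since COPY . . ships the whole directory into the image (and gcloud builds submit uploads the whole directory to Cloud Build), it's worth adding a small .dockerignore to keep local cruft out. The entries below are illustrative; trim to taste:

.git
__pycache__/
*.pyc
.venv/
deploy.sh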
Infrastructure Setup
Before we deploy, we need to make sure your CLI is pointing to the right place and your project has permissions.
Step 1: Initialize gcloud
Open your terminal. If you haven't logged in recently, run:
gcloud auth login
gcloud config set project <YOUR_PROJECT_ID>
Step 2: Grant Permissions
By default, Cloud Run cannot manage Storage buckets. We need to find the Service Account that Cloud Run uses and give it the Storage Admin role.
Run these commands in your terminal:
# 1. Get your Project ID and Number
export PROJECT_ID=$(gcloud config get-value project)
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format='value(projectNumber)')

# 2. Grant the permission
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
  --role="roles/storage.admin"
Expected Output: gcloud prints the updated IAM policy as a YAML list of bindings, ending with "Updated IAM policy for project [your-project-id]."
Deploying to Cloud Run
Instead of typing long commands every time, we use a deployment script. Save this as deploy.sh in your project folder.
#!/bin/bash
set -e

# Configuration
PROJECT_ID=$(gcloud config get-value project)
SERVICE_NAME="gcs-mcp-server"
REGION="us-central1"
IMAGE_NAME="gcr.io/${PROJECT_ID}/${SERVICE_NAME}"

if [ -z "$PROJECT_ID" ]; then
  echo "Error: No Google Cloud project selected. Run 'gcloud config set project <PROJECT_ID>' first."
  exit 1
fi

echo "Deploying ${SERVICE_NAME} to Project: ${PROJECT_ID}..."

# 1. Build the container image
echo "Building container image..."
gcloud builds submit --tag $IMAGE_NAME

# 2. Deploy to Cloud Run
echo "Deploying to Cloud Run..."
# Note: --allow-unauthenticated is used here for simplicity to allow easy testing.
# For production, remove this flag and configure IAM authentication.
gcloud run deploy $SERVICE_NAME \
  --image $IMAGE_NAME \
  --platform managed \
  --region $REGION \
  --allow-unauthenticated

echo "Deployment complete!"
echo "Service URL:"
gcloud run services describe $SERVICE_NAME --platform managed --region $REGION --format 'value(status.url)'
Running the Deployment
Now, make the script executable and run it:
chmod +x deploy.sh
./deploy.sh
What You Will See (Terminal Output)
Build Step: You will see a stream of logs starting with Creating temporary tarball..., followed by the Docker build steps (Step 1/6, Step 2/6, ...). This ends with SUCCESS.
Deploy Step: You will see Deploying container to Cloud Run service [gcs-mcp-server] in project...
Success:
Service [gcs-mcp-server] has been deployed and is serving 100 percent of traffic.
Service URL: https://gcs-mcp-server-xyz-uc.a.run.app
Copy that Service URL. You need it.
Verification
Go to the Google Cloud Console and navigate to Cloud Run.
1. The Service Dashboard
Click on gcs-mcp-server. You should see a green checkmark indicating the service is healthy.
Revisions Tab: Ensure the latest revision is receiving 100% traffic.
Logs Tab: This is crucial. If your code crashes, this is where you look. You should see a log entry: Starting SSE server on port 8080.... If you see that, your code successfully detected the environment and started Uvicorn.
2. Testing with MCP Inspector
You don't need to write Python code to verify the server is working. Use the MCP Inspector from your terminal.
The Command:
npx @modelcontextprotocol/inspector \
  --transport sse \
  --server-url https://<YOUR-SERVICE-URL>/sse
Note: You MUST append /sse to the URL. The root URL / will likely give you a 404 or Method Not Allowed.
What You Will See: A web interface will open in your browser.
1. Click Connect to establish the connection.
2. In the left sidebar, click Tools, then List Tools. You should see list_buckets, create_bucket, and delete_bucket listed.
3. Select list_buckets and hit Run.
4. You should get back a JSON response containing your bucket names.
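Beyond the Inspector, you can exercise the server from Python using the SSE client in the same mcp package we installed. This is a sketch, not the only way to do it; the helper name call_list_buckets is mine, and the imports are kept inside the function so the snippet can be pasted anywhere:

```python
async def call_list_buckets(server_url: str) -> str:
    """Connect to the deployed MCP server over SSE and invoke list_buckets."""
    # Local imports so the sketch is definable even without the SDK installed.
    from mcp import ClientSession
    from mcp.client.sse import sse_client

    # Same rule as the Inspector: the SSE endpoint lives under /sse.
    async with sse_client(f"{server_url.rstrip('/')}/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("list_buckets", {})
            # Tool results come back as content blocks; take the text part.
            return result.content[0].text
```

Run it with asyncio.run(call_list_buckets("https://gcs-mcp-server-xyz-uc.a.run.app")), substituting your own Service URL.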
Conclusion
That's it! You've gone from a local script to a deployed, scalable microservice that acts as a tool for your AI agents.
By moving your tools to Cloud Run, you decouple them from your agent's runtime. Your agent can be a script on your laptop, a service in Vertex AI, or a workflow in a completely different environment - it doesn't matter. As long as it can hit that URL, it has access to your Google Cloud Storage tools.