Authors: Igor Alekseev (PSA AWS), Anuj Panchal (SA MongoDB), Vin Dahake (SA AWS)
AI agents are only as useful as the tools they can access. With the Model Context Protocol (MCP) becoming the standard for connecting AI models to external data sources, running MCP servers in managed cloud environments is the natural next step. In this post, we'll walk through deploying the MongoDB MCP Server on Amazon Bedrock AgentCore - giving your AI agents direct, structured access to MongoDB databases.
What You're Building
By the end of this guide, you'll have a containerized MongoDB MCP Server running as an AgentCore MCP runtime. Your AI agents will be able to query collections, inspect schemas, run aggregations, and interact with MongoDB Atlas - all through the MCP protocol, managed by AgentCore.
The architecture is straightforward:
AI Agent → Bedrock AgentCore → MongoDB MCP Server (container) → MongoDB / Atlas
AgentCore handles session management, scaling, and invocation routing. The MCP server handles the MongoDB-specific tooling.
Prerequisites
Before getting started, make sure you have:
- AWS CLI v2 installed and configured (`aws configure`)
- Docker (or Finch) for building container images
- An Amazon ECR private repository for storing the container image
- Access to Amazon Bedrock AgentCore in your target region
- A MongoDB connection string or Atlas API credentials
Step 1: Create an ECR Repository
First, create a private ECR repository to store the MCP server image:
```bash
aws ecr create-repository \
  --repository-name mongodb-mcp-server \
  --region us-east-1
```
Take note of the repository URI in the output - you'll need it in the next step.
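If you'd rather not copy the URI by hand, you can capture it into a shell variable using the AWS CLI's standard `--query` and `--output` flags (the variable name here is just for illustration):

```bash
# Create the repository and capture its URI in one step
REPO_URI=$(aws ecr create-repository \
  --repository-name mongodb-mcp-server \
  --region us-east-1 \
  --query 'repository.repositoryUri' \
  --output text)
echo "$REPO_URI"
```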
Step 2: Build and Push the Container Image
The MongoDB MCP Server repository includes an AWS-optimized Dockerfile at deploy/aws/Dockerfile, pre-configured for AgentCore compatibility. Start by cloning the repo:
```bash
git clone https://github.com/mongodb-js/mongodb-mcp-server.git
cd mongodb-mcp-server
```
Then authenticate with ECR, build the image, and push it:
```bash
# Authenticate Docker with ECR
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin \
  <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com

# Build for ARM64 (required by AgentCore)
docker build --platform linux/arm64 \
  -t <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/mongodb-mcp-server:latest \
  -f deploy/aws/Dockerfile .

# Push to ECR
docker push <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/mongodb-mcp-server:latest
```
**Important:** AgentCore runtimes only support `linux/arm64` images. Always build with `--platform linux/arm64`.
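If you build on an x86 machine, it's easy to produce the wrong architecture by accident. Before pushing, you can sanity-check the locally built image with `docker inspect`:

```bash
# Confirm the image was built for ARM64 before pushing
docker inspect --format '{{.Architecture}}' \
  <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/mongodb-mcp-server:latest
# Expected output: arm64
```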
What's in the Dockerfile
The container is intentionally minimal. Here's what it configures:
| Setting | Value | Why |
|---|---|---|
| MDB_MCP_EXTERNALLY_MANAGED_SESSIONS | true | Lets AgentCore manage MCP session IDs instead of the server |
| MDB_MCP_HTTP_RESPONSE_TYPE | json | Returns JSON responses instead of Server-Sent Events |
| MDB_MCP_DISABLED_TOOLS | atlas-local | Disables local deployment tools that don't work in containers |
| MDB_MCP_TRANSPORT | http | Runs over HTTP instead of the default stdio transport |
| Port | 8000 | The HTTP listener port AgentCore connects to |
The server runs as a non-root user (mcp) for security, and the image is based on node:24-alpine to keep it lightweight.
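The actual file lives at `deploy/aws/Dockerfile` in the repository; as a rough sketch of what the settings in the table translate to, a Dockerfile along these lines would be consistent (the install and start commands here are placeholders, not the real file's contents):

```dockerfile
FROM node:24-alpine

# Install the MCP server (placeholder -- see deploy/aws/Dockerfile for the real steps)
RUN npm install -g mongodb-mcp-server

# AgentCore-specific configuration from the table above
ENV MDB_MCP_TRANSPORT=http \
    MDB_MCP_HTTP_RESPONSE_TYPE=json \
    MDB_MCP_EXTERNALLY_MANAGED_SESSIONS=true \
    MDB_MCP_DISABLED_TOOLS=atlas-local

# Run as a non-root user for security
RUN addgroup -S mcp && adduser -S mcp -G mcp
USER mcp

# HTTP listener port AgentCore connects to
EXPOSE 8000
CMD ["mongodb-mcp-server"]
```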
Step 3: Configure the AgentCore Runtime
When creating your AgentCore MCP runtime, point the container image URI to your ECR image. The key configuration happens through environment variables that you set in the AgentCore runtime configuration.
Passing MongoDB Credentials
At minimum, you need to provide a way for the server to connect to your database. Set these environment variables in your AgentCore runtime:
For a direct MongoDB connection:
```
MDB_MCP_CONNECTION_STRING=mongodb://username:password@host:port/database
```
Note: Use standard mongodb:// connection strings. The mongodb+srv:// format is not yet supported by AgentCore.
For MongoDB Atlas API access (enables Atlas management tools):
```
MDB_MCP_API_CLIENT_ID=your-atlas-service-account-client-id
MDB_MCP_API_CLIENT_SECRET=your-atlas-service-account-client-secret
```
You can provide both a connection string and Atlas API credentials to enable the full set of tools --- database operations plus Atlas cluster management.
Optional Configuration
The server supports a wide range of configuration options through environment variables. A few worth considering for production:
- `MDB_MCP_READ_ONLY=true` - Restricts the server to read-only operations. Highly recommended if your agents only need to query data.
- `MDB_MCP_MAX_DOCUMENTS_PER_QUERY=100` - Caps the number of documents returned per query (default: 100).
- `MDB_MCP_TELEMETRY=disabled` - Disables telemetry collection if needed.
- `MDB_MCP_INDEX_CHECK=true` - Rejects queries that don't use an index, enforcing performance best practices.
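Putting these together, a locked-down production runtime might use an environment configuration like this (connection string and values are illustrative):

```
MDB_MCP_CONNECTION_STRING=mongodb://mcp-agent:<PASSWORD>@host:27017/database
MDB_MCP_READ_ONLY=true
MDB_MCP_MAX_DOCUMENTS_PER_QUERY=50
MDB_MCP_INDEX_CHECK=true
MDB_MCP_TELEMETRY=disabled
```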
See the full configuration reference for all available options.
Step 4: Invoke the Runtime
Once deployed, you can invoke the AgentCore runtime using the Bedrock AgentCore API. Here's how to construct the invocation URL and make a request:
```bash
# Set your runtime ARN
AGENT_ARN="arn:aws:bedrock-agentcore:<REGION>:<ACCOUNT_ID>:runtime/<RUNTIME_NAME>"

# URL-encode the ARN
ENCODED_ARN=$(python3 -c "import urllib.parse; print(urllib.parse.quote('$AGENT_ARN', safe=''))")

# List available tools
curl -X POST \
  "https://bedrock-agentcore.<REGION>.amazonaws.com/runtimes/${ENCODED_ARN}/invocations?qualifier=DEFAULT" \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"tools/list","id":1}'
```
This returns the full list of MCP tools the server exposes - database queries, schema inspection, aggregation pipelines, Atlas management, and more.
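Calling an individual tool works the same way, using the MCP `tools/call` JSON-RPC method. As an example, here's a request invoking the `list-databases` tool (assuming `ENCODED_ARN` is set as above):

```bash
# Invoke a specific MCP tool via JSON-RPC
curl -X POST \
  "https://bedrock-agentcore.<REGION>.amazonaws.com/runtimes/${ENCODED_ARN}/invocations?qualifier=DEFAULT" \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"tools/call","params":{"name":"list-databases","arguments":{}},"id":2}'
```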
Available Tools
Once deployed, your agents get access to a comprehensive set of MongoDB tools:
- Database operations - find, aggregate, insert-many, update-many, delete-many, count, and more. These cover the full range of CRUD operations plus aggregation pipelines.
- Schema and metadata - collection-schema, collection-indexes, list-databases, list-collections, db-stats. Useful for agents that need to understand the data model before querying.
- Atlas management - atlas-list-clusters, atlas-create-free-cluster, atlas-list-projects, atlas-get-performance-advisor, and others. These let agents manage Atlas infrastructure directly.
- Knowledge search - search-knowledge and list-knowledge-sources provide access to MongoDB's documentation and expert guidance, helping agents answer MongoDB-related questions accurately.
Monitoring
Keep an eye on your deployment through Amazon CloudWatch logs:
```bash
aws logs describe-log-groups \
  --region us-east-1 \
  --log-group-name-prefix "/aws/bedrock-agentcore"
```
The server logs to stderr by default in the container configuration, which AgentCore captures and forwards to CloudWatch.
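Once you've found the log group name from the command above, you can stream logs live with `aws logs tail` (the log group path below is a placeholder -- substitute the name from your own `describe-log-groups` output):

```bash
# Stream runtime logs in real time
aws logs tail "<YOUR_LOG_GROUP_NAME>" \
  --region us-east-1 \
  --follow
```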
Security Considerations
A few things to keep in mind for production deployments:
- Use read-only mode (`MDB_MCP_READ_ONLY=true`) unless your agents genuinely need write access.
- Scope Atlas API credentials to the minimum required permissions using MongoDB Atlas Service Accounts.
- Use standard MongoDB authentication - create a dedicated database user for the MCP server with only the necessary privileges.
- Monitor query patterns - use the `MDB_MCP_INDEX_CHECK=true` option to prevent unindexed collection scans that could impact database performance.
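For the dedicated database user, one way to do it is with `mongosh` and `createUser`, granting only the `read` role on the databases the agent actually needs (the user name, password, and database below are hypothetical examples):

```bash
# Create a read-only MongoDB user scoped to a single database
mongosh "mongodb://admin:<PASSWORD>@host:27017/admin" --eval '
  db.getSiblingDB("admin").createUser({
    user: "mcp-agent",
    pwd: "<STRONG_PASSWORD>",
    roles: [ { role: "read", db: "sales" } ]
  })'
```

Pair this user's credentials with `MDB_MCP_READ_ONLY=true` so both the database and the MCP server enforce the restriction independently.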
Wrapping Up
Deploying the MongoDB MCP Server on AgentCore gives your AI agents a reliable, managed path to MongoDB data. AgentCore handles the infrastructure concerns --- session management, scaling, health checks --- while the MCP server provides the structured tooling layer that agents need to work with databases effectively.
The setup is minimal: one Dockerfile, a few environment variables, and you're running. From there, it's about tuning the configuration to match your security and performance requirements.
For more details, check out the MongoDB MCP Server repository at https://github.com/mongodb-js/mongodb-mcp-server.