Building a Serverless DynamoDB MCP: Making Your AI Talk to Your Database
Have you ever wished you could just ask your AI assistant to query your database? Something like:
"Hey Kiro, show me all active users from my DynamoDB table"
or
"Add a new user named Alice with email alice@example.com to the Users table"
Well, that's exactly what we're building today! 🚀
The Big Picture: What Are We Building?
We're creating a serverless MCP (Model Context Protocol) backend on AWS that enables AI assistants like Kiro to interact with DynamoDB tables conversationally. Think of it as giving Kiro a direct, secure phone line to your DynamoDB database.
Here's what makes this special:
- 10 DynamoDB operations exposed as natural language tools
- Completely serverless - runs on AWS Lambda
- Secure by default - AWS IAM authentication with SigV4 signing
- Thin local footprint - just a small proxy script; all the heavy lifting happens in the cloud
- Self-configuring - tools are discovered dynamically
Wait, What's MCP?
Before we dive in, let's talk about MCP (Model Context Protocol).
Think of MCP as a standardized way for AI assistants to use external tools. It's like giving your AI a toolbox where each tool does something specific - query a database, fetch weather data, send emails, etc.
The protocol works like this:
1. The AI assistant connects to an MCP server
2. The server tells the AI what tools are available
3. The AI calls these tools when needed
4. The server executes the tool and returns the results
5. The AI uses the results to help the user
The beauty? The AI doesn't need to know how the tools work internally. It just needs to know what they do and how to call them.
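To make this concrete, here's roughly what the exchange looks like on the wire. This is a sketch following the MCP spec (shown as Python dicts, using this project's get-item tool; fields abbreviated):

# Step 2: the server advertises its tools
list_response = {
    "jsonrpc": "2.0", "id": 1,
    "result": {"tools": [
        {"name": "dynamodb_get_item",
         "description": "Retrieve a single item from DynamoDB..."},
    ]},
}

# Step 3: the AI calls one when needed
call_request = {
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {
        "name": "dynamodb_get_item",
        "arguments": {"table_name": "Users", "key": {"userId": "user001"}},
    },
}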
Why Build This Serverless?
You might ask: "Why not just run a local server on my machine?"
Great question! Here's why serverless wins:
1. Centralized Management
One deployment serves all your team members. Update once, everyone benefits. No "it works on my machine" problems.
2. Security at Scale
- IAM-based authentication (no API keys to rotate)
- Each Lambda has scoped permissions
- Audit logs for every database operation
- Secrets managed by AWS Secrets Manager
3. Cost Efficiency
Pay only when you use it. Lambda charges per request, not per hour. Most hobby projects? Practically free under AWS free tier.
4. Automatic Scaling
Whether it's you at 2 AM or your whole team during peak hours, it just works.
5. No Infrastructure Headaches
No servers to patch, no runtime versions to manage, no "why is Python 3.8 broken on my Mac?"
The Architecture: How It All Fits Together
Let me paint you a picture of how this works:
┌─────────────────────┐
│ You: "Show me all │
│ users from Users │
│ table" │
└──────────┬──────────┘
│
▼
┌─────────────────────┐
│ Claude Desktop │ ← Your AI assistant
│ (MCP Client) │
└──────────┬──────────┘
│ stdio / JSON-RPC
▼
┌─────────────────────┐
│ Local Proxy │ ← Signs requests with your AWS credentials
│ (proxy.sh) │
└──────────┬──────────┘
│ HTTPS + AWS IAM Auth
▼
┌─────────────────────┐
│ API Gateway │ ← Entry point to AWS
│ (HTTP API) │
└──────────┬──────────┘
│
▼
┌─────────────────────┐
│ Lambda Functions │ ← 11 functions: one per operation, plus tool discovery
│ - get-item │
│ - put-item │
│ - query │
│ - scan │
│ - etc... │
└──────────┬──────────┘
│
▼
┌─────────────────────┐
│ DynamoDB Tables │ ← Your actual data
└─────────────────────┘
The Flow, Step by Step:
1. You ask Kiro something about your database
2. Kiro recognizes it needs to use a DynamoDB tool
3. The local proxy intercepts the request and signs it with AWS SigV4
4. API Gateway validates the signature (IAM authentication)
5. The Lambda function executes the DynamoDB operation
6. The result comes back as human-readable text
7. Kiro uses the result to answer your question
The genius here? Kiro has no idea it's talking to AWS. It thinks it's using a local tool. All the cloud complexity is hidden.
The Key Design Decisions
Let me walk you through the "why" behind each major decision:
Decision 1: Why Plain-Text Responses?
DynamoDB returns data in this format:
{
  "Item": {
    "userId": {"S": "user001"},
    "name": {"S": "Alice Johnson"},
    "age": {"N": "28"}
  }
}
Ugly, right? Those {"S": ...} and {"N": ...} wrappers are DynamoDB's type system.
Our Lambda functions convert this to:
Item from table 'Users':
userId: user001
name: Alice Johnson
age: 28
Why? Because Kiro can narrate this naturally to you. No JSON parsing needed. It's optimized for conversation, not computation.
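If you're curious how the unwrapping works, boto3 even ships a deserializer for these typed attributes. A minimal sketch (the project's actual formatting helper may differ):

from boto3.dynamodb.types import TypeDeserializer

def to_plain_text(item: dict) -> str:
    """Unwrap DynamoDB's typed attributes into 'key: value' lines."""
    deserializer = TypeDeserializer()
    plain = {k: deserializer.deserialize(v) for k, v in item.items()}
    return "\n".join(f"{k}: {v}" for k, v in plain.items())

# {"name": {"S": "Alice Johnson"}, "age": {"N": "28"}} -> "name: Alice Johnson\nage: 28"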
Decision 2: Why One Lambda Per Operation?
We could've built one mega-Lambda that handles everything. But we didn't. Here's why:
Principle of Least Privilege: Each Lambda gets only the permissions it needs.
- get-item Lambda → dynamodb:GetItem permission only
- put-item Lambda → dynamodb:PutItem permission only
- delete-item Lambda → dynamodb:DeleteItem permission only
If one Lambda gets compromised? Damage is limited.
Clear Separation:
- Each Terraform file = One Lambda
- Easy to understand, easy to modify
- Want to remove scan operation? Delete one file.
Cost Optimization:
Lambda charges by execution time. Smaller functions = faster cold starts = lower costs.
Decision 3: Why Self-Configuring Tools?
The proxy script doesn't have any hardcoded tool definitions. On startup, it calls:
GET /tools
And receives:
[
  {
    "name": "dynamodb_get_item",
    "description": "Retrieve a single item from DynamoDB...",
    "inputSchema": {...},
    "route": "/dynamodb/get-item"
  },
  ...
]
The magic? Add a new tool to dynamodb_ops.py, deploy, and the proxy automatically discovers it. No client-side updates needed.
This follows the Unix philosophy: "mechanism, not policy." The proxy provides the mechanism (SigV4 signing, JSON-RPC), but the backend defines the policy (what tools exist).
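In Python terms, the discovery step is nothing more than fetch-and-cache. A sketch (the real proxy is bash, and it SigV4-signs this request like every other; signing is omitted here for brevity):

import json
import urllib.request

def discover_tools(endpoint: str) -> list:
    """Fetch tool definitions from the backend at startup."""
    with urllib.request.urlopen(f"{endpoint}/tools") as resp:
        return json.loads(resp.read())

# tools = discover_tools("https://<api-id>.execute-api.us-east-1.amazonaws.com")
# Each entry carries name, description, inputSchema, and route.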
Decision 4: Why AWS IAM Instead of API Keys?
Traditional approach:
export API_KEY="super-secret-key-123"
Our approach:
# Uses your AWS credentials
# Same ones you use for AWS CLI
Benefits:
- ✅ No keys to rotate every 90 days
- ✅ Integrates with your existing AWS setup
- ✅ CloudTrail logs every request
- ✅ Can revoke access instantly via IAM
- ✅ Supports MFA, temporary credentials, SSO
The proxy signs every request with AWS Signature Version 4. API Gateway validates the signature before Lambda even runs. It's the same security AWS Console uses.
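The bash proxy implements the signing by hand; if you want to see the idea in Python, botocore does it in a few lines. A sketch, assuming your credentials are available the usual way (environment, profile, or SSO):

import json
import urllib.request

import boto3
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

def signed_post(url: str, body: dict, region: str = "us-east-1") -> str:
    """POST a JSON body signed with SigV4 for API Gateway's IAM authorizer."""
    creds = boto3.Session().get_credentials()
    payload = json.dumps(body).encode("utf-8")
    request = AWSRequest(method="POST", url=url, data=payload,
                         headers={"Content-Type": "application/json"})
    SigV4Auth(creds, "execute-api", region).add_auth(request)  # adds the Authorization header
    req = urllib.request.Request(url, data=payload,
                                 headers=dict(request.headers), method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")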
The Code: Let's Break It Down
The Lambda Handler (Simplified)
Here's what a Lambda function looks like (simplified for clarity):
import json
import boto3

dynamodb = boto3.client("dynamodb")

def format_item(item: dict) -> str:
    # Stand-in for the project's helper: unwrap {"S": ...}/{"N": ...} into "key: value" lines
    return "\n".join(f"{k}: {list(v.values())[0]}" for k, v in item.items())

def get_item_handler(event, context):
    """Retrieve a single item from DynamoDB by primary key."""
    # Parse the request ("or" guards against a null body)
    params = json.loads(event.get("body") or "{}")
    table_name = params.get("table_name")
    key = params.get("key")

    # Convert simple format to DynamoDB format
    dynamodb_key = {}
    for k, v in key.items():
        if isinstance(v, str):
            dynamodb_key[k] = {"S": v}
        elif isinstance(v, (int, float)):
            dynamodb_key[k] = {"N": str(v)}

    # Call DynamoDB
    response = dynamodb.get_item(TableName=table_name, Key=dynamodb_key)

    # Format response as human-readable text
    item = response.get("Item", {})
    formatted = format_item(item)

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "text/plain"},
        "body": f"Item from table '{table_name}':\n{formatted}",
    }
Three key parts:
- Parse input - Extract table name and key
- Convert formats - Simple JSON → DynamoDB types
- Return readable text - Not raw JSON
The Proxy Script (The Secret Sauce)
The proxy does three critical things:
1. Tool Discovery:
# On startup
curl -X GET https://api.execute-api.us-east-1.amazonaws.com/tools
# Saves tool definitions locally
2. SigV4 Signing:
# For each request
signature=$(calculate_aws_signature "$request")
curl -H "Authorization: AWS4-HMAC-SHA256 Credential=..." \
https://api.execute-api.us-east-1.amazonaws.com/dynamodb/get-item
3. JSON-RPC Translation:
# Receives from Kiro:
{"jsonrpc": "2.0", "method": "tools/call", "params": {...}}
# Translates to HTTP:
POST /dynamodb/get-item
{"table_name": "Users", "key": {"userId": "123"}}
# Returns to Kiro:
{"jsonrpc": "2.0", "result": {"content": [{"type": "text", "text": "..."}]}}
It's a protocol adapter - speaks MCP to Kiro, speaks HTTP to AWS.
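If the bash feels opaque, here's the same loop sketched in Python (the real proxy.sh does this with jq and curl, and a real client also expects an initialize handshake, omitted here):

import json
import sys

def serve(tools: list, call_tool) -> None:
    """Minimal MCP adapter loop: JSON-RPC in on stdin, JSON-RPC out on stdout."""
    for line in sys.stdin:
        msg = json.loads(line)
        if msg.get("method") == "tools/list":
            result = {"tools": tools}
        elif msg.get("method") == "tools/call":
            params = msg["params"]
            # call_tool() performs the signed HTTPS POST to API Gateway
            text = call_tool(params["name"], params["arguments"])
            result = {"content": [{"type": "text", "text": text}]}
        else:
            continue  # ignore notifications and unsupported methods in this sketch
        reply = {"jsonrpc": "2.0", "id": msg.get("id"), "result": result}
        sys.stdout.write(json.dumps(reply) + "\n")
        sys.stdout.flush()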
The Infrastructure (Terraform)
Each Lambda gets its own Terraform file. Here's the pattern:
# IAM Role
resource "aws_iam_role" "lambda_get_item_role" {
  name = "dynamodb-get-item-role"
  # Trust policy allows Lambda service to assume this role
}

# Scoped Permission
resource "aws_iam_role_policy" "lambda_get_item_dynamodb" {
  name = "dynamodb-get-item-policy"
  role = aws_iam_role.lambda_get_item_role.id
  policy = jsonencode({
    Statement = [{
      Effect   = "Allow"
      Action   = ["dynamodb:GetItem"] # Only this action!
      Resource = ["*"]
    }]
  })
}

# Lambda Function
resource "aws_lambda_function" "lambda_get_item" {
  function_name = "dynamodb-get-item"
  role          = aws_iam_role.lambda_get_item_role.arn
  runtime       = "python3.13"
  handler       = "dynamodb_ops.get_item_handler"
  # ... more config
}
Rinse and repeat for each operation. Total: 11 Lambda functions (the 10 DynamoDB operations plus the /tools discovery endpoint).
The 10 DynamoDB Operations
Here's what you can do:
Read Operations
1. Get Item - Fetch a single item by key
"Get user user001 from the Users table"
2. Query - Find items matching a condition
"Show me all orders for user123 from the Orders table"
3. Scan - Read the entire table (with optional filters)
"Scan the Products table and show me 10 items"
4. Batch Get - Fetch multiple items at once
"Get users user001, user002, and user003 from Users table"
5. List Tables - See all DynamoDB tables
"What DynamoDB tables do I have?"
6. Describe Table - Get table metadata
"Describe the Users table structure"
7. Count Items - Get approximate table size
"How many items are in the Users table?"
Write Operations
8. Put Item - Add or replace an item
"Add a user with userId user011, name Kate Brown to Users table"
9. Update Item - Modify specific attributes
"Update the role to Senior Engineer for user001"
10. Delete Item - Remove an item
"Delete user user005 from the Users table"
Bonus: The Sample Table
We include an optional sample-table.tf that creates a "Users" table with 10 realistic user records:
resource "aws_dynamodb_table" "users_sample" {
name = "Users"
billing_mode = "PAY_PER_REQUEST" # No fixed costs!
hash_key = "userId"
# ... schema definition
}
resource "aws_dynamodb_table_item" "user_1" {
table_name = aws_dynamodb_table.users_sample.name
item = jsonencode({
userId = { S = "user001" }
name = { S = "Alice Johnson" }
email = { S = "alice.johnson@example.com" }
role = { S = "Software Engineer" }
department = { S = "Engineering" }
active = { BOOL = true }
# ... more fields
})
}
Perfect for testing! Deploy once, start asking questions immediately.
Don't need it? Just delete the file or rename it to sample-table.tf.disabled.
How to Deploy This
Ready to try it? Here's the journey:
Prerequisites
# You need these installed
aws --version # AWS CLI
terraform --version # Terraform
jq --version # JSON processor
bash --version # Bash 4+
Make sure your AWS credentials are configured:
aws sts get-caller-identity
Step 1: Clone and Deploy
# Clone the repo (or create from the code)
cd AWSServerlessMCP
# Run the magic script
./apply.sh
This script:
- ✅ Validates your environment
- ✅ Deploys all 11 Lambda functions via Terraform
- ✅ Creates API Gateway routes
- ✅ Generates IAM user for the proxy
- ✅ Stores credentials in Secrets Manager
- ✅ Generates Claude Desktop config
- ✅ Runs validation tests
Total deployment time: ~2-3 minutes
Step 2: Configure Claude Desktop
The script generates 02-proxy/claude_desktop_config_sh.json:
{
  "mcpServers": {
    "dynamodb": {
      "command": "bash",
      "args": ["/path/to/proxy.sh"],
      "env": {
        "MCP_ACCESS_KEY_ID": "AKIA...",
        "MCP_SECRET_ACCESS_KEY": "...",
        "MCP_API_ENDPOINT": "https://....execute-api.us-east-1.amazonaws.com",
        "MCP_REGION": "us-east-1"
      }
    }
  }
}
Copy this to your Claude Desktop config:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json
Step 3: Restart Claude Desktop
Close and reopen Claude Desktop. You should see DynamoDB tools appear!
Step 4: Start Asking Questions!
Try these:
"List all my DynamoDB tables"
"Describe the Users table"
"Show me all users from the Users table"
"Get user user001 from Users table"
"Add a new user with userId user011, name John Doe,
email john@example.com to the Users table"
Security Deep Dive
Let's talk about how we keep this secure:
1. IAM Authentication
Every request goes through this flow:
Request → Proxy signs with AWS SigV4 → API Gateway validates signature → Lambda executes
No signature = No access. Period.
2. Scoped Permissions
The proxy IAM user has exactly ONE permission:
{
  "Effect": "Allow",
  "Action": "execute-api:Invoke",
  "Resource": "arn:aws:execute-api:us-east-1:ACCOUNT:API_ID/*/*"
}
It can call the API. Nothing else. Can't create EC2 instances, can't delete S3 buckets, can't read secrets.
3. Lambda Isolation
Each Lambda has scoped DynamoDB permissions:
get-item Lambda → Can only read
put-item Lambda → Can only write
delete-item Lambda → Can only delete
Even if you somehow bypass API Gateway (you can't), each Lambda is isolated.
4. Audit Trail
Every action is logged:
def _audit_log(event: dict, tool: str) -> None:
    user = event.get("headers", {}).get("x-mcp-user", "unknown")
    print(f"AUDIT tool={tool} user={user}")
CloudWatch Logs capture:
- Who made the request (your username)
- What tool was called
- When it happened
- What the result was
5. No Secrets in Code
Credentials live in AWS Secrets Manager:
aws secretsmanager get-secret-value --secret-id dynamodb-mcp-proxy
Never in your codebase. Never in environment variables you might accidentally commit.
Cost Analysis
"How much does this cost to run?"
Let's break it down:
AWS Free Tier:
- Lambda: 1M requests/month + 400,000 GB-seconds of compute (always free)
- API Gateway: 1M API calls/month (free for the first 12 months)
- DynamoDB: 25 GB storage + 25 provisioned read and 25 write capacity units (always free)
After Free Tier:
Lambda: $0.20 per 1M requests + $0.0000166667 per GB-second
Example calculation for 10,000 queries/month:
- Requests: 10,000 × $0.20/1M = $0.002
- Compute (128MB, 200ms avg): 10,000 × 0.2s × 0.125GB × $0.0000166667 = $0.004
- Total Lambda: ~$0.01/month
API Gateway: $1.00 per 1M requests
- 10,000 requests = $0.01/month
DynamoDB: Pay-per-request pricing
- $1.25 per 1M write requests
- $0.25 per 1M read requests
- 10,000 reads = $0.003/month
Secrets Manager: $0.40/month per secret
- $0.40/month
Total for 10,000 queries/month: ~$0.42
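As a quick sanity check in code:

# Back-of-the-envelope check for 10,000 queries/month, at the prices quoted above
requests = 10_000
lambda_cost = requests / 1e6 * 0.20 + requests * 0.2 * 0.125 * 0.0000166667  # ~$0.006
api_gateway = requests / 1e6 * 1.00                                          # ~$0.01
dynamodb    = requests / 1e6 * 0.25                                          # ~$0.003
secrets     = 0.40
print(f"${lambda_cost + api_gateway + dynamodb + secrets:.2f}/month")        # -> $0.42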
For a hobby project? Basically free. For production? Scales linearly with usage.
Common Patterns and Best Practices
Pattern 1: Query with Filters
Instead of scanning, use query when possible:
# Efficient - uses partition key
"Query Orders table where userId equals user123"
# Less efficient - full table scan
"Scan Orders table and filter by userId user123"
Pattern 2: Batch Operations
Fetch multiple items in one call:
# One request for three items
"Get users user001, user002, user003 using batch get"
# Better than three separate requests
Pattern 3: Conditional Updates
Use update expressions for atomic operations:
"Update the counter by incrementing it by 1 for item user001"
This translates to:
UpdateExpression="SET #counter = #counter + :inc"
Atomic, no race conditions.
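Filled out as a boto3 call, that update looks like this (a sketch; the attribute name counter is hypothetical):

import boto3

dynamodb = boto3.client("dynamodb")

# DynamoDB evaluates the expression server-side, so two
# concurrent increments can't overwrite each other.
dynamodb.update_item(
    TableName="Users",
    Key={"userId": {"S": "user001"}},
    UpdateExpression="SET #counter = #counter + :inc",
    ExpressionAttributeNames={"#counter": "counter"},
    ExpressionAttributeValues={":inc": {"N": "1"}},
)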
Extending the System
Want to add a new operation? Here's how:
1. Add Handler to Python
# In dynamodb_ops.py
def batch_write_handler(event, context):
    """Bulk write multiple items."""
    params = _parse_json_body(event)
    # ... implementation
    return _response(200, "Successfully wrote N items")

# Add to TOOL_REGISTRY
TOOL_REGISTRY.append({
    "name": "dynamodb_batch_write",
    "description": "Write multiple items in one request",
    "inputSchema": {...},
    "route": "/dynamodb/batch-write",
})
2. Create Terraform File
# lambda-batch-write.tf
resource "aws_iam_role" "lambda_batch_write_role" {
  name = "dynamodb-batch-write-role"
  # ... role definition
}

resource "aws_iam_role_policy" "lambda_batch_write_dynamodb" {
  role = aws_iam_role.lambda_batch_write_role.id
  policy = jsonencode({
    Statement = [{
      Effect   = "Allow"
      Action   = ["dynamodb:BatchWriteItem"]
      Resource = ["*"]
    }]
  })
}

resource "aws_lambda_function" "lambda_batch_write" {
  function_name = "dynamodb-batch-write"
  handler       = "dynamodb_ops.batch_write_handler"
  # ... function config
}
3. Update API Gateway
# In api.tf
resource "aws_apigatewayv2_integration" "batch_write_integration" {
api_id = aws_apigatewayv2_api.dynamodb_api.id
integration_uri = aws_lambda_function.lambda_batch_write.invoke_arn
# ... integration config
}
resource "aws_apigatewayv2_route" "batch_write_route" {
api_id = aws_apigatewayv2_api.dynamodb_api.id
route_key = "POST /dynamodb/batch-write"
target = "integrations/${aws_apigatewayv2_integration.batch_write_integration.id}"
}
4. Deploy
./apply.sh
That's it! The proxy auto-discovers the new tool on next startup.
Real-World Use Cases
Where does this shine?
1. Data Exploration
"Show me all users who joined in 2023"
"How many active subscriptions do we have?"
"What's the average age of users in the Engineering department?"
Natural language beats writing DynamoDB queries.
2. Quick CRUD Operations
"Add a test user for QA testing"
"Update the status to active for order order123"
"Delete all test data with prefix test-"
No need to open AWS Console.
3. Database Migrations
"Scan the Users table and show me all items missing the email field"
"Update all users in the Premium tier to add a credits field with value 100"
Kiro can help you identify and fix data inconsistencies.
4. Monitoring and Alerts
"How many failed login attempts in the last hour?"
"Show me all orders with status pending older than 24 hours"
Quick operational queries without building dashboards.
5. Developer Productivity
"Create a sample order for testing the checkout flow"
"Copy user user001 to user001-backup"
"Show me the schema of the Products table"
Faster than clicking through the console.
Lessons Learned
Building this taught me some valuable lessons:
1. Start with Security
We didn't bolt on IAM later - it was there from day one. That made all subsequent decisions easier.
2. Simplicity Scales
One Python file. Simple Terraform. No fancy frameworks. Yet it handles thousands of requests/day without breaking a sweat.
3. Developer Experience Matters
The fact that you can ask questions in plain English? That's not a gimmick. It genuinely changes how you interact with your data.
4. Observability is Free (Almost)
CloudWatch Logs, CloudTrail, X-Ray tracing - all built into Lambda. We didn't build a monitoring system; we just used what AWS gives us.
5. The Proxy Pattern Works
Keeping the proxy thin and stateless was the right call. All complexity lives in Lambda where we can update it independently.
Troubleshooting Tips
Hit a snag? Here's how to debug:
Problem: Proxy won't connect
# Check AWS credentials
aws sts get-caller-identity
# Test the Lambda directly (this bypasses API Gateway and its IAM check)
aws lambda invoke \
  --function-name dynamodb-list-tables \
  --payload '{}' \
  /tmp/out.json
Problem: Permission denied
Check IAM user has execute-api permission:
aws iam get-user-policy \
--user-name dynamodb-mcp-proxy \
--policy-name dynamodb-mcp-proxy-invoke
Problem: Lambda timeout
Increase timeout in Terraform:
resource "aws_lambda_function" "lambda_scan" {
timeout = 30 # Increase from 15 to 30 seconds
}
Problem: Can't find table
Verify table exists:
aws dynamodb list-tables
Check Lambda has permission to access it.
Future Enhancements
Where could this go?
1. Multi-Region Support
Deploy to multiple regions, let Kiro route to the nearest one:
module "dynamodb_mcp_us_east" {
source = "./modules/dynamodb-mcp"
region = "us-east-1"
}
module "dynamodb_mcp_eu_west" {
source = "./modules/dynamodb-mcp"
region = "eu-west-1"
}
2. Advanced Query Support
Add support for complex queries:
"Find all users where age > 25 AND department = Engineering
AND active = true, sorted by joinDate"
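One way that could translate if implemented (a sketch; Scan results aren't ordered, so sorting happens client-side, and age/joinDate are hypothetical attributes):

resp = dynamodb.scan(
    TableName="Users",
    FilterExpression="age > :a AND department = :d AND active = :t",
    ExpressionAttributeValues={
        ":a": {"N": "25"},
        ":d": {"S": "Engineering"},
        ":t": {"BOOL": True},
    },
)
items = sorted(resp["Items"], key=lambda i: i["joinDate"]["S"])  # sort after the fact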
3. Transaction Support
DynamoDB supports transactions:
def transaction_handler(event, context):
    """Execute multiple operations atomically."""
    dynamodb.transact_write_items(
        TransactItems=[
            {"Put": {...}},
            {"Update": {...}},
            {"Delete": {...}},
        ]
    )
4. Stream Processing
React to DynamoDB changes:
"Alert me when a new order is created"
"Update the analytics table whenever a user signs up"
Use DynamoDB Streams + Lambda triggers.
5. Cost Optimization
Add DynamoDB reserved capacity for predictable workloads:
resource "aws_dynamodb_table" "users" {
billing_mode = "PROVISIONED"
read_capacity = 5
write_capacity = 5
}
6. Multi-Table Operations
"Join Users table with Orders table on userId
and show me total order value per user"
Execute multiple queries and aggregate in Lambda.
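A hypothetical handler for that could scan one table and query the other, aggregating in memory. A sketch that reuses the project's _response helper and assumes an amount attribute on orders (fine for small tables; a real version would paginate):

def orders_per_user_handler(event, context):
    """Total order value per user, joined across two tables."""
    users = dynamodb.scan(TableName="Users")["Items"]
    totals = {}
    for user in users:
        uid = user["userId"]["S"]
        orders = dynamodb.query(
            TableName="Orders",
            KeyConditionExpression="userId = :uid",
            ExpressionAttributeValues={":uid": {"S": uid}},
        )["Items"]
        totals[uid] = sum(float(o["amount"]["N"]) for o in orders)
    lines = "\n".join(f"{u}: {t:.2f}" for u, t in totals.items())
    return _response(200, f"Total order value per user:\n{lines}")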
Comparison with Alternatives
How does this stack up?
vs. Local MCP Server
Local Server:
- ✅ Lower latency
- ✅ No AWS costs
- ❌ Runs only on your machine
- ❌ Need to manage runtime dependencies
- ❌ No centralized updates
Serverless (Ours):
- ✅ Works for your whole team
- ✅ No runtime to manage
- ✅ Built-in scaling
- ✅ AWS-level security
- ❌ Small latency overhead (~100-200ms)
vs. Direct DynamoDB Access
Direct Access (boto3):
- ✅ Maximum control
- ✅ Lowest latency
- ❌ Requires coding for every query
- ❌ No natural language interface
- ❌ Harder to audit
MCP (Ours):
- ✅ Natural language queries
- ✅ Audit trail built-in
- ✅ Non-technical users can query
- ❌ Limited to predefined operations
vs. AWS Data API
AWS Data API:
- Only for Aurora Serverless
- HTTP-based queries
- SQL interface
Ours:
- ✅ Works with DynamoDB
- ✅ NoSQL operations
- ✅ Natural language interface
- ✅ MCP integration
Key Takeaways
If you remember nothing else, remember this:
MCP is powerful - It's not hype. It genuinely changes how we interact with data.
Serverless fits MCP perfectly - Centralized, scalable, secure. All the things MCP needs.
Security first, always - IAM, scoped permissions, audit logs. Build it in from day one.
Plain-text responses win - Optimize for conversation, not computation.
Keep it simple - One Python file, clear Terraform, no magic. Simplicity scales.
The proxy pattern works - Thin client, fat backend. Update independently.
Try It Yourself!
Ready to build your own? Here's the complete source:
GitHub: [Link to your repo]
Deploy in 3 commands:
git clone [your-repo]
cd AWSServerlessMCP
./apply.sh
Questions? Hit me up in the comments! I'd love to hear:
- What other AWS services would you want MCP tools for?
- What improvements would you make?
- What challenges did you face deploying it?
Wrapping Up
We started with a simple question: "Can I ask Kiro to query my database?"
We ended with:
- ✅ A production-ready serverless MCP backend
- ✅ 10 DynamoDB operations as natural language tools
- ✅ Secure, scalable, and cost-effective
- ✅ Deployable in under 5 minutes
This is just the beginning. MCP is going to change how we build AI-powered tools. The future isn't about building smarter AI - it's about giving AI better tools.
What will you build with MCP?
Found this helpful? Give it a ❤️ and follow for more serverless + AI content!
Have questions or improvements? Drop them in the comments - I read every one!
Further Reading
- Model Context Protocol Specification
- AWS Lambda Best Practices
- DynamoDB Developer Guide
- AWS IAM Best Practices
Connect with me:
- GitHub: https://github.com/yeshwanthlm
- LinkedIn: https://www.linkedin.com/in/yeshwanth-l-m/
- YouTube: https://www.youtube.com/@TechWithYeshwanth
Tags: #aws #serverless #lambda #dynamodb #ai #claude #mcp #terraform #python #devops
