Abdulai Yorli Iddrisu

Why I Structured MemoryMesh Across 3 CDK Stacks — Every Decision Explained

When I started building MemoryMesh I had a choice to make early on: throw everything into one CDK stack and move fast, or split it properly and build something I could actually maintain and explain. I went with three stacks. Here's why I made each decision the way I did.

The Three Stacks
MemoryMeshDynamoDB handles the data layer — two tables, memorymesh-context (PK: userId, SK: createdAt) and memorymesh-profile (PK: userId).
MemoryMeshLambda handles compute — five Lambda functions on Node.js 20 plus the IAM role and Bedrock permissions.
MemoryMeshApi handles the HTTP API — API Gateway with routes wired to each Lambda.
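The full code is in the repo, but here's a rough sketch of what the data stack looks like given the table descriptions above (construct IDs and the exported-property pattern are my assumptions, not necessarily what the repo does):

```typescript
import * as cdk from 'aws-cdk-lib';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import { Construct } from 'constructs';

export class MemoryMeshDynamoDBStack extends cdk.Stack {
  public readonly contextTable: dynamodb.Table;
  public readonly profileTable: dynamodb.Table;

  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // memorymesh-context: one item per saved context, sorted by creation time
    this.contextTable = new dynamodb.Table(this, 'ContextTable', {
      tableName: 'memorymesh-context',
      partitionKey: { name: 'userId', type: dynamodb.AttributeType.STRING },
      sortKey: { name: 'createdAt', type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
    });

    // memorymesh-profile: one item per user, no sort key
    this.profileTable = new dynamodb.Table(this, 'ProfileTable', {
      tableName: 'memorymesh-profile',
      partitionKey: { name: 'userId', type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
    });
  }
}
```

The tables are exposed as public properties so the Lambda stack can take them as props and call `table.grantReadWriteData(...)` on its functions.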

Why Three Stacks Instead of One
These layers have completely different deployment cycles. If I update a Lambda function I don't want to risk touching the database stack. If I change an API route I don't need to redeploy compute. Keeping them separate means each piece deploys independently without putting the others at risk.
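The independent-deploy story falls out of how the app entry point wires the stacks together. A sketch under assumed class and property names (the real ones live in the repo):

```typescript
import * as cdk from 'aws-cdk-lib';
// Hypothetical import paths for illustration
import { MemoryMeshDynamoDBStack } from '../lib/dynamodb-stack';
import { MemoryMeshLambdaStack } from '../lib/lambda-stack';
import { MemoryMeshApiStack } from '../lib/api-stack';

const app = new cdk.App();

const data = new MemoryMeshDynamoDBStack(app, 'MemoryMeshDynamoDB');

// Cross-stack references become CloudFormation exports under the hood
const compute = new MemoryMeshLambdaStack(app, 'MemoryMeshLambda', {
  contextTable: data.contextTable,
  profileTable: data.profileTable,
});

new MemoryMeshApiStack(app, 'MemoryMeshApi', {
  functions: compute.functions,
});
```

With that wiring, `cdk deploy MemoryMeshLambda --exclusively` pushes a compute change without creating a changeset for the data stack at all.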

Why PAY_PER_REQUEST on DynamoDB
This is a personal tool. Traffic is low volume and unpredictable. Provisioned capacity would mean paying for read and write units I'm mostly not using. PAY_PER_REQUEST means the bill reflects what I actually use, which at this scale is close to nothing.
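The difference is a single property on the table. A sketch of the two billing modes side by side (capacity numbers are illustrative):

```typescript
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';

// On-demand: billed per request, effectively zero while idle
const onDemand: dynamodb.TableProps = {
  partitionKey: { name: 'userId', type: dynamodb.AttributeType.STRING },
  billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
};

// Provisioned: pay for capacity units around the clock,
// whether or not any requests arrive
const provisioned: dynamodb.TableProps = {
  partitionKey: { name: 'userId', type: dynamodb.AttributeType.STRING },
  billingMode: dynamodb.BillingMode.PROVISIONED,
  readCapacity: 5,
  writeCapacity: 5,
};
```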

Why HTTP API Gateway Over REST API
REST API has more features. HTTP API is simpler, cheaper, and faster for Lambda proxy calls. For a tool making straightforward requests to Lambda functions, the extra features of REST API weren't needed. HTTP API was the right tool for this use case.
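A sketch of the API stack using the `aws-apigatewayv2` constructs in aws-cdk-lib v2 (route path, construct IDs, and the props shape are assumptions):

```typescript
import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as apigwv2 from 'aws-cdk-lib/aws-apigatewayv2';
import { HttpLambdaIntegration } from 'aws-cdk-lib/aws-apigatewayv2-integrations';
import { Construct } from 'constructs';

interface ApiProps extends cdk.StackProps {
  saveContext: lambda.IFunction; // one of the five handlers, passed in from the Lambda stack
}

export class MemoryMeshApiStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: ApiProps) {
    super(scope, id, props);

    const httpApi = new apigwv2.HttpApi(this, 'MemoryMeshHttpApi');

    // Each route proxies straight to a Lambda; no stages, models, or request
    // validators to configure, which is most of what makes HTTP APIs simpler
    httpApi.addRoutes({
      path: '/context',
      methods: [apigwv2.HttpMethod.POST],
      integration: new HttpLambdaIntegration('SaveContext', props.saveContext),
    });
  }
}
```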

The Most Interesting Decision: Two Different Access Paths
The Chrome extension goes through API Gateway. The MCP server bypasses API Gateway entirely and talks to DynamoDB directly via the AWS SDK.
The reason is trust. The MCP server runs as a local Node.js process on your machine with real AWS credentials in the config. It's a trusted local process. Hitting DynamoDB directly is faster and simpler.
The Chrome extension runs inside a browser. It can't hold AWS credentials the same way. So it goes through API Gateway, which is the right entry point for an untrusted external client.
Same data. Two different trust models. Two different access paths.
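On the MCP side, "talks to DynamoDB directly" just means the local process uses the AWS SDK with the credentials in its config. A sketch with the v3 document client (function name and query shape are mine):

```typescript
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, QueryCommand } from '@aws-sdk/lib-dynamodb';

// Picks up credentials from the environment/config of the local process
const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));

async function getRecentContext(userId: string) {
  const { Items } = await client.send(new QueryCommand({
    TableName: 'memorymesh-context',
    KeyConditionExpression: 'userId = :u',
    ExpressionAttributeValues: { ':u': userId },
    ScanIndexForward: false, // newest first, since the sort key is createdAt
    Limit: 20,
  }));
  return Items ?? [];
}
```

No HTTP hop, no API Gateway latency or per-request cost: one SDK call from a process that already holds real credentials.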

CORS Scoped to Three Origins
API Gateway CORS is configured for exactly three origins: claude.ai, chatgpt.com and gemini.google.com. No wildcard. The API only accepts requests from those specific domains.
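In the HTTP API construct, that scoping is a `corsPreflight` block rather than a wildcard. A sketch (this lives inside the stack constructor; allowed methods and headers are my assumptions):

```typescript
import * as apigwv2 from 'aws-cdk-lib/aws-apigatewayv2';

const httpApi = new apigwv2.HttpApi(this, 'MemoryMeshHttpApi', {
  corsPreflight: {
    // Exactly the three chat UIs the extension runs on; no '*'
    allowOrigins: [
      'https://claude.ai',
      'https://chatgpt.com',
      'https://gemini.google.com',
    ],
    allowMethods: [apigwv2.CorsHttpMethod.GET, apigwv2.CorsHttpMethod.POST],
    allowHeaders: ['Content-Type', 'Authorization'],
  },
});
```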

The full CDK code for all three stacks is in the repo if you want to see how it's structured: github.com/yorliabdulai/contextbridge
And if you missed the full technical deep-dive from launch day: dev.to/abdulai_yorliiddrisu_f5b/i-built-a-portable-ai-memory-layer-with-mcp-aws-bedrock-and-a-chrome-extension-18de
