AI tools have memory now. Claude remembers your projects. ChatGPT has built a profile of how you work. Open a new conversation and the tool already has context - you don't have to re-explain yourself from zero every time.
The problem is that this memory is platform-locked.
Switch from ChatGPT to Claude and you lose six months of built-up context. The new tool doesn't know your projects, your preferences, your ongoing work. Technically it might be the better model for what you need right now - but it performs worse because it's starting blind. So you go back to your old tool. Not because it's better. Because it knows you.
That's the lock-in. Not pricing, not features - context. And it's the problem MemoryMesh solves.
MemoryMesh is a portable context layer: a Chrome extension + MCP server + AWS serverless backend that captures your context from any AI tool and injects it into any other. Your context travels with you when you switch.
This article walks through how it's built.
GitHub: github.com/yorliabdulai/contextbridge
Architecture Overview
*(Architecture diagram)*

Part 1: The MCP Server
MCP (Model Context Protocol) is an open standard, created by Anthropic, for connecting AI applications to external tools and data sources. The MemoryMesh MCP server exposes four tools to Claude Desktop over stdio transport - no HTTP, no browser required.
```typescript
// packages/mcp-server/src/server.ts
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "save_context",
      description: "Save a context entry to MemoryMesh memory",
      inputSchema: {
        type: "object",
        properties: {
          content: { type: "string", description: "The context to save" },
          source: { type: "string", description: "Where this came from" }
        },
        required: ["content"]
      }
    },
    {
      name: "get_context",
      description: "Retrieve recent context entries from MemoryMesh",
      inputSchema: {
        type: "object",
        properties: {
          limit: { type: "number" }
        }
      }
    },
    {
      name: "search_memory",
      description: "Search stored context by keyword",
      inputSchema: {
        type: "object",
        properties: { query: { type: "string" } },
        required: ["query"]
      }
    },
    {
      name: "get_user_profile",
      description: "Get the current user profile",
      inputSchema: { type: "object", properties: {} }
    }
  ]
}));
```
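The listing above only registers the tools; a matching `CallToolRequestSchema` handler dispatches each call. Here's a minimal sketch of that dispatch logic - the `Store` interface is an illustrative assumption standing in for the DynamoDB-backed storage layer, not the project's actual code:

```typescript
// Sketch of the call-side handler logic. `Store` is a hypothetical
// interface so the dispatch can be shown without any AWS dependency.
interface Store {
  save(content: string, source?: string): Promise<string>;
  recent(limit: number): Promise<string[]>;
}

async function dispatchTool(
  store: Store,
  name: string,
  args: Record<string, unknown>
): Promise<string> {
  switch (name) {
    case "save_context":
      // content is required by the schema; source is optional
      return store.save(String(args.content), args.source as string | undefined);
    case "get_context":
      return (await store.recent(Number(args.limit ?? 20))).join("\n");
    default:
      throw new Error(`Unknown tool: ${name}`);
  }
}
```

In the real server this would sit inside `server.setRequestHandler(CallToolRequestSchema, ...)`, with the result wrapped in MCP's `{ content: [{ type: "text", text }] }` response shape.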
The stdio entry point:
```typescript
// packages/mcp-server/src/index.ts
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { createServer } from "./server.js";

const server = createServer();
const transport = new StdioServerTransport();
await server.connect(transport);
```
Claude Desktop config:
```json
{
  "mcpServers": {
    "memorymesh": {
      "command": "node",
      "args": ["path/to/mcp-server/dist/index.js"],
      "env": {
        "AWS_REGION": "eu-west-2",
        "CONTEXT_TABLE": "memorymesh-context",
        "PROFILE_TABLE": "memorymesh-profile",
        "MEMORYMESH_USER_ID": "mm-your-uuid",
        "AWS_ACCESS_KEY_ID": "...",
        "AWS_SECRET_ACCESS_KEY": "..."
      }
    }
  }
}
```
Key architectural decision: the MCP server bypasses API Gateway and talks directly to DynamoDB via the AWS SDK. The API Gateway exists for the Chrome extension, which runs in the browser and can't use the AWS SDK natively. The MCP server is a local Node.js process - direct SDK access is faster and simpler.
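To make the direct path concrete, here's a sketch of the query input the MCP server might build for the low-level DynamoDB client - the exact source may differ, but the key schema matches the table definition (partition key `userId`, sort key `createdAt`):

```typescript
// Hypothetical sketch of the direct-DynamoDB read path: build a
// QueryCommand input for the low-level DynamoDBClient (attribute values
// use the { S: ... } wire format). Newest-first ordering falls out of
// the createdAt sort key with ScanIndexForward: false.
function buildContextQuery(userId: string, limit = 20) {
  return {
    TableName: process.env.CONTEXT_TABLE ?? "memorymesh-context",
    KeyConditionExpression: "userId = :uid",
    ExpressionAttributeValues: { ":uid": { S: userId } },
    ScanIndexForward: false, // newest entries first
    Limit: limit,
  };
}
```

The object above is what you'd pass to `new QueryCommand(...)` and send with the AWS SDK v3 `DynamoDBClient` - no API Gateway hop, no HTTP round trip.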
Part 2: The AWS Backend (CDK)
Three CDK stacks, deployed in order.
DynamoDB Stack
```typescript
// packages/infrastructure/lib/dynamodb-stack.ts
this.contextTable = new dynamodb.Table(this, "ContextTable", {
  tableName: "memorymesh-context",
  partitionKey: { name: "userId", type: dynamodb.AttributeType.STRING },
  sortKey: { name: "createdAt", type: dynamodb.AttributeType.STRING },
  billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
});

this.profileTable = new dynamodb.Table(this, "ProfileTable", {
  tableName: "memorymesh-profile",
  partitionKey: { name: "userId", type: dynamodb.AttributeType.STRING },
  billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
});
```
PAY_PER_REQUEST because traffic is low and bursty - there's no need to provision capacity.
Lambda Stack
All five functions share the same deployment package. The Lambda IAM role gets bedrock:InvokeModel explicitly:
```typescript
// packages/infrastructure/lib/lambda-stack.ts
lambdaRole.addToPolicy(new iam.PolicyStatement({
  actions: ["bedrock:InvokeModel"],
  resources: ["*"],
}));
props.contextTable.grantReadWriteData(lambdaRole);
props.profileTable.grantReadWriteData(lambdaRole);

const createFn = (name: string, handler: string) =>
  new lambda.Function(this, name, {
    functionName: `memorymesh-${name}`,
    runtime: lambda.Runtime.NODEJS_20_X,
    handler,
    code: lambda.Code.fromAsset("../mcp-server/lambda-package.zip"),
    role: lambdaRole,
    environment: {
      CONTEXT_TABLE: "memorymesh-context",
      PROFILE_TABLE: "memorymesh-profile",
    },
    timeout: Duration.seconds(30),
  });

this.saveFn = createFn("save-context", "lambda/saveContext.handler");
this.getFn = createFn("get-context", "lambda/getContext.handler");
this.searchFn = createFn("search-memory", "lambda/searchMemory.handler");
this.profileFn = createFn("get-user-profile", "lambda/getUserProfile.handler");
this.summarizeFn = createFn("summarize", "lambda/summarize.handler");
```
API Gateway Stack
CORS origins are scoped to the three AI tool domains - no wildcard:
```typescript
// packages/infrastructure/lib/api-stack.ts
const api = new apigateway.HttpApi(this, "MemoryMeshApi", {
  corsPreflight: {
    allowOrigins: [
      "https://claude.ai",
      "https://chatgpt.com",
      "https://gemini.google.com"
    ],
    allowMethods: [CorsHttpMethod.GET, CorsHttpMethod.POST],
    allowHeaders: ["Content-Type"],
  },
});

api.addRoutes({ path: "/context", methods: [HttpMethod.POST], integration: new HttpLambdaIntegration("Save", props.saveFn) });
api.addRoutes({ path: "/context/{userId}", methods: [HttpMethod.GET], integration: new HttpLambdaIntegration("Get", props.getFn) });
api.addRoutes({ path: "/search/{userId}", methods: [HttpMethod.GET], integration: new HttpLambdaIntegration("Search", props.searchFn) });
api.addRoutes({ path: "/profile/{userId}", methods: [HttpMethod.GET], integration: new HttpLambdaIntegration("Profile", props.profileFn) });
api.addRoutes({ path: "/summarize", methods: [HttpMethod.POST], integration: new HttpLambdaIntegration("Summarize", props.summarizeFn) });
```
Part 3: Bedrock Summarisation
Raw conversation text is never stored directly. Every save goes through the summarise Lambda first, which calls Amazon Bedrock and stores the structured output.
```typescript
// packages/mcp-server/src/lambda/summarize.ts
const MODEL_ID = "eu.anthropic.claude-haiku-4-5-20251001-v1:0";

export const handler = async (event: APIGatewayProxyEvent) => {
  const { content } = JSON.parse(event.body!);

  const prompt = `You are a context summariser for an AI memory system.
Analyse the following conversation and return ONLY a valid JSON object with these fields:
- summary: a dense paragraph capturing the main topic, key decisions, and outcomes
- tags: an array of 5-10 semantic keywords for search
- projects: an array of project names or identifiers mentioned

Conversation:
${content}

Return only the JSON object. No preamble, no markdown.`;

  const command = new InvokeModelCommand({
    modelId: MODEL_ID,
    body: JSON.stringify({
      anthropic_version: "bedrock-2023-05-31",
      max_tokens: 1024,
      messages: [{ role: "user", content: prompt }],
    }),
    contentType: "application/json",
    accept: "application/json",
  });

  const response = await bedrock.send(command);
  const body = JSON.parse(new TextDecoder().decode(response.body));
  const structured = JSON.parse(body.content[0].text);

  return { statusCode: 200, body: JSON.stringify(structured) };
};
```
One thing that will catch you out: in eu-west-2, you must use the EU cross-region inference profile ID - eu.anthropic.claude-haiku-4-5-20251001-v1:0 - not the standard Haiku model ID. Standard model IDs return a ValidationException in that region. The EU prefix routes through Bedrock's cross-region inference system.
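If you deploy to more than one region, that prefix logic can be centralised. A hypothetical helper - the `eu.`/`us.`/`apac.` prefixes are Bedrock's geographic inference-profile groups, but the region-to-group mapping here is deliberately simplified:

```typescript
// Hypothetical helper - prepends the cross-region inference profile
// prefix for the deployment region. Simplified mapping: real deployments
// should check which inference profiles actually exist for their
// account and region before relying on this.
function toInferenceProfileId(region: string, modelId: string): string {
  const prefix = region.startsWith("eu-")
    ? "eu."
    : region.startsWith("ap-")
    ? "apac."
    : "us.";
  return `${prefix}${modelId}`;
}
```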
What gets stored in DynamoDB is always the structured { summary, tags, projects } object - never a raw transcript. This is what makes the context injection useful rather than noisy. When you sync into a new tool, the AI gets dense, structured information about your work history - not a wall of raw dialogue.
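For illustration, a stored item looks roughly like this - the `summary`/`tags`/`projects` fields come from the summariser above, while the remaining field names are assumptions about the schema:

```typescript
// Assumed shape of a stored context entry. userId/createdAt match the
// table's key schema; source is an illustrative extra field.
interface ContextEntry {
  userId: string;
  createdAt: string; // ISO timestamp - the sort key
  summary: string;
  tags: string[];
  projects: string[];
  source?: string; // e.g. "claude.ai", "chatgpt.com"
}

const example: ContextEntry = {
  userId: "mm-1234",
  createdAt: new Date().toISOString(),
  summary: "Debugged CDK cross-stack references for the Lambda deployment.",
  tags: ["aws", "cdk", "lambda"],
  projects: ["memorymesh"],
  source: "claude.ai",
};
```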
*(MemoryMesh in action)*
Part 4: The Chrome Extension
Content scripts are injected into Claude.ai, ChatGPT, and Gemini. Each injects a floating banner with two controls: Save Context and Sync to AI.
Event Delegation
The trickiest implementation detail is how AI tool pages re-render parts of the DOM as you interact with them. Naive event listeners attached directly to injected buttons get orphaned when the surrounding DOM updates. The fix is a single delegated listener on the banner container itself:
```typescript
// packages/extension/src/content/claude.ts
function injectBanner(userId: string) {
  const banner = document.createElement("div");
  banner.id = "memorymesh-banner";
  banner.innerHTML = `
    <div class="mm-controls">
      <button data-action="save">⬡ Save Context</button>
      <button data-action="sync">↺ Sync to AI</button>
    </div>
  `;
  document.body.appendChild(banner);

  // Single delegated listener - survives DOM re-renders
  banner.addEventListener("click", async (e) => {
    const btn = (e.target as HTMLElement).closest("[data-action]");
    if (!btn) return;
    const action = btn.getAttribute("data-action");
    if (action === "save") await saveContext(userId);
    if (action === "sync") await syncToAI(userId);
  });
}
```
Context Injection
Each AI tool has a different DOM structure for its chat input. The injection targets:
```typescript
const SELECTORS = {
  claude: '[data-testid="chat-input"] [contenteditable]',
  chatgpt: '#prompt-textarea',
  gemini: '.ql-editor[contenteditable]',
};

async function syncToAI(userId: string) {
  const entries = await getContext(userId); // fetches up to 1000 entries
  const contextText = entries.map(e => e.summary).join("\n\n---\n\n");

  const input = document.querySelector(SELECTORS[currentTool]);
  if (!input) return;

  input.textContent = contextText;
  input.dispatchEvent(new Event("input", { bubbles: true }));
}
```
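The snippet references a `currentTool` value. One way to derive it is a hostname lookup like the following - a hypothetical sketch, since the real extension may instead key off which content script its manifest matched:

```typescript
type Tool = "claude" | "chatgpt" | "gemini";

// Hypothetical helper: infer which AI tool the content script is
// running in from the page hostname (matches the SELECTORS keys).
function detectTool(hostname: string): Tool | null {
  if (hostname.endsWith("claude.ai")) return "claude";
  if (hostname.endsWith("chatgpt.com")) return "chatgpt";
  if (hostname.endsWith("gemini.google.com")) return "gemini";
  return null;
}
```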
The limit was an important fix. An earlier version defaulted to fetching only five entries - fine for casual use, but completely insufficient after a bulk import. The default is now 1000, which covers any realistic history without a noticeable difference in API response time, given DynamoDB's read performance.
The History Importer
The importer accepts ChatGPT and Claude data export ZIPs. Format detection:
```typescript
async function detectAndParse(json: any[]): Promise<Conversation[]> {
  // Claude export: flat array, sender field is "human" or "assistant"
  if (json[0]?.chat_messages !== undefined) {
    return parseClaude(json);
  }
  // ChatGPT export: mapping object with author.role and content.parts
  if (json[0]?.mapping !== undefined) {
    return parseChatGPT(json);
  }
  throw new Error("Unrecognised export format");
}
```
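`parseClaude` itself isn't shown above. Here's a minimal sketch under the assumption that each exported conversation carries a `name` plus a `chat_messages` array of `{ sender, text }` objects - the real export schema nests message content further:

```typescript
interface ParsedConversation {
  title: string;
  text: string;
}

// Simplified parser sketch - assumes { name, chat_messages: [{ sender, text }] }
// per conversation; treat the field names as illustrative, not canonical.
function parseClaudeSketch(json: any[]): ParsedConversation[] {
  return json.map((conv) => ({
    title: conv.name ?? "Untitled",
    text: (conv.chat_messages ?? [])
      .map((m: any) => `${m.sender}: ${m.text}`)
      .join("\n"),
  }));
}
```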
```typescript
// 300ms throttle between API calls
async function processAll(conversations: Conversation[], userId: string) {
  for (const conv of conversations) {
    await summarizeAndSave(conv, userId);
    await delay(300);
  }
}
```
The 300ms throttle between calls is not optional. Without it, Bedrock starts returning throttling errors around the 10th–15th consecutive request. With it, 58 conversations import cleanly with zero errors.
Deployment
The Lambda handlers and MCP server share the same TypeScript source. One build produces dist/, used both by the MCP server locally and packaged as lambda-package.zip for AWS.
```powershell
# Build
cd packages/mcp-server && npm run build

# Package
Copy-Item package.json dist\
cd dist && npm install --production && cd ..
Compress-Archive -Path ".\dist\*" -DestinationPath ".\lambda-package.zip" -Force

# Deploy all 5 functions
@("save-context","get-context","search-memory","get-user-profile","summarize") | ForEach-Object {
  aws lambda update-function-code --function-name "memorymesh-$_" `
    --region eu-west-2 --zip-file fileb://lambda-package.zip
}
```
Does It Actually Work?
After importing 58 Claude + ChatGPT conversations through the bulk importer and syncing into Gemini - a tool that had never seen any of that history - Gemini responded:
> "It looks like we've been working through a dense sprint involving the LandLedger platform, quantum neural network optimizations, and various AWS infrastructure labs… I'm ready to pick up exactly where we left off."
That's the point. Gemini at that moment was technically the better model for what I needed. MemoryMesh made it actually useful - not just capable.
What's Next
- Local SQLite backend - for users who don't want AWS infrastructure
- Firefox port - Manifest V3 is largely compatible; mostly a manifest diff plus testing
- Gemini export - no native export exists today, so a DOM scraper is the only option
- Selective sync - currently all entries inject; a picker UI would give more control
Source Code
github.com/yorliabdulai/contextbridge
Full CDK infrastructure, MCP server, Chrome extension, Lambda handlers, and setup guide. Contributions welcome - especially the Firefox port and local SQLite backend.
For the non-technical take on why AI context lock-in matters, read the companion piece:
Abdulai Yorli is a software developer based in Ghana, currently an IT Support Engineer at KPMG.
