In the first post I showed what an AI does with 118 MCP tools. In the second I showed how we organized them. Both posts assumed a working connection. This one covers the hardest part: getting that connection to work securely.
Most MCP servers run locally. You add a JSON block to your Claude config, paste an API key, and you're done. That works for developer tools. It doesn't work for a SaaS product where users have teams, roles, and data they share with other people.
We needed something different: a remote MCP server where users log in through their browser, pick a team, choose what the AI can access, and revoke it whenever they want. No API keys. No config files on disk.
The problem with local MCP servers
The standard MCP setup looks like this:
{
  "mcpServers": {
    "my-tool": {
      "command": "npx",
      "args": ["my-mcp-server"],
      "env": {
        "API_KEY": "sk-live-abc123"
      }
    }
  }
}
This has three problems for a SaaS product:
1. API keys are static secrets. If a key leaks, it has full access until someone manually rotates it. There's no expiry, no scope, no revocation without generating a new key.
2. No user consent flow. The user pastes a key and hopes for the best. There's no screen that says "this AI assistant wants to read your invoices and create transactions - approve?"
3. No multi-tenancy. If your product has teams, which team does the API key belong to? If a user is on three teams, do they need three keys? How do they switch?
A remote MCP server with OAuth solves all three. The user authenticates through a browser, picks a team, grants specific permissions, and gets a token that expires in an hour. The AI never sees a password. The user can disconnect from the app's settings page.
What we built
Our MCP server runs at https://mcp.paperlink.online/api/mcp/mcp and uses streamable HTTP transport - no local process, no stdio, no binary to install. The connection command is:
claude mcp add --transport http paperlink https://mcp.paperlink.online/api/mcp/mcp
When a client connects for the first time, it discovers the OAuth endpoints through two well-known documents. Then it runs a standard OAuth 2.1 authorization code flow with PKCE. The user sees a consent screen in their browser. After approval, the client gets a short-lived access token and a 30-day refresh token.
Here's the full flow:
Client                          Server                        User's Browser
  |                                |                                |
  |-- GET /.well-known/            |                                |
  |   oauth-protected-resource --->|                                |
  |<- {auth_server, scopes} -------|                                |
  |                                |                                |
  |-- GET /.well-known/            |                                |
  |   oauth-authorization-server ->|                                |
  |<- {authorize, token, revoke} --|                                |
  |                                |                                |
  |-- Open browser: /authorize?    |                                |
  |   code_challenge=...&          |                                |
  |   scope=invoices:read -------->|-- Redirect to consent UI ----->|
  |                                |                                |
  |                                |           User picks team,     |
  |                                |           reviews scopes,      |
  |                                |           clicks Approve       |
  |                                |                                |
  |                                |<- POST consent (teamId, scopes)|
  |<- Redirect: ?code=abc123 ------|                                |
  |                                |                                |
  |-- POST /token                  |                                |
  |   code=abc123&                 |                                |
  |   code_verifier=xyz ---------->|                                |
  |<- {access_token, refresh} -----|                                |
  |                                |                                |
  |-- POST /api/mcp/mcp            |                                |
  |   Authorization: Bearer ... -->|                                |
  |<- Tool results ----------------|                                |
Let me walk through each piece.
Step 1: Discovery
MCP clients discover auth endpoints through two standard documents. The protected resource metadata tells the client where to authenticate:
GET https://mcp.paperlink.online/.well-known/oauth-protected-resource
{
  "resource": "https://mcp.paperlink.online/api/mcp/mcp",
  "authorization_servers": ["https://app.paperlink.online"],
  "scopes_supported": [
    "invoices:read", "invoices:write", "invoices:delete",
    "accounting:read", "accounting:write", "accounting:delete",
    "companies:read", "companies:write", "companies:delete",
    "clients:read", "clients:write", "clients:delete",
    "products:read", "products:write", "products:delete",
    "estimates:read", "estimates:write", "estimates:delete",
    "teams:read", "teams:write", "billing:read",
    "ai:read", "ai:write", "sharing:read", "sharing:write"
  ]
}
Then the authorization server metadata tells the client the exact endpoints:
GET https://app.paperlink.online/.well-known/oauth-authorization-server
{
  "issuer": "https://app.paperlink.online",
  "authorization_endpoint": "https://app.paperlink.online/api/oauth/authorize",
  "token_endpoint": "https://app.paperlink.online/api/oauth/token",
  "revocation_endpoint": "https://app.paperlink.online/api/oauth/revoke",
  "registration_endpoint": "https://app.paperlink.online/api/oauth/register",
  "response_types_supported": ["code"],
  "grant_types_supported": ["authorization_code", "refresh_token"],
  "token_endpoint_auth_methods_supported": ["none"],
  "code_challenge_methods_supported": ["S256"],
  "scopes_supported": ["invoices:read", "..."]
}
Notice code_challenge_methods_supported: ["S256"]. We only support S256, not plain. This is intentional: with plain, the code_verifier is sent as the code_challenge verbatim, so anyone who can observe the authorization request can also complete the token exchange - plain PKCE offers almost no security benefit over no PKCE at all.
Also notice token_endpoint_auth_methods_supported: ["none"]. MCP clients are public clients (no client secret), so authentication happens entirely through PKCE.
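The S256 relationship between verifier and challenge is simple enough to show in full. This is a minimal sketch of what the method means (not our server's implementation), using Node's crypto:

```typescript
import { createHash, randomBytes } from 'node:crypto';

// base64url without padding, as PKCE requires
const b64url = (buf: Buffer) =>
  buf.toString('base64').replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');

// Client side: random verifier, challenge = BASE64URL(SHA-256(verifier)).
// Only the challenge travels in the (observable) authorization request.
function createPkcePair() {
  const codeVerifier = b64url(randomBytes(32)); // 43-character verifier
  const codeChallenge = b64url(createHash('sha256').update(codeVerifier).digest());
  return { codeVerifier, codeChallenge };
}

// Server side: re-derive the challenge from the presented verifier and
// compare against what was stored at /authorize time. (A production
// implementation should use a timing-safe comparison.)
function verifyS256(codeVerifier: string, storedChallenge: string): boolean {
  return b64url(createHash('sha256').update(codeVerifier).digest()) === storedChallenge;
}
```

Because SHA-256 is one-way, seeing the challenge in transit tells an attacker nothing about the verifier needed at the token endpoint - which is exactly the property plain lacks.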
Step 2: Authorization
The client opens the user's browser to the authorization endpoint:
GET /api/oauth/authorize?
  response_type=code&
  client_id=claude-desktop&
  redirect_uri=http://localhost:5555/callback&
  code_challenge=E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM&
  code_challenge_method=S256&
  scope=invoices:read+accounting:write+sharing:read&
  state=random-csrf-token
The server validates the request and redirects to the consent UI. Two things get validated before the user sees anything:
// Only S256 - reject plain PKCE
if (codeChallengeMethod !== 'S256') {
  return NextResponse.json({
    error: 'invalid_request',
    error_description: 'Only code_challenge_method=S256 is supported',
  }, { status: 400 });
}

// Redirect URI must match known patterns
if (!isAllowedRedirectUri(clientId, redirectUri)) {
  return NextResponse.json({
    error: 'invalid_request',
    error_description: 'redirect_uri is not registered for this client',
  }, { status: 400 });
}
The redirect URI validation is worth expanding on. MCP clients use localhost callbacks (Claude Desktop uses http://localhost:PORT/callback), and browser-based clients like Claude.ai and ChatGPT use their own domains. We maintain an allowlist:
const SAFE_REDIRECT_PATTERNS = [
  'http://127.0.0.1:*/*',      // Desktop clients (IP)
  'http://localhost:*/*',      // Desktop clients (hostname)
  'https://*.claude.ai/*',     // Claude.ai
  'https://claude.ai/*',
  'https://*.chatgpt.com/*',   // ChatGPT
  'https://chatgpt.com/*',
  'https://*.openai.com/*',    // OpenAI
  'https://*.perplexity.ai/*', // Perplexity
  'https://*.mistral.ai/*',    // Mistral
  'https://*.vscode.dev/*',    // VS Code
  // ... full list in mcpClientRegistry.ts
];
The patterns use glob-like matching: * in a port position matches any port, *.claude.ai matches any subdomain. The matcher parses the URI and compares scheme, host, port, and path separately.
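Here's a hypothetical sketch of such a matcher (the real one lives in mcpClientRegistry.ts): parse both sides, then compare scheme, host, port, and path separately. `*` in the port slot matches any port, a `*.` host prefix matches subdomains, and a trailing `/*` matches any path.

```typescript
function matchesRedirectPattern(uri: string, pattern: string): boolean {
  let parsed: URL;
  try {
    parsed = new URL(uri);
  } catch {
    return false; // not a valid absolute URI
  }
  // Pattern grammar: scheme://host[:port][/path]
  const m = /^(\w+):\/\/([^\/:]+)(?::([^\/]+))?(\/.*)?$/.exec(pattern);
  if (!m) return false;
  const [, scheme, host, port = '', path = '/'] = m;

  if (parsed.protocol !== scheme + ':') return false;

  if (host.startsWith('*.')) {
    // Subdomain wildcard: *.claude.ai matches sub.claude.ai,
    // but neither claude.ai itself nor claude.ai.evil.com
    if (!parsed.hostname.endsWith('.' + host.slice(2))) return false;
  } else if (parsed.hostname !== host) {
    return false;
  }

  // URL.port is '' for the scheme default, so an empty pattern port
  // matches defaults only; '*' matches anything
  if (port !== '*' && parsed.port !== port) return false;

  if (path.endsWith('/*')) {
    // Prefix match: '/*' matches every path
    if (!(parsed.pathname + '/').startsWith(path.slice(0, -1))) return false;
  } else if (parsed.pathname !== path) {
    return false;
  }
  return true;
}
```

Parsing components separately is the important part - naive string prefix matching is exactly how `https://claude.ai.evil.com` sneaks past an allowlist.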
Any client_id gets through as long as the redirect URI matches a known pattern - clients effectively register themselves on first use rather than being pre-approved. Security comes from PKCE plus redirect URI validation, not from client secrets.
Step 3: The consent screen
The consent screen is a regular Next.js page. The user must already be logged in (NextAuth session required). They see two things: a team picker and a scope list.
export function OAuthConsentForm({
  teams,
  requestedScopes,
  clientId,
  redirectUri,
  state,
  codeChallenge,
  codeChallengeMethod,
  dict,
}: OAuthConsentFormProps) {
  const [selectedTeamId, setSelectedTeamId] = useState<string | undefined>(
    teams[0]?.id
  );
  return (
    <form action={formAction}>
      {/* Team selector - one connection = one team */}
      <Dropdown
        options={teams.map(t => ({ value: t.id, label: t.name }))}
        value={selectedTeamId}
        onChange={setSelectedTeamId}
        label={dict.consent.selectTeam}
      />
      {/* Scope display - user sees what they're granting */}
      <ul>
        {requestedScopes.map(scope => (
          <li key={scope}>
            <span className="text-success">✓</span>
            {dict.consent.scopes[scope] ?? scope}
          </li>
        ))}
      </ul>
      <button type="button" onClick={handleDeny}>Deny</button>
      <button type="submit">Approve</button>
    </form>
  );
}
The team picker is the key piece. PaperLink is multi-tenant - a user might belong to "My Freelance Business" and "Acme Corp." Each MCP connection is scoped to exactly one team. If you want AI access to both teams, you create two connections. This is intentional: you might want Claude to have full access to your personal team but read-only access to the company team.
The scopes are displayed with human-readable labels from the i18n dictionary. invoices:read shows as "View invoices and estimates." accounting:write shows as "Create and modify transactions." Users understand what they're granting.
Step 4: Authorization code
When the user clicks Approve, a server action creates an authorization code:
async execute(input: CreateAuthorizationCodeInput): Promise<Result<{ code: string }>> {
  // Verify user is an active member of the selected team
  const member = await this.teamMemberRepository.findByTeamAndUser(
    input.teamId,
    input.userId
  );
  if (!member?.isActive()) {
    return Result.Forbidden(McpPermissionErrors.teamAccessDenied);
  }

  // 256-bit code (two UUIDs concatenated)
  const code = crypto.randomUUID() + crypto.randomUUID();

  // Only valid scopes pass through - invalid ones are silently dropped
  const validScopes = input.scopes.filter((s): s is McpScope =>
    Object.values(McpScope).includes(s as McpScope)
  );

  const entity = McpAuthorizationCodeEntity.create({
    code,
    userId: input.userId,
    teamId: input.teamId,
    teamRole: member.getRole(), // OWNER, ADMIN, MANAGER, MEMBER
    clientId: input.clientId,       // stored for exact-match check at exchange
    redirectUri: input.redirectUri, // stored for exact-match check at exchange
    codeChallenge: input.codeChallenge,
    codeChallengeMethod: input.codeChallengeMethod,
    scopes: validScopes,
    // expiresAt: now + 10 minutes (set automatically)
  });
  await this.mcpAuthorizationCodeRepository.save(entity);
  return Result.Success({ code });
}
The authorization code is a domain entity, not a raw database record. McpAuthorizationCodeEntity.create() validates all inputs and sets the 10-minute expiry automatically. The PKCE challenge is stored alongside the code for verification in the next step.
Two security details worth noting:
Invalid scopes are silently dropped, not rejected. If a client requests invoices:read admin:superpower, the code is created with only invoices:read. This prevents a misbehaving client from blocking the entire flow over an unrecognized scope.
Team membership is verified at code creation time, not just at consent display. A race condition where a user is removed from a team between seeing the consent screen and clicking Approve is handled here.
Step 5: Token exchange
The client exchanges the authorization code for tokens:
POST /api/oauth/token
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&
code=<256-bit-code>&
code_verifier=<PKCE-verifier>&
redirect_uri=http://localhost:5555/callback&
client_id=claude-desktop
The exchange use case performs five validations:
async execute(input: ExchangeCodeForTokenInput): Promise<Result<TokenResponse>> {
  // 1. Find and DELETE the code in one operation (one-time use)
  const authCode = await this.mcpAuthorizationCodeRepository
    .findByCodeAndDelete(input.code);
  if (!authCode) return Result.Error('invalid_grant');

  // 2. Check the 10-minute window
  if (authCode.isExpired()) return Result.Error('invalid_grant');

  // 3. PKCE verification (timing-safe comparison)
  const pkceValid = await authCode.verifyCodeChallenge(input.codeVerifier);
  if (!pkceValid) return Result.Error('invalid_grant');

  // 4. Redirect URI must match exactly
  if (authCode.getRedirectUri() !== input.redirectUri) {
    return Result.Error('invalid_grant');
  }

  // 5. Client ID must match
  if (authCode.getClientId() !== input.clientId) {
    return Result.Error('invalid_grant');
  }

  // All checks passed - generate tokens
  const accessToken = generateSecureToken(); // 256-bit random
  const refreshToken = generateSecureToken();

  // Hash before storing - raw tokens never touch the database
  const [tokenHash, refreshTokenHash] = await Promise.all([
    sha256Hash(accessToken),
    sha256Hash(refreshToken),
  ]);

  const now = Date.now();
  const tokenEntity = McpAccessTokenEntity.create({
    userId: authCode.getUserId(),
    teamId: authCode.getTeamId(),
    teamRole: authCode.getTeamRole(),
    tokenHash,
    refreshTokenHash,
    scopes: authCode.getScopes(),
    clientName: authCode.getClientId(),
    expiresAt: new Date(now + ACCESS_TOKEN_TTL_MS), // 1 hour
    refreshExpiresAt: new Date(now + REFRESH_TOKEN_TTL_MS), // 30 days
  });
  await this.mcpAccessTokenRepository.save(tokenEntity);

  return Result.Success({
    accessToken, // Raw value - only time it exists
    refreshToken,
    expiresIn: ACCESS_TOKEN_EXPIRES_IN_SECONDS,
    tokenType: 'Bearer',
  });
}
The route handler converts camelCase to the OAuth-standard snake_case response:
const token = result.value!;
return NextResponse.json({
  access_token: token.accessToken,
  token_type: token.tokenType,
  expires_in: token.expiresIn,
  refresh_token: token.refreshToken,
});
The critical security choice here: tokens are hashed before storage. The database holds SHA-256(token), never the raw token. If someone dumps the database, they get hashes that can't be reversed into working tokens. The raw token is returned to the client exactly once and never stored on the server side.
This is the same pattern GitHub uses for personal access tokens. It's more secure than JWTs for this use case because tokens can be individually revoked without maintaining a blocklist. Find the hash in the database, delete it, done.
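The two helpers named above aren't shown in the post. Here's a plausible sketch (the post awaits sha256Hash, which suggests Web Crypto in the real code; a synchronous Node version makes the idea clearer):

```typescript
import { createHash, randomBytes } from 'node:crypto';

// generateSecureToken: 256 bits of CSPRNG output. Hex encoding is an
// assumption - any unguessable encoding works.
function generateSecureToken(): string {
  return randomBytes(32).toString('hex');
}

// sha256Hash: the only form that is ever persisted. Lookups hash the
// presented token and query by hash, so the raw token never appears in a
// query or a table.
function sha256Hash(token: string): string {
  return createHash('sha256').update(token).digest('hex');
}
```

No salt is needed here, unlike password hashing: the input is already 256 random bits, so a rainbow table is useless and a plain SHA-256 lookup stays O(1).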
Step 6: Token verification on every request
Every MCP request hits the verification flow:
async function handleMcpRequest(request: Request): Promise<Response> {
  const authInfo = await verifyBearerToken(request);
  if (!authInfo) {
    return withMcpCors(
      new Response(
        JSON.stringify({
          error: 'invalid_token',
          error_description: 'No authorization provided',
        }),
        {
          status: 401,
          headers: {
            'Content-Type': 'application/json',
            'WWW-Authenticate':
              'Bearer resource_metadata="https://mcp.paperlink.online/.well-known/oauth-protected-resource"',
          },
        }
      )
    );
  }
  const server = createMcpServer(authInfo);
  const transport = new WebStandardStreamableHTTPServerTransport({});
  await server.connect(transport);
  return withMcpCors(await transport.handleRequest(request));
}
The verifyBearerToken function does three checks:
async execute(bearerToken: string): Promise<McpAuthInfoDto | null> {
  // 1. Hash the incoming token and look it up
  const tokenHash = await sha256Hash(bearerToken);
  const token = await this.mcpAccessTokenRepository.findByTokenHash(tokenHash);
  if (!token) return null;

  // 2. Check expiry and revocation
  if (token.isExpired() || token.isRevoked()) return null;

  // 3. Live membership check - is user still active in this team?
  const member = await this.teamMemberRepository.findByTeamAndUser(
    token.getTeamId(),
    token.getUserId()
  );
  if (!member?.isActive()) return null;

  return {
    userId: token.getUserId(),
    teamId: token.getTeamId(),
    teamRole: member.getRole(),
    scopes: token.getScopes(),
  };
}
Check #3 is the one most OAuth implementations skip. If a team admin removes a user, their MCP token should stop working immediately - not in an hour when the token expires. We check team membership on every request. It's one extra database query, and it means "remove from team" actually removes access.
The returned McpAuthInfoDto is a lightweight object that flows into every tool handler:
interface McpAuthInfoDto {
  userId: string;
  teamId: string;
  teamRole: string;   // OWNER, ADMIN, MANAGER, MEMBER
  scopes: McpScope[]; // [McpScope.INVOICES_READ, McpScope.ACCOUNTING_WRITE]
}
No password, no token, no hash - just the identity and permissions the tool needs.
Step 7: Scope enforcement in tools
Every tool checks its required scope before executing:
export const registerAccountingReadTools: ToolRegistrar = (server, authInfo) => {
  server.registerTool(
    'list-invoices',
    {
      title: 'List Invoices',
      description: 'List invoices for the authenticated team.',
      inputSchema: z.object({
        status: z.string().optional(),
        clientName: z.string().optional(),
        limit: z.coerce.number().int().min(1).max(100).optional(),
        offset: z.coerce.number().int().min(0).optional(),
      }),
      annotations: { readOnlyHint: true },
    },
    async (params) => {
      if (!authInfo.scopes.includes(McpScope.INVOICES_READ)) {
        return {
          content: [{ type: 'text', text: 'Insufficient scope - invoices:read required.' }],
          isError: true,
        };
      }
      const useCase = mcpUseCases.getListInvoicesViaMcpUseCase();
      const result = await useCase.execute({
        teamId: authInfo.teamId, // Always from auth, never from params
        status: params.status,
        clientName: params.clientName,
        limit: params.limit,
        offset: params.offset,
      });
      if (!result.isSuccess) {
        return {
          content: [{ type: 'text', text: result.errors.join(', ') }],
          isError: true,
        };
      }
      return {
        content: [
          { type: 'text', text: `Found ${result.value.length} invoices.` },
          { type: 'text', text: JSON.stringify(result.value, null, 2) },
        ],
      };
    }
  );
};
Two things never come from the AI's parameters: teamId and userId. They always come from authInfo, which is populated from the verified token. Even if the AI sends teamId: "someone-elses-team" in the tool call, it's ignored. The auth context wins.
This is the same principle as "don't trust the client" in web apps, applied to AI clients. The AI is a client. It sends parameters. You validate them, but identity always comes from the token.
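With 118 tools, that guard repeats a lot. One way to factor it out - a hypothetical helper, not from our codebase - is a wrapper that closes over the verified scopes:

```typescript
type ToolResult = { content: { type: 'text'; text: string }[]; isError?: boolean };

// Wraps a tool handler so the scope check runs before the handler ever sees
// the parameters. `scopes` comes from the verified token, never the client.
function requireScope<P>(
  scopes: string[],
  required: string,
  handler: (params: P) => Promise<ToolResult>
): (params: P) => Promise<ToolResult> {
  return async (params: P) => {
    if (!scopes.includes(required)) {
      return {
        content: [{ type: 'text', text: `Insufficient scope - ${required} required.` }],
        isError: true,
      };
    }
    return handler(params);
  };
}
```

The registration call then becomes `server.registerTool('list-invoices', config, requireScope(authInfo.scopes, 'invoices:read', handler))`, and a tool can't forget its check.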
Token refresh
Access tokens expire in one hour. The client refreshes without user interaction:
async execute(input: { refreshToken: string }): Promise<Result<TokenResponse>> {
  const refreshTokenHash = await sha256Hash(input.refreshToken);
  const existingToken =
    await this.mcpAccessTokenRepository.findByRefreshTokenHash(refreshTokenHash);
  if (!existingToken) return Result.Error('invalid_grant');
  if (existingToken.isRefreshExpired() || existingToken.isRevoked()) {
    return Result.Error('invalid_grant');
  }

  // Issue new pair with same scopes
  const newAccessToken = generateSecureToken();
  const newRefreshToken = generateSecureToken();
  const [newTokenHash, newRefreshTokenHash] = await Promise.all([
    sha256Hash(newAccessToken),
    sha256Hash(newRefreshToken),
  ]);

  const now = Date.now();
  const newTokenEntity = McpAccessTokenEntity.create({
    userId: existingToken.getUserId(),
    teamId: existingToken.getTeamId(),
    teamRole: existingToken.getTeamRole(),
    tokenHash: newTokenHash,
    refreshTokenHash: newRefreshTokenHash,
    scopes: existingToken.getScopes(),
    clientName: existingToken.getClientName(),
    expiresAt: new Date(now + ACCESS_TOKEN_TTL_MS),
    refreshExpiresAt: new Date(now + REFRESH_TOKEN_TTL_MS),
  });
  await this.mcpAccessTokenRepository.save(newTokenEntity);

  // Revoke old token pair (rotation - prevents replay)
  await this.mcpAccessTokenRepository.revoke(existingToken.revoke());

  return Result.Success({
    accessToken: newAccessToken,
    refreshToken: newRefreshToken,
    expiresIn: ACCESS_TOKEN_EXPIRES_IN_SECONDS,
    tokenType: 'Bearer',
  });
}
We use refresh token rotation: every refresh issues a new refresh token and revokes the old one. If a refresh token is used twice, the second attempt fails - which signals a possible token theft.
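A common hardening step on top of that signal - an assumption on our part, not something shown above - is to treat reuse of a rotated token as theft and revoke everything the same user-client pair holds. Sketched over an in-memory list rather than a repository:

```typescript
interface TokenRow {
  refreshHash: string;
  userId: string;
  clientName: string;
  revoked: boolean;
}

// If the presented refresh hash maps to an already-revoked row, someone is
// replaying a rotated token: revoke the whole family for that user + client,
// forcing a fresh browser-based re-authentication.
function onRefreshAttempt(rows: TokenRow[], presentedHash: string): 'ok' | 'reuse-detected' {
  const hit = rows.find(r => r.refreshHash === presentedHash);
  if (!hit) return 'ok';         // unknown token: plain invalid_grant, no signal
  if (!hit.revoked) return 'ok'; // normal refresh path
  for (const r of rows) {
    if (r.userId === hit.userId && r.clientName === hit.clientName) {
      r.revoked = true;
    }
  }
  return 'reuse-detected';
}
```

The subtlety is which party holds the stolen token: after a reuse you can't tell whether the attacker or the legitimate client refreshed last, so revoking the entire family is the safe answer.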
The refresh token lives for 30 days. After that, the user re-authenticates through the browser. This is the only time they see the consent screen again.
The scope model
We have 25 scopes organized by domain and permission level:
invoices:read     invoices:write     invoices:delete
accounting:read   accounting:write   accounting:delete
companies:read    companies:write    companies:delete
clients:read      clients:write      clients:delete
products:read     products:write     products:delete
estimates:read    estimates:write    estimates:delete
sharing:read      sharing:write
teams:read        teams:write
billing:read
ai:read           ai:write
The pattern is {domain}:{level}. Three levels: read, write, delete. Some domains don't have all three - billing is read-only because there's no "create a subscription" tool.
This maps directly to our file organization from post 2. accountingReadTools.ts checks accounting:read. accountingWriteTools.ts checks accounting:write. The file name tells you the scope.
A typical connection grants 5-10 scopes. A freelancer connecting their personal AI assistant might grant everything. A company admin connecting a shared AI might grant invoices:read and accounting:read only - the AI can look at data but can't change anything.
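Because the convention is mechanical, it can be validated mechanically. A small sketch (names hypothetical) of parsing a scope string:

```typescript
type ScopeLevel = 'read' | 'write' | 'delete';

// Split "{domain}:{level}" and reject anything outside the convention -
// the same shape the silent-drop filter in Step 4 relies on.
function parseScope(scope: string): { domain: string; level: ScopeLevel } | null {
  const [domain, level, ...rest] = scope.split(':');
  if (!domain || rest.length > 0) return null;
  if (level !== 'read' && level !== 'write' && level !== 'delete') return null;
  return { domain, level };
}
```

Anything that doesn't parse - a typo, a made-up scope from a misbehaving client - falls out before it ever reaches a token.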
Why remote is better than local for SaaS
| | Local (stdio + API key) | Remote (HTTP + OAuth) |
|---|---|---|
| Setup | Edit JSON config file | Run one command |
| Security | Static key with full access | Scoped token, 1-hour expiry |
| Multi-tenancy | Manual key-per-team | Team picker in consent UI |
| Revocation | Generate new key, update config | Click "disconnect" in app settings |
| User consent | None - paste key and hope | Browser-based scope approval |
| Cross-device | Config file per device | Token syncs across Claude.ai devices |
| Updates | Reinstall/rebuild server binary | Server updates without client changes |
| Dependencies | Node.js + npm + binary | None (HTTP transport) |
The last row matters more than you'd think. Local MCP servers require the user to have Node.js installed, run npx, and keep the process alive. Our remote server is just an HTTP endpoint. Claude Desktop, Claude.ai, ChatGPT, Cursor - they all connect with a URL. No local runtime.
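"Just an HTTP endpoint" is literal. Here's a hedged sketch of what a raw request against the streamable HTTP transport looks like - header names per the MCP spec, the protocol version string being an assumption about what the server accepts, and the token coming from the OAuth flow above:

```typescript
// Build a JSON-RPC initialize request for the streamable HTTP transport.
// The Accept header advertises both response modes the transport may use
// (plain JSON or an SSE stream).
function buildInitializeRequest(endpoint: string, accessToken: string) {
  return {
    url: endpoint,
    init: {
      method: 'POST' as const,
      headers: {
        'Content-Type': 'application/json',
        Accept: 'application/json, text/event-stream',
        Authorization: `Bearer ${accessToken}`,
      },
      body: JSON.stringify({
        jsonrpc: '2.0',
        id: 1,
        method: 'initialize',
        params: {
          protocolVersion: '2025-03-26', // assumption: a revision the server accepts
          capabilities: {},
          clientInfo: { name: 'demo-client', version: '0.0.1' },
        },
      }),
    },
  };
}

// Usage:
//   const { url, init } = buildInitializeRequest(
//     'https://mcp.paperlink.online/api/mcp/mcp', token);
//   const res = await fetch(url, init);
```

That's the whole client-side dependency story: anything that can send an HTTP POST can be an MCP client.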
Security model summary
What the AI can do:
- Call any tool that's within the granted scopes
- Read, create, update, or delete data within the selected team
- Chain multiple tools in a single conversation
What the AI cannot do:
- Access data from a team the user didn't select
- Exceed the granted scopes (read-only connection stays read-only)
- Impersonate another user (userId always comes from the token)
- Use an expired or revoked token
- Access data after the user is removed from the team (live membership check)
What the user controls:
- Which team to grant access to
- Which scopes to approve
- When to disconnect (revoke from app settings)
If you're building a remote MCP server
Here's what we'd recommend based on what worked and what didn't:
Start with PKCE from day one. We briefly considered simple API keys as an MVP. Glad we didn't - retrofitting auth into a working server is painful, and users who connected with API keys would need to re-authenticate anyway.
Hash tokens before storing. This is a one-time cost (two lines of code) that eliminates an entire class of data breach scenarios.
Check team membership on every request. The extra database query is worth it. "Remove user from team" should mean "remove access immediately," not "remove access when their token expires."
Use domain:level scope naming. It scales. When we added the estimates domain, we added estimates:read, estimates:write, estimates:delete and it was obvious where each scope goes.
Return two content blocks. A human-readable summary and the full JSON data. The AI shows the summary to the user and uses the JSON for follow-up operations.
Try it
claude mcp add --transport http paperlink https://mcp.paperlink.online/api/mcp/mcp
You'll see the consent screen, pick a team, and be connected in about 10 seconds. 118 tools, scoped to exactly what you approved.