<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Manato Takai</title>
    <description>The latest articles on DEV Community by Manato Takai (@manaty226).</description>
    <link>https://dev.to/manaty226</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3430089%2Ff07bb322-6c41-47e1-b3e3-b849af6ae9de.jpg</url>
      <title>DEV Community: Manato Takai</title>
      <link>https://dev.to/manaty226</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/manaty226"/>
    <language>en</language>
    <item>
      <title>Building an Apache Iceberg Log Analytics Platform with S3 Tables and Amazon Data Firehose</title>
      <dc:creator>Manato Takai</dc:creator>
      <pubDate>Tue, 23 Dec 2025 00:02:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-an-apache-iceberg-log-analytics-platform-with-s3-tables-and-amazon-data-firehose-2i6</link>
      <guid>https://dev.to/aws-builders/building-an-apache-iceberg-log-analytics-platform-with-s3-tables-and-amazon-data-firehose-2i6</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Amazon S3 Tables is an AWS-managed object storage service that supports the Apache Iceberg specification and automatically performs table optimization tasks such as compaction in the background. In this article, I explore how to architect a log analytics platform for applications deployed on AWS using S3 Tables and Amazon Data Firehose.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building an Apache Iceberg Log Analytics Platform on AWS
&lt;/h2&gt;

&lt;h3&gt;
  
  
  System Architecture
&lt;/h3&gt;

&lt;p&gt;The system architecture for this implementation is shown in the diagram below. Assuming a containerized application deployed on ECS, I have placed FireLens as a sidecar container to serve as the log router. FireLens receives logs and forwards them to Amazon Data Firehose, which then stores them in S3 Tables in Iceberg format. Finally, the stored Iceberg tables are queried and analyzed using Amazon Athena.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo71ratxw2xa7gon9rle2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo71ratxw2xa7gon9rle2.png" alt="S3 Tables application log platform architecture diagram" width="771" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure Provisioning
&lt;/h3&gt;

&lt;p&gt;The infrastructure is provisioned using Terraform. I have included a link to the Terraform code repository below for those interested in the implementation details.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/manaty226/aws-s3tables-firehose-athena-log-analytics" rel="noopener noreferrer"&gt;https://github.com/manaty226/aws-s3tables-firehose-athena-log-analytics&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this configuration, writing data from Amazon Data Firehose to S3 Tables requires Lake Formation permission settings. However, as of December 2025, Terraform's Lake Formation resource does not support permission configuration for S3 Tables. Therefore, after creating the IAM role for Amazon Data Firehose with Terraform, you need to run the following AWS CLI command to grant the Lake Formation permissions. Without this step, S3 Tables will not be visible from the Firehose stream, causing resource creation to fail.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws lakeformation grant-permissions &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--principal&lt;/span&gt; &lt;span class="nv"&gt;DataLakePrincipalIdentifier&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:iam::&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;ACCOUNT_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:role/s3tables-log-demo-firehose-role"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--resource&lt;/span&gt; &lt;span class="s2"&gt;"{&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Table&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:{&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;CatalogId&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;ACCOUNT_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:s3tablescatalog/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TABLE_BUCKET_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;DatabaseName&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;logs&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Name&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;some_api_logs&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;}}"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--permissions&lt;/span&gt; &lt;span class="s2"&gt;"ALL"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; ap-northeast-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
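&lt;p&gt;The same grant can also be scripted with boto3's &lt;code&gt;grant_permissions&lt;/code&gt; API if you prefer to keep it alongside other provisioning scripts. The sketch below only assembles the argument dictionary (the role and table names follow this demo and are assumptions); the actual call requires AWS credentials.&lt;/p&gt;

```python
def build_grant_permissions_args(account_id, table_bucket_name,
                                 database_name="logs", table_name="some_api_logs"):
    """Assemble arguments for lakeformation grant_permissions, mirroring the CLI command above."""
    return {
        "Principal": {
            "DataLakePrincipalIdentifier":
                f"arn:aws:iam::{account_id}:role/s3tables-log-demo-firehose-role"
        },
        "Resource": {
            "Table": {
                # S3 Tables appear as a federated catalog: "account:s3tablescatalog/bucket"
                "CatalogId": f"{account_id}:s3tablescatalog/{table_bucket_name}",
                "DatabaseName": database_name,
                "Name": table_name,
            }
        },
        "Permissions": ["ALL"],
    }

args = build_grant_permissions_args("123456789012", "my-table-bucket")
# boto3.client("lakeformation").grant_permissions(**args)  # needs AWS credentials
print(args["Resource"]["Table"]["CatalogId"])  # 123456789012:s3tablescatalog/my-table-bucket
```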



&lt;p&gt;For more details on the Lake Formation permission configuration issue in Terraform, please refer to the following GitHub issue.&lt;br&gt;
&lt;a href="https://github.com/hashicorp/terraform-provider-aws/issues/40724" rel="noopener noreferrer"&gt;https://github.com/hashicorp/terraform-provider-aws/issues/40724&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Log Sample Structure
&lt;/h2&gt;

&lt;p&gt;For this implementation, the application container outputs logs in the following JSON format.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-12-20T10:10:21Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"INFO"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Sample log message"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"request_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"93f7a013-37d1-4793-a73f-b660d78a1f16"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When these logs are sent to Amazon Data Firehose via FireLens, metadata is appended, resulting in the following structure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"container_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"app"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"source"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"stdout"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"log"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"{&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;timestamp&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;2025-12-20T10:10:21Z&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;level&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;INFO&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;message&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Sample log message&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;request_id&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;93f7a013-37d1-4793-a73f-b660d78a1f16&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"container_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"xxxxxxxxxxxxxxxxx"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
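&lt;p&gt;The key point is that FireLens keeps the application's JSON as a string inside the &lt;code&gt;log&lt;/code&gt; field rather than merging it into the envelope. A minimal Python sketch of this wrapping (not the actual Fluent Bit implementation) makes the double encoding explicit:&lt;/p&gt;

```python
import json

def wrap_like_firelens(app_log, container_name="app", container_id="xxxxxxxxxxxxxxxxx"):
    """Nest an application log dict as a JSON string inside a FireLens-style envelope."""
    return {
        "container_name": container_name,
        "source": "stdout",
        "log": json.dumps(app_log, separators=(",", ":")),  # stays a JSON *string*
        "container_id": container_id,
    }

record = wrap_like_firelens({
    "timestamp": "2025-12-20T10:10:21Z",
    "level": "INFO",
    "message": "Sample log message",
    "request_id": "93f7a013-37d1-4793-a73f-b660d78a1f16",
})
# Recovering the inner fields requires a second JSON parse:
print(json.loads(record["log"])["level"])  # INFO
```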



&lt;p&gt;When data is sent from Amazon Data Firehose to S3 Tables, fields that do not exist in the S3 Tables schema are silently ignored. Therefore, you need to define an Iceberg table schema that accommodates this format in advance. Unlike CloudWatch Logs, where the log schema is parsed at query time after ingestion, with S3 Tables you must standardize the log fields. However, as application developers, we often need to modify log fields depending on specific functions or contexts. Given this, a reasonable approach might be to include the full log content in the &lt;code&gt;log&lt;/code&gt; field while defining standardized fields as S3 Tables columns and configuring FireLens to format the logs accordingly. This is still an area I am experimenting with, so if you have any best practices to share, I would appreciate your input.&lt;/p&gt;
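&lt;p&gt;The silent-drop behavior is easy to emulate locally to sanity-check a record against the table columns before wiring up Firehose. In this sketch the column set is the one used in this demo:&lt;/p&gt;

```python
TABLE_COLUMNS = {"container_name", "container_id", "source", "log",
                 "ecs_cluster", "ecs_task_arn", "ecs_task_definition"}

def project_to_schema(record):
    """Split a record into (stored, dropped) keys relative to the table schema.

    Firehose keeps only keys that exist as S3 Tables columns and discards
    the rest without raising an error.
    """
    stored = {k: v for k, v in record.items() if k in TABLE_COLUMNS}
    dropped = sorted(set(record) - TABLE_COLUMNS)
    return stored, dropped

stored, dropped = project_to_schema({
    "container_name": "app",
    "source": "stdout",
    "log": "{}",
    "extra_field": "vanishes silently",
})
print(dropped)  # ['extra_field']
```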

&lt;p&gt;If you can verify the log data in the S3 Tables preview, the setup is successful. Note that when the JSON fields output by FireLens do not match the S3 Tables column schema, Amazon Data Firehose will not report an error, and the data will simply not be stored in S3 Tables, making debugging particularly challenging.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zqr206o65lje1eo3oqq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zqr206o65lje1eo3oqq.png" alt="S3 Tables data preview" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Querying Logs Stored in S3 Tables
&lt;/h3&gt;

&lt;p&gt;Finally, I query the logs stored in S3 Tables from Athena.&lt;/p&gt;

&lt;p&gt;As mentioned earlier, the log body is stored as a JSON string in the &lt;code&gt;log&lt;/code&gt; field, so we need to parse it as JSON in Athena to extract individual fields. Below is an example query. I considered creating a view to streamline analysis for the development team, but S3 Tables is currently recognized as a cross-account Glue Data Catalog, so the &lt;code&gt;CREATE VIEW&lt;/code&gt; command fails with an error. For now, the recommended approach is to save queries like the one below as named queries and share them within the team. If anyone knows how to create views in this context, I would be grateful for your guidance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
    &lt;span class="n"&gt;container_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;ecs_cluster&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;ecs_task_arn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;ecs_task_definition&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;source&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;json_extract_scalar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'$.timestamp'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="nb"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;json_extract_scalar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'$.level'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;level&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;json_extract_scalar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'$.message'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;json_extract_scalar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'$.request_id'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;request_id&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;some_api_logs&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
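&lt;p&gt;For each row, &lt;code&gt;json_extract_scalar(log, '$.key')&lt;/code&gt; simply parses the JSON string in the &lt;code&gt;log&lt;/code&gt; column and returns one scalar value. A local Python equivalent (an illustrative sketch, simplified in that it only returns string scalars) is:&lt;/p&gt;

```python
import json

def extract_scalar(log_column, key):
    """Rough local equivalent of Athena's json_extract_scalar(log, '$.key')."""
    value = json.loads(log_column).get(key)
    return value if isinstance(value, str) else None  # simplified: string scalars only

row_log = ('{"timestamp":"2025-12-20T10:10:21Z","level":"INFO",'
           '"message":"Sample log message","request_id":"93f7a013-37d1-4793-a73f-b660d78a1f16"}')
print(extract_scalar(row_log, "level"))  # INFO
```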



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, I explored building an application log analytics platform using S3 Tables. Some areas still feel immature: Terraform does not yet support Lake Formation permission configuration for S3 Tables, Amazon Data Firehose must be granted ALL permissions, and views cannot be created. Even so, the potential for significantly lower log ingestion and storage costs compared to CloudWatch Logs is promising, and the optimization opportunities that Iceberg offers, such as compaction and index tuning based on data characteristics, make this a technology I look forward to seeing evolve.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>iceberg</category>
      <category>logging</category>
    </item>
    <item>
      <title>Building a Remote MCP Server with OAuth Authorization Using Amazon API Gateway and Cognito</title>
      <dc:creator>Manato Takai</dc:creator>
      <pubDate>Wed, 13 Aug 2025 04:16:09 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-a-remote-mcp-server-with-oauth-authorization-using-amazon-api-gateway-and-cognito-19ab</link>
      <guid>https://dev.to/aws-builders/building-a-remote-mcp-server-with-oauth-authorization-using-amazon-api-gateway-and-cognito-19ab</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the MCP specification published on 2025-03-26, a new authorization mechanism for MCP servers using HTTP transport was proposed. This specification enables access using temporary user credentials obtained via the OAuth 2.1 authorization code flow, allowing AI agents to access the server without embedding long-lived, highly privileged API keys.&lt;/p&gt;

&lt;p&gt;Furthermore, the specification published on 2025-06-18 clarified that the MCP server should be treated as a resource server, logically separating it from the authorization server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://modelcontextprotocol.io/specification/2025-06-18/basic/authorization" rel="noopener noreferrer"&gt;https://modelcontextprotocol.io/specification/2025-06-18/basic/authorization&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, developers who want to implement a remote MCP server with OAuth authorization still need to provide an OAuth 2.1-compliant authorization server. In AWS environments specifically, Amazon Cognito does not support the standard protocols or endpoints for dynamic client registration and authorization server metadata. This makes it difficult to comply with the standard MCP authorization protocol, which is required to interoperate with general-purpose MCP clients such as MCP Inspector and VS Code.&lt;/p&gt;

&lt;p&gt;Additionally, managing extra compute resources just for the authorization server is undesirable for MCP server developers. Therefore, in this article, I demonstrate how to implement a remote MCP server with OAuth authorization that works with MCP Inspector and VS Code, leveraging the request/response manipulation features of Amazon API Gateway.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview of Authorized MCP Server
&lt;/h2&gt;

&lt;p&gt;The MCP server authorization specification is based on the following four RFCs (including drafts):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://datatracker.ietf.org/doc/html/rfc6749" rel="noopener noreferrer"&gt;[Draft] OAuth 2.1&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://datatracker.ietf.org/doc/html/rfc8414" rel="noopener noreferrer"&gt;RFC 8414 Authorization Server Metadata&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://datatracker.ietf.org/doc/html/rfc7591" rel="noopener noreferrer"&gt;RFC 7591 Dynamic Client Registration (DCR)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://datatracker.ietf.org/doc/html/rfc9728" rel="noopener noreferrer"&gt;RFC 9728 Protected Resource Server Metadata&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For OAuth 2.1, the authorization code flow is sufficient for this use case, and Cognito's standard features can handle it, so details are omitted here.&lt;/p&gt;

&lt;p&gt;Protected Resource Server Metadata is used by the MCP server (as a resource server) to indicate the URL of the authorization server it relies on. In RFC 9728, the &lt;code&gt;authorization_servers&lt;/code&gt; field is OPTIONAL, but in the MCP specification, it is mandatory to include information about one or more authorization servers. Additionally, when the MCP server responds with HTTP status 401, it must include a &lt;code&gt;WWW-Authenticate&lt;/code&gt; header with a value like &lt;code&gt;Bearer resource_metadata="https://resource.example.com/.well-known/oauth-protected-resource"&lt;/code&gt;, indicating the URL for the resource metadata. This allows MCP clients to follow the resource metadata to obtain the authorization server URL.&lt;/p&gt;
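&lt;p&gt;From the client's point of view, discovery starts with that 401 response: the client pulls the &lt;code&gt;resource_metadata&lt;/code&gt; URL out of the &lt;code&gt;WWW-Authenticate&lt;/code&gt; header and then fetches it. A minimal, regex-based sketch of that first step (for illustration only):&lt;/p&gt;

```python
import re

def resource_metadata_url(www_authenticate):
    """Extract the resource_metadata URL from a Bearer challenge header value."""
    match = re.search(r'resource_metadata="([^"]+)"', www_authenticate)
    return match.group(1) if match else None

header = 'Bearer resource_metadata="https://resource.example.com/.well-known/oauth-protected-resource"'
print(resource_metadata_url(header))
# https://resource.example.com/.well-known/oauth-protected-resource
```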

&lt;p&gt;Authorization Server Metadata and Dynamic Client Registration are required for MCP clients to interact with the authorization server. Authorization Server Metadata is mandatory, while DCR is optional. If DCR is not supported, some alternative method for registering MCP clients must be provided. However, since MCP Inspector assumes DCR for client registration, DCR is supported in this implementation to ensure standard operation with MCP Inspector.&lt;/p&gt;

&lt;p&gt;The overall flow using these specifications is described in the MCP specification:&lt;br&gt;
&lt;a href="https://modelcontextprotocol.io/specification/2025-06-18/basic/authorization#authorization-flow-steps" rel="noopener noreferrer"&gt;https://modelcontextprotocol.io/specification/2025-06-18/basic/authorization#authorization-flow-steps&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Implementing an Authorized MCP Server with AWS Managed Services
&lt;/h2&gt;

&lt;p&gt;To implement an authorized MCP server using AWS managed services, Amazon Cognito is used as the authorization server. Cognito supports the authorization code grant with PKCE and the client credentials grant, making it compatible with the MCP server's authorization flow.&lt;/p&gt;

&lt;p&gt;However, Cognito does not support authorization server metadata endpoints or DCR. To address this, API Gateway is utilized to supplement the missing specifications.&lt;/p&gt;

&lt;p&gt;The conceptual architecture is shown below. API Gateway routes requests to various endpoints. The HTTP API Gateway in the front is a workaround to place well-known endpoints (e.g., &lt;code&gt;https://example.com/.well-known/oauth-protected-resource&lt;/code&gt;) directly under the host, as required by MCP clients like MCP Inspector. The REST API Gateway is the main resource for this implementation. Cognito is used as the authorization server, and API Gateway is integrated with AWS services to support DCR. Lambda is used for the MCP server tool implementation via proxy integration. Metadata endpoints are implemented using API Gateway's Mock integration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3zo1iqy95k0dxokx3sw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3zo1iqy95k0dxokx3sw.png" alt="conceptual architecture of the proposed remote MCP server" width="729" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By default, API Gateway REST APIs add a stage name to the root path. However, MCP clients like MCP Inspector access the &lt;code&gt;/.well-known/&lt;/code&gt; path directly under the host, as specified in RFC 8414, making it impossible to use REST API mode as-is. HTTP API mode allows defining the stage as &lt;code&gt;$default&lt;/code&gt; so that paths sit directly under the host, but it lacks features like Mock integration, hence this two-tier architecture. In production, attaching a custom domain to the REST API alone is enough to place the paths directly under the host.&lt;/p&gt;

&lt;p&gt;The implementation is available in the following repository:&lt;br&gt;
&lt;a href="https://github.com/manaty226/remote-mcp-based-on-aws-managed-services" rel="noopener noreferrer"&gt;https://github.com/manaty226/remote-mcp-based-on-aws-managed-services&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Implementing Metadata Endpoints with API Gateway Mock Integration
&lt;/h3&gt;

&lt;p&gt;Authorization server and resource server metadata consist of static information such as identifiers and related endpoints. For the MCP server, this information is determined at AWS infrastructure setup time.&lt;/p&gt;

&lt;p&gt;API Gateway provides a Mock integration feature for returning fixed values or simple computed results. Using this, you can implement metadata endpoints for the authorization server and resource server.&lt;/p&gt;

&lt;p&gt;For example, the minimum required authorization server metadata for the MCP server can be implemented with a mapping template like this. The URLs for the issuer and various endpoints can be statically set using Cognito and API Gateway endpoint information.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"issuer"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;Issuer URL&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"authorization_endpoint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;Authorization Endpoint URL&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"token_endpoint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;Token Endpoint URL&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"response_types_supported"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"code"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"grant_types_supported"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"authorization_code"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"code_challenge_methods_supported"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"S256"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"registration_endpoint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;DCR Endpoint URL&amp;gt;"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
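&lt;p&gt;If you generate the mapping template from infrastructure outputs, the same document can be assembled from the Cognito user pool settings. The endpoint paths below are Cognito's standard hosted endpoints; the variable names and values are assumptions for illustration:&lt;/p&gt;

```python
def cognito_authorization_server_metadata(user_pool_id, region, cognito_domain, dcr_endpoint):
    """Assemble RFC 8414 metadata from standard Cognito endpoint patterns."""
    base = f"https://{cognito_domain}.auth.{region}.amazoncognito.com"
    return {
        "issuer": f"https://cognito-idp.{region}.amazonaws.com/{user_pool_id}",
        "authorization_endpoint": f"{base}/oauth2/authorize",
        "token_endpoint": f"{base}/oauth2/token",
        "response_types_supported": ["code"],
        "grant_types_supported": ["authorization_code"],
        "code_challenge_methods_supported": ["S256"],
        "registration_endpoint": dcr_endpoint,
    }

meta = cognito_authorization_server_metadata(
    "ap-northeast-1_XXXXXXXXX", "ap-northeast-1",
    "my-mcp-auth", "https://api.example.com/register")
print(meta["token_endpoint"])
# https://my-mcp-auth.auth.ap-northeast-1.amazoncognito.com/oauth2/token
```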



&lt;p&gt;Similarly, resource server metadata can be implemented as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;API Gateway URL implementing the resource server&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"authorization_servers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;Authorization Server URL&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Supporting Dynamic Client Registration with API Gateway AWS Integration and Mapping Templates
&lt;/h3&gt;

&lt;p&gt;Amazon Cognito provides an API called CreateUserPoolClient for client registration. However, the request and response formats are proprietary, so you cannot directly comply with the DCR specification.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_CreateUserPoolClient.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_CreateUserPoolClient.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To address this, you can use API Gateway's mapping template feature to transform request and response data, enabling DCR-compliant client registration with Cognito.&lt;/p&gt;

&lt;p&gt;For example, you can implement the transformation from a DCR request to a Cognito client creation API request as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ClientName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;$input.json('$.client_name')&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"CallbackURLs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;$input.json('$.redirect_uris')&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"AllowedOAuthFlows"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;$input.json('$.response_types')&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"AllowedOAuthFlowsUserPoolClient"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"AllowedOAuthScopes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"openid"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"profile"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"SupportedIdentityProviders"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"COGNITO"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"UserPoolId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;Cognito User Pool ID&amp;gt;"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;API Gateway mapping templates use the Velocity Template Language (VTL), allowing you to reference request body values like &lt;code&gt;$input.json('$.client_name')&lt;/code&gt;. You can set these as values in the Cognito API request body. Required Cognito-specific settings can be hardcoded in the mapping template.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/models-mappings.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/apigateway/latest/developerguide/models-mappings.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also transform Cognito's proprietary response format to a DCR-compliant response for the MCP client:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"client_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;$input.json('$.UserPoolClient.ClientId')&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"client_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;$input.json('$.UserPoolClient.ClientName')&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"redirect_uris"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;$input.json('$.UserPoolClient.CallbackURLs')&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"response_types"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;$input.json('$.UserPoolClient.AllowedOAuthFlows')&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
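&lt;p&gt;Because template logic like this is easy to get subtly wrong, it can help to sanity-check the field mapping outside AWS first. The following Python sketch performs the same transformation; the field names follow the &lt;code&gt;CreateUserPoolClient&lt;/code&gt; response shape, and the sample values are invented for illustration:&lt;/p&gt;

```python
# Sketch of the transformation the response mapping template performs.
# The input mirrors the shape of Cognito's CreateUserPoolClient response;
# the sample values below are invented for illustration.
cognito_response = {
    "UserPoolClient": {
        "ClientId": "abc123",
        "ClientName": "example-mcp-client",
        "CallbackURLs": ["https://client.example/callback"],
        "AllowedOAuthFlows": ["code"],
    }
}

def to_dcr_response(resp: dict) -> dict:
    """Map Cognito's proprietary response to a DCR-style registration response."""
    client = resp["UserPoolClient"]
    return {
        "client_id": client["ClientId"],
        "client_name": client["ClientName"],
        "redirect_uris": client["CallbackURLs"],
        "response_types": client["AllowedOAuthFlows"],
    }

print(to_dcr_response(cognito_response)["client_id"])  # abc123
```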



&lt;h3&gt;
  
  
  MCP Tool Implementation
&lt;/h3&gt;

&lt;p&gt;With all authorization features handled by AWS managed services, the resource server functionality (the MCP tool implementation) can be kept lightweight. The awslabs/mcp repository provides a library called &lt;code&gt;mcp-lambda-handler&lt;/code&gt; for running MCP tool functions from the Lambda handler's &lt;code&gt;event&lt;/code&gt; object.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/awslabs/mcp/tree/8d90be5c403af4829a45d8f03093f830ffed6285/src/mcp-lambda-handler" rel="noopener noreferrer"&gt;https://github.com/awslabs/mcp/tree/8d90be5c403af4829a45d8f03093f830ffed6285/src/mcp-lambda-handler&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, you can implement an MCP server that simply returns a UUID v4 in just a few lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;uuid&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;awslabs.mcp_lambda_handler&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MCPLambdaHandler&lt;/span&gt;

&lt;span class="n"&gt;mcp_server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MCPLambdaHandler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sample-uuid-mcp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.0.0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@mcp_server.tool&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_uuid&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Generate a new UUID v4&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;uuid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;uuid4&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;mcp_server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;handle_request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
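&lt;p&gt;For reference, once connected, an MCP client invokes this tool over the Streamable HTTP transport with a JSON-RPC &lt;code&gt;tools/call&lt;/code&gt; request like the following (the method and params shape follow the MCP specification; the &lt;code&gt;id&lt;/code&gt; is arbitrary):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_uuid",
    "arguments": {}
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;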



&lt;p&gt;For API authorization, configure the API Gateway Cognito authorizer and add a &lt;code&gt;WWW-Authenticate&lt;/code&gt; header to the gateway response for UNAUTHORIZED errors, so that unauthenticated clients are directed to the protected resource metadata.&lt;/p&gt;
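&lt;p&gt;As a sketch, the gateway response could be declared in CloudFormation as follows. The &lt;code&gt;RestApi&lt;/code&gt; logical ID and the resource-metadata URL are assumptions for illustration; RFC 9728 places the metadata under &lt;code&gt;/.well-known/oauth-protected-resource&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "Type": "AWS::ApiGateway::GatewayResponse",
  "Properties": {
    "RestApiId": { "Ref": "RestApi" },
    "ResponseType": "UNAUTHORIZED",
    "StatusCode": "401",
    "ResponseParameters": {
      "gatewayresponse.header.WWW-Authenticate": "'Bearer resource_metadata=\"https://&amp;lt;your-api-gw-host&amp;gt;/.well-known/oauth-protected-resource\"'"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;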

&lt;h2&gt;
  
  
  Verification by MCP Inspector
&lt;/h2&gt;

&lt;p&gt;You can use MCP Inspector to verify that the authorized remote MCP server works as expected. Set the Transport Type in MCP Inspector to Streamable HTTP and enter the MCP endpoint of your API Gateway (e.g., &lt;code&gt;https://&amp;lt;your-api-gw-host&amp;gt;/mcp&lt;/code&gt;) in the URL field. Then click "Open Auth Setting" and select either "Quick OAuth Flow" or "Guided OAuth Flow" to start the authorization flow against your remote MCP server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8lqh1v2an96yesvbgtge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8lqh1v2an96yesvbgtge.png" alt="MCP Inspector settings" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If successful, you will be able to retrieve metadata, perform dynamic client registration, and execute the authorization code flow to obtain an access token.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vm0dz4zseoy5e6sv3rj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vm0dz4zseoy5e6sv3rj.png" alt="MCP Inspector OAuth testing results" width="800" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also connect and invoke tools from the left-hand menu in MCP Inspector, or from MCP clients such as VS Code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, I demonstrated how to implement a remote MCP server with OAuth authorization using AWS API Gateway and Cognito. By using API Gateway mapping templates to bridge the gaps between the OAuth/DCR specifications and Cognito's proprietary API, you can implement authorization without any additional compute resources.&lt;/p&gt;

&lt;p&gt;For simplicity, both the resource server and the authorization server were implemented on the same API Gateway, but you could also carve the authorization server functionality out into a separate stack built from only API Gateway and Cognito, which would make the MCP server implementation itself even simpler.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>aws</category>
      <category>ai</category>
      <category>oauth</category>
    </item>
  </channel>
</rss>
