Introduction
In the article Introduction to Spring AI, we introduced the sample application to search for conferences. We also exposed its functionality as a set of MCP-compatible tools. In the article Explore Spring AI MCP Server with Streamable HTTP protocol, we ran this application as an MCP-Server locally and connected to it using the MCP Inspector or Amazon Q Developer.
Of course, running the application locally is not an enterprise-ready solution. That's why, in this article, we'll explain how to deploy and run our conference search application on the Amazon Bedrock AgentCore Runtime as the MCP server.
Adjustments of the conference search application
I decided to make some adjustments to the conference search application, which will act as the MCP server by exposing its functionality through tools. Please review the above-mentioned articles to gain a basic understanding of how to use the Spring AI framework with its rich set of features, including the MCP client and server functionality. You can find the updated version of our application in the spring-ai-1.1-conference-search-app-bedrock-agentcore-runtime-mcp-server repository.
I updated the examples to use Java 25 and recent Spring Boot and Spring AI 1.1.x versions. You can update them to the newest minor versions if you wish. There is also the Spring AI 2.x branch for Spring Boot 4 applications; it is currently in development and not yet GA. When it's released, I'll provide my examples for Spring AI 2.x as well.
I also adjusted the static list of conferences to search for. I updated the conference properties to set the (fake) conference dates in the future (the previous ones were mostly in 2025), and I added three additional properties: conferenceId, callForPapersStartDate, and callForPapersEndDate. This enables us to search not only for all conferences, for conferences by topic, and additionally by date range, but also for conferences whose call for papers is still open on a given date. Later in the series, I'll use this functionality to apply to a conference when we extend our application.
With this, the Conference domain class looks like this:
public record Conference (Integer conferenceId, String name, Set<String> topics, String homepage,
LocalDate startDate, LocalDate endDate, LocalDate callForPapersStartDate, LocalDate callForPapersEndDate,
String city, String linkToCallforPapers) {
}
I also added one additional tool in the ConferenceSearchTool.
It's responsible for answering the following prompt: "Please provide me with the list of conferences, including their IDs, with a Java topic happening in 2027, with a call for papers open today." Here is how it's implemented:
@Tool(name = "Conference_Search_Tool_By_Topic_Date_CFP_Open", description = "Search for the conference list for exactly one topic provided, conference dates and the call for papers still open on the given date")
public Set<Conference> search(@ToolParam(description = "conference topic") String topic,
@ToolParam(description = "the conference earliest start date") LocalDate earliestStartDate,
@ToolParam(description = "the conference latest start date") LocalDate latestStartDate,
@ToolParam(description = "the call for papers still open on this date") LocalDate callForPapersStillOpenOnThisDate) {
return this.conferences.stream()
.filter(c -> c.topics().contains(topic))
.filter(c -> isConferenceStartDateInDateRange(c, earliestStartDate, latestStartDate))
.filter(c -> isCallForPapersOpenOnThisDate(c, callForPapersStillOpenOnThisDate))
.collect(Collectors.toSet());
}
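The two filter helpers called in the stream pipeline (isConferenceStartDateInDateRange and isCallForPapersOpenOnThisDate) aren't shown in the snippet above. Here's a minimal, self-contained sketch of how they could look — the method names match the tool code, but the bodies and the trimmed-down Conference record are my assumptions, not the repository's exact implementation:

```java
import java.time.LocalDate;
import java.util.Set;

public class ConferenceDateFilters {

    // Trimmed-down copy of the Conference record with only the fields needed for filtering
    record Conference(Integer conferenceId, String name, Set<String> topics,
                      LocalDate startDate, LocalDate endDate,
                      LocalDate callForPapersStartDate, LocalDate callForPapersEndDate) {
    }

    // The conference matches if its start date lies within [earliestStartDate, latestStartDate]
    static boolean isConferenceStartDateInDateRange(Conference c,
            LocalDate earliestStartDate, LocalDate latestStartDate) {
        return !c.startDate().isBefore(earliestStartDate)
                && !c.startDate().isAfter(latestStartDate);
    }

    // The call for papers is open if the given date lies within [cfp start, cfp end]
    static boolean isCallForPapersOpenOnThisDate(Conference c, LocalDate date) {
        return !date.isBefore(c.callForPapersStartDate())
                && !date.isAfter(c.callForPapersEndDate());
    }
}
```

Both checks treat the range boundaries as inclusive, so a conference starting exactly on the earliest start date, or a call for papers ending exactly on the query date, still matches.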
We can run this updated version of the application locally as the MCP server, as described in the article Explore Spring AI MCP Server with Streamable HTTP protocol.
Deploying the conference search application on the Amazon Bedrock AgentCore Runtime
I've written a whole article series about this service, so I refer you to it for an overview. AgentCore Runtime also lets us deploy and run Model Context Protocol (MCP) servers; see Deploying MCP servers in AgentCore Runtime. In the next article, we'll develop the MCP client capable of talking to our application (the MCP server).
One important thing is to understand How Amazon Bedrock AgentCore supports MCP. AgentCore supports both stateless and stateful streamable-HTTP MCP servers. By default, stateless mode (stateless_http=True) is recommended for basic MCP servers. The platform automatically adds an Mcp-Session-Id header to any request without one, so MCP clients can maintain connection continuity to the same Amazon Bedrock AgentCore Runtime session. Spring AI also supports stateless streamable-HTTP MCP servers. To configure them, we need to add some properties to application.properties:
spring.ai.mcp.server.type=SYNC
spring.ai.mcp.server.protocol=STATELESS
server.port=8000
server.address=0.0.0.0
I tried to automate as much of the IaC as possible with the AWS CDK for Java. Please read the article Getting started with the AWS CDK for an overview and installation instructions. I usually use AWS SAM, but it currently doesn't support Bedrock AgentCore. The CDK AgentCore L2 construct is currently still in alpha. Even with CDK, it turned out to be challenging because of the issues I discovered and the difficulty of fully automating roles, permissions, and Docker image creation with IaC. I'll give some guidance on how to proceed. You can find the whole IaC in the spring-ai-1.1-conference-app-bedrock-agentcore-cdk repository.
Let's first comment out the definition of the GatewayTargetStack stack in the CDKApp class:
public interface CDKApp {
String appName = "spring-ai-conference-search-agentcore";
static void main(String... args) {
var app = new App();
new UserClientPoolStack(app, appName, stackProperties());
new RuntimeWithMCPStack(app, appName, stackProperties());
//new GatewayTargetStack(app, appName, stackProperties());
app.synth();
}
}
It contains the Stack using the AgentCore Gateway service, which we'll explore in the later articles. AgentCore Gateway also provides the functionality of the managed MCP server. I'll then give some recommendations on when to use the Runtime (directly) and when to use the Gateway.
Now, let's take a look at the RuntimeWithMCPStack, which we'll deploy later:
var runtime = Runtime.Builder.create(this, "MCPRuntime-123")
.runtimeName(appName.replace("-", "_")+ "_runtime")
.authorizerConfiguration(RuntimeAuthorizerConfiguration.usingJWT(UserClientPoolStack.COGNITO_DISCOVERY_URL, List.of(UserClientPoolStack.userPoolClient.getUserPoolClientId()), null))
.description("AgentCore Runtime with MCP protocol for running conference search app")
.protocolConfiguration(ProtocolType.MCP)
.agentRuntimeArtifact(agentRuntimeArtifact)
.executionRole(role)
.build();
CfnOutput.Builder.create(this, "RuntimeIdOutput").value(runtime.getAgentRuntimeId()).build();
We start with the easy parts: we give the AgentCore Runtime a name and description, and define the protocol as MCP. Finally, the deployed runtime ID will be placed in the output variable RuntimeIdOutput. We'll see it in the console after the deployment of this stack has finished.
Let's cover the artifact part. You can fully automate the steps of building the Docker image, uploading it to the Amazon Elastic Container Registry (ECR), and referencing the image URL. The AgentRuntimeArtifact class offers various from* methods (fromCode, fromAsset, and so on). I prefer to do those steps separately and only reference the image URI. This is how publishing to ECR works:
# build the application
mvn clean package
# build the Docker image
sudo docker build --no-cache -t spring-ai-1.1-conference-search-bedrock-agentcore-runtime-mcp-server:v1 .
# Login to ECR
aws ecr get-login-password --region {region} | sudo docker login --username AWS --password-stdin {account_id}.dkr.ecr.{region}.amazonaws.com
# Create ECR repository (if it doesn't exist)
aws ecr create-repository --repository-name spring-ai-1.1-conference-search-bedrock-agentcore-runtime-mcp-server --image-scanning-configuration scanOnPush=true --region {region}
# Tag the Docker image
sudo docker tag spring-ai-1.1-conference-search-bedrock-agentcore-runtime-mcp-server:v1 {account_id}.dkr.ecr.{region}.amazonaws.com/spring-ai-1.1-conference-search-bedrock-agentcore-runtime-mcp-server:v1
# Push the Docker Image to the ECR repository
sudo docker push {account_id}.dkr.ecr.{region}.amazonaws.com/spring-ai-1.1-conference-search-bedrock-agentcore-runtime-mcp-server:v1
Please replace {account_id} and {region} with your own values. Also, your version tag may not be v1 but a different one.
We can also build the Docker image by using the Cloud Native Buildpacks support built into Spring Boot instead of a Dockerfile. Just run the Maven goal spring-boot:build-image.
Let's look at the relevant code parts to assign this code artifact to the AgentCore Runtime:
var ecrImageURI=ConventionalDefaults
.getContextVariableValueWithReplacedAccountId(this, "ecrImageURIForConferenceSearchAndApplicationAppAsMCPServer");
var agentRuntimeArtifact = AgentRuntimeArtifact.fromImageUri(ecrImageURI);
Runtime.Builder.create(this, "MCPRuntime-123")
.runtimeName(appName.replace("-", "_")+ "_runtime")
...
.agentRuntimeArtifact(agentRuntimeArtifact)
...
.build();
First, we get the value of the variable ecrImageURIForConferenceSearchAndApplicationAppAsMCPServer, which points to the imageURI in the ECR. This is typically done in the cdk.json:
{
"app": "mvn -e -q compile exec:java",
"context": {
"ecrImageURIForConferenceSearchAndApplicationAppAsMCPServer": "{AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/spring-ai-1.1-conference-search-bedrock-agentcore-runtime-mcp-server:v17",
...
}
}
Let's ignore all the other context variables we defined there for a moment. Please adjust the value so that it matches your image URI. We use the placeholder {AWS_ACCOUNT_ID} there because I don't want to expose my AWS account ID publicly. That's why I wrote the following utility method, getContextVariableValueWithReplacedAccountId, in the ConventionalDefaults class to replace the placeholder with the real value:
static String getContextVariableValueWithReplacedAccountId(Stack stack, String contextVariableName) {
var awsAccountId=(String)stack.getNode().tryGetContext("awsAccountId");
if (awsAccountId == null || awsAccountId.trim().isEmpty()) {
    // Fail fast instead of running into a NullPointerException during the replacement below
    throw new IllegalArgumentException("please provide your aws account id as a context value, for example: cdk deploy -c awsAccountId=1234567890101");
}
var contextVariableValue= getContextVariableValue(stack, contextVariableName);
return replaceAWSAccountID(contextVariableValue, awsAccountId);
}
static String getContextVariableValue(Stack stack, String contextVariableName) {
return (String)stack.getNode().tryGetContext(contextVariableName);
}
private static String replaceAWSAccountID(String configParam, String awsAccountId) {
return configParam.replace("{AWS_ACCOUNT_ID}", awsAccountId);
}
The command to deploy this stack later is:
cdk deploy spring-ai-conference-search-agentcore-runtime-with-mcp-server-stack -c awsAccountId={YOUR_AWS_ACCOUNT_ID}
Don't do it now, as we first need to create inbound authentication.
spring-ai-conference-search-agentcore-runtime-with-mcp-server-stack is the name of our stack, and you need to pass your AWS account ID. Here, I assume you operate everything (AgentCore, IAM Role, ECR) in the same AWS account. Otherwise, you'll need to adjust the code to adapt to your needs.
Now, let's cover defining the authorizer configuration and assigning it to the AgentCore Runtime. This configuration is responsible for the inbound authentication type. As we deploy our application on the Runtime publicly, we need to secure access to it. For the Runtime MCP protocol, there are two inbound authentication types:
- IAM permissions - uses the IAM identity that you used to sign in to the AWS console.
- JSON Web Tokens (JWT) - configure a JWT (like an OAuth token) as the inbound auth to validate incoming token signatures and scopes.
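To make the JWT option more concrete: a caller later obtains such a token from the Cognito token endpoint via the OAuth2 client-credentials flow. Here's a hedged sketch that only builds (and doesn't send) that request with Java's built-in HTTP client — the domain prefix, client ID, secret, and scope are placeholders, not values from this article's setup:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class CognitoTokenRequest {

    // Builds the OAuth2 client-credentials request against the Cognito token endpoint.
    // Note: the scope is passed as-is; strictly, form values should be URL-encoded.
    static HttpRequest build(String domainPrefix, String region,
                             String clientId, String clientSecret, String scope) {
        var tokenEndpoint = "https://" + domainPrefix + ".auth." + region
                + ".amazoncognito.com/oauth2/token";
        // Client ID and secret go into an HTTP Basic Authorization header
        var basicAuth = Base64.getEncoder().encodeToString(
                (clientId + ":" + clientSecret).getBytes(StandardCharsets.UTF_8));
        return HttpRequest.newBuilder()
                .uri(URI.create(tokenEndpoint))
                .header("Authorization", "Basic " + basicAuth)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "grant_type=client_credentials&scope=" + scope))
                .build();
    }
}
```

Sending this request with an HttpClient returns a JSON body containing an access_token, which the MCP client then presents as a Bearer token when calling the Runtime.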
We'll use JWTs issued by Amazon Cognito as the identity provider. For this, we need to define the discovery URL from Cognito and the JWT authorization configuration (the "allowed client IDs"); see the relevant code from the RuntimeWithMCPStack class:
runtime = Runtime.Builder.create(this, "MCPRuntime-123")
.runtimeName(appName.replace("-", "_")+ "_runtime")
....
.authorizerConfiguration(RuntimeAuthorizerConfiguration
.usingJWT(UserClientPoolStack.COGNITO_DISCOVERY_URL,
List.of(UserClientPoolStack.userPoolClient.getUserPoolClientId()), null))
...
.build();
But how do we get those values? For this, we need to set up an Amazon Cognito user pool, a user pool domain (to issue the JWT token), and a user pool client. I define all this in the UserClientPoolStack.
I'll leave it up to you to understand this stack in detail, because it requires Cognito knowledge and is not strictly related to AgentCore. But the basic steps are:
- Create a user pool with the given ID.
- Create a resource server with the scope (I created the full access scope).
- Add the resource server to the already created user pool.
- Construct the discovery URL, which always has a predefined schema: https://cognito-idp.{region}.amazonaws.com/{userPoolId}/.well-known/openid-configuration.
- Create a user pool client with the given name, with only the client credentials flow and the scopes of the resource server we defined for the user pool. Attach it to the already created user pool and generate a secret for it.
- Add the domain to the user pool we created to issue the token.
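The discovery URL from step 4 follows a fixed schema and is simple enough to show directly. A small sketch of it as a helper (the class name is mine; the article constructs this value inline in the stack):

```java
public class CognitoDiscoveryUrl {

    // Cognito's OpenID Connect discovery URL always follows this fixed schema
    static String build(String region, String userPoolId) {
        return "https://cognito-idp." + region + ".amazonaws.com/"
                + userPoolId + "/.well-known/openid-configuration";
    }
}
```

The AgentCore Runtime's JWT authorizer fetches the issuer metadata (including the JSON Web Key Set) from this URL to validate incoming token signatures.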
Initially, I thought I could automate this part completely. We don't even need to run the UserClientPoolStack stack individually, as we made COGNITO_DISCOVERY_URL and userPoolClient public there:
public static String COGNITO_DISCOVERY_URL;
public static UserPoolClient userPoolClient;
We also used them both from the RuntimeWithMCPStack as shown in the example above. With that, CDK understands that the RuntimeWithMCPStack stack depends on the UserClientPoolStack stack and executes the latter automatically first.
What should have worked out of the box is creating the domain prefix from the user pool ID. By definition, the prefix should be the user pool ID in lowercase, with the character _ stripped:
userPool.addDomain("UserPoolForAgentCoreMCPDomain", UserPoolDomainOptions.builder()
.cognitoDomain(CognitoDomainOptions.builder()
.domainPrefix(userPoolId.replace("_", "").toLowerCase()).build()).build());
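The prefix transformation in the snippet above (lowercase, underscore stripped) is small enough to extract into a helper and check in isolation — the class name here is mine:

```java
public class CognitoDomainPrefix {

    // Derives the Cognito domain prefix from the user pool ID:
    // all lowercase, with the underscore removed
    static String fromUserPoolId(String userPoolId) {
        return userPoolId.replace("_", "").toLowerCase();
    }
}
```

For example, a user pool ID like us-east-1_JbjQPT5GJ becomes the prefix us-east-1jbjqpt5gj.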
Cognito doesn't accept setting other domain prefixes. But unfortunately, I encountered an issue with the creation of the user pool domain, which I described here. The only workaround I've found so far is to comment out the user pool domain creation in the UserClientPoolStack:
/*
userPool.addDomain("UserPoolForAgentCoreMCPDomain", UserPoolDomainOptions.builder()
.cognitoDomain(CognitoDomainOptions.builder()
.domainPrefix(cognitoDomainPrefix.replace("_", "")
.toLowerCase()).build()).build());
*/
Then, execute this stack individually: cdk deploy spring-ai-conference-search-agentcore-user-client-pool-stack -c awsAccountId={YOUR_AWS_ACCOUNT_ID}.
Then, grab the value of the output variable CognitoUserPoolIdOutput and configure it in the cdk.json:
{
"app": "mvn -e -q compile exec:java",
"context": {
"cognitoDomainPrefix":"us-east-1_JbjQPT5GJ",
...
}
}
Finally, uncomment the user domain creation, which uses the value of this variable to construct the domain name:
var cognitoDomainPrefix=ConventionalDefaults
.getContextVariableValue(this, "cognitoDomainPrefix");
userPool.addDomain("UserPoolForAgentCoreMCPDomain", UserPoolDomainOptions.builder()
.cognitoDomain(CognitoDomainOptions.builder()
.domainPrefix(cognitoDomainPrefix.replace("_", "")
.toLowerCase()).build()).build());
And re-run the command: cdk deploy spring-ai-conference-search-agentcore-user-client-pool-stack -c awsAccountId={YOUR_AWS_ACCOUNT_ID}.
Once AWS has fixed the issue with the domain prefixes, we can remove this workaround completely. We also won't need to deploy this stack separately.
Now let's cover the last missing part - defining the IAM execution role.
It's very difficult to automate this part, and doing so takes plenty of time. If I find the time, I'll provide the IaC part in the future :). I refer you to the article IAM Permissions for AgentCore Runtime for more information. You can also read my article Amazon Bedrock AgentCore Runtime - Part 2 Using Bedrock AgentCore Runtime Starter Toolkit with Strands Agents SDK, where I explained this part. In that article, we developed an agent in Python with the Strands Agents framework and deployed it on AgentCore Runtime.
Once we have defined the IAM role, we need to configure it in the cdk.json:
{
"app": "mvn -e -q compile exec:java",
"context": {
"roleArnForTheAgentCoreRuntime": "arn:aws:iam::{AWS_ACCOUNT_ID}:role/service-role/AmazonBedrockAgentCoreRuntimeDefaultServiceRole-q8xp1",
....
}
}
We use the placeholder for the AWS account ID as explained above. Here is the relevant code from the RuntimeWithMCPStack to grab the value of the roleArnForTheAgentCoreRuntime variable and set it as the execution role of the Runtime:
var roleArnForTheAgentCoreRuntime=ConventionalDefaults
.getContextVariableValueWithReplacedAccountId(this, "roleArnForTheAgentCoreRuntime");
role=Role.fromRoleArn(this,"roleArnForTheAgentCoreRuntimeRole", roleArnForTheAgentCoreRuntime);
Runtime.Builder.create(this, "MCPRuntime-123")
.runtimeName(appName.replace("-", "_")+ "_runtime")
...
.executionRole(role)
.build();
Now we're completely ready and can deploy the spring-ai-conference-search-agentcore-runtime-with-mcp-server-stack stack with: cdk deploy spring-ai-conference-search-agentcore-runtime-with-mcp-server-stack -c awsAccountId={YOUR_AWS_ACCOUNT_ID}.
We'll need some values for the configuration of the (Spring AI) MCP client, which we'll cover in the next article:
- the user pool name, user pool client name, and auth token resource server ID, which we defined as constants in the UserClientPoolStack. You can also see them in the output of the cdk deploy command.
- the AgentCore Runtime ID from the RuntimeWithMCPStack. This value will only be available after the deployment of that stack. You can also see it in the output of the cdk deploy command (variable name RuntimeIdOutput).
Here is how the AgentCore Runtime looks in the console after its creation:
Conclusion
In this article, we explained how to deploy and run our conference search application on the Amazon Bedrock AgentCore Runtime as the MCP server. In the next article, we'll develop the (MCP-) client, capable of talking to our application running on AgentCore Runtime.
If you like my content, please follow me on GitHub and give my repositories a star!
Please also check out my website for more technical content and upcoming public speaking activities.

Top comments (1)
The part that resonated was the friction around automating the IAM role and the Cognito domain prefix workaround — not because those are inherently interesting details, but because they expose something about the current state of infrastructure-as-code for these newer AWS services. The CDK constructs are still in alpha, and the places where automation breaks down aren't architectural decisions; they're edge cases in the service APIs that CI/CD pipelines don't handle gracefully yet.
What strikes me is that this kind of semi-automated deployment — where you still need a human to click through the IAM console or work around a domain prefix bug — is probably the default experience for most teams adopting AgentCore right now, even though the marketing presents it as a seamless managed service. The gap between the demo path and the production path is real, and it's filled with these small, undocumented papercuts.
I wonder if the long-term answer is better CDK support maturing, or if there's a deeper tension here: managed services that abstract away infrastructure complexity sometimes make the remaining un-automated pieces harder to reason about, because you have less visibility into what's actually happening. When an EC2 instance misbehaves, you have a familiar set of debugging instincts. When an AgentCore runtime's IAM role propagation is delayed, the debugging surface is thinner. Curious if you've developed any reusable patterns for diagnosing issues at that layer, or if it's still mostly trial and error.