DEV Community

Srinivasan Chinnachamy
From Prototype to Runtime: Operationalizing AI Agents on Amazon Bedrock AgentCore

As organizations experiment with agentic AI, many teams quickly realize that building an intelligent agent is only the first step. The real challenge begins when those agents must be deployed, secured, scaled, observed, and continuously updated in enterprise environments.

With the emergence of standardized interaction patterns such as Model Context Protocol (MCP) and agent-to-agent (A2A) communication, AI agents are increasingly treated as long-running, production workloads rather than isolated experiments. This shift places new demands on platform engineering teams, who must extend existing cloud-native practices — such as containerization, CI/CD, networking, and governance — to support AI agents at runtime.

In this blog, I explore how Amazon Bedrock AgentCore, together with the AgentCore Starter Toolkit, enables a structured and repeatable approach to operationalizing AI agents on AWS. Rather than focusing solely on agent logic, the toolkit helps teams define and manage the runtime infrastructure required to host agents securely and at scale.

With the AWS Bedrock AgentCore service, the typical developer journey from agent logic to live execution can be broken down into the following stages:

Stage 1: Developing your codebase for agent and tool calling

Stage 2: Provisioning your infrastructure to deploy agents

Stage 3: Preparing your AgentCore runtime and the AgentCore endpoint

Stage 4: Invocation of the deployed AgentCore runtime ARN

Developers own agent logic and invocation, while platform teams manage infrastructure and AgentCore runtime preparation, enabling a scalable and governed agent platform.

AgentCore Starter Toolkit

Let’s begin by understanding the functional requirements to host an agent on the AWS Bedrock AgentCore Runtime, and look at what this toolkit offers.

  1. AgentCore Runtime acts as a serverless container; AWS manages the underlying infrastructure
  2. Source code, along with a template, can be hosted in an S3 bucket, or
  3. A container image (in ECR) can be built, packaging the source code with the agent logic and the necessary runtime dependencies

To get started programmatically, AWS provides the Bedrock AgentCore Starter Toolkit, which acts as a boilerplate configuration to prepare and host agents on the AgentCore service. It is a Python library and can be imported as shown below.

## install the toolkit
pip install bedrock-agentcore-starter-toolkit

## invoking toolkit functions
from bedrock_agentcore_starter_toolkit import Runtime
agentcore_runtime = Runtime()

This toolkit helps you provision the runtime infrastructure that AI agents require, as shown below.

Upon successful deployment, the toolkit enables the following capabilities:

  • The source code with the agent logic and tool calling is packaged as a .zip file and uploaded to the specified S3 bucket
  • A CodeBuild project, acting as a CI server, is triggered to build a Docker image packaging the latest source code with the necessary dependencies
  • Docker images are pushed to an Elastic Container Registry (ECR) repository and fully version-controlled
  • A Bedrock AgentCore YAML file is created that acts as a blueprint configuration of the target runtime infrastructure
  • The AgentCore Runtime is created/updated with the latest container image, and an AgentCore endpoint (ARN) is generated

AgentCore Runtime doesn’t get a DNS name or static IPs; you get an ARN that acts as the invocation endpoint.
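Because the only address you hold is an ARN, clients call the runtime through the AWS SDK rather than a URL. The sketch below is a minimal, hypothetical illustration: it assumes the boto3 "bedrock-agentcore" data-plane client and its invoke_agent_runtime operation, and the ARN, session ID, and payload schema are all placeholders, not values from this walkthrough.

```python
import json


def build_payload(prompt: str) -> str:
    """Serialize the request body. The expected schema is simply whatever
    your @app.entrypoint handler reads from its payload argument."""
    return json.dumps({"prompt": prompt})


def invoke_agent(agent_arn: str, prompt: str, session_id: str):
    """Invoke the deployed runtime by its ARN (requires AWS credentials).

    boto3 is imported lazily so that build_payload stays usable without
    the AWS SDK installed."""
    import boto3  # assumed available in your deployment environment

    client = boto3.client("bedrock-agentcore")
    return client.invoke_agent_runtime(
        agentRuntimeArn=agent_arn,
        runtimeSessionId=session_id,  # should be long and unique per conversation
        payload=build_payload(prompt),
    )


print(build_payload("What's the weather in Chennai?"))
```

In a real client you would parse the streamed or JSON response body returned by the call; the shape depends on how your entrypoint responds.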

Let’s take a deeper look at two toolkit constructs, configure() and launch(), which are the ones predominantly used to prepare agents with various configurations and deployment options.

1. configure()

This function is mainly used to design the infrastructure blueprint for the AgentCore Runtime.

def configure_bedrock_agentcore(
    agent_name: str,
    entrypoint_path: Path,
    create_mode_enabled: bool = False,
    execution_role: Optional[str] = None,
    code_build_execution_role: Optional[str] = None,
    ecr_repository: Optional[str] = None,
    s3_path: Optional[str] = None,
    container_runtime: Optional[str] = None,
    auto_create_ecr: bool = True,
    auto_create_s3: bool = True,
    auto_create_execution_role: bool = True,
    enable_observability: bool = True,
    memory_mode: Literal["NO_MEMORY", "STM_ONLY", "STM_AND_LTM"] = "NO_MEMORY",
    requirements_file: Optional[str] = None,
    authorizer_configuration: Optional[Dict[str, Any]] = None,
    request_header_configuration: Optional[Dict[str, Any]] = None,
    verbose: bool = False,
    region: Optional[str] = None,
    protocol: Optional[str] = None,
    non_interactive: bool = False,
    source_path: Optional[str] = None,
    vpc_enabled: bool = False,
    vpc_subnets: Optional[List[str]] = None,
    vpc_security_groups: Optional[List[str]] = None,
    idle_timeout: Optional[int] = None,
    max_lifetime: Optional[int] = None,
    deployment_type: str = "direct_code_deploy",
    runtime_type: Optional[str] = None,
    is_generated_by_agentcore_create: bool = False,
) -> ConfigureResult:

It allows you to specify:

  1. The right entry path to your AI agent; generally, this corresponds to the Python file that hosts the @app.entrypoint annotation
  2. The memory mode for the selected agent
  3. Private VPC information, in case you run these agents in your own private VPC
  4. Inbound and outbound authentication controls
  5. Observability pillars

Executing this function generates a YAML file, .bedrock_agentcore.yaml, that holds sufficient information to:

  • Store the deployment state and configuration
  • Track AWS resource ARNs and identifiers
  • Enable future updates and management of the deployed agent

This file serves as the “state file” for your deployed agent, similar to how Terraform uses state files to track infrastructure.

YAML file structure:

We can see clear sections such as agent memory, identity controls, etc., which together form the agentic blueprint/configuration, as shown in the example below.
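As a rough illustration of that structure, a generated state file groups per-agent settings under named sections. The sketch below is hypothetical: the exact keys and nesting depend on your toolkit version, and every value is a placeholder rather than the authoritative schema.

```yaml
# .bedrock_agentcore.yaml -- hypothetical sketch, not the authoritative schema
default_agent: strands_chat_agent
agents:
  strands_chat_agent:
    entrypoint: agents/strands_chat_agent.py
    aws:
      region: us-east-1
      execution_role: arn:aws:iam::123456789012:role/AgentRuntimeRole   # placeholder
      ecr_repository: 123456789012.dkr.ecr.us-east-1.amazonaws.com/strands-chat-agent  # placeholder
      network_configuration:
        network_mode: PUBLIC
      observability:
        enabled: true
    memory:
      mode: NO_MEMORY
    bedrock_agentcore:
      agent_arn: arn:aws:bedrock-agentcore:us-east-1:123456789012:runtime/strands_chat_agent-xxxx  # placeholder
```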

2. launch()

This function takes the configured agents and deploys them onto AWS-managed infrastructure. It begins by dockerizing the application, uploading the container image to the registry, integrating observability, and generating a new endpoint.

def launch_bedrock_agentcore(
    config_path: Path,
    agent_name: Optional[str] = None,
    local: bool = False,
    use_codebuild: bool = True,
    env_vars: Optional[dict] = None,
    auto_update_on_conflict: bool = False,
    console: Optional[Console] = None,
    force_rebuild_deps: bool = False,
)

In general, agents deployed on AgentCore Runtime run as ephemeral containers — not persistent like EC2 — and auto-scale based on request volume.

The toolkit is designed to be idempotent — it can detect existing resources and avoid recreating them — but always back up your YAML before running.
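Since the YAML file is effectively your deployment state, backing it up before each run is cheap insurance. A minimal sketch using only the standard library follows; the backup directory name is an assumption for illustration.

```python
import shutil
from datetime import datetime
from pathlib import Path


def backup_state_file(config_path: str = ".bedrock_agentcore.yaml",
                      backup_dir: str = ".agentcore_backups"):
    """Copy the AgentCore state file to a timestamped backup before deploying.

    Returns the backup path, or None if no state file exists yet
    (i.e., this is the first deployment)."""
    src = Path(config_path)
    if not src.exists():
        return None
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = dest_dir / f"{src.name}.{stamp}.bak"
    shutil.copy2(src, dest)  # copy2 preserves file metadata
    return dest
```

Calling backup_state_file() at the top of a deployment script keeps a recoverable history of the blueprint alongside your code.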

Practical Scenarios:

Let me explain the usage of these constructs with a few practical scenarios.

1. Changing Network Configuration (PUBLIC → PRIVATE)

While the Public network mode is fine for quick PoCs and 'dev' environments, a Private VPC should be considered for higher environments. Use .configure() for infrastructure changes.

from bedrock_agentcore_starter_toolkit import Runtime
agentcore_runtime = Runtime()
agentcore_runtime.configure(
    agent_name="strands_chat_agent",
    ## network_mode="PUBLIC",    
    network_mode="PRIVATE",
    network_mode_config={
        "subnet_ids": ["subnet-0627b3f5exxxxx",  "subnet-015f98bdxxxxxx"],
        "security_group_ids": ["sg-010348223xxxx"]
    }
)

This will:

  • Update .bedrock_agentcore.yaml state file
  • Modify the underlying AWS infrastructure (VPC configuration)
  • Redeploy the agent with new network settings

2. Adding More Tools to Your Agent

As developers accelerate agentic logic development, use .launch() to redeploy your code changes.
This is like updating your application code without changing infrastructure — similar to deploying new application code in containers without modifying the Terraform infrastructure definition.

# After adding tools to agents/strands_chat_agent.py
from bedrock_agentcore_starter_toolkit import Runtime
agentcore_runtime = Runtime()
agentcore_runtime.launch(agent_name="strands_chat_agent")

This will:

  • Rebuild the container with your updated agent code
  • Push the new image to ECR and deploy the agent with the latest code, versioning the images
  • Make no infrastructure changes — it uses the existing network, IAM roles, etc.

Walkthrough:

For a quick walkthrough, I have created a simple chat agent built on the Strands framework, with simple tool calls and a frontend built with Streamlit, hosted in my public GitHub repository.

Bedrock_AgentCore_Strands_ChatAgent/
├── agents/
│   └── strands_chat_agent.py    # Main agent implementation
├── tools/
│   ├── __init__.py
│   ├── random_tools.py        # Random generation tools
│   ├── text_tools.py          # Text processing tools
│   ├── time_tool.py           # Time-related tools
│   └── weather.py             # Weather information tool
├── streamlit_app.py           # Web interface
├── requirements.txt           # Python dependencies
└── README.md                  # This file
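Each file under tools/ contributes plain Python functions that the Strands framework exposes to the agent via its tool decorator. Stripped of that decorator, the core logic of a text tool might look like the hypothetical sketch below; the function names and behavior are illustrative, not taken verbatim from the repository.

```python
def count_words(text: str) -> int:
    """Return the number of whitespace-separated words in the input.

    In the actual agent this function would carry the framework's tool
    decorator plus a docstring the model uses to decide when to call it;
    the logic itself is ordinary Python."""
    return len(text.split())


def reverse_text(text: str) -> str:
    """Return the input with its characters reversed."""
    return text[::-1]
```

Keeping tool bodies as plain, testable functions like this makes it easy to unit-test them independently of the agent runtime.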

Repository:
https://github.com/SrinivasanChinnachamy/Bedrock_AgentCore_Strands_ChatAgent

Deployment Script:

I have wrapped both the configure() and launch() functions in a deploy.sh file, as shown below. At this moment, we are leveraging the AWS credentials in the local profile to execute the script; this can easily be orchestrated by a dedicated pipeline later.

#!/usr/bin/env python3

# Deployment script for Strands Agent with Bedrock AgentCore
print("Starting deployment process...")

# Importing necessary libraries and establishing session
from bedrock_agentcore_starter_toolkit import Runtime
from boto3.session import Session

boto_session = Session()
region = boto_session.region_name
agentcore_runtime = Runtime()

# Begin AgentCore Configuration
agent_name = "strands_chat_agent"
configuration = agentcore_runtime.configure(
    entrypoint="agents/strands_chat_agent.py",
    auto_create_execution_role=True,
    auto_create_ecr=True,
    requirements_file="requirements.txt",
    region=region,
    agent_name=agent_name
)
print(configuration)

# Trigger AgentCore deployment and Launch Runtime
launch_agent = agentcore_runtime.launch()
print(f"Agent launched: {launch_agent}")

Upon successful execution, you can observe that all the relevant components, along with the observability tools, are provisioned.

Updating agents:

Now, let’s add one more tool to enhance the agent’s capabilities in the GitHub repository and re-run the deploy.sh script to trigger a new deployment. Refer to the commits in the repository for the details.

Upon successful execution, we can see the updated version on the Bedrock AgentCore Runtime.

  1. The latest version is made the default endpoint to serve traffic.

During this process, all the supporting components are updated to their next versions.

  1. The source code is zipped and versioned in the S3 bucket

  2. A CodeBuild event is triggered for every commit to containerize the source code

  3. Container images are stored and versioned in ECR

Conclusion:

We have seen that Amazon Bedrock AgentCore, together with the AgentCore Starter Toolkit, provides a structured and scalable approach to operationalizing AI agents across environments. Whether you are enabling private VPC deployments, rolling out new tools, or building CI/CD pipelines for agent updates, the AgentCore Starter Toolkit offers a pragmatic path from code to invocation.

The configure() and launch() constructs, backed by the stateful .bedrock_agentcore.yaml file, clearly separate the responsibilities. Developers can focus on agent logic, tool integrations, and invocation patterns, while platform teams manage networking, security, observability, and lifecycle controls.

I know this is quite a lengthy blog — thanks for reading it till the end.
