Arun Kumar Singh

Getting Started with AWS Bedrock AgentCore: Part 1

⚠️ Important: Amazon Bedrock AgentCore is in preview release and is subject to change. In this post, I will start from an AWS example and extend it with a few more things.

What is Bedrock AgentCore

Amazon Bedrock AgentCore is a complete set of capabilities to deploy and operate agents securely, at scale. It is a serverless runtime purpose-built for deploying and scaling dynamic AI agents and tools using any open-source framework including Strands Agents, LangChain, LangGraph and CrewAI.

It supports any protocol such as MCP and A2A, and any model from any provider including Amazon Bedrock, OpenAI, Gemini, etc. Developers can securely and reliably run any type of agent including multi-modal, real-time, or long-running agents.

The platform operates on a modular architecture with a set of core components that can be used independently or together:

AgentCore Runtime
AgentCore Memory
AgentCore Identity
AgentCore Gateway
AgentCore Observability
AgentCore Built-in Tools

I won't go through each and every component mentioned above, but I would like to cover the heart of this ecosystem: the runtime.

Agent runtime
It represents a containerized application that processes user inputs, maintains context, and executes actions using AI capabilities. When you create an agent, you define its behavior, capabilities, and the tools it can access.

Protocol support

HTTP for simple request/response patterns
Model Context Protocol (MCP) for standardized agent-tool interactions

Please note that this service is available in only four regions at the time of writing this post.

[Screenshot: the console message shown in an unsupported region]

How to deploy?
There are a couple of ways to use Bedrock AgentCore. Let’s start with the most basic one: the Bedrock AgentCore Runtime starter toolkit.

Amazon Bedrock AgentCore Runtime starter toolkit

This approach is a good fit when you want to quickly deploy existing agent functions.

Let’s build an agent using the Strands framework and deploy it with the toolkit.

To get started with the toolkit, you need the following →

To run an agent or tool in AgentCore Runtime, you need an AWS Identity and Access Management (IAM) execution role. The starter toolkit packages your agent as a container image, so an ECR repository is also required. I created both using CDK (make sure you update your account ID in the scripts wherever required).

Role and ECR

Create the required role and ECR repository using the CDK script →

https://github.com/arunksingh16/ai-projects/tree/main/aws-bedrock-agentcore-starter/infra
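
Whichever way you create the execution role, it must trust the AgentCore service so the runtime can assume it. A minimal sketch of the trust policy, assuming the `bedrock-agentcore.amazonaws.com` service principal (check the current AWS documentation for your region):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "bedrock-agentcore.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

On top of this, the role needs permissions to pull the image from ECR, write CloudWatch logs, and invoke your Bedrock model; the CDK script in the repository above covers that.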

Once you are done with the role and ECR repository, let's deploy the AgentCore configuration. To run the config in the folder, you need the agentcore utility, which you can install via →

pip install bedrock-agentcore-starter-toolkit


Now you are ready; make sure you have set your AWS environment variables and credentials.
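
The configure command below points at an `agent.py` entrypoint, which the post assumes you already have. As a reference, here is a minimal sketch of what it might look like — the import paths and `Agent` defaults are my assumptions from the Strands and AgentCore SDK docs, so verify them against the versions you have installed:

```python
# agent.py — a minimal Strands agent wrapped in an AgentCore runtime app
from strands import Agent
from bedrock_agentcore.runtime import BedrockAgentCoreApp

app = BedrockAgentCoreApp()
agent = Agent()  # uses the default Bedrock-hosted model

@app.entrypoint
def invoke(payload):
    """Answer the user's question."""
    user_message = payload.get("prompt", "")
    result = agent(user_message)
    return {"result": result.message}

if __name__ == "__main__":
    app.run()
```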

# This command will ask for ECR repository details and auth details.
# Note: I used the default IAM-based auth.
agentcore configure --entrypoint agent.py -n strands_agentcore -r us-east-1 -er "arn:aws:iam::<accountID>:role/MyBedrockAgentRole"

The command will generate a Dockerfile, a .dockerignore, and a .bedrock_agentcore.yaml configuration file in the same folder.

Please note

Amazon Bedrock AgentCore requires ARM64 architecture for all deployed agents.
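
The generated Dockerfile pins that architecture. Roughly, it looks like this — a sketch only; the base image and layout the toolkit actually emits may differ between versions:

```dockerfile
# Build for ARM64, as required by the AgentCore runtime
FROM --platform=linux/arm64 public.ecr.aws/docker/library/python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "agent.py"]
```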

Launch your agent:

# agentcore launch
agentcore launch --codebuild
# The above command builds a Docker image and pushes it to the repository.
# I selected the CodeBuild option, so it uses an AWS CodeBuild
# pipeline for the build.
# Finally, it deploys the image to the AgentCore runtime.

I noticed that the deployment initially started failing. In the latest version of the Bedrock AgentCore starter toolkit, there’s a new --codebuild flag for the agentcore launch command, which enables ARM64 container builds through AWS CodeBuild. You can watch the build process directly in the AWS console.

Upon reviewing the logs, I realized it was referencing the wrong ECR repository. I updated the ECR value in the .bedrock_agentcore.yaml file and re-ran the agentcore launch command; this time, it worked successfully.

You can view your deployments in the AgentCore runtime console.


Time to run some tests from the CLI.

(myenv) ➜  my-strands-agent agentcore invoke '{"prompt": "What is 2+2?"}'
Payload:
{
  "prompt": "What is 2+2?"
}
Invoking BedrockAgentCore agent 'strands_agentcore' via cloud endpoint
Session ID: aa53199a-1760-4a60-ba62-5a8d511b216f

Response:
{
  "ResponseMetadata": {
    "RequestId": "ebacaf55-1750-434b-963c-41bfea5cde8e",
    "HTTPStatusCode": 200,
    "HTTPHeaders": {
      "date": "Mon, 04 Aug 2025 10:09:11 GMT",
      "content-type": "application/json",
      "transfer-encoding": "chunked",
      "connection": "keep-alive",
      "x-amzn-requestid": "ebacaf55-1750-434b-963c-41bfea5cde8e",
      "baggage": "Self=1-689086c2-61df3327557ffd33689d8f7b,session.id=aa53199a-1760-4a60-ba62-5a8d511b216f",
      "x-amzn-bedrock-agentcore-runtime-session-id": "aa53199a-1760-4a60-ba62-5a8d511b216f",
      "x-amzn-trace-id": "Root=1-689086c2-37bd21e07a967574243b50d0;Self=1-689086c2-61df3327557ffd33689d8f7b"
    },
    "RetryAttempts": 0
  },
  "runtimeSessionId": "aa53199a-1760-4a60-ba62-5a8d511b216f",
  "traceId": "Root=1-689086c2-37bd21e07a967574243b50d0;Self=1-689086c2-61df3327557ffd33689d8f7b",
  "baggage": "Self=1-689086c2-61df3327557ffd33689d8f7b,session.id=aa53199a-1760-4a60-ba62-5a8d511b216f",
  "contentType": "application/json",
  "statusCode": 200,
  "response": [
    "b'\"The answer is **4**.\"'"
  ]
}
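
One thing worth noting about the output above: the `response` field is a list of stringified bytes payloads, each wrapping a JSON value. If you consume the CLI output programmatically, you can unwrap it like this (a sketch matching the shape shown above):

```python
import ast
import json

def unwrap_response(parts):
    """Decode a list of stringified bytes payloads (as printed by
    `agentcore invoke`) into plain Python values."""
    decoded = []
    for part in parts:
        raw = ast.literal_eval(part)      # "b'...'" -> b'...'
        decoded.append(json.loads(raw))   # b'"text"' -> "text"
    return decoded

print(unwrap_response(["b'\"The answer is **4**.\"'"]))
# ['The answer is **4**.']
```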

This was a valuable exercise. Another important takeaway is that performance and monitoring data can be viewed in the CloudWatch GenAI Observability dashboard, a newly released AWS feature.

Can I put a Streamlit frontend in front of it?

Let’s modify the agent code to handle streaming responses first. The Strands Agents SDK includes a special method, stream_async(prompt), that enables streaming. When you use this method, the agent starts processing the request (such as a user question), and as each part of the response (a word, sentence, or agent action) is generated, it is sent back immediately to the requester.
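
To see the shape of this pattern without the SDK, here is a self-contained sketch: a stub async generator stands in for agent.stream_async, and a consumer assembles the chunks as they arrive. The event dictionaries here are simplified; real Strands events carry richer fields.

```python
import asyncio

async def stream_async(prompt):
    """Stub standing in for agent.stream_async: yields response chunks."""
    for chunk in ["The ", "answer ", "is ", "4."]:
        await asyncio.sleep(0)  # simulate incremental generation
        yield {"data": chunk}

async def consume(prompt):
    parts = []
    async for event in stream_async(prompt):
        parts.append(event["data"])  # each chunk is usable the moment it arrives
    return "".join(parts)

print(asyncio.run(consume("What is 2+2?")))
# The answer is 4.
```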

So update the agent code entrypoint as follows →

from strands import Agent
from bedrock_agentcore.runtime import BedrockAgentCoreApp

app = BedrockAgentCoreApp()
agent = Agent()

@app.entrypoint
async def invoke(payload):
    """Answer the user's question with streaming."""
    user_message = payload.get("prompt", "")

    # Generate a response with streaming
    stream = agent.stream_async(user_message)
    async for event in stream:
        yield event

Deploy the agent again using the same command as earlier:

agentcore launch --codebuild

Create a Streamlit frontend from the code linked below, and make sure you update the ARN for your AgentCore runtime.

https://github.com/arunksingh16/ai-projects/blob/main/aws-bedrock-agentcore-starter/fronend.py

Run the frontend:

streamlit run frontend.py
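
Under IAM auth, the frontend reads the streamed response body, which typically arrives as server-sent-event lines. A hedged sketch of that parsing step — the exact wire format is my assumption, so adjust it to what your runtime actually returns:

```python
import json

def parse_sse_lines(lines):
    """Extract the JSON payloads from 'data: ...' lines of an SSE stream."""
    chunks = []
    for line in lines:
        line = line.strip()
        if line.startswith("data:"):
            chunks.append(json.loads(line[len("data:"):].strip()))
    return chunks

sample = ['data: "Hello"', "", 'data: " world"']
print("".join(parse_sse_lines(sample)))
# Hello world
```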

Now pass your prompt here!

[Screenshot: the Streamlit frontend]

How is it different from the Strands framework?

While both AgentCore and Strands Agents are AWS offerings for building AI agents, they serve distinctly different purposes and complement each other rather than compete.

Strands Agents is an open-source SDK that focuses on simplifying the development of AI agents through a model-driven approach.

AgentCore, in contrast, is an infrastructure and deployment platform that provides the production-grade foundation for running any agent framework, including Strands. The relationship is complementary.

As one technical analysis noted: “Strands gives you the tools to build the agent; AgentCore gives you the infrastructure to run it at scale.” AgentCore explicitly supports Strands Agents as one of its compatible frameworks, allowing developers to build with Strands and deploy on AgentCore infrastructure.

Key Features in AgentCore

  1. Framework Agnostic: Works with any AI agent framework (Strands, LangGraph, CrewAI, AutoGen, or custom logic)
  2. Enterprise Infrastructure: AWS-managed infrastructure with built-in gateway and memory integrations
  3. Memory Management: Sophisticated conversation memory with semantic strategies and branching
  4. Async Task Management: Built-in support for long-running background tasks
  5. Health Monitoring: Real-time health status and monitoring capabilities

I plan to cover most of these in upcoming posts.

Till then, stay safe and take care.

