Plug & Productionize Your AI Agents with AWS Bedrock AgentCore

Deploying a Local AI Agent to AWS Bedrock AgentCore

This post demonstrates a streamlined approach to deploying locally developed AI agents using AWS Bedrock AgentCore. We'll build a simple single-node LLM agent, extend it with real-time web search, and deploy it seamlessly.

Overview

I have created a simple LLM agent using OpenAI and extended it with DuckDuckGoSearchResults (LangChain) to fetch current internet information. Once tested locally, the agent can be deployed to AWS Bedrock AgentCore, giving you:

  • Automatic scaling
  • Serverless execution
  • Fully managed runtime

Tech Stack

  • OpenAI Model: gpt-5-nano
  • Agent Framework: LangGraph
  • Deployment: AWS Bedrock AgentCore
  • Tooling: DuckDuckGoSearchResults (LangChain)

Follow along with the GitHub repo: https://github.com/sampathkaran/langgraph-openai-agentcore-demo

Prerequisites

Before starting, make sure you have:

  1. Python 3.10 installed
  2. AWS CLI configured with credentials
  3. Miniconda installed

Local Setup & Testing

  • Create a virtual environment.
conda create -n agentcore python=3.10
conda activate agentcore
  • Install dependencies.
cd agentcore-deployment
pip install -r requirements.txt
  • Store your OpenAI API key in AWS Secrets Manager (a sketch of reading it back at runtime follows this list).
  1. Go to AWS Console → Secrets Manager → Store a new secret
  2. Select Other type of secret
  3. Key name: api-key, Value: your OpenAI API key
  4. Name the secret: my-api-key
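
For reference, here's a minimal sketch of how agent.py can read the key back at runtime. The helper name get_openai_api_key is illustrative; it assumes the secret name (my-api-key) and key (api-key) created above.

import json
import os

import boto3

def get_openai_api_key(secret_name: str = "my-api-key") -> str:
    # Fetch the secret and parse its JSON body, which looks like
    # {"api-key": "<your OpenAI key>"} given the setup above.
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response["SecretString"])["api-key"]

# Make the key visible to the OpenAI client used by the agent.
os.environ["OPENAI_API_KEY"] = get_openai_api_key()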
  • Now we can test the code locally. Edit agent.py: uncomment the print statement for a test query, and comment out aws_app.run().
print(langgraph_bedrock({"messages": "Who won the Formula 1 Singapore 2025 race? Give a brief answer"}))

Invoke the script
python agent.py

Architecture Overview

Here’s what happens when you execute a query:

  • The LLM checks whether it already has the answer.
  • If not, it calls the DuckDuckGoSearchResults tool to fetch current information.
  • The agent may loop through the search until it has sufficient data.
  • Once complete, the LLM generates the final response.
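
Conceptually, the graph behind this loop can be sketched with LangGraph's prebuilt ReAct helper. This is a sketch rather than the repo's exact code (which may wire the graph by hand); the model and tool come from the tech stack above.

from langchain_community.tools import DuckDuckGoSearchResults
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

llm = ChatOpenAI(model="gpt-5-nano")
search = DuckDuckGoSearchResults()

# create_react_agent wires the loop: LLM -> (optional) tool call -> LLM -> ... -> final answer.
agent = create_react_agent(llm, tools=[search])

result = agent.invoke(
    {"messages": [("user", "Who won the Formula 1 Singapore 2025 race?")]}
)
print(result["messages"][-1].content)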

Deploying to AWS Bedrock AgentCore

Amazon Bedrock AgentCore provides the following features:

  1. Serverless runtime
  2. Automatic scaling
  3. Session management
  4. Security isolation

We'll use the AgentCore Starter Toolkit CLI to deploy our agent.

Steps to Deploy

  • Update requirements.txt

bedrock-agentcore<=0.1.5
bedrock-agentcore-starter-toolkit==0.1.14

  • Import the AgentCore runtime library and create the app
from bedrock_agentcore.runtime import BedrockAgentCoreApp

aws_app = BedrockAgentCoreApp()
  • Add the entrypoint decorator

@aws_app.entrypoint
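
Applied to the handler, it looks roughly like this. This is a sketch: the payload handling in the repo's agent.py may differ, and agent here is the LangGraph agent object from earlier.

@aws_app.entrypoint
def langgraph_bedrock(payload):
    # AgentCore passes the invoke payload in as a dict; the key you read
    # here must match the payload you send via agentcore invoke.
    user_message = payload.get("messages", "")
    result = agent.invoke({"messages": [("user", user_message)]})
    return result["messages"][-1].content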

  • Enable the runtime server
# print(langgraph_bedrock(...))  # Comment out local print
aws_app.run()  # Starts HTTP server on port 8080
  • Configure AgentCore CLI

agentcore configure --entrypoint agent.py

Accept the default options (IAM roles, permissions, etc.).
This generates a Dockerfile and a deployment YAML file.

  • Launch the agent

agentcore launch

  • Attach a policy to the agent's IAM execution role granting access to AWS Secrets Manager (secret-policy.json); a sample policy is shown below
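
A minimal secret-policy.json might look like this; the region, account ID, and suffix in the ARN are placeholders for your own values:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:<region>:<account-id>:secret:my-api-key-*"
    }
  ]
}

You can attach it to the execution role that agentcore configure created, for example:

aws iam put-role-policy --role-name <agentcore-execution-role> --policy-name secrets-access --policy-document file://secret-policy.json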

  • Verify your deployed agent in the AWS console

  • Invoke the agent

agentcore invoke '{"message": "Hello"}'
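
You can also pass the same search-backed question used in the local test:

agentcore invoke '{"message": "Who won the Formula 1 Singapore 2025 race? Give a brief answer"}'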

Conclusion

Using AWS Bedrock AgentCore, you can:

  • Take a local LLM agent to production
  • Enable serverless scaling and runtime management
  • Integrate external tools for real-time information

This approach keeps your AI agent lightweight locally while making it fully production-ready in the cloud.
