Deploying a Local AI Agent to AWS Bedrock AgentCore
This post demonstrates a streamlined approach to deploying locally developed AI agents using AWS Bedrock AgentCore. We'll build a simple single-node LLM agent, extend it with real-time web search, and deploy it seamlessly.
Overview
I created a simple LLM agent using OpenAI and extended it with DuckDuckGoSearchResults (LangChain) to fetch current information from the web; a minimal sketch of the agent follows the list below. Once tested locally, the agent can be deployed to AWS Bedrock AgentCore, giving you:
- Automatic scaling
- Serverless execution
- Fully managed runtime
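Here's a minimal sketch of such an agent using LangGraph's prebuilt ReAct helper. Treat it as a hedged outline, not the repo's exact agent.py:

```python
from langchain_community.tools import DuckDuckGoSearchResults
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

# Assumes OPENAI_API_KEY is set and the duckduckgo-search package is installed.
agent = create_react_agent(
    ChatOpenAI(model="gpt-5-nano"),
    tools=[DuckDuckGoSearchResults()],
)

result = agent.invoke({"messages": [("user", "Who won the Formula 1 Singapore 2025 race?")]})
print(result["messages"][-1].content)
```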
Tech Stack
- **OpenAI Model**: gpt-5-nano
- **Agent Framework**: LangGraph
- **Deployment**: AWS Bedrock AgentCore
- **Tooling**: DuckDuckGoSearchResults (LangChain)
Follow along with the GitHub repo: https://github.com/sampathkaran/langgraph-openai-agentcore-demo
Prerequisites
Before starting, make sure you have:
- Python 3.10 installed
- AWS CLI configured with credentials
- Miniconda installed
Local Setup & Testing
- Create a virtual environment.

```bash
conda create -n agentcore python=3.10
conda activate agentcore
```

- Install the dependencies.

```bash
cd agentcore-deployment
pip install -r requirements.txt
```
- Store your OpenAI API key in AWS Secrets Manager:
  - Go to AWS Console → Secrets Manager → Store a new secret
  - Select Other type of secret
  - Key name: api-key, Value: your OpenAI API key
  - Name the secret: my-api-key
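To confirm the secret is retrievable (and to mirror how the agent can load the key at runtime), here's a small boto3 check. The secret name and key match the steps above; the region is an assumption:

```python
import json
import boto3

# Region is illustrative; use the one where you stored the secret.
client = boto3.client("secretsmanager", region_name="us-east-1")
response = client.get_secret_value(SecretId="my-api-key")
openai_api_key = json.loads(response["SecretString"])["api-key"]
```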
- Now we can test the code locally. Edit agent.py: uncomment the print statement for a test query, and comment out aws_app.run().

```python
print(langgraph_bedrock({"messages": "Who won the Formula 1 Singapore 2025 race? Give a brief answer"}))
# aws_app.run()
```
Invoke the script:

```bash
python agent.py
```
Architecture Overview
Here’s what happens when you execute a query:
- The LLM checks if it already has the answer.
- If not, it calls the DuckDuckGoSearchResults tool to fetch current information.
- The agent may loop through the search until it has sufficient data.
- Once complete, the LLM generates the final response.
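In LangGraph terms, this loop is a conditional edge between the model node and a tool node. Here is a minimal sketch assuming the model and tool from the tech stack; the repo's graph may be structured differently:

```python
from langchain_community.tools import DuckDuckGoSearchResults
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

tools = [DuckDuckGoSearchResults()]
llm = ChatOpenAI(model="gpt-5-nano").bind_tools(tools)

def call_model(state: MessagesState):
    # The model either answers directly or emits a tool call.
    return {"messages": [llm.invoke(state["messages"])]}

graph = StateGraph(MessagesState)
graph.add_node("agent", call_model)
graph.add_node("tools", ToolNode(tools))
graph.add_edge(START, "agent")
# Route to the search tool when the model asks for it; otherwise end.
graph.add_conditional_edges("agent", tools_condition)
graph.add_edge("tools", "agent")  # loop back until no more tool calls
app = graph.compile()
```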
Deploying to AWS Bedrock AgentCore
Amazon Bedrock AgentCore provides the following features:
- Serverless runtime
- Automatic scaling
- Session management
- Security isolation
We'll use the AgentCore Starter Toolkit CLI to deploy our agent.
Steps to Deploy
- Update requirements.txt:

```text
bedrock-agentcore<=0.1.5
bedrock-agentcore-starter-toolkit==0.1.14
```
- Import the AgentCore runtime library:

```python
from bedrock_agentcore.runtime import BedrockAgentCoreApp

aws_app = BedrockAgentCoreApp()
```
- Add the entrypoint decorator to the handler function:

```python
@aws_app.entrypoint
def langgraph_bedrock(payload):
    ...
```
- Enable the runtime server:

```python
# print(langgraph_bedrock(...))  # comment out the local test call
aws_app.run()  # starts the HTTP server on port 8080
```
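Putting these pieces together, agent.py ends up shaped roughly like this. The payload key is inferred from the local test call earlier, and `app` is the compiled graph from the architecture sketch; treat it as a hedged outline rather than the repo's exact code:

```python
from bedrock_agentcore.runtime import BedrockAgentCoreApp

aws_app = BedrockAgentCoreApp()

@aws_app.entrypoint
def langgraph_bedrock(payload):
    # "app" is the compiled LangGraph from the architecture sketch above.
    result = app.invoke({"messages": [("user", payload["messages"])]})
    return result["messages"][-1].content  # final model answer

if __name__ == "__main__":
    aws_app.run()  # serves HTTP on port 8080
```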
- Configure the AgentCore CLI:

```bash
agentcore configure --entrypoint agent.py
```

Accept the default options (IAM roles, permissions, etc.). This generates a Dockerfile and a deployment YAML.
- Launch the agent:

```bash
agentcore launch
```
Attach the IAM role permission to access AWS Secrets Manager (secret-policy.json); one way to do this is sketched below.
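As a hedged example, you can attach an inline policy to the runtime's execution role with boto3. The role name and secret ARN below are placeholders, and the repo's secret-policy.json may differ:

```python
import json
import boto3

iam = boto3.client("iam")

# Minimal policy: read-only access to the single secret created earlier.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "secretsmanager:GetSecretValue",
        "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-api-key-*",  # placeholder ARN
    }],
}

iam.put_role_policy(
    RoleName="AgentCoreExecutionRole",  # placeholder: the role created by `agentcore configure`
    PolicyName="secret-access",
    PolicyDocument=json.dumps(policy),
)
```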
Verify your deployed agent in the AWS console
Invoke the agent:

```bash
agentcore invoke '{"message": "Hello"}'
```
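You can also call the deployed runtime programmatically. This sketch assumes boto3's bedrock-agentcore client and its invoke_agent_runtime operation; the ARN and session ID are placeholders, and the exact response shape may vary by SDK version, so check the current docs:

```python
import json
import boto3

client = boto3.client("bedrock-agentcore", region_name="us-east-1")

response = client.invoke_agent_runtime(
    agentRuntimeArn="arn:aws:bedrock-agentcore:us-east-1:123456789012:runtime/my-agent",  # placeholder
    runtimeSessionId="demo-session-000000000000000000000000000000",  # must be long and unique
    payload=json.dumps({"message": "Hello"}),
)

# The result is returned as a streaming body.
print(response["response"].read().decode())
```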
Conclusion
Using AWS Bedrock AgentCore, you can:
- Take a local LLM agent to production
- Enable serverless scaling and runtime management
- Integrate external tools for real-time information
This approach keeps your AI agent lightweight locally while making it fully production-ready in the cloud.