AWS just made building AI agents a lot easier for enterprises. On October 13, 2025, Amazon Bedrock AgentCore reached general availability. This marks a turning point for teams struggling to move AI projects from prototype to production.
The AI agents market hit $7.63 billion in 2025, according to Grand View Research. And Gartner predicts that 40% of enterprise applications will integrate task-specific AI agents by the end of 2026. That's up from less than 5% in early 2025.
Here's what AgentCore brings to the table and why it matters for your business.
What Is AWS Bedrock AgentCore?
AWS Bedrock AgentCore is a managed platform for building, deploying, and operating AI agents at enterprise scale. Think of it as the infrastructure layer that handles everything between your AI agent's logic and your production environment.
Before AgentCore, teams faced a common problem. Building a proof-of-concept AI agent took weeks. But getting that agent production-ready? That often took months. Security, session management, identity controls, and monitoring all needed custom solutions.
AgentCore removes that bottleneck.
Core Components
The platform includes six modular services you can use independently or together:
- Runtime: Managed execution environment with session isolation
- Gateway: Transforms your existing APIs into agent-compatible tools
- Memory: Short-term and long-term context storage for agents
- Identity: OAuth-based authentication and secure token storage
- Code Interpreter: Sandboxed Python execution environment
- Observability: CloudWatch integration for monitoring and debugging
Each service is priced on consumption. No upfront costs, no minimum fees.
Framework Flexibility
One thing that stands out? AgentCore works with any framework. LangGraph, CrewAI, or your own custom setup. You're not locked into a specific agent architecture.
This matters because agent development is still evolving. What works today might not be the best approach next year. AgentCore gives you flexibility to adapt.
Key Features in the GA Release
The October 2025 GA release brought several capabilities that were previously missing from AWS's agentic AI offerings.
VPC and PrivateLink Support
For enterprises with strict security requirements, this is big. Your agents can now run entirely within your Virtual Private Cloud. No data leaves your network boundary unless you explicitly allow it.
AWS PrivateLink ensures that traffic between your agents and other AWS services stays on the AWS backbone. This reduces exposure to public internet risks.
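If you want to see what that looks like in practice, the setup follows the standard interface-endpoint pattern. Here's a minimal boto3 sketch; the service name string and resource IDs are placeholders, so confirm the exact AgentCore endpoint service name for your region in the AWS documentation.

```python
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Create an interface VPC endpoint so agent traffic stays on the AWS backbone.
# The ServiceName and resource IDs below are placeholders, not verified values.
response = ec2.create_vpc_endpoint(
    VpcEndpointType='Interface',
    VpcId='vpc-0123456789abcdef0',                             # your VPC
    ServiceName='com.amazonaws.us-east-1.bedrock-agentcore',   # placeholder service name
    SubnetIds=['subnet-0123456789abcdef0'],                    # private subnets
    SecurityGroupIds=['sg-0123456789abcdef0'],                 # allow HTTPS from your agents
    PrivateDnsEnabled=True,
)

print(response['VpcEndpoint']['VpcEndpointId'])
```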
Eight-Hour Execution Windows
Most AI interactions finish in seconds. But enterprise workflows often take longer. AgentCore Runtime supports execution windows up to eight hours. That's enough time for complex research tasks, multi-step approval workflows, or data processing pipelines.
And long execution windows aren't the only runtime improvement.
Each session runs in complete isolation. One agent's execution can't affect another's. This prevents side effects that plague shared compute environments.
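For a sense of scale, an eight-hour window is 28,800 seconds. Here's what that might look like using the same illustrative session_config shape as the sample code later in this post; the field names are assumptions, not a confirmed SDK schema.

```python
# Illustrative only: field names mirror the sample later in this post,
# not a confirmed SDK schema.
session_config = {
    'max_execution_time': 8 * 60 * 60,   # 28,800 seconds, the GA ceiling
    'enable_isolation': True,            # each session runs in its own sandbox
    'memory_strategy': 'short_term'
}
```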
Agent-to-Agent Protocol (A2A)
Building multi-agent systems just got easier. The A2A protocol enables direct communication between agents without routing through a central orchestrator.
Picture a customer service scenario. A front-line agent handles the initial query. When specialized help is needed, it hands off directly to a billing agent or technical support agent. No bottlenecks. No manual routing rules.
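To make the handoff pattern concrete, here's a hedged sketch. The helper names and message shape are hypothetical, not the actual A2A wire format; the point is that the front-line agent passes the full conversation context straight to the specialist instead of going back through a central router.

```python
# Hypothetical sketch of an A2A-style handoff. Names and message shape are
# illustrative only, not the actual A2A wire format.
def pick_specialist(query: str) -> str:
    """Choose a specialist agent with a simple keyword check."""
    if 'invoice' in query.lower() or 'charge' in query.lower():
        return 'billing-agent'
    return 'tech-support-agent'

def handle_customer_query(query: str, context: dict, send_a2a_message) -> dict:
    """Front-line agent: hand off directly, carrying context with the task."""
    handoff = {
        'target_agent': pick_specialist(query),
        'task': query,
        'context': context,   # conversation history travels with the handoff
    }
    # send_a2a_message stands in for whatever A2A client your framework
    # exposes; no central orchestrator sits in the middle.
    return send_a2a_message(handoff)
```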
Model Context Protocol (MCP) Server
The AgentCore MCP server became available alongside GA. It provides built-in support for runtime, gateway integration, identity management, and agent memory.
This accelerates development. You can prototype agent components quickly and scale them to production without rewriting integration code.
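As a rough idea of the wiring involved, most MCP clients take a small command-based config entry. The snippet below builds that entry as a Python dict and dumps it to JSON; the command and package name are placeholders, so check the AgentCore MCP server documentation for the actual launch command.

```python
import json

# Hedged example of a command-based MCP client config entry. The command and
# package name are placeholders, not verified AgentCore MCP server values.
mcp_config = {
    'mcpServers': {
        'agentcore': {
            'command': 'uvx',
            'args': ['amazon-bedrock-agentcore-mcp-server']   # placeholder package name
        }
    }
}

# Write this wherever your MCP client expects its configuration file.
print(json.dumps(mcp_config, indent=2))
```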
Pricing: What Does AgentCore Cost?
AWS uses consumption-based pricing for AgentCore. You pay for what you use. Let me break down the key cost factors.
Runtime, Browser, and Code Interpreter
These services bill based on CPU and memory usage per second. The good news? You're not charged during I/O wait periods. When your agent waits for an API response, the clock stops.
Memory is billed on the peak consumption observed up to each billed second, not on average usage.
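To see how the I/O-wait exclusion plays out, here's a back-of-the-envelope calculation. The per-vCPU-second and per-GB-second rates are placeholders for illustration, not published prices; only the "waiting time is free" behavior comes from the pricing model described above.

```python
# Illustrative billing math. The RATE_* values are placeholders, not published
# AgentCore prices; the point is that I/O wait seconds are excluded.
RATE_PER_VCPU_SECOND = 0.00005    # placeholder $/vCPU-second
RATE_PER_GB_SECOND = 0.000005     # placeholder $/GB-second

def session_cost(active_seconds, io_wait_seconds, vcpus, peak_memory_gb):
    """Cost of one session: only active seconds bill, at peak memory."""
    billable = active_seconds                     # io_wait_seconds are free
    cpu_cost = billable * vcpus * RATE_PER_VCPU_SECOND
    mem_cost = billable * peak_memory_gb * RATE_PER_GB_SECOND
    return cpu_cost + mem_cost

# A 90-second session that spends 60 seconds waiting on API responses
# is billed for only 30 seconds of compute.
print(f"${session_cost(30, 60, vcpus=1, peak_memory_gb=2):.5f}")
```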
Gateway API Calls
The Gateway service costs $0.005 per 1,000 tool API invocations. If your agent makes 10,000 tool calls per day, you're looking at about $1.50 monthly for Gateway alone.
Memory Storage
Short-term memory events cost $0.25 per 1,000 events. Long-term memory storage follows standard S3 pricing.
Real-World Cost Example
AWS provides an example on their pricing page. A customer support agent handling 10 million monthly user requests would cost approximately $7,625 per month. That includes runtime, tool calls, and memory operations.
For smaller deployments, costs drop significantly. An internal IT helpdesk agent processing 50,000 requests monthly might run under $400.
AgentCore Pricing Breakdown
Here's a quick reference for planning your budget:
| Service | Pricing Model | Cost | Notes |
|---|---|---|---|
| Runtime | Per-second CPU/Memory | Varies by instance | No charge during I/O wait |
| Gateway | Per 1,000 API calls | $0.005 | Tool invocations only |
| Memory (Short-term) | Per 1,000 events | $0.25 | Session context storage |
| Memory (Long-term) | S3 Standard pricing | ~$0.023/GB-month | Episodic memory storage |
| Code Interpreter | Per-second CPU/Memory | Varies by instance | Sandboxed Python execution |
| Browser | Per-second CPU/Memory | Varies by instance | Headless web automation |
| Policy (Preview) | Per 1,000 input tokens | Free during preview | Natural language to Cedar |
All pricing is consumption-based with no minimum commitments. Network transfer charges apply starting November 2025.
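For budget planning, the two rates you can pin down exactly from the table are Gateway calls and short-term memory events. This sketch estimates those line items for a given monthly volume; runtime and long-term storage depend heavily on your workload, so they're left as a parameter you'd fill in after a pilot.

```python
# Quick budget sketch using the published Gateway and short-term memory rates
# from the table above. Runtime cost varies with CPU/memory usage, so pass in
# your own estimate after measuring a pilot workload.
GATEWAY_RATE = 0.005 / 1000            # $ per tool API call
SHORT_TERM_MEMORY_RATE = 0.25 / 1000   # $ per memory event

def monthly_estimate(requests, tool_calls_per_request, memory_events_per_request,
                     runtime_estimate=0.0):
    gateway = requests * tool_calls_per_request * GATEWAY_RATE
    memory = requests * memory_events_per_request * SHORT_TERM_MEMORY_RATE
    return gateway + memory + runtime_estimate

# Example: 50,000 helpdesk requests, 3 tool calls and 2 memory events each.
print(f"${monthly_estimate(50_000, 3, 2):.2f} before runtime charges")
```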
How AgentCore Compares to Other Platforms
AWS isn't the only player in the AI agent space. Here's an honest comparison for teams and enterprises weighing their options.
AgentCore vs. OpenAI's ChatGPT Agent
OpenAI's approach targets consumer and prosumer use cases. You get a polished interface with built-in reasoning capabilities.
AgentCore takes the opposite approach. It's infrastructure, not an application. You build your own agent logic on top of it. This gives you more control but requires more engineering effort.
For enterprises needing custom security policies, audit trails, and integration with existing AWS infrastructure, AgentCore fits better.
AgentCore vs. Azure AI Agents
Microsoft's offering integrates tightly with Azure OpenAI Service. If your organization already runs on Azure, that ecosystem advantage matters.
AWS customers will find AgentCore more natural. IAM integrations, CloudWatch monitoring, and VPC networking all work as expected. No need to bridge two cloud providers.
Build vs. Buy Trade-offs
Both major cloud platforms offer managed agent services. The choice often comes down to existing infrastructure and team expertise.
Here's the bottom line. AgentCore reduces the custom code you need to write for security, scaling, and monitoring. But you still own the agent logic. You're responsible for prompt engineering, workflow design, and testing.
Expert Takes on AWS Bedrock AgentCore
Industry leaders haven't held back on their assessment of this release.
"The future of agentic AI is not on agents that can do everything. It's on agents we can rely on to do everything."
Swami Sivasubramanian, Vice President of Agentic AI at AWS (AWS re:Invent 2025 Keynote)
This quote captures AWS's philosophy. Reliability over feature breadth. An agent that handles 10 tasks flawlessly beats one that handles 100 tasks with occasional failures.
"This change is going to have as much impact on your business as the internet or the cloud."
Matt Garman, CEO of AWS (AWS re:Invent 2025 Keynote)
Garman's prediction is bold. But the numbers support his optimism. A G2 survey from August 2025 found that 57% of companies already have AI agents running in production. Another 22% are in pilot programs.
The shift from experimentation to deployment is happening faster than most predicted.
What AWS Announced at re:Invent 2025
The October GA was just the beginning. At AWS re:Invent in December 2025, more capabilities landed in preview.
Policy Controls (Preview)
Enterprise governance remains a top concern for AI adoption. AgentCore Policy lets you define boundaries for agent actions using natural language.
Behind the scenes, your policy statements convert to Cedar, AWS's open-source policy language. This means you get deterministic enforcement, not probabilistic guardrails. When you say "never access customer payment data," the runtime blocks that action outright.
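For context, Cedar policies are explicit allow/deny statements rather than prompts, which is where the determinism comes from. The snippet below is a hand-written illustration of the kind of policy such a statement could compile to; the action and attribute names are made up, and how you attach it depends on the Policy API, which is still in preview.

```python
# Illustrative Cedar policy, shown as a Python string. The action and
# attribute names are hypothetical; only the forbid shape is standard Cedar.
payment_data_policy = """
forbid (
    principal,
    action == Action::"ReadRecord",
    resource
)
when { resource.category == "customer-payment-data" };
"""

print(payment_data_policy)
```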
AgentCore Evaluations (Preview)
How do you know if your agent is performing well? Evaluations provides 13 built-in evaluators covering helpfulness, tool selection, accuracy, and safety.
You can also create custom evaluators based on your business metrics. If response time matters more than detail for your use case, you can optimize for that.
Episodic Memory
Agents can now learn from past experiences. Episodic memory stores successful interaction patterns and applies them to future requests.
This creates more human-like interactions. An agent that helped you book a flight last month remembers your seat preference and frequent flyer number without you restating them.
Bidirectional Streaming
Voice agent use cases got a boost. Bidirectional streaming allows agents to listen and respond simultaneously. They can handle interruptions and context changes mid-conversation.
This matters for customer service applications where callers often change topics or provide additional context while the agent speaks.
Enterprise Use Cases in 2026
Based on early adopter patterns, here's where AgentCore deployments are showing the most value.
Customer Support Automation
Agents triage incoming tickets, draft initial responses, and route complex issues to human specialists. One Fortune 500 company reported reducing first-response time from 4 hours to 8 minutes after deploying AgentCore-based support agents.
Internal IT Helpdesk
Password resets, software provisioning, and access requests typically consume significant IT staff time. Agents handle these routine tasks while escalating exceptions.
IT teams gain bandwidth for strategic projects without hiring additional headcount.
Financial Reporting
Period-close activities involve pulling data from multiple systems, validating entries against compliance policies, and generating reports. Agents automate the data gathering and validation steps, reducing close cycles by 30-40% in early deployments.
Multi-Agent Collaboration
Complex business challenges benefit from specialized agents working together. A pharmaceutical company used separate R&D, legal, and finance agents to break down data silos. Each agent contributed domain expertise to a unified workflow.
This is where the A2A protocol shines. Direct agent communication eliminates the need for custom integration code between specialized systems.
Getting Started With AgentCore
Ready to test it out? Here's a practical starting point.
Free Trial and Credits
New AWS customers may qualify for up to $200 in Free Tier credits. These cover AgentCore services and related AWS costs.
Even without credits, the consumption-based pricing means small experiments cost very little. A developer testing an agent concept might spend under $10 in a month of active development.
Regions Available
AgentCore is available in nine AWS regions:
- US East (N. Virginia, Ohio)
- US West (Oregon)
- Asia Pacific (Mumbai, Singapore, Sydney, Tokyo)
- Europe (Frankfurt, Ireland)
More regions are expected throughout 2026.
First Steps
Start with a narrow use case. Pick one workflow in your organization that's repetitive but requires reasoning. Customer email triage works well. So does internal FAQ response.
Build a minimal agent, deploy it to AgentCore Runtime, and measure performance against your human baseline. Expand scope only after validating the core workflow.
Sample Code: Basic AgentCore Setup
Here's a Python example to get you started with AgentCore Runtime. Treat the SDK import, class names, and parameters as illustrative; check the current AgentCore SDK documentation for the exact interface:
```python
# Install the AgentCore SDK
# pip install boto3 amazon-bedrock-agentcore

import boto3
from bedrock_agentcore import AgentCoreRuntime, AgentCoreGateway

# Initialize the AgentCore client
client = boto3.client('bedrock-agentcore', region_name='us-east-1')

# Create a runtime session with isolation and a one-hour execution limit
runtime = AgentCoreRuntime(
    client=client,
    session_config={
        'max_execution_time': 3600,   # 1 hour max
        'enable_isolation': True,
        'memory_strategy': 'short_term'
    }
)

# Define your agent's tools via Gateway
gateway = AgentCoreGateway(client=client)
gateway.register_tool(
    name='search_knowledge_base',
    openapi_spec='./tools/search_api.yaml',
    auth_type='iam'
)

# Run the agent with a user query
response = runtime.invoke(
    agent_id='my-support-agent',
    input_text='How do I reset my password?',
    tools=[gateway.get_tool('search_knowledge_base')]
)

print(response['output_text'])
print(f"Execution time: {response['metrics']['duration_ms']}ms")
```
This code initializes a runtime session, registers a custom tool through the Gateway, and invokes an agent. The session_config parameter controls isolation and execution limits.
For production deployments, you'll also want to configure Identity for authentication and Observability for monitoring.
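Observability data ultimately lands in CloudWatch, so even before wiring up the full AgentCore integration you can publish your own agent metrics with standard boto3 calls. The namespace, metric, and dimension names below are arbitrary examples, not AgentCore conventions.

```python
import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

# Publish a custom latency metric for each agent invocation. Namespace,
# metric, and dimension names are arbitrary examples.
cloudwatch.put_metric_data(
    Namespace='MyCompany/Agents',
    MetricData=[
        {
            'MetricName': 'InvocationLatencyMs',
            'Dimensions': [{'Name': 'AgentId', 'Value': 'my-support-agent'}],
            'Value': 843.0,
            'Unit': 'Milliseconds'
        }
    ]
)
```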
Industry Voices on Social Media
AWS developers and tech leaders shared their reactions when AgentCore hit GA.
Dan O'Brien @danOBtech
"Research shows how Amazon Bedrock AgentCore from @awscloud cuts through the complexity. Finally, a managed platform that handles the infrastructure nightmares so we can focus on agent logic."
October 2025
This sentiment echoed across developer communities. The common frustration? Too much time spent on plumbing, not enough on actual AI capabilities.
Ian Kisali @IanKisali_
"Innovation announcements from @awscloud keep coming. Getting models on Amazon Bedrock was step one. AgentCore makes step two possible. 2025 is the year agents become real."
October 2025
The trajectory is clear. Models are now commoditized. Infrastructure for running agents in production is the new differentiator.
Frequently Asked Questions
How much does AWS Bedrock AgentCore cost per month?
Costs depend entirely on usage. The Gateway service runs $0.005 per 1,000 API calls. Runtime bills per second of CPU and memory. A typical enterprise deployment handling 10 million monthly requests costs approximately $7,625. Smaller projects can run for under $100.
Can I use AgentCore with models from other providers?
Yes. AgentCore is model-agnostic. You can use Amazon Bedrock foundation models, bring your own fine-tuned models, or integrate with third-party providers. The runtime doesn't care where your model inference happens.
What's the difference between Bedrock Agents and AgentCore?
Bedrock Agents is the AI framework for building agent logic. AgentCore is the infrastructure for running those agents in production. You can use Bedrock Agents without AgentCore, but you lose managed runtime, identity, and observability features.
Is AgentCore suitable for regulated industries like healthcare?
AWS designed AgentCore with compliance in mind. VPC isolation, PrivateLink, CloudWatch audit trails, and IAM-based access controls meet many regulatory requirements. The Policy feature (in preview) adds additional governance for sensitive operations.
You should still conduct your own compliance review with AWS representatives for specific regulatory frameworks like HIPAA or FedRAMP.
How long does it take to deploy an agent on AgentCore?
Basic deployment takes 1-2 hours for developers familiar with AWS. Full production-ready setup with authentication, monitoring, and policy controls typically requires 1-2 weeks of work.
Does AgentCore support multi-tenant applications?
Yes. AgentCore Identity gained support for custom claims in the December 2025 update. This enables fine-grained access controls across multi-tenant environments while using your existing identity provider.
Making Your Decision
AWS Bedrock AgentCore solves a real problem. Moving AI agents from prototype to production has been the biggest bottleneck for enterprise AI adoption. AgentCore addresses security, scaling, identity, and monitoring in a single managed platform.
The consumption-based pricing removes upfront risk. You can test with minimal investment and scale only when you've proven value.
Start with one workflow. Build a narrow agent. Deploy it on AgentCore Runtime. Measure results against your human baseline for two weeks. If the numbers work, expand from there. If they don't, you've spent less than the cost of a team lunch to find out.