Sonia Rahal

AWS Bedrock AgentCore Hands-On Workshop: A Recap

Location: Montréal AWS User Group
Date: December 18, 2025


TL;DR

This workshop was a hands-on journey through Amazon Bedrock AgentCore (a platform to run AI agents at scale), covering Runtime, Gateway, Identity, Memory, Built-in Tools, and Observability. Participants learned how to take AI agents from simple PoC (Proof of Concept) to secure, enterprise-ready applications.

Note: Each demo shown here is just one example, and the tools mentioned are a subset of what was explored during the workshop, not exhaustive.


My Story: Why Cloud and Agents Matter

Getting into cloud development isn’t just about learning services—it’s about understanding the real problem first. Code is a tool for reliability, not the final asset. The bigger picture is knowing why a company would use Amazon Bedrock AgentCore.

Enterprises want AI agents that can go from experiments to real-life, secure, scalable, and observable applications.

This workshop helped me connect the dots: how modules and tools work together to create agents that are not just smart, but reliable and trustworthy.

Target audience: Enterprises or developers who want AI agents without managing all the complex infrastructure themselves. Their goals include building reliable agents, scaling safely, integrating with external systems, and having full visibility (observability) into agent operations.


Workshop Modules: A Story Through Examples

Runtime (Demo: Weather + Calculator Agent)

Imagine you want to create an agent that can tell the weather or perform calculations for users. Runtime is the engine that makes this possible.

  • What it is: A secure environment that runs your agent (the software that answers questions or performs tasks), handling infrastructure, scaling, and session management.
  • Why it matters: Developers can focus on what the agent does instead of worrying about servers or security.
  • Example Demo: Weather + Calculator agent. Runtime handled all container orchestration and session isolation.
  • Prompt Example: "How is the weather?"
  • Tools Used: Strands Agents SDK, Amazon Elastic Container Registry (ECR), terminal prompts
  • Takeaway: Runtime is the backbone that turns a prototype into a production-ready agent.
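
Here's a minimal sketch of what a Runtime-hosted agent like this can look like, assuming the Strands Agents SDK and the bedrock-agentcore Python package; the tool and entrypoint below are illustrative, not the exact workshop code.

```python
# Minimal Runtime-hosted agent sketch (assumes strands-agents, strands-agents-tools,
# and bedrock-agentcore packages; names are illustrative).
from strands import Agent, tool
from strands_tools import calculator                 # built-in calculator tool
from bedrock_agentcore.runtime import BedrockAgentCoreApp

@tool
def get_weather(city: str) -> str:
    """Return a (stubbed) weather report for a city."""
    # A real agent would call a weather API here.
    return f"It is sunny and 22°C in {city}."

agent = Agent(tools=[get_weather, calculator])
app = BedrockAgentCoreApp()

@app.entrypoint
def invoke(payload):
    """AgentCore Runtime calls this with the request payload."""
    result = agent(payload.get("prompt", "How is the weather?"))
    return {"result": str(result)}

if __name__ == "__main__":
    app.run()   # local test; Runtime hosts the same entrypoint once deployed
```

In the demo, the terminal-based starter toolkit handled the rest: packaging the agent into a container image in Elastic Container Registry and creating the Runtime, which is why no server code appears above.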

Gateway (Demo: Mars Weather Agent)

Imagine your agent needs data from external sources, like NASA’s weather data for Mars. Gateway is what connects your agent to the outside world.

  • What it is: The integration layer that allows agents to interact with external systems or APIs.
  • Why it matters: To provide real-world insights, agents need access to external information safely and reliably. Gateway allows defining tools with metadata about name, description, input/output schemas, and behavior.
  • Example Demo: Mars Weather agent called NASA’s Open APIs using an API key.
  • Prompt Examples: "Hi, can you list all the tools available to you?" "What is the weather in the northern part of Mars?"

  • Tools Used: REST APIs, AgentCore Gateway, API keys

  • Takeaway: Gateway bridges the agent and external systems, enabling actionable intelligence and structured tool integration.
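
As a rough sketch (not the exact workshop code), here is how an agent can consume Gateway-exposed tools over MCP, assuming the Strands Agents SDK and the mcp package; the gateway URL and access token are placeholders.

```python
# Connect an agent to an AgentCore Gateway over MCP (assumes strands-agents and mcp;
# URL and token are placeholders).
from mcp.client.streamable_http import streamablehttp_client
from strands import Agent
from strands.tools.mcp import MCPClient

GATEWAY_URL = "https://<gateway-id>.gateway.bedrock-agentcore.us-east-1.amazonaws.com/mcp"  # placeholder
ACCESS_TOKEN = "<access-token>"                                                             # placeholder

gateway = MCPClient(lambda: streamablehttp_client(
    GATEWAY_URL, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}))

with gateway:
    tools = gateway.list_tools_sync()   # tools defined on the Gateway, e.g. the NASA Mars weather API
    agent = Agent(tools=tools)
    agent("Hi, can you list all the tools available to you?")
    agent("What is the weather in the northern part of Mars?")
```

In a setup like this, the NASA API key can live on the Gateway target rather than in the agent code, which is part of what makes the integration layer valuable.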


Identity (Demo: AgentCore Runtime with vs without Authorization)

Imagine that not everyone should be able to use your agent, or some tasks require special permissions. Identity handles that.

  • What it is: Manages who can invoke agents and what they can access.
  • Why it matters: Protects sensitive data and ensures compliance in enterprise environments.
  • Example Demo: Weather agent invoked with authorization worked; without authorization, it returned an AccessDeniedException error.
  • Prompt Example: "How is the weather?"

  • Tools Used: Amazon Cognito, JWT tokens

  • Takeaway: Identity ensures only authorized users or systems interact with agents.
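
Here is a hedged sketch of the "with vs without authorization" comparison: the Cognito token exchange uses the standard boto3 cognito-idp API, while the runtime invocation URL format and all IDs are placeholders, not the exact workshop setup.

```python
# Invoke an AgentCore Runtime with and without a Cognito JWT (IDs and the
# invocation URL format are placeholders/assumptions).
import boto3
import requests

cognito = boto3.client("cognito-idp", region_name="us-east-1")
auth = cognito.initiate_auth(
    ClientId="<app-client-id>",          # placeholder Cognito app client
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={"USERNAME": "demo-user", "PASSWORD": "<password>"},
)
token = auth["AuthenticationResult"]["AccessToken"]

runtime_url = "https://bedrock-agentcore.us-east-1.amazonaws.com/runtimes/<runtime-id>/invocations"  # assumed format
payload = {"prompt": "How is the weather?"}

# With the bearer token, the runtime accepts the request.
ok = requests.post(runtime_url, json=payload,
                   headers={"Authorization": f"Bearer {token}"})
print(ok.status_code)

# Without it, the call is rejected (the AccessDeniedException seen in the demo).
denied = requests.post(runtime_url, json=payload)
print(denied.status_code, denied.text)
```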


Memory (Demo: AI Learning Agent)

Imagine talking to an agent that remembers you and what you’ve discussed before. Memory makes this possible.

  • What it is: Stores context for multi-turn conversations.
    • Short-term memory: remembers context during a session (e.g., the last few questions)
    • Long-term memory: preserves key information across sessions (e.g., user preferences, summaries)
  • Why it matters: Memory enables agents to give personalized and context-aware responses, improving over time.
  • Example Demo: The agent remembered the user’s name (Alex) and topics of interest in AI across sessions.
  • Prompt Example:
    User: "My name is Alex and I'm interested in learning about AI."
    Agent: "Hi Alex! I’m excited to help you learn about AI!"
    Later:
    User: "What was my name again?"
    Agent: "Your name is Alex!"

  • Tools Used: AgentCore Memory, Strands MetricsClient

  • Takeaway: Short-term memory provides session-level context, while long-term memory provides persistent context that improves the user experience and lets agents maintain continuity over time.
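
As a sketch only: the AWS samples for the bedrock-agentcore SDK expose a MemoryClient roughly along these lines, though the exact method names and message format may differ from what the workshop used; the memory ID and namespace are placeholders.

```python
# Short- vs long-term memory with the AgentCore Memory client (method names follow
# the AWS samples and may differ; IDs are placeholders).
from bedrock_agentcore.memory import MemoryClient

client = MemoryClient(region_name="us-east-1")
memory_id = "<memory-id>"   # a memory resource created with a long-term strategy

# Short-term: raw conversation turns stored as events for the current session.
client.create_event(
    memory_id=memory_id,
    actor_id="alex",
    session_id="session-1",
    messages=[
        ("My name is Alex and I'm interested in learning about AI.", "USER"),
        ("Hi Alex! I'm excited to help you learn about AI!", "ASSISTANT"),
    ],
)

# Long-term: facts extracted by the memory strategy survive across sessions.
memories = client.retrieve_memories(
    memory_id=memory_id,
    namespace="/users/alex",     # placeholder namespace
    query="What is the user's name?",
)
print(memories)
```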


Built-in Tools (Demo: Amazon Revenue Extraction)

Imagine your agent needs to not just answer questions but extract and process data.

  • What it is: Pre-built tools like Browser or Code Interpreter extend agent capabilities.
  • Why it matters: Agents can perform specialized tasks safely and efficiently.
  • Example Demo: Extracted Amazon revenue data from a website using the Browser tool with the Nova Act SDK.
  • Prompt Example: "Extract and return Amazon revenue for the last 4 years from stockanalysis.com."

  • Tools Used: Browser Tool, Code Interpreter, Nova Act SDK

  • Takeaway: Built-in tools enable agents to handle complex tasks, making them more useful in enterprise contexts.
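
For illustration, here is a minimal Nova Act sketch of the revenue prompt; the API follows the nova-act package docs (the starting page and the .response attribute are my assumptions), and in the workshop the browser session ran inside the AgentCore Browser tool rather than a local browser.

```python
# Browser-based extraction with the Nova Act SDK (standalone sketch; the workshop
# pointed Nova Act at the managed AgentCore Browser tool instead).
from nova_act import NovaAct

with NovaAct(starting_page="https://stockanalysis.com/stocks/amzn/financials/") as nova:  # assumed page
    result = nova.act("Extract and return Amazon revenue for the last 4 years.")
    print(result.response)   # the raw answer from the browsing agent
```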


Observability (Demo: CrewAI Travel Agent)

Imagine launching an agent in production and needing insight into its behavior. Observability solves this.

  • What it is: Monitoring and logging for agent workflows, tool usage, performance, and errors.
  • Why it matters: Ensures agents are traceable, measurable, and debuggable, which builds trust.
  • Example Demo Workflow:
    • Create a runtime-ready CrewAI agent using Amazon Bedrock, defining its role, goal, backstory, and task.
    • Instrument the agent with CrewAIInstrumentor().instrument() to enable observability.
    • Use Boto3 to invoke the agent with the prompt "What are some rodeo events happening in Oklahoma?"; multiple responses are found in parallel.
    • Dashboards on CloudWatch show runtime metrics across all agents, and clicking on a specific agent shows detailed metrics with custom time-frame filtering.
  • Tools Used: Amazon CloudWatch, Boto3 SDK, CrewAI, Scarf, AWS Distro for OpenTelemetry
  • Takeaway: Observability ensures production agents are monitored and performance is visible, supporting reliability and optimization.
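
Here is a sketch of the instrumented CrewAI agent, assuming the OpenInference CrewAI instrumentor and an example Bedrock model ID; the role, goal, backstory, and task mirror the workflow above, while the OTel exporter setup (AWS Distro for OpenTelemetry pointing at CloudWatch) is omitted.

```python
# Instrumented CrewAI travel agent (instrumentor import path and model ID are
# assumptions; exporter configuration for CloudWatch is not shown).
from crewai import Agent, Task, Crew, LLM
from openinference.instrumentation.crewai import CrewAIInstrumentor

CrewAIInstrumentor().instrument()   # emit traces for every agent/task/tool call

llm = LLM(model="bedrock/us.anthropic.claude-3-5-sonnet-20241022-v2:0")  # example model

travel_agent = Agent(
    role="Travel researcher",
    goal="Find local events for travellers",
    backstory="You research regional events and summarize them clearly.",
    llm=llm,
)
task = Task(
    description="What are some rodeo events happening in Oklahoma?",
    expected_output="A short list of upcoming rodeo events.",
    agent=travel_agent,
)
crew = Crew(agents=[travel_agent], tasks=[task])
print(crew.kickoff())
```

Once the same crew runs on AgentCore Runtime (invoked via Boto3 as in the demo), these traces surface in the CloudWatch dashboards described above.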

Why Amazon Bedrock AgentCore Matters

Enterprises adopt Bedrock AgentCore to move from a proof of concept to production-ready AI applications. It provides:

  • Scalable deployment without managing infrastructure
  • Secure, authorized execution
  • Contextual and persistent memory
  • Integration with external systems and workflows
  • Full observability for performance and errors

Understanding these modules helps developers deliver AI solutions that meet enterprise goals.


Final Takeaways

  • Cloud development is about seeing the big picture, not just writing code.
  • AgentCore offers a sandbox to experiment safely with enterprise-grade agents.
  • Observability ensures live agents can be monitored, optimized, and trusted.
  • Hands-on workshops and community engagement are invaluable for learning how tools solve real-world problems.
