Supercharge your AWS AI agents with Strands Hooks

Introduction

Before diving in, I recommend checking out the earlier parts of this series for better context:

Part I: Introducing AWS Bedrock AgentCore – A Modular Platform for Deploying AI Agents at Enterprise Scale

https://dev.to/sreeni5018/introducing-aws-bedrock-agentcore-a-modular-platform-for-deploying-ai-agents-at-enterprise-scale-369p

Part II: AWS AgentCore Deployment Guide – Deploying SreeniBot to Production

https://dev.to/sreeni5018/aws-agentcore-deployment-guide-part-ii-deploying-sreenibot-to-production-part-ii-327i

I recommend reading those first before continuing with this blog.

You've built a Strands agent: it reasons, uses tools, and completes tasks autonomously. That's an impressive feat, but what if you need more? What if you want to add custom logging, enforce security checks, or even build a dynamic memory system?

This is where Strands hooks come in. As a type-safe and composable extensibility mechanism, hooks allow you to tap into the agent's lifecycle and react to or modify its behavior at key moments. This elevates your agents from a prototype to a production-ready system.
Why use hooks? Beyond simple agents

The real power of a Strands agent lies in its flexibility. Hooks provide the perfect interface for adding sophisticated functionality without cluttering your core agent logic.

Observability and monitoring: You can use hooks to implement custom logging, track performance metrics, or integrate with tracing tools like OpenTelemetry. This gives you unparalleled visibility into how your agent makes decisions and uses its tools.

Security and guardrails: Intercept tool calls to validate arguments, redact sensitive information, or get human approval before performing a critical action. This makes your agents safer and more reliable (a minimal guardrail sketch follows this list).

Memory and state management: With hooks, you can retrieve conversational history or user context and inject it directly into the agent's prompt. After the agent responds, another hook can save the updated conversation to a persistent store.

Behavior modification: Modify a tool's parameters on the fly or even swap out a tool for a different one based on the agent's current state. For example, you could replace a standard search tool with a cached version to improve performance.
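To make the security bullet concrete, here is a minimal guardrail sketch: a provider that watches tool calls and flags anything on a deny list. The tool names and the deny list are invented for illustration, and the event attributes mirror the tool-level snippets shown later in this post rather than a verified API reference.

from strands.hooks import HookProvider, HookRegistry
from strands.hooks.events import BeforeToolCallEvent

class GuardrailHooks(HookProvider):
    def __init__(self, blocked_tools=("delete_database", "send_payment")):
        # Tools that should never run unreviewed (illustrative names)
        self.blocked_tools = set(blocked_tools)

    def register_hooks(self, registry: HookRegistry) -> None:
        registry.add_callback(BeforeToolCallEvent, self.check_tool_call)

    def check_tool_call(self, event: BeforeToolCallEvent) -> None:
        # Flag sensitive tools before they execute
        if event.tool.name in self.blocked_tools:
            print(f"🚨 Guardrail: '{event.tool.name}' is on the deny list; review before allowing it")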

The Problem: We Need to See What's Happening

Picture this: you've built an AI agent using Strands, and it's working beautifully. Users are chatting with it, getting helpful responses, and everything seems great. But then your boss asks the inevitable question: "Can we see what conversations are happening? How long do they take? Are there any patterns we should know about?"

That's exactly where I found myself last week. I had a working Strands agent, but zero visibility into what was actually happening during those conversations. I needed a way to log every interaction, capture timing data, and store it all in a way that was easy to analyze.

Enter Strands Hooks

The Solution I Didn't Know I Needed
After some research, I discovered Strands hooks: a powerful system for intercepting and monitoring agent interactions. Think of hooks as little spies that watch your agent work and can do things before and after each conversation.

The beauty of hooks is that they're completely non-intrusive. Your agent doesn't need to know they exist, and you can add or remove them without changing your core agent code. It's like having a security camera system that doesn't interfere with your daily life.

A practical example: Building a memory system

Let's walk through a common and powerful use case for hooks: creating a persistent memory system. We'll use a HookProvider to organize our callbacks cleanly.

First, define your HookProvider, which will handle saving and loading memory.
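Here is a minimal sketch of what that provider could look like, assuming a simple JSON file as the backing store. The file name and the shape of the stored history are illustrative; only the HookProvider registration pattern comes from Strands itself.

import json
import os
from strands.hooks import HookProvider, HookRegistry
from strands.hooks.events import BeforeInvocationEvent, AfterInvocationEvent

class MemoryHooks(HookProvider):
    def __init__(self, memory_file="agent_memory.json"):
        self.memory_file = memory_file
        self.history = []

    def register_hooks(self, registry: HookRegistry) -> None:
        registry.add_callback(BeforeInvocationEvent, self.load_memory)
        registry.add_callback(AfterInvocationEvent, self.save_memory)

    def load_memory(self, event: BeforeInvocationEvent) -> None:
        # Load prior turns so they can be injected into the agent's context
        if os.path.exists(self.memory_file):
            with open(self.memory_file) as f:
                self.history = json.load(f)

    def save_memory(self, event: AfterInvocationEvent) -> None:
        # A real provider would append the latest user/agent turn here
        with open(self.memory_file, "w") as f:
            json.dump(self.history, f, indent=2)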

The Architecture: How It All Works Together

The conversation logger I actually built for this post follows the same pattern. Let me walk you through it. The system has three main parts:

1. The Hook Provider (The Brain)

class FileLoggingHooks(HookProvider):
    def __init__(self, log_file="strands_conversations.json"):
        self.log_file = log_file
        self.start_time = None
        self.current_user_input = None
        self.current_agent_response = None

This is the heart of the system. It extends Strands' HookProvider class and manages all the logging logic. I store timing information and conversation data in instance variables so I can pass them between the before and after hooks.

2. The Before Hook (The Setup)

def before_invocation(self, event: BeforeInvocationEvent) -> None:
    print("🔧 BEFORE HOOK: Setting up for agent invocation...")
    self.start_time = time.time()

This fires right before your agent starts processing a user's request. It's like the "lights, camera, action" moment. I capture the start time here so I can calculate how long the agent takes to respond.

3. The After Hook (The Cleanup)

def after_invocation(self, event: AfterInvocationEvent) -> None:
    duration = time.time() - self.start_time
    self._log_interaction(user_input, agent_response, duration)

This fires after your agent finishes processing. It's where the magic happens - I calculate the duration, capture the response, and log everything to a JSON file.

The Implementation: Code That Actually Works

Here's the complete working implementation:

import json
import os
import time
from datetime import datetime
from strands.agent import Agent
from strands.hooks import HookProvider, HookRegistry
from strands.hooks.events import BeforeInvocationEvent, AfterInvocationEvent

class FileLoggingHooks(HookProvider):
    def __init__(self, log_file="strands_conversations.json"):
        self.log_file = log_file
        self.start_time = None
        self.current_user_input = None
        self.current_agent_response = None
        self._initialize_log_file()

    def _initialize_log_file(self) -> None:
        # Create the log file with an empty list so _log_interaction can always read it
        if not os.path.exists(self.log_file):
            with open(self.log_file, 'w') as f:
                json.dump([], f)

    def register_hooks(self, registry: HookRegistry) -> None:
        registry.add_callback(BeforeInvocationEvent, self.before_invocation)
        registry.add_callback(AfterInvocationEvent, self.after_invocation)
        print(" Strands hooks registered successfully!")

    def before_invocation(self, event: BeforeInvocationEvent) -> None:
        print("🔧 BEFORE HOOK: Setting up for agent invocation...")
        self.start_time = time.time()

    def after_invocation(self, event: AfterInvocationEvent) -> None:
        print("📝 AFTER HOOK: Logging agent interaction...")
        duration = time.time() - self.start_time
        user_input = self.current_user_input or "Unknown request"
        agent_response = self.current_agent_response or "Unknown response"
        self._log_interaction(user_input, agent_response, duration)

    def set_user_input(self, user_input: str) -> None:
        self.current_user_input = user_input

    def set_agent_response(self, agent_response: str) -> None:
        self.current_agent_response = agent_response

    def _log_interaction(self, user_input, agent_response, duration):
        try:
            with open(self.log_file, 'r') as f:
                conversations = json.load(f)

            conversation = {
                "id": len(conversations) + 1,
                "timestamp": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
                "user_input": user_input,
                "agent_response": agent_response,
                "duration_seconds": round(duration, 2)
            }

            conversations.append(conversation)

            with open(self.log_file, 'w') as f:
                json.dump(conversations, f, indent=2)

        except Exception as e:
            print(f"❌ Error logging interaction: {e}")

The Tricky Parts: What I Learned the Hard Way

Problem 1: Getting the User Input and Response
The biggest challenge was figuring out how to capture the actual user input and agent response. The BeforeInvocationEvent and AfterInvocationEvent objects don't directly expose this information in the way I initially expected.

Solution: I store the user input and agent response in instance variables before and after calling the agent:

# Before calling the agent
hooks_provider.set_user_input(user_input)

# Call the agent
response = agent(user_input)

# After getting the response
hooks_provider.set_agent_response(str(response))

Problem 2: Timing Information

I needed to capture the start time in the before hook and use it in the after hook, but the event objects don't have a shared context.

Solution: Store the start time in an instance variable of the hook provider class.

What You Can Do With This Data
Once you have this logging system in place, you can:

  1. Analyze conversation patterns: See what users ask about most
  2. Monitor performance: Track response times and identify slow queries (see the analysis sketch after this list)
  3. Debug issues: Review conversations when users report problems
  4. Generate reports: Create analytics dashboards from the JSON data
  5. Compliance: Maintain records of all AI interactions
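
For example, the pattern and performance questions above take only a few lines of Python once the log exists. This is an illustrative script that assumes at least one conversation has been logged in the strands_conversations.json format shown earlier:

import json
from collections import Counter

with open("strands_conversations.json") as f:
    conversations = json.load(f)

# Performance: average and worst-case response times
durations = [c["duration_seconds"] for c in conversations]
print(f"Total conversations: {len(conversations)}")
print(f"Average response time: {sum(durations) / len(durations):.2f}s")
print(f"Slowest response: {max(durations):.2f}s")

# Patterns: a very rough look at the most common opening words of user requests
openers = Counter(c["user_input"].split()[0].lower() for c in conversations if c["user_input"].strip())
print(openers.most_common(5))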

How to add hooks to an agent

# Create the hooks provider
hooks_provider = FileLoggingHooks()

# Create the agent with hooks attached
agent = Agent(
    system_prompt="You are a helpful assistant.",
    hooks=[hooks_provider]
)

Beyond Basic Hooks: The Full Event System

While I focused on BeforeInvocationEvent and AfterInvocationEvent for my logging system, Strands hooks actually provide 8 different event types for granular control:

Model-Level Events

from strands.hooks.events import BeforeModelCallEvent, AfterModelCallEvent

def before_model_call(self, event: BeforeModelCallEvent) -> None:
    print(f"About to call model: {event.model}")
    # Perfect for monitoring model usage and costs

def after_model_call(self, event: AfterModelCallEvent) -> None:
    if event.stop_response:
        print(f"Model response: {event.stop_response.content}")
    elif event.exception:
        print(f"Model error: {event.exception}")
    # Great for tracking model performance and errors

Tool-Level Events

from strands.hooks.events import BeforeToolCallEvent, AfterToolCallEvent

def before_tool_call(self, event: BeforeToolCallEvent) -> None:
    print(f"Calling tool: {event.tool.name}")
    # Monitor which tools are being used

def after_tool_call(self, event: AfterToolCallEvent) -> None:
    if event.result:
        print(f"Tool result: {event.result}")
    # Track tool execution results

Agent Lifecycle Events

from strands.hooks.events import AgentInitializedEvent, MessageAddedEvent

def agent_initialized(self, event: AgentInitializedEvent) -> None:
    print(f"Agent {event.agent.name} is ready!")
    # Perfect for setup and initialization logging

def message_added(self, event: MessageAddedEvent) -> None:
    print(f"New message: {event.message.content}")
    # Track conversation flow in real-time

Advanced HookRegistry Features

The HookRegistry provides several powerful features beyond basic callback registration:

Checking for Registered Callbacks

def register_hooks(self, registry: HookRegistry) -> None:
    registry.add_callback(BeforeInvocationEvent, self.before_invocation)

    if registry.has_callbacks():
        print("✅ Hooks are active and ready!")

Manual Callback Invocation

You can manually trigger callbacks for testing or special scenarios:

event = BeforeInvocationEvent(agent=my_agent)
registry.invoke_callbacks(event)

Reverse Callback Ordering

The AfterInvocationEvent uses reverse callback ordering - callbacks registered later are invoked first. This is perfect for cleanup scenarios:

def register_hooks(self, registry: HookRegistry) -> None:
    # Registered first, so with reverse ordering this runs LAST during cleanup
    registry.add_callback(AfterInvocationEvent, self.cleanup_resources)

    # Registered second, so with reverse ordering this runs FIRST during cleanup
    registry.add_callback(AfterInvocationEvent, self.log_interaction)
Type Safety and Composability Benefits
One of the biggest advantages of Strands hooks is their strongly-typed and composable nature:

Type Safety

The event parameter is strongly typed - no guessing what properties are available

def before_invocation(self, event: BeforeInvocationEvent) -> None:
    # IDE knows exactly what properties are available
    agent_name = event.agent.name  #  Type-safe
    # event.some_random_property  # IDE will catch this error

Composability

# You can easily combine multiple hook providers
agent = Agent(
    system_prompt="You are a helpful assistant.",
    hooks=[
        FileLoggingHooks(),      # For conversation logging
        PerformanceHooks(),      # For timing metrics  
        SecurityHooks(),         # For input validation
        AnalyticsHooks()         # For usage tracking
    ]
)

Each hook provider is completely independent and can be added/removed without affecting others.
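
The extra providers in that example (PerformanceHooks, SecurityHooks, AnalyticsHooks) are placeholders; each would follow the same HookProvider pattern. As a hedged illustration, a minimal PerformanceHooks might look like this:

import time
from strands.hooks import HookProvider, HookRegistry
from strands.hooks.events import BeforeInvocationEvent, AfterInvocationEvent

class PerformanceHooks(HookProvider):
    def __init__(self):
        self.durations = []

    def register_hooks(self, registry: HookRegistry) -> None:
        registry.add_callback(BeforeInvocationEvent, self.start_timer)
        registry.add_callback(AfterInvocationEvent, self.stop_timer)

    def start_timer(self, event: BeforeInvocationEvent) -> None:
        self.start_time = time.time()

    def stop_timer(self, event: AfterInvocationEvent) -> None:
        duration = time.time() - self.start_time
        self.durations.append(duration)
        average = sum(self.durations) / len(self.durations)
        print(f"⏱️ Invocation took {duration:.2f}s (running average {average:.2f}s)")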

Error Handling and Interrupts

Hooks can handle errors gracefully and even interrupt the agent flow for human-in-the-loop scenarios:

Exception Handling

def after_invocation(self, event: AfterInvocationEvent) -> None:
    try:
        self._log_interaction(event)
    except Exception as e:
        # Log the error but don't crash the agent
        print(f"Logging failed: {e}")
        # The agent continues normally

Human-in-the-Loop Interrupts

from strands.hooks import InterruptException, Interrupt

def before_invocation(self, event: BeforeInvocationEvent) -> None:
    if self._requires_human_approval(event.user_input):
        # Pause agent execution for human review
        raise InterruptException(
            Interrupt(
                name="human_approval",
                message="This request requires human approval",
                data={"user_input": event.user_input}
            )
        )

Real-World Use Cases

Here are some practical applications using the full event system:

1. Cost Monitoring

class CostTrackingHooks(HookProvider):
    def before_model_call(self, event: BeforeModelCallEvent) -> None:
        self.tokens_sent = self._estimate_tokens(event.prompt)

    def after_model_call(self, event: AfterModelCallEvent) -> None:
        if event.stop_response:
            tokens_received = self._estimate_tokens(event.stop_response.content)
            cost = self._calculate_cost(self.tokens_sent + tokens_received)
            self._log_cost(cost)

2. Tool Usage Analytics

class ToolAnalyticsHooks(HookProvider):
    def before_tool_call(self, event: BeforeToolCallEvent) -> None:
        self._track_tool_usage(event.tool.name)

    def after_tool_call(self, event: AfterToolCallEvent) -> None:
        if event.exception:
            self._track_tool_errors(event.tool.name, event.exception)

3. Conversation Quality Monitoring

class QualityHooks(HookProvider):
    def after_invocation(self, event: AfterInvocationEvent) -> None:
        response_quality = self._analyze_response_quality(event.response)
        if response_quality < 0.7:
            self._flag_for_review(event)

Start extending your agents today

Strands hooks are a powerful tool for monitoring and logging agent interactions. They're easy to implement, non-intrusive, and give you complete visibility into what's happening with your AI agent.

The file-based approach I used is perfect for getting started, but you could easily extend this to use databases, send data to analytics services, or trigger alerts based on certain conditions.
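
As one hedged example of the database route, the _log_interaction method could write to SQLite instead of a JSON file. The table name and columns below are assumptions for illustration, not part of the implementation above:

import sqlite3
from datetime import datetime

def _log_interaction(self, user_input, agent_response, duration):
    # Same logging responsibility as before, but backed by SQLite
    # (self.log_file would then point at something like "strands_conversations.db")
    conn = sqlite3.connect(self.log_file)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS conversations (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               timestamp TEXT,
               user_input TEXT,
               agent_response TEXT,
               duration_seconds REAL)"""
    )
    conn.execute(
        "INSERT INTO conversations (timestamp, user_input, agent_response, duration_seconds) "
        "VALUES (?, ?, ?, ?)",
        (datetime.now().strftime("%Y-%m-%d %H:%M:%S"), user_input, agent_response, round(duration, 2)),
    )
    conn.commit()
    conn.close()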

The key insight is that hooks let you observe your agent without changing how it works. It's like having a conversation recorder that doesn't interfere with the conversation itself.

Hooks offer a clean, robust, and future-proof way to extend the capabilities of your AI agents. They are the key to moving beyond simple prototypes and building the sophisticated, enterprise-grade applications that production environments demand. Ready to take your agents to the next level? Start exploring the power of Strands hooks today.

Continuing the Journey: Cost Monitoring

After building this conversation logging system, I realized there was another critical piece missing: cost monitoring. Without proper cost tracking, you can easily rack up hundreds of dollars in unexpected AI model charges.

I've written a follow-up blog that shows how to implement real-time cost monitoring using the same Strands hooks system:

https://dev.to/sreeni5018/building-cost-monitoring-with-aws-strands-hooks-a-complete-guide-2il1

The new blog covers:

  1. Real-time cost tracking using BeforeModelCallEvent and AfterModelCallEvent
  2. Token estimation and accurate cost calculation
  3. Model-specific pricing for Claude 4 Sonnet and other models
  4. Cost alerts and budgets to prevent overspending
  5. Production-ready implementation with error handling

Thanks
Sreeni Ramadorai
