
Tying It All Together: How Strands Agents Enhance Retail Agent Performance Analysis

Background

Strands Agents is a simple-to-use, code-first framework for building agents, released as an open-source SDK by Amazon Web Services (AWS). A Strands agent comprises three key components: a language model, a system prompt, and a set of tools. Strands supports multiple agent architecture patterns, scaling from a single agent up to complex networks of agents.

Strands is not tied to a single LLM provider: it works with models on Amazon Bedrock by default and also supports other model providers such as LlamaAPI, Ollama, and OpenAI. Strands agents can run in a variety of environments, including Amazon EC2, AWS Lambda, AWS Fargate, and Amazon Bedrock AgentCore.

Software performance testing evaluates how retail applications behave under various workloads and conditions, ensuring a reliable customer experience. This is crucial for systems such as e-commerce platforms, point-of-sale systems, and inventory management. Identifying and resolving performance bottlenecks before they impact users minimizes lost sales.

The rise of Large Language Models (LLMs) and Generative AI presents new challenges for performance testing and engineering. Unlike traditional applications, testers now deal with dynamic, context-aware AI agents that interact with knowledge bases and external tools. LLM agents introduce new performance considerations, listed below; a simple way to record them per request is sketched after the list:

  • Orchestration Latency
  • External Tool Invocation Performance
  • Knowledge Base Retrieval Latency
  • Token Generation and Cost
  • Model Inference Latency
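
To make these considerations measurable during a test run, it helps to capture them as a simple per-request record. The sketch below is illustrative only; the field names are not part of any SDK.

from dataclasses import dataclass

@dataclass
class AgentRequestMetrics:
    """Per-request performance record for an LLM agent test run (illustrative)."""
    orchestration_latency_s: float    # time spent reasoning / planning
    tool_invocation_latency_s: float  # external API / Lambda call time
    retrieval_latency_s: float        # knowledge base lookup time
    inference_latency_s: float        # model inference time
    input_tokens: int
    output_tokens: int

    @property
    def total_latency_s(self) -> float:
        # The user-perceived wait is roughly the sum of the individual steps
        return (self.orchestration_latency_s + self.tool_invocation_latency_s
                + self.retrieval_latency_s + self.inference_latency_s)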

The Use Case: A Retail Customer Support Agent

Preamble
Imagine a customer support agent for an e-commerce company that can:

  • Answer policy questions by searching the Knowledge Base.
  • Check order status by calling an API through an Action Group.
  • Consolidate the retrieved information and provide a helpful response to the customer.
In production, performance is everything, and even a 10-second delay can lead to a frustrated customer and a lost sale.

The Performance Puzzle
The total time a user waits for an answer is the sum of several steps, and the Strands Agents framework provides the visibility needed to understand each of them. The agent's trace usually includes:

  • Orchestration & Reasoning: The agent's underlying Foundation Model (FM) interprets the user's prompt and decides what to do.
  • Knowledge Base Retrieval: If it's a policy question, the agent queries the knowledge base.
  • Action Group Invocation: To check an order, the agent triggers a Lambda function that calls an internal Order Status API.
  • Final Response Generation: The retrieved information is passed back to the LLM, which generates the final response.

Strands Agents in Action

The accompanying Jupyter notebook walks through creating an Amazon Bedrock Agent based on the e-commerce customer support use case described earlier. It demonstrates a practical approach to performance testing and engineering an LLM-powered "Retail Customer Support Agent", which can:

  • Answer policy questions from a Knowledge Base
  • Check order status using a custom tool (Action Group)

Prerequisites:

  • An active AWS account.
  • An Amazon Bedrock Agent created with:
      • A Knowledge Base attached
      • An Action Group configured to invoke a Lambda function for checking order status
  • The agentId and agentAliasId of your created agent.
  • The boto3 library installed and configured with appropriate IAM permissions.

Helper Function to Invoke the Agent
Using the Strands SDK's tool interface, we can build our own custom tools: any Python function can be turned into a tool with the @tool decorator. Below, we create a reusable function that invokes our Bedrock agent, captures its response, and returns the full trace of its internal operations, which is crucial for performance analysis.

import time

import boto3
from strands import tool

# Bedrock Agents runtime client and the identifiers of the agent created earlier
bedrock_agent_runtime_client = boto3.client("bedrock-agent-runtime")
AGENT_ID = "<your-agent-id>"              # replace with your agentId
AGENT_ALIAS_ID = "<your-agent-alias-id>"  # replace with your agentAliasId


@tool
def invoke_bedrock_agent(prompt: str, session_id: str):
    """
    Invokes the Bedrock agent, captures the response, and returns the full event stream.

    Args:
        prompt (str): The user's query for the agent.
        session_id (str): A unique identifier for the conversation session.

    Returns:
        list: A list of all events received from the agent's response stream.
    """
    print(f"\nUser prompt: '{prompt}'")

    events = []
    start_time = time.time()

    try:
        response = bedrock_agent_runtime_client.invoke_agent(
            agentId=AGENT_ID,
            agentAliasId=AGENT_ALIAS_ID,
            sessionId=session_id,
            inputText=prompt,
            enableTrace=True  # CRITICAL: This enables the detailed trace!
        )

        # Drain the streaming response so every event (chunks and traces) is captured
        event_stream = response['completion']
        for event in event_stream:
            events.append(event)

        # Extract the final response text and the trace data from the collected events
        final_response = ""
        trace_data = None
        for event in events:
            if 'chunk' in event:
                final_response += event['chunk']['bytes'].decode('utf-8')
            if 'trace' in event:
                trace_data = event['trace']['trace']

        print(f"Agent response: {final_response}")

    except Exception as e:
        print(f"An error occurred: {e}")
        return None
    finally:
        end_time = time.time()
        print(f"\nTotal Latency: {end_time - start_time:.2f} seconds")

    return events

Performance Test Scenarios
Let’s run a few tests to establish a baseline for different types of queries.

  1. Scenario A: Knowledge Base Query
    This tests the agent's ability to retrieve information from the attached knowledge base. The primary latency here will be in the Retrieve step.
    import uuid

    session_id_kb = str(uuid.uuid4())
    prompt_kb = "What is the return policy for clothes?"
    kb_events = invoke_bedrock_agent(prompt_kb, session_id_kb)

  2. Scenario B: Action Group Query (API Call)
    This tests the agent's ability to invoke an external tool (our order status Lambda). Latency will be a combination of reasoning and the actual Lambda/API execution time.
    session_id_ag = str(uuid.uuid4())
    prompt_ag = "Can you check the status for order #B-98765?"
    ag_events = invoke_bedrock_agent(prompt_ag, session_id_ag)

Analyzing the Performance Trace
The real value comes from parsing the trace data returned in the event stream. Let's create a trace_analysis function to extract and analyze this data, decorated with @tool so the agent can call it.
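
A minimal sketch of such a function is shown below. It assumes the default shape of Bedrock agent trace events (an orchestrationTrace containing rationale, observation, and modelInvocationOutput entries); the notebook's actual implementation may differ, so treat the field names as a starting point rather than a fixed contract.

from strands import tool

@tool
def trace_analysis(events: list) -> dict:
    """
    Summarizes the trace events returned by invoke_bedrock_agent: counts reasoning
    steps, knowledge base lookups, and action group calls, and totals token usage.
    """
    summary = {
        "reasoning_steps": 0,
        "knowledge_base_lookups": 0,
        "action_group_invocations": 0,
        "input_tokens": 0,
        "output_tokens": 0,
    }

    for event in events or []:
        orchestration = event.get('trace', {}).get('trace', {}).get('orchestrationTrace', {})

        # Each rationale entry is one reasoning step taken by the foundation model
        if 'rationale' in orchestration:
            summary["reasoning_steps"] += 1

        # Observations indicate which tool the agent actually used for this step
        observation = orchestration.get('observation', {})
        if 'knowledgeBaseLookupOutput' in observation:
            summary["knowledge_base_lookups"] += 1
        if 'actionGroupInvocationOutput' in observation:
            summary["action_group_invocations"] += 1

        # Token usage is reported in the model invocation metadata
        usage = (orchestration.get('modelInvocationOutput', {})
                 .get('metadata', {})
                 .get('usage', {}))
        summary["input_tokens"] += usage.get('inputTokens', 0)
        summary["output_tokens"] += usage.get('outputTokens', 0)

    print(summary)
    return summary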

Invoke the trace_analysis function with the knowledge base events and the action group events:

  • trace_analysis(kb_events)

  • trace_analysis(ag_events)

Prompt the Strands Agent

from strands import Agent

agent = Agent(
    tools=[trace_analysis, invoke_bedrock_agent]
)
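
With both tools registered, the agent can be driven with a natural-language prompt. A minimal usage sketch follows; the prompt text and session handling here are illustrative, not taken from the notebook.

import uuid

# Ask the Strands agent to run a query through the Bedrock agent and analyze the trace
result = agent(
    "Invoke the Bedrock agent with the prompt 'What is the return policy for clothes?' "
    f"using session id {uuid.uuid4()}, then analyze the performance trace of that run."
)
print(result)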

Agent - Knowledge Base Query

Agent - Lambda Function Invocation

CloudWatch Monitoring

The token count and invocation latency can also be observed in Amazon CloudWatch, under the GenAI Observability section.
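
For programmatic checks, the same model-level metrics can also be pulled with boto3 from the AWS/Bedrock CloudWatch namespace. A minimal sketch, assuming the InvocationLatency metric dimensioned by ModelId (replace the model ID and adjust the time window for your setup):

from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
MODEL_ID = "<your-model-id>"  # the foundation model backing your Bedrock agent

# Average invocation latency for the last hour, in 5-minute buckets
latency = cloudwatch.get_metric_statistics(
    Namespace="AWS/Bedrock",
    MetricName="InvocationLatency",
    Dimensions=[{"Name": "ModelId", "Value": MODEL_ID}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Average"],
)

for point in sorted(latency["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f'{point["Average"]:.0f} ms')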

Performance Insights and Optimization Actions

  • Knowledge Base Bottleneck: If the "Knowledge Base Retrieval Time" is high, you should investigate your knowledge base settings. Are you retrieving too many chunks? Is your vector database under-provisioned?
  • API Bottleneck: If the "Lambda/API Call Time" is high, the performance issue lies outside of Bedrock. You need to use tools like AWS X-Ray and CloudWatch Logs to optimize your Lambda function and any downstream services it calls.
  • Model Latency: If "Final Response Generation Latency" is high, consider switching to a faster, more cost-effective model. Refine your agent's instructions to produce more concise answers, thereby reducing the outputTokenCount.
