Dipayan Das

Designing a Bedrock Agent with Action Groups and Knowledge Bases for Wildfire Analysis

Introduction

Wildfires are an increasing risk across many regions of the United States, impacting communities, infrastructure, and operational planning. For builders working on data-driven applications, there is growing demand for location-based risk insights that can be delivered quickly, safely, and at scale.

In this post, we walk through how to build a ZIP code–based wildfire risk assessment agent using Amazon Bedrock Agents. The agent accepts a U.S. ZIP code as input and returns a wildfire-prone percentage, with rainfall context when applicable. Rather than relying on free-form model responses, the solution combines deterministic tools and structured data to ensure reliable and explainable outputs.

The goal of this walkthrough is not only to show how to create an agent, but also to demonstrate when and how to orchestrate Action Groups and Knowledge Bases in Amazon Bedrock. Along the way, we highlight common pitfalls—such as semantic search over structured datasets—and share practical lessons learned from testing and debugging the agent.

By the end of this post, you’ll understand how to:

  • Design a Bedrock Agent for a real-world environmental risk use case
  • Use Action Groups for structured, ZIP code–level lookups
  • Fall back to a Knowledge Base when tool data is unavailable
  • Validate and troubleshoot agent behavior using Bedrock traces

This pattern is broadly applicable beyond wildfire risk analysis and can be adapted for any scenario where structured inputs, deterministic logic, and controlled AI reasoning are required.

Background Concepts

Before walking through the implementation, it’s useful to understand the core Amazon Bedrock components used in this solution.

Amazon Bedrock Agent

An Amazon Bedrock Agent is a managed, reasoning-capable AI construct that can plan, decide, and act to fulfill user requests. Unlike a simple chat interface, an agent can:

  • Interpret user intent
  • Orchestrate multi-step workflows
  • Invoke tools (Action Groups) for deterministic operations
  • Query Knowledge Bases for grounded, document-based answers
  • Apply guardrails for safety and compliance

Agents enable you to move from conversational AI to goal-driven, tool-assisted AI workflows.

Amazon Bedrock Agent Builder

The Bedrock Agent Builder is the console-based experience used to configure agents. It provides a structured way to:

  • Define agent instructions and behavior
  • Select a foundation model
  • Attach Action Groups (Lambda-backed tools)
  • Attach Knowledge Bases
  • Test, version, and deploy agents using aliases

The Agent Builder abstracts much of the orchestration complexity while still giving builders fine-grained control over logic and execution.

Action Groups

Action Groups allow Bedrock Agents to call external logic through AWS Lambda functions. They are used when the agent needs deterministic, structured, or real-time data that cannot be reliably inferred by a foundation model.

Key characteristics:

  • Each Action Group maps to one Lambda function
  • Functions have defined input schemas
  • Responses must follow a strict agent-compatible format
  • Ideal for calculations, lookups, validations, and API integrations

In this blog, Action Groups are used to retrieve ZIP code–level wildfire risk and rainfall metrics.
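
To make the contract concrete, here is a sketch of the event shape Bedrock passes to an Action Group Lambda when using function details, reduced to the fields this solution cares about. The ZIP code value is illustrative; treat the exact field set as an assumption to verify against your own traces.

```python
# Sketch of the event an agent sends to a function-details Action Group Lambda.
sample_event = {
    "messageVersion": "1.0",
    "actionGroup": "action_group_quick_start_dipayan",
    "function": "get_fire_rain_data",
    "parameters": [
        {"name": "ZipCode", "type": "string", "value": "94568"}
    ],
}

def extract_parameters(event):
    """Flatten the agent's list-of-dicts parameters into a simple dict."""
    return {p["name"]: p["value"] for p in event.get("parameters", [])}

print(extract_parameters(sample_event))  # {'ZipCode': '94568'}
```

Note that parameters arrive as a list of `{name, type, value}` objects, not a flat dict, which is why the Lambda later in this post normalizes them first.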

Knowledge Base with S3 Vector Store

An Amazon Bedrock Knowledge Base enables Retrieval-Augmented Generation (RAG) by grounding model responses in your own data.

When using Amazon S3 as the data source:

  • Documents (CSV, JSONL, text, PDFs) are stored in S3
  • Bedrock generates embeddings for the content
  • Embeddings are stored in a vector store (for example, OpenSearch Serverless)

The agent can semantically search and retrieve relevant content at runtime.

Knowledge Bases are best suited for:

  • Historical data
  • Reference documents
  • Unstructured or semi-structured information

In this solution, the Knowledge Base serves as a fallback when structured data is not available via the Action Group.
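
You can also query a Knowledge Base directly (outside the agent) with the `bedrock-agent-runtime` Retrieve API, which is handy for verifying ingestion. The sketch below keeps the live call in a helper that is never invoked, and exercises the result-ranking logic against a fabricated response whose shape mirrors `retrievalResults`; the query text is illustrative.

```python
def top_passages(resp, k=3):
    """Pull the highest-scoring text chunks out of a Retrieve API response."""
    results = resp.get("retrievalResults", [])
    ranked = sorted(results, key=lambda r: r.get("score", 0.0), reverse=True)
    return [r["content"]["text"] for r in ranked[:k]]

def retrieve_from_kb(query_text):
    """Live call (needs AWS credentials); not invoked in this sketch."""
    import boto3
    client = boto3.client("bedrock-agent-runtime")
    return client.retrieve(
        knowledgeBaseId="MM1YREPZVM",  # the Knowledge Base ID used in this post
        retrievalQuery={"text": query_text},
    )

# Fabricated response, reduced to the fields used above:
fake_resp = {"retrievalResults": [
    {"content": {"text": "ZIP 90000: 1500 acres burned in 2021"}, "score": 0.82},
    {"content": {"text": "ZIP 90000: 2300 acres burned in 2022"}, "score": 0.91},
]}
print(top_passages(fake_resp, k=1))
```

Running a few direct Retrieve queries like this is a quick way to confirm whether a given ZIP code is actually findable before wiring the Knowledge Base into the agent.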

The high-level architecture diagram below will help in understanding this agentic solution.

Step 1: Create Agent

Step 2: Create Action Group

At this stage, I am defining an Action Group for my Amazon Bedrock Agent. This is the step where the agent gains the ability to take actions, rather than only reasoning over text.

An Action Group allows my agent to invoke external logic, such as an AWS Lambda function, whenever it determines that deterministic processing is required.

Defining the Action Group

I start by providing an Action group name. In my case, I have named it action_group_quick_start_dipayan. This name is important because it becomes part of the internal tool identifier the agent will use during orchestration.

Optionally, I can add a description to explain what this Action Group does. While not required, it’s helpful for documentation and future maintenance, especially as the number of tools grows.

Choosing the Action Group Type

Next, I select how the Action Group should be defined.

I choose “Define with function details”, which allows me to explicitly define:

  • The function name
  • The input parameters the agent will pass
  • The structure of the expected response

This option is ideal for my use case because I want the agent to perform deterministic ZIP-code–based lookups for wildfire risk and rainfall data. By defining the function and its parameters, I ensure the agent knows exactly when and how to invoke this logic.

The alternative option—defining the Action Group using API schemas—is better suited for REST-style integrations, which I don’t need here.

Configuring the Invocation

In the Action group invocation section, I specify what should be executed when the agent calls this Action Group.

I select “Quick create a new Lambda function”, which lets Bedrock automatically generate a Lambda function stub and configure the required permissions. This is particularly useful during prototyping and for walkthroughs like this one.

For production scenarios, I could instead choose an existing Lambda function, but for this implementation, the quick-create option keeps things simple.

What This Enables

Once this step is complete, my agent can:

  • Decide when it needs structured data
  • Invoke the Action Group during its reasoning process
  • Pass validated parameters (such as a ZIP code) to Lambda
  • Use the Lambda response to construct a final answer

This is the moment where my agent transitions from being purely conversational to becoming tool-enabled and action-driven.

Why This Step Matters

Defining an Action Group is essential for building reliable agents. It allows me to:

  • Avoid hallucinations by relying on deterministic logic
  • Separate reasoning (handled by the agent) from execution (handled by Lambda)
  • Build scalable, auditable workflows

Without an Action Group, the agent would be limited to inference and would not be able to safely retrieve or compute ZIP-level wildfire data.

Configuring the Action Group Function and Parameters

In this step, I am editing the Action Group function that my Amazon Bedrock Agent will invoke to retrieve wildfire and rainfall data. This screen defines the exact contract between the agent and the underlying Lambda function.

Naming the Function

I have named the function get_fire_rain_data.
This name is critical because it is the identifier the agent will use during orchestration. The function name must exactly match what my Lambda code expects; otherwise, the agent will treat the function as unsupported.

Adding a Description

In the Description field, I explain what the function does:

“The user provides a ZIP code for the USA to invoke a Lambda function that returns wildfire-prone score and rainfall data for the last three years.”

This description helps the agent understand when this function is relevant and improves the accuracy of the agent’s planning and decision-making.

Enabling Confirmation (Optional)

I have enabled confirmation of the action group function.
With this option turned on, the agent can ask for user confirmation before invoking the function. This is useful for safeguarding against unintended or malicious invocations, especially when the function performs sensitive operations.

For read-only data retrieval like this use case, confirmation is optional, but it can be helpful during testing and early iterations.

Defining Input Parameters

The most important part of this screen is the Parameters section.

Here, I define a single required parameter:

  • Name: ZipCode
  • Description: USA ZipCode
  • Type: string
  • Required: true

This tells the agent:

  • It must collect a ZIP code from the user
  • The ZIP code is mandatory before the function can be invoked
  • The value will be passed to Lambda exactly as ZipCode

Because parameter names are case-sensitive, the Lambda implementation must read ZipCode exactly as defined here.

Adding the Function to the Action Group

Once the function name, description, and parameters are defined, I add the function to the Action Group. At this point:

  • The agent knows this function exists
  • The agent knows what input it requires
  • The agent can decide when to invoke it during reasoning

Enabling the Action

At the bottom of the screen, I ensure the Action status is set to Enable.
If this is disabled, the agent will ignore the Action Group entirely, even if everything else is configured correctly.

Why This Step Is Critical

This screen defines the execution interface for the agent. It ensures:

  • The agent can reason about structured inputs
  • The Lambda invocation is deterministic
  • Tool usage is auditable and predictable
  • Hallucinations are avoided for ZIP-based data

Without correctly defining this function and its parameters, the agent would be unable to retrieve wildfire or rainfall data reliably.

Step 3: Defining Lambda Function for Action Group

At this point, I am reviewing the AWS Lambda function that was automatically created when I configured the Action Group in the Bedrock Agent Builder. This Lambda function is the execution layer for my agent’s deterministic logic.

Lambda Function Overview

The function shown is named:

action_group_quick_start_dipayan-94dzd

This name is generated by Bedrock when I chose “Quick create a new Lambda function” while creating the Action Group. Bedrock also automatically associates this function with the agent, so no additional triggers or integrations are required.

From the Function overview section, I can confirm:

  • The function exists in the same AWS account and region as the agent
  • The execution role has already been created and attached
  • No external triggers (API Gateway, EventBridge, etc.) are needed because the function is invoked directly by the Bedrock Agent

Code Source

In the Code source panel, I can see the Python file (dummy_lambda.py, which I will update later with the required code) that contains the Lambda handler logic.

This code is where:

  • The agent’s Action Group function name is interpreted
  • Input parameters (such as ZipCode) are extracted from the event
  • Business logic is executed (wildfire and rainfall lookup)
  • A response is returned in the Bedrock Agent–compatible format

At this stage, this file represents the contract implementation between the agent and the real-world data logic.

How This Lambda Is Used by the Agent

This Lambda function is not triggered by external events. Instead:

  1. The Bedrock Agent reasons about the user request
  2. The agent decides to invoke the Action Group function
  3. Bedrock invokes this Lambda with a structured event that includes:
     • actionGroup
     • function
     • parameters
  4. The Lambda processes the request and returns a structured response
  5. The agent uses that response to construct the final answer to the user

Because of this tight integration, the Lambda handler must strictly follow the Bedrock Agent Action Group response schema.
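
The required envelope is easy to get wrong, so here is a condensed version of the response builder used in the full Lambda source later in this post. The key detail is that the `TEXT` body must be a string, which is why the payload is JSON-serialized before it is placed in the envelope.

```python
import json

def agent_response(action_group, function, payload, message_version="1.0"):
    """Minimal Bedrock Agent Action Group response envelope."""
    return {
        "messageVersion": message_version,
        "response": {
            "actionGroup": action_group,
            "function": function,
            "functionResponse": {
                # TEXT.body must be a string, so serialize the payload.
                "responseBody": {"TEXT": {"body": json.dumps(payload)}}
            },
        },
    }

resp = agent_response(
    "action_group_quick_start_dipayan",
    "get_fire_rain_data",
    {"zip_code": "94568", "wildfire_prone_score": 42},
)
print(json.loads(resp["response"]["functionResponse"]["responseBody"]["TEXT"]["body"]))
```

If the envelope deviates from this shape (for example, by returning a raw dict as the body), the agent typically fails orchestration rather than degrading gracefully.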

Why This Screen Is Important

This screen confirms that:

  • The Action Group is correctly backed by a Lambda function
  • The Lambda is deployed and ready to receive agent invocations
  • I have full control over the deterministic logic used by the agent
  • I can update the code to improve accuracy without changing the agent configuration

It’s also the place where I troubleshoot issues such as:

  • “Unsupported function” errors
  • Incorrect parameter handling
  • Response formatting problems

Lambda source code is provided below. In its current form, the Lambda function:

  • Accepts a 5-digit U.S. ZIP code passed by the Bedrock Agent as a structured parameter
  • Validates the ZIP code format before processing
  • Retrieves a wildfire-prone score and rainfall totals for the last three years from a controlled dataset
  • Returns results using a Bedrock Agent–compatible response schema, ensuring predictable orchestration
  • Explicitly signals when data is not available so the agent can safely fall back to a Knowledge Base

By handling these steps outside the foundation model, the Lambda ensures deterministic behavior, avoids hallucinations, and keeps the agent’s reasoning grounded in data.

What It Is Designed to Support Next

This Lambda is intentionally structured to evolve beyond the demo dataset and support production-grade workflows. Future enhancements can include:

  • Replacing the in-memory dataset with authoritative sources such as:
     • FEMA or USFS wildfire risk indices
     • NOAA or PRISM rainfall datasets
     • Third-party environmental risk providers
  • Adding temporal intelligence, such as:
     • Multi-year trend analysis
     • Seasonal risk adjustments
     • Confidence scoring based on data freshness
  • Scaling lookups by integrating with:
     • Amazon DynamoDB for low-latency ZIP lookups
     • Amazon Athena or Snowflake for analytical queries
     • API-based data sources for near real-time updates
  • Enhancing governance and observability by:
     • Adding structured error codes
     • Logging metrics for monitoring and audit
     • Applying Bedrock Guardrails for safety and compliance

Because the Lambda already enforces strict input validation and response formatting, these enhancements can be added without changing the agent’s orchestration logic.
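
As one example of such an enhancement, the in-memory dictionary could be swapped for a DynamoDB lookup without touching the agent configuration. The table name and key attribute below are hypothetical, and the stub table lets the logic be exercised without AWS; real DynamoDB returns numbers as `Decimal`, hence the `int()` conversions.

```python
def lookup_zip(table, zip_code):
    """Fetch wildfire/rainfall metrics for one ZIP; returns None when absent.

    `table` is a DynamoDB Table resource (or anything with the same
    get_item signature), keyed on a hypothetical `zip_code` attribute.
    """
    item = table.get_item(Key={"zip_code": zip_code}).get("Item")
    if not item:
        return None
    return {
        "zip_code": zip_code,
        # DynamoDB returns Decimal; normalize to int for the agent payload.
        "wildfire_prone_score": int(item["wildfire_prone_score"]),
        "rainfall_last_3_years_mm": int(item["rainfall_last_3_years_mm"]),
    }

class _StubTable:
    """In-memory stand-in so the lookup can run without AWS."""
    def __init__(self, rows):
        self.rows = rows
    def get_item(self, Key):
        item = self.rows.get(Key["zip_code"])
        return {"Item": item} if item else {}

demo = _StubTable({"94568": {"wildfire_prone_score": 42,
                             "rainfall_last_3_years_mm": 980}})
print(lookup_zip(demo, "94568"))
# In production: table = boto3.resource("dynamodb").Table("wildfire_zip_metrics")
```

Because `lookup_zip` returns `None` for unknown ZIPs, the existing "no data available" branch in the handler (and therefore the Knowledge Base fallback) continues to work unchanged.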

Why This Design Matters

This design separates responsibilities cleanly:

  • The Bedrock Agent handles reasoning, planning, and decision-making
  • The Lambda function handles facts, calculations, and structured data retrieval

As a result, the agent can safely scale to more complex environmental risk scenarios while maintaining accuracy, transparency, and control.

import json
import logging
from typing import Dict, Any

logger = logging.getLogger()
logger.setLevel(logging.INFO)

DEMO_ZIP_DATA = {
    "94568": {"wildfire_prone_score": 42, "rainfall_last_3_years_mm": 980},
    "94102": {"wildfire_prone_score": 12, "rainfall_last_3_years_mm": 1560},
    "95630": {"wildfire_prone_score": 55, "rainfall_last_3_years_mm": 1120},
}

SUPPORTED_FUNCTIONS = {"get_fire_rain_data", "get_wildfire_and_rainfall"}  # accept both

def _params_to_dict(parameters):
    out = {}
    for p in parameters or []:
        name = p.get("name")
        value = p.get("value")
        if name:
            out[name] = value
    return out

def _agent_text_response(action_group: str, function: str, message_version: str, payload: Dict[str, Any]) -> Dict[str, Any]:
    # Bedrock Agents are most reliable with TEXT.body as a string
    return {
        "response": {
            "actionGroup": action_group,
            "function": function,
            "functionResponse": {
                "responseBody": {
                    "TEXT": {
                        "body": json.dumps(payload)
                    }
                }
            }
        },
        "messageVersion": message_version
    }

def lambda_handler(event: Dict[str, Any], context: Any) -> Dict[str, Any]:
    action_group = event["actionGroup"]
    function = event["function"]
    message_version = str(event.get("messageVersion", "1.0"))

    params = _params_to_dict(event.get("parameters", []))

    # Accept either ZipCode (as in your trace) or zip_code
    zip_code = str(params.get("ZipCode") or params.get("zip_code") or "").strip()

    logger.info("ActionGroup=%s Function=%s Zip=%s Params=%s", action_group, function, zip_code, params)

    if function not in SUPPORTED_FUNCTIONS:
        return _agent_text_response(action_group, function, message_version, {
            "error": "Unsupported function",
            "supported_functions": sorted(list(SUPPORTED_FUNCTIONS))
        })

    if (not zip_code.isdigit()) or (len(zip_code) != 5):
        return _agent_text_response(action_group, function, message_version, {
            "error": "Invalid ZIP code",
            "message": "Provide a valid 5-digit U.S. ZIP code.",
            "received": zip_code
        })

    data = DEMO_ZIP_DATA.get(zip_code)
    if not data:
        return _agent_text_response(action_group, function, message_version, {
            "zip_code": zip_code,
            "wildfire_prone_score": None,
            "rainfall_last_3_years_mm": None,
            "message": "No data available for this ZIP code in the current dataset."
        })

    return _agent_text_response(action_group, function, message_version, {
        "zip_code": zip_code,
        "wildfire_prone_score": data["wildfire_prone_score"],
        "rainfall_last_3_years_mm": data["rainfall_last_3_years_mm"],
        "units": {"rainfall": "mm", "wildfire_prone_score": "0-100"}
    })


Step 4: Create Knowledge Base and Data Source

At this stage, I am reviewing the Amazon Bedrock Knowledge Base that I created to support the fallback retrieval logic in my agent. This Knowledge Base is named knowledge-base-dipayan, and it is used when structured data is not returned by the Action Group.

Knowledge Base Overview

From the Knowledge Base overview section, I can confirm the following:

  • Knowledge Base name: knowledge-base-dipayan
  • Status: Available
  • Knowledge Base ID: MM1YREPZVM
  • RAG type: Vector store

The “Vector store” designation indicates that this Knowledge Base uses Retrieval-Augmented Generation (RAG). Documents stored in the data source are embedded, indexed, and retrieved at runtime based on semantic similarity.

Data Source Configuration (CSV in S3)

Under the Data source section, I can see that:

  • The data source type is Amazon S3
  • The source link points to an S3 location (s3://…)
  • The data source status is Available
  • The last sync completed successfully

This S3 location contains the CSV file I uploaded earlier. The CSV holds structured wildfire data, such as:

  • ZIP code
  • Year
  • Area burned (acres)

During ingestion:

  • Bedrock reads the CSV file from S3
  • The content is parsed and chunked (using the default text chunking strategy)
  • Embeddings are generated for each chunk
  • Those embeddings are stored in the underlying vector store

Once the sync is complete, the data becomes searchable by the Knowledge Base at runtime.

How the CSV Data Is Used by the Agent

This Knowledge Base is not the primary data source for the agent. Instead, it serves as a fallback mechanism.

The runtime behavior looks like this:

  • The user provides a ZIP code
  • The agent first invokes the Action Group (Lambda) for a deterministic lookup
  • If the Lambda returns no data for that ZIP code
  • The agent queries knowledge-base-dipayan
  • The Knowledge Base searches the embedded CSV content for matching ZIP-level records
  • If matching data exists, the agent summarizes and presents it
  • If no relevant records are found, the agent explicitly states that the data is not available

This approach avoids hallucination while still allowing the agent to respond meaningfully when structured lookups fail.

Why This Setup Matters

This screenshot confirms several important things:

  • The Knowledge Base is successfully created and available
  • The CSV data exists in S3 and has been ingested
  • The Knowledge Base is configured for vector-based retrieval
  • The agent can safely rely on this Knowledge Base for historical or unstructured fallback data

It also reinforces a key design decision:
Knowledge Bases work best for reference and historical data, while Action Groups handle deterministic, structured queries.

Important Note on CSV Retrieval

Although the CSV is ingested successfully, semantic search over CSV data can be less precise than over text or JSONL formats. For best retrieval accuracy—especially when querying by ZIP code—it’s often helpful to:

  • Convert CSV rows into JSONL or line-based text
  • Ensure ZIP codes appear explicitly and consistently in the text
  • Use simple search queries (e.g., just the ZIP code)

This explains why some ZIP codes may return results while others do not, even though the data exists in the CSV.
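
A minimal sketch of that CSV-to-JSONL conversion is shown below, assuming columns named `zip_code`, `year`, and `area_burned_acres` (adjust to your own headers). Each row becomes one JSONL line carrying a natural-language `text` field so the ZIP code appears explicitly in the embedded content.

```python
import csv
import io
import json

def csv_to_jsonl(csv_text):
    """Turn each CSV row into one self-describing JSONL line."""
    lines = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        # Embed the ZIP code in a sentence so semantic search can match it.
        row["text"] = (
            f"ZIP code {row['zip_code']}: {row['area_burned_acres']} "
            f"acres burned in {row['year']}."
        )
        lines.append(json.dumps(row))
    return "\n".join(lines)

sample = "zip_code,year,area_burned_acres\n90000,2021,1500\n90000,2022,2300\n"
print(csv_to_jsonl(sample))
```

Uploading the resulting JSONL file to the S3 data source (and re-syncing) tends to make ZIP-level retrieval noticeably more reliable than raw CSV rows.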

Step 5: Configuring the Agent in the Bedrock Agent Builder

At this stage, I am in the Agent builder screen, where I define the core behavior and identity of my Amazon Bedrock Agent. This is the central configuration that controls how the agent reasons, which model it uses, and how it orchestrates tools and knowledge bases.

Agent Details

I start by setting the Agent name, in this case:

agent-quick-start-dipayan

This name uniquely identifies the agent within my account and is used when managing versions, aliases, and testing.

Optionally, I can add an Agent description to document the agent’s purpose. While not required, this is helpful when multiple agents exist in the same environment.

Agent Resource Role

Next, I configure the Agent resource role.

I choose to use an existing service role, which grants the agent permissions to:

  • Invoke foundation models
  • Call Action Groups (Lambda functions)
  • Query Knowledge Bases
  • Emit logs and traces

This role is critical—without the correct permissions, the agent would fail during orchestration or tool invocation.

Selecting the Foundation Model

In the Select model section, I choose Amazon Nova Pro 1.0 as the foundation model.

This model provides:

  • Strong reasoning capabilities
  • Reliable tool orchestration
  • Consistent responses for structured workflows

The selected model becomes the reasoning engine for the agent, while deterministic logic is delegated to Action Groups.

Defining Instructions for the Agent

The most important part of this screen is the Instructions for the Agent section.

Here, I explicitly define the rules and decision logic the agent must follow. In my case, the instructions specify that the agent should:

  • Accept a ZIP code from the user
  • Ask the user to provide a correct ZIP code if it is not a valid 5-digit U.S. ZIP
  • Determine wildfire risk using the get_fire_rain_data Action Group function
  • Return only the wildfire-prone percentage when it is greater than or equal to 30
  • Return both wildfire-prone percentage and rainfall data when the risk is below 30
  • If the Action Group returns no data, search the knowledge-base-dipayan Knowledge Base
  • Provide historical area-burned data when available
  • Explicitly state when no data is available, rather than making assumptions
The raw instruction text entered in the console:

Accept a ZIP code from the user.
Ask the user to provide a correct US ZIP code if it is not 5 digits.
Determine wildfire risk based on the get_fire_rain_data function.
Return a wildfire-prone percentage only; do not mention rain data in the response.
If the wildfire-prone percentage is less than 30, then also provide the amount of rain in mm for the last 3 years.
If the zipcode doesn't return data from get_fire_rain_data, then search in the knowledge base knowledge-base-dipayan and provide area burned by year.
Otherwise mention that data is not available.

These instructions act as the control plane for the agent’s reasoning, ensuring that:

  • The agent does not hallucinate values
  • Tools are invoked deterministically
  • Fallback logic is predictable and auditable

Why This Step Is Critical

This screen defines how the agent thinks and behaves. Even with perfectly implemented Action Groups and Knowledge Bases, unclear or incomplete instructions can cause the agent to:

  • Skip tool invocation
  • Use the wrong fallback
  • Return incomplete or misleading responses

By clearly encoding business rules and orchestration logic here, I ensure the agent behaves consistently across different user inputs.

What Happens Next

Once this configuration is saved:

  • I can create a new agent version
  • Attach or update an alias
  • Test the agent end-to-end using real ZIP codes
  • Inspect traces to validate tool calls and Knowledge Base lookups

This completes the reasoning layer of the agent and connects it to the execution and data layers configured earlier.
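
End-to-end testing can also be scripted with the `bedrock-agent-runtime` InvokeAgent API. In the sketch below the agent and alias IDs are placeholders, the live call is kept in a helper that is never invoked, and the stream-assembly logic is exercised against a fabricated event stream whose shape mirrors the `completion` event stream.

```python
def collect_completion(event_stream):
    """Concatenate the final answer text from an InvokeAgent event stream."""
    parts = []
    for event in event_stream:
        chunk = event.get("chunk")
        if chunk and "bytes" in chunk:
            parts.append(chunk["bytes"].decode("utf-8"))
    return "".join(parts)

def invoke_wildfire_agent(zip_code):
    """Live call (needs AWS credentials and real IDs); not invoked here."""
    import boto3
    client = boto3.client("bedrock-agent-runtime")
    resp = client.invoke_agent(
        agentId="AGENT_ID",        # placeholder
        agentAliasId="ALIAS_ID",   # placeholder
        sessionId="test-session-1",
        inputText=zip_code,
        enableTrace=True,          # surfaces the same traces the console shows
    )
    return collect_completion(resp["completion"])

# Offline check against a fabricated event stream:
fake_stream = [{"chunk": {"bytes": b"The wildfire-prone percentage for 94568 is 42%."}}]
print(collect_completion(fake_stream))
```

With `enableTrace=True`, trace events are interleaved with text chunks in the same stream, which is what makes scripted validation of the orchestration path possible.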

Step 6: Test and Validate Created Agent

At this stage, I am validating the end-to-end behavior of my Amazon Bedrock Agent using the Test Agent experience. This is where I confirm that the agent is not only producing the correct answer, but also following the intended orchestration logic behind the scenes.

Confirming Correct Agent Behavior

On the left side of the screen, I can see the test interaction:

I provided the ZIP code 94568 as user input.

The agent explicitly confirms:

“You’ve given parameters: ZipCode=‘94568’.”

The agent then states:

“This function is a part of action group: action_group_quick_start_dipayan.”

This confirms that:

  • The agent correctly validated the input
  • The agent selected the Action Group as the next step
  • The correct function was invoked with the expected parameter

The final response returned is:

“The wildfire-prone percentage for the ZIP code 94568 is 42%.”

This matches the deterministic output from the Lambda function, which means the agent is behaving exactly as designed.

Why the Trace View Is Critical

The most important part of this screen is the Trace panel on the right.

The Trace view provides deep visibility into how the agent reasoned and acted, step by step. Instead of treating the agent as a black box, I can inspect every phase of execution.

Understanding the Trace Sections

In this screenshot, I am viewing the Orchestration and Knowledge Base trace, which includes:

Routing Trace
Shows how the agent interpreted the user request and decided which path to take.

Orchestration and Knowledge Base Trace (highlighted)
This section confirms:

  • The agent selected the Action Group
  • The Knowledge Base was not used in this scenario
  • The result came directly from the Lambda function

Post-Processing Trace
Shows how the agent formatted and returned the final response to the user.

Each trace step can be expanded to inspect:

  • Inputs passed to the Action Group
  • Outputs returned by Lambda
  • Decisions made by the agent between steps

What This Trace Confirms

From this trace, I can confidently verify that:

  • The agent did not hallucinate the wildfire percentage
  • The agent correctly invoked the Action Group
  • The agent did not unnecessarily query the Knowledge Base
  • The final response aligns with the business rules defined in the agent instructions

This level of transparency is essential for building trustworthy, auditable, and production-ready agents.

Why This Matters for Builders

Without tracing, it would be difficult to answer questions like:

  • Why did the agent call a tool?
  • Why didn’t it use the Knowledge Base?
  • Where did this number come from?

The Trace view allows me to:

  • Debug orchestration issues quickly
  • Validate tool selection logic
  • Ensure fallback behavior works as intended
  • Prove deterministic execution to stakeholders
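
When validating traces programmatically, a small classifier can answer "which data path did the agent take?" automatically. The field names below follow the InvokeAgent orchestration-trace shape (`actionGroupInvocationInput` / `knowledgeBaseLookupInput`); treat them as an assumption and verify against your own trace output.

```python
def tools_used(trace_events):
    """Return the set of data paths that appeared in a list of trace events."""
    used = set()
    for ev in trace_events:
        inv = (ev.get("trace", {})
                 .get("orchestrationTrace", {})
                 .get("invocationInput", {}))
        if "actionGroupInvocationInput" in inv:
            used.add("action_group")
        if "knowledgeBaseLookupInput" in inv:
            used.add("knowledge_base")
    return used

# Fabricated trace event showing an Action Group invocation:
events = [
    {"trace": {"orchestrationTrace": {"invocationInput":
        {"actionGroupInvocationInput": {"function": "get_fire_rain_data"}}}}},
]
print(tools_used(events))  # {'action_group'}
```

For the 94568 test, a check like this should report only `action_group`; for the 90000 fallback test in the next step, both paths should appear.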

Step 7: Validating Knowledge Base Fallback Using Multi-Step Traces

In this test run, I am validating how my Amazon Bedrock Agent behaves when structured data is not available from the Action Group and the agent must fall back to the Knowledge Base. This scenario is critical to ensure the agent follows the intended decision logic rather than hallucinating results.

The user input in this case is the ZIP code 90000.

What I Observe on the Left Panel

After providing the ZIP code, the agent first asks for confirmation before running the Action Group function:

“Are you sure you want to run this action group function: get_fire_rain_data(ZipCode=‘90000’)?”

This confirms:

  • The agent validated the input
  • The agent selected the correct Action Group
  • Confirmation is enabled for this function

Once confirmed, the agent returns a response stating:

“The area burned by wildfires for ZIP code 90000 is as follows…”

This response does not come from the Action Group. Instead, it is sourced from the Knowledge Base, which is exactly the fallback behavior I designed.

Understanding the Three Trace Steps

The Trace panel on the right shows three distinct trace steps, each representing a phase of the agent’s orchestration logic.

Trace Step 1 – Routing and Initial Orchestration

In the first trace step, the agent:

  • Interprets the user’s input (90000)
  • Validates that it is a correctly formatted ZIP code
  • Determines that it must invoke the Action Group as the primary data source

This step answers the question:
“What should I do next?”

At this point, no data has been retrieved yet—the agent is simply planning.

Trace Step 2 – Action Group Execution and Evaluation

In the second trace step, the agent:

  • Invokes the Action Group function (get_fire_rain_data)
  • Passes ZipCode=90000 as a parameter
  • Receives a response indicating that no structured data is available for this ZIP code

This step is crucial because:

  • The Action Group behaves deterministically
  • It explicitly signals the absence of data
  • No assumptions or inferred values are introduced

This trace step answers the question:
“Did my primary data source return usable results?”

The answer here is no, which triggers the fallback logic.

Trace Step 3 – Knowledge Base Retrieval and Response Construction

In the third trace step, the agent:

  • Decides to query the Knowledge Base
  • Searches for historical wildfire information related to ZIP code 90000
  • Retrieves relevant records (area burned by year)
  • Summarizes those records into a user-friendly response

This is where the agent leverages Retrieval-Augmented Generation (RAG):

  • The response is grounded in ingested data
  • The agent summarizes instead of quoting raw documents
  • The output follows the rules defined in the agent instructions

This step answers the question:
“How do I still help the user when structured data is missing?”

Why These Three Trace Steps Matter

Together, these trace steps prove that:

  • The agent follows a clear decision hierarchy
  • Structured tools are used before unstructured search
  • Knowledge Base queries are used only when necessary
  • Every decision is traceable and auditable

This is exactly the behavior expected from a production-ready agent.
