
Langchain Agent with Structured Tools Schema & Langfuse using AWS Bedrock Nova 🤖

AI agents are becoming the brains of modern apps: they make decisions, use tools, and respond intelligently in real time. But to build them properly at scale, you need a solid agent framework (e.g. LangChain/LangGraph, AWS Strands Agents, etc.).

In this post, we'll look at how LangChain agents use structured tool calling to do that. We'll also configure Langfuse to trace tool calls, agent flow, and inputs/outputs.

What is Tool Calling?

  • Tool calling is when an AI agent invokes external functions or APIs to perform actions or retrieve data.
  • The agent decides to call a tool based on the user’s request and the available tool descriptions.
  • It generates a structured request (following a schema) with the required parameters for that tool.
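Conceptually, a tool call is just a small structured payload: the tool's name plus arguments that must satisfy the tool's schema. A framework-free sketch of the idea (the registry and payload format below are hypothetical, simplified stand-ins, not LangChain's internal representation):

```python
# Toy tool registry: each tool declares the parameters it requires.
# (Hypothetical -- real frameworks derive this schema from function
# signatures or Pydantic models.)
TOOL_SCHEMAS = {
    "search_books": {"required": ["mood"]},
    "get_trending": {"required": ["media_type"]},
}

def validate_tool_call(call: dict) -> bool:
    """Check that a model-emitted tool call names a known tool
    and supplies every required argument."""
    schema = TOOL_SCHEMAS.get(call.get("name"))
    if schema is None:
        return False
    return all(arg in call.get("args", {}) for arg in schema["required"])

# A well-formed call passes; one missing its required argument is rejected.
print(validate_tool_call({"name": "search_books", "args": {"mood": "hopeful"}}))  # True
print(validate_tool_call({"name": "search_books", "args": {}}))                   # False
```

This validation step, done for you by the framework, is what turns a free-text model guess into a checkable, executable request.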

Why Is Structured Tool Calling Important?

  • Tool calling with a structured schema ensures that inputs and outputs are predictable, validated, and machine-readable, reducing ambiguity, hallucinations, and runtime errors.
  • It also enables reliable automation and easier integration by enforcing consistent structured data.

Whether you’re curious about how agents actually work or looking to build one yourself, this is your starting point 😉.

AWS Bedrock, Agent, Tool Calls

Table of Contents

  • Dependencies & Configuration
  • Agent with AWS Bedrock, Nova Pro
  • Tools
  • Structured Output Schema
  • Langfuse Handler
  • All Code & Demo
  • Conclusion

Dependencies & Configuration

  • Install the dependencies in a virtual environment:
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
# deactivate

requirements.txt:

langchain>=1.0.0
langchain-aws>=1.2.0
langgraph>=1.0.0
python-dotenv>=1.0.0
boto3>=1.34.0
langfuse>=4.0.0
  • Enable AWS Bedrock model access in your region (e.g. eu-central-1, us-east-1)

    • AWS Bedrock > Bedrock Configuration > Model Access > Amazon Nova Pro, or Claude 3.7 Sonnet
  • In this post we'll use Amazon Nova Pro, because AWS serves it in multiple regions.

  • After enabling model access, grant your IAM user or role permission to access AWS Bedrock services (e.g. the AmazonBedrockFullAccess managed policy)

  • Two options to provide AWS credentials for reaching the Bedrock model:

    • AWS CLI: run aws configure to create the config and credentials files
    • Environment variables: add a .env file:
AWS_ACCESS_KEY_ID=PASTE_YOUR_ACCESS_KEY_ID_HERE
AWS_SECRET_ACCESS_KEY=PASTE_YOUR_SECRET_ACCESS_KEY_HERE
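Under the hood, python-dotenv parses KEY=VALUE lines from that file into the process environment, where boto3 (used by ChatBedrockConverse) picks them up. A simplified, stdlib-only stand-in for load_dotenv() shows the idea (illustrative only; the real library also handles quoting, export prefixes, and variable interpolation):

```python
import os

def load_env_file(path: str) -> None:
    """Minimal .env parser: read KEY=VALUE lines into os.environ,
    skipping blanks and comments. Existing variables are not overwritten.
    (Simplified sketch of what python-dotenv's load_dotenv() does.)"""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

In the real app you simply call load_dotenv() from python-dotenv once at startup, as the full code at the end of this post does.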

Agent with AWS Bedrock, Nova Pro

LangChain agent with Bedrock:

from langchain_aws import ChatBedrockConverse
from langchain.agents import create_agent
from langchain.agents.structured_output import ToolStrategy

llm = ChatBedrockConverse(
    model="us.amazon.nova-pro-v1:0",
    temperature=0,
    # Reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION from .env
)

# search_books, search_movies, get_trending are defined in the Tools section below
tools = [search_books, search_movies, get_trending]

agent = create_agent(
    model=llm,
    tools=tools,
    response_format=ToolStrategy(RecommendationResult),
    system_prompt=(
        "You are a creative media concierge. "
        "Detect the user's mood, search for matching books and movies using your tools, "
        "then return a fully structured recommendation."
    ),
)

Tools

Tool definitions as functions:

from typing import Literal

from langchain_core.tools import tool

# BOOK_DB and MOVIE_DB are mock catalogues, defined in the full code below

@tool
def search_books(mood: str) -> list[dict]:
    """Search the book catalogue by mood keyword."""
    mood = mood.lower()
    return [b for b in BOOK_DB if any(mood in m for m in b["moods"])]

@tool
def search_movies(mood: str) -> list[dict]:
    """Search the movie catalogue by mood keyword."""
    mood = mood.lower()
    return [m for m in MOVIE_DB if any(mood in t for t in m["moods"])]


@tool
def get_trending(media_type: Literal["book", "movie"]) -> list[dict]:
    """Return the 3 most recently released items for the given media type."""
    db = BOOK_DB if media_type == "book" else MOVIE_DB
    return sorted(db, key=lambda x: x["year"], reverse=True)[:3]
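Stripped of the @tool decorator, the matching logic is a plain substring check against each item's mood tags. A self-contained version with a two-item mock catalogue (the data here is a trimmed sample, not the full BOOK_DB):

```python
BOOK_DB = [
    {"title": "The Midnight Library", "moods": ["melancholic", "hopeful", "reflective"]},
    {"title": "Atomic Habits",        "moods": ["motivated", "focused", "uplifting"]},
]

def search_books(mood: str) -> list[dict]:
    """Return books whose mood tags contain the query as a substring,
    so 'hope' matches the tag 'hopeful'."""
    mood = mood.lower()
    return [b for b in BOOK_DB if any(mood in m for m in b["moods"])]

print([b["title"] for b in search_books("hope")])  # ['The Midnight Library']
```

Substring matching (rather than exact equality) is a deliberately forgiving choice: the LLM extracts the mood from free text, so partial queries like "hope" or "focus" still hit the right tags.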

Structured Output Schema

As noted earlier, a structured schema keeps the agent's final answer predictable, validated, and machine-readable, so downstream systems can consume it without ambiguity.

We can define the structured output with Pydantic BaseModel classes and Field descriptions.

from typing import Literal

from pydantic import BaseModel, Field

# Structured output schema
class MediaItem(BaseModel):
    """A single book or movie recommendation."""
    title: str           = Field(description="Title of the book or movie")
    creator: str         = Field(description="Author (book) or director (movie)")
    year: int            = Field(description="Year of release or publication")
    media_type: Literal["book", "movie"] = Field(description="Type of media")
    reason: str          = Field(description="Why this matches the user's mood, 1-2 sentences")
    mood_tags: list[str] = Field(description="Mood tags, lowercase, 1-3 words each")


class RecommendationResult(BaseModel):
    """Structured recommendation response returned by the agent."""
    detected_mood: str               = Field(description="The mood detected from the user's message")
    confidence: float                = Field(description="Confidence score 0.0-1.0", ge=0.0, le=1.0)
    recommendations: list[MediaItem] = Field(description="List of 2-4 recommendations")
    summary: str                     = Field(description="One-sentence summary of the recommendations")

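ToolStrategy hands this schema to the model as a tool definition, and Pydantic enforces the field constraints (e.g. ge=0.0, le=1.0 on confidence) when the response is parsed. A framework-free sketch of that bounds check, just to make the constraint concrete (Pydantic does this for you; you would not write it by hand):

```python
def check_confidence(value: float) -> float:
    """Mimic the Field(ge=0.0, le=1.0) constraint on `confidence`:
    reject out-of-range scores instead of silently accepting them."""
    if not 0.0 <= value <= 1.0:
        raise ValueError(f"confidence must be in [0.0, 1.0], got {value}")
    return value

print(check_confidence(0.85))  # 0.85
try:
    check_confidence(1.7)
except ValueError as e:
    print(e)  # confidence must be in [0.0, 1.0], got 1.7
```

When validation fails, the framework can surface the error back to the model for a retry instead of letting a malformed score flow downstream.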

Langfuse Handler

  • Langfuse helps by providing detailed tracing, logging, and evaluation of LLM interactions, making it easier to debug and improve agent behavior.
  • It also enables monitoring of performance, costs, and user flows, which is critical for optimizing and maintaining reliable AI applications in production.
import os

from langfuse import Langfuse
from langfuse.langchain import CallbackHandler

langfuse = Langfuse(
    public_key = os.getenv("LANGFUSE_PUBLIC_KEY"),
    secret_key = os.getenv("LANGFUSE_SECRET_KEY"),
    host       = os.getenv("LANGFUSE_BASE_URL"),
)
langfuse_handler = CallbackHandler()

result = agent.invoke({
            "messages": [{"role": "user", "content": user_message}]
        },
        {
            "callbacks": [langfuse_handler]   # if you don't want to use langfuse, remove callbacks and handler
        }
    )
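Conceptually, a LangChain callback handler is just an object with hooks that the framework invokes at each step of the run. A toy, in-memory version of that pattern (illustrative only; Langfuse's CallbackHandler implements the real LangChain callback interface and ships the events to the Langfuse backend):

```python
class TraceHandler:
    """Toy callback handler: records tool-call events in a list.
    (Pattern sketch only -- not Langfuse's actual implementation.)"""
    def __init__(self):
        self.events = []

    def on_tool_start(self, tool_name: str, tool_input: dict) -> None:
        # Called by the framework just before a tool runs
        self.events.append(("tool_start", tool_name, tool_input))

    def on_tool_end(self, tool_name: str, output) -> None:
        # Called with the tool's result once it finishes
        self.events.append(("tool_end", tool_name, output))

handler = TraceHandler()
handler.on_tool_start("search_books", {"mood": "hopeful"})
handler.on_tool_end("search_books", ["The Midnight Library"])
print(len(handler.events))  # 2
```

Because every tool start/end passes through these hooks, a handler can reconstruct the full agent trace: which tools ran, in what order, with which inputs and outputs.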

All Code & Demo

GitHub Link: Project on GitHub

Agent App:

import os
from pydantic import BaseModel, Field
from typing import Literal
from langchain_aws import ChatBedrockConverse
from langchain_core.tools import tool
from langchain.agents import create_agent
from langchain.agents.structured_output import ToolStrategy
from langfuse import Langfuse
from langfuse.langchain import CallbackHandler
from dotenv import load_dotenv
load_dotenv()

langfuse = Langfuse(
    public_key = os.getenv("LANGFUSE_PUBLIC_KEY"),
    secret_key = os.getenv("LANGFUSE_SECRET_KEY"),
    host       = os.getenv("LANGFUSE_BASE_URL"),
)
langfuse_handler = CallbackHandler()

# Structured output schema
class MediaItem(BaseModel):
    """A single book or movie recommendation."""
    title: str           = Field(description="Title of the book or movie")
    creator: str         = Field(description="Author (book) or director (movie)")
    year: int            = Field(description="Year of release or publication")
    media_type: Literal["book", "movie"] = Field(description="Type of media")
    reason: str          = Field(description="Why this matches the user's mood, 1-2 sentences")
    mood_tags: list[str] = Field(description="Mood tags, lowercase, 1-3 words each")


class RecommendationResult(BaseModel):
    """Structured recommendation response returned by the agent."""
    detected_mood: str               = Field(description="The mood detected from the user's message")
    confidence: float                = Field(description="Confidence score 0.0-1.0", ge=0.0, le=1.0)
    recommendations: list[MediaItem] = Field(description="List of 2-4 recommendations")
    summary: str                     = Field(description="One-sentence summary of the recommendations")


# Mock catalogue tools
BOOK_DB = [
    {"title": "The Midnight Library",   "creator": "Matt Haig",       "year": 2020, "moods": ["melancholic", "hopeful", "reflective"]},
    {"title": "Project Hail Mary",      "creator": "Andy Weir",       "year": 2021, "moods": ["adventurous", "curious", "uplifting"]},
    {"title": "Anxious People",         "creator": "Fredrik Backman", "year": 2019, "moods": ["funny", "warm", "melancholic"]},
    {"title": "The Hitchhiker's Guide", "creator": "Douglas Adams",   "year": 1979, "moods": ["funny", "adventurous", "curious"]},
    {"title": "Atomic Habits",          "creator": "James Clear",     "year": 2018, "moods": ["motivated", "focused", "uplifting"]},
]

MOVIE_DB = [
    {"title": "Everything Everywhere All at Once", "creator": "Daniels",           "year": 2022, "moods": ["adventurous", "funny", "melancholic", "reflective"]},
    {"title": "Soul",                              "creator": "Pete Docter",        "year": 2020, "moods": ["reflective", "hopeful", "warm"]},
    {"title": "The Grand Budapest Hotel",          "creator": "Wes Anderson",       "year": 2014, "moods": ["funny", "adventurous", "warm"]},
    {"title": "Interstellar",                      "creator": "Christopher Nolan",  "year": 2014, "moods": ["curious", "adventurous", "melancholic"]},
    {"title": "Amelie",                            "creator": "Jean-Pierre Jeunet", "year": 2001, "moods": ["warm", "hopeful", "funny"]},
]


@tool
def search_books(mood: str) -> list[dict]:
    """Search the book catalogue by mood keyword."""
    mood = mood.lower()
    return [b for b in BOOK_DB if any(mood in m for m in b["moods"])]


@tool
def search_movies(mood: str) -> list[dict]:
    """Search the movie catalogue by mood keyword."""
    mood = mood.lower()
    return [m for m in MOVIE_DB if any(mood in t for t in m["moods"])]


@tool
def get_trending(media_type: Literal["book", "movie"]) -> list[dict]:
    """Return the 3 most recently released items for the given media type."""
    db = BOOK_DB if media_type == "book" else MOVIE_DB
    return sorted(db, key=lambda x: x["year"], reverse=True)[:3]

llm = ChatBedrockConverse(
    model="us.amazon.nova-pro-v1:0",
    temperature=0,
    # Reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION from .env
)

tools = [search_books, search_movies, get_trending]

agent = create_agent(
    model=llm,
    tools=tools,
    response_format=ToolStrategy(RecommendationResult),
    system_prompt=(
        "You are a creative media concierge. "
        "Detect the user's mood, search for matching books and movies using your tools, "
        "then return a fully structured recommendation."
    ),
)

def recommend(user_message: str) -> RecommendationResult:
    result = agent.invoke({
            "messages": [{"role": "user", "content": user_message}]
        },
        {
            "callbacks": [langfuse_handler]   # if you don't want to use langfuse, remove callbacks and handler
        }
    )
    return result["structured_response"]

if __name__ == "__main__":
    queries = [
        "I'm feeling a bit low today and want something that lifts my spirits.",
        "I'm in a curious and adventurous mood — blow my mind!",
        "I want something funny and light after a long week.",
    ]

    for query in queries:
        print(f"\n{'='*65}")
        print(f"USER    : {query}")
        rec: RecommendationResult = recommend(query)
        print(f"MOOD    : {rec.detected_mood}  (confidence: {rec.confidence:.0%})")
        print(f"SUMMARY : {rec.summary}")
        for item in rec.recommendations:
            icon = "book" if item.media_type == "book" else "film"
            print(f"\n  [{icon}]  {item.title} ({item.year}) — {item.creator}")
            print(f"          {item.reason}")
            print(f"          tags: {', '.join(item.mood_tags)}")

Run agent.py:

python agent.py

Demo

Demo Output on GitHub (better resolution)

Output

Langfuse on GitHub (better resolution)

Langfuse

Conclusion

In this post, we covered:

  • how to create an agent with a structured tool schema,
  • how to trace and debug agent runs with Langfuse.

If you found the tutorial interesting, I'd love to hear your thoughts. Feel free to share your reactions or leave a comment; I truly value your input and engagement 😉

For other posts 👉 https://dev.to/omerberatsezer 🧐


Your comments 🤔

  • Which framework(s) are you using to develop AI agents (e.g. AWS Strands, LangChain/LangGraph, etc.)? Please share your experience and interests in the comments.
  • What do you think about tools with structured schemas?

Top comments (1)

Ömer Berat Sezer (AWS Community Builders):

Structured tools tell the AI exactly what to send, so it makes fewer mistakes.
Without them, the AI guesses and often gets inputs wrong.
So: structured => more reliable, unstructured => error-prone.