DEV Community

丁久

Posted on • Originally published at dingjiu1989-hue.github.io

Tool Use Patterns: Function Calling, Structured Tools, Multi-Step Reasoning

This article was originally published on AI Study Room. For the full version with working code examples and related articles, visit the original post.

Introduction

Tool use, or function calling, enables LLMs to interact with external systems: query databases, call APIs, execute code, and retrieve information. This capability transforms LLMs from text generators into autonomous agents. This article covers the essential patterns for defining, invoking, and chaining tool calls in production systems.

Defining Tools

Every tool needs a clear schema that the LLM can understand and the application can execute:

from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "search_documents",
            "description": "Search internal documents by keyword. Returns relevant snippets with metadata.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "Search query, use specific terms for better results",
                    },
                    "max_results": {
                        "type": "integer",
                        "description": "Number of results to return (1-20)",
                        "minimum": 1,
                        "maximum": 20,
                    },
                    "filters": {
                        "type": "object",
                        "properties": {
                            "date_from": {"type": "string", "format": "date"},
                            "department": {"type": "string"},
                        },
                    },
                },
                "required": ["query"],
            },
        },
    }
]

Key principles: use descriptive parameter names with clear descriptions, set proper type constraints, and provide defaults for optional parameters. The LLM uses these descriptions to decide which tool to call and with what arguments.
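Rather than writing these schemas by hand, you can derive them from a Pydantic model (the `BaseModel` import above hints at this). A minimal sketch, assuming Pydantic v2; the `SearchDocuments` model name is illustrative:

```python
from pydantic import BaseModel, Field

class SearchDocuments(BaseModel):
    """Search internal documents by keyword. Returns relevant snippets with metadata."""

    query: str = Field(description="Search query, use specific terms for better results")
    max_results: int = Field(default=5, ge=1, le=20, description="Number of results to return (1-20)")

# model_json_schema() emits the JSON Schema that goes under "parameters",
# with ge/le mapped to minimum/maximum and non-defaulted fields marked required.
search_tool = {
    "type": "function",
    "function": {
        "name": "search_documents",
        "description": SearchDocuments.__doc__,
        "parameters": SearchDocuments.model_json_schema(),
    },
}
```

This keeps the schema and your validation logic in one place: the same model that documents the tool can parse and validate the arguments the LLM sends back.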

Function Calling Loop

The standard pattern is a loop: generate, check for tool calls, execute, and feed results back:

import json

def tool_use_loop(messages: list, tools: list, max_turns=10):
    for turn in range(max_turns):
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            tools=tools,
            tool_choice="auto",
        )
        message = response.choices[0].message
        messages.append(message)

        # No tool calls means the model has produced its final answer.
        if not message.tool_calls:
            return message.content

        # Execute each requested tool and append its result to the history.
        for tool_call in message.tool_calls:
            result = execute_tool(tool_call.function.name, tool_call.function.arguments)
            messages.append({
                "tool_call_id": tool_call.id,
                "role": "tool",
                "content": str(result),
            })

    return "Max turns reached"

def execute_tool(name: str, args_json: str):
    args = json.loads(args_json)
    if name == "search_documents":
        return search_documents(**args)
    elif name == "calculate":
        return calculate(**args)
    raise ValueError(f"Unknown tool: {name}")

The LLM sees the tool result as new context and decides whether to call another tool or produce a final answer.
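Because the model treats the tool result as ordinary context, it can also recover from failures if you return errors as text instead of raising. One way to harden the dispatcher above is a registry-based variant (`safe_execute_tool` and `registry` are illustrative names, not part of the original code):

```python
import json

def safe_execute_tool(name: str, args_json: str, registry: dict) -> str:
    """Run a tool, returning errors as text so the model can self-correct."""
    try:
        args = json.loads(args_json)
    except json.JSONDecodeError as e:
        return f"Error: arguments were not valid JSON ({e})"

    fn = registry.get(name)
    if fn is None:
        return f"Error: unknown tool '{name}'. Available tools: {sorted(registry)}"

    try:
        return json.dumps(fn(**args))
    except TypeError as e:
        return f"Error: bad arguments for '{name}': {e}"
```

Feeding the error string back as the tool result often lets the model retry with corrected arguments on the next turn, rather than crashing the loop.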

Multi-Step Reasoning with Tools

Complex tasks require multiple tool calls where later calls depend on earlier results:

def research_workflow(topic: str):
    messages = [{"role": "user", "content": f"Research {topic} and write a comprehensive summary."}]
    # Step 1: search for information
    # Step 2: verify facts using a different source
    # Step 3: structure the findings
    # Step 4: generate the summary
    # The generic loop from the previous section drives all four steps,
    # feeding each tool result back until the model emits the summary.
    return tool_use_loop(messages, research_tools)
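When the step order is known in advance, an alternative to letting the model drive every hop is to hard-code the chain in application code and use the model only within each step. A minimal sketch with hypothetical tool functions passed in as parameters:

```python
def staged_research(topic: str, search, verify, summarize):
    """Fixed pipeline: each step's input depends on the previous step's output."""
    hits = search(query=topic, max_results=5)           # step 1: gather sources
    confirmed = [h for h in hits if verify(claim=h)]    # step 2: keep only verifiable claims
    return summarize(points=confirmed)                  # step 3: write up the verified points
```

The trade-off: a fixed pipeline is cheaper and more predictable, while the model-driven loop adapts when the number or order of steps isn't known up front.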

Structured Tools with Validation

Anthropic's tool use API accepts structured tool definitions described with JSON Schema under `input_schema`:

import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    tools=[
        {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "input_schema": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"},
                    "units": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        }
    ],
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
)
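With the Anthropic API, tool results go back to the model as `tool_result` content blocks inside a user message. A small helper can turn the `tool_use` blocks in a response into that message (`handle_tool_uses` is a hypothetical name; it is duck-typed over the SDK's content blocks rather than tied to a specific class):

```python
def handle_tool_uses(content_blocks, registry):
    """Build the tool_result user message for every tool_use block in a response."""
    results = []
    for block in content_blocks:
        if getattr(block, "type", None) == "tool_use":
            output = registry[block.name](**block.input)  # run the matching local function
            results.append({
                "type": "tool_result",
                "tool_use_id": block.id,   # must echo the id from the tool_use block
                "content": str(output),
            })
    return {"role": "user", "content": results}
```

Appending this message to the conversation and calling `client.messages.create` again closes the loop, mirroring the OpenAI pattern shown earlier.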


Found this useful? Check out more developer guides and tool comparisons on AI Study Room.
