Atlas Whoff

AI Tool Use and Function Calling: Building Agentic Loops with Claude and OpenAI

The Problem with AI Tool Use

Out of the box, Claude and GPT-4 can only respond with text. They can describe what they'd do, but they can't actually do it.

Tool use (function calling) changes that. You define functions with schemas. The model decides when to call them. Your code executes them and returns results. The model incorporates the results into its response.

This is how AI agents become useful.

Claude Tool Use Pattern

```typescript
import Anthropic from '@anthropic-ai/sdk'

const client = new Anthropic()

// Define your tools
const tools: Anthropic.Tool[] = [
  {
    name: 'get_weather',
    description: 'Get current weather for a location',
    input_schema: {
      type: 'object' as const,
      properties: {
        location: { type: 'string', description: 'City name or coordinates' },
        units: { type: 'string', enum: ['celsius', 'fahrenheit'], default: 'celsius' }
      },
      required: ['location']
    }
  },
  {
    name: 'search_database',
    description: 'Search the product database',
    input_schema: {
      type: 'object' as const,
      properties: {
        query: { type: 'string' },
        limit: { type: 'number', default: 10 }
      },
      required: ['query']
    }
  }
]

// Tool executor -- fetchWeatherAPI and db are your own clients
async function executeTool(name: string, input: Record<string, any>): Promise<string> {
  switch (name) {
    case 'get_weather': {
      const weather = await fetchWeatherAPI(input.location, input.units)
      return JSON.stringify(weather)
    }
    case 'search_database': {
      const results = await db.product.findMany({
        where: { name: { contains: input.query } },
        take: input.limit ?? 10
      })
      return JSON.stringify(results)
    }
    default:
      return JSON.stringify({ error: `Unknown tool: ${name}` })
  }
}
```
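Before dispatching to a handler, it's worth validating the model's input against the schema you declared -- models occasionally omit arguments or skip defaults. A minimal sketch (`prepareInput` is an illustrative helper, not part of the Anthropic SDK):

```typescript
// Hypothetical helper: check required fields and apply declared defaults
// before executing a tool, so bad calls fail fast with a structured error
// the model can read and correct.
type JsonSchema = {
  required?: string[]
  properties?: Record<string, { default?: unknown }>
}

function prepareInput(
  schema: JsonSchema,
  input: Record<string, unknown>
): { ok: true; input: Record<string, unknown> } | { ok: false; error: string } {
  // Reject calls that are missing required fields
  for (const key of schema.required ?? []) {
    if (input[key] === undefined) {
      return { ok: false, error: `Missing required field: ${key}` }
    }
  }
  // Fill in declared defaults (e.g. units: 'celsius', limit: 10)
  const merged = { ...input }
  for (const [key, prop] of Object.entries(schema.properties ?? {})) {
    if (merged[key] === undefined && prop.default !== undefined) {
      merged[key] = prop.default
    }
  }
  return { ok: true, input: merged }
}
```

On a validation failure, return the error object as the tool result instead of executing -- the model will usually retry with corrected arguments.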

Agentic Loop

The key is the loop -- keep running until the model stops requesting tool calls:

```typescript
async function runAgent(userMessage: string): Promise<string> {
  const messages: Anthropic.MessageParam[] = [
    { role: 'user', content: userMessage }
  ]

  while (true) {
    const response = await client.messages.create({
      model: 'claude-sonnet-4-6',
      max_tokens: 4096,
      tools,
      messages
    })

    // Done -- extract the final text response
    if (response.stop_reason === 'end_turn') {
      const textBlock = response.content.find(
        (b): b is Anthropic.TextBlock => b.type === 'text'
      )
      return textBlock?.text ?? ''
    }

    if (response.stop_reason === 'tool_use') {
      // Add the assistant's turn (including tool_use blocks) to history
      messages.push({ role: 'assistant', content: response.content })

      // Execute each tool call
      const toolResults: Anthropic.ToolResultBlockParam[] = []
      for (const block of response.content) {
        if (block.type === 'tool_use') {
          const result = await executeTool(block.name, block.input as Record<string, any>)
          toolResults.push({
            type: 'tool_result',
            tool_use_id: block.id,
            content: result
          })
        }
      }

      // Tool results go back as a user message; the model then takes
      // its next action or produces the final answer
      messages.push({ role: 'user', content: toolResults })
      continue
    }

    // Any other stop reason (e.g. max_tokens) would loop forever
    throw new Error(`Unexpected stop reason: ${response.stop_reason}`)
  }
}
```
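One hardening step worth adding in production: `while (true)` has no upper bound, so a misbehaving prompt or a tool that keeps returning errors can spin indefinitely and burn tokens. The cap can be sketched like this, where `step` stands in for one model round trip and `MAX_TURNS` is an arbitrary limit you'd tune:

```typescript
// Sketch: bound the number of model round trips. `step` represents one
// iteration of the agent loop and reports the stop reason it observed.
const MAX_TURNS = 10

async function runBounded(
  step: () => Promise<'tool_use' | 'end_turn'>
): Promise<number> {
  for (let turn = 1; turn <= MAX_TURNS; turn++) {
    if ((await step()) === 'end_turn') return turn // model finished
  }
  // Better to fail loudly than silently burn tokens forever
  throw new Error(`Agent exceeded ${MAX_TURNS} turns without finishing`)
}
```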

OpenAI Function Calling

```typescript
import OpenAI from 'openai'

const openai = new OpenAI()

const functions: OpenAI.FunctionDefinition[] = [
  {
    name: 'get_current_price',
    description: 'Get current stock or crypto price',
    parameters: {
      type: 'object',
      properties: {
        symbol: { type: 'string', description: 'Ticker symbol (BTC, AAPL, etc.)' }
      },
      required: ['symbol']
    }
  }
]

// executeFunction is your own dispatcher, analogous to executeTool above
async function runOpenAIAgent(message: string) {
  const messages: OpenAI.ChatCompletionMessageParam[] = [
    { role: 'user', content: message }
  ]

  while (true) {
    const response = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages,
      tools: functions.map(f => ({ type: 'function' as const, function: f })),
      tool_choice: 'auto'
    })

    const choice = response.choices[0]

    if (choice.finish_reason === 'stop') {
      return choice.message.content ?? ''
    }

    if (choice.finish_reason === 'tool_calls') {
      // Echo the assistant message (with its tool calls) into history
      messages.push(choice.message)

      for (const toolCall of choice.message.tool_calls ?? []) {
        // Arguments arrive as a JSON string, not a parsed object
        const args = JSON.parse(toolCall.function.arguments)
        const result = await executeFunction(toolCall.function.name, args)
        messages.push({
          role: 'tool',
          tool_call_id: toolCall.id,
          content: result
        })
      }
      continue
    }

    // e.g. finish_reason === 'length' -- bail out rather than loop forever
    throw new Error(`Unexpected finish reason: ${choice.finish_reason}`)
  }
}
```
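Note one difference from Claude: OpenAI delivers `arguments` as a raw JSON string, and models occasionally emit malformed JSON. A small guard keeps a bad payload from crashing the loop (`safeParseArgs` is an illustrative helper, not part of the SDK):

```typescript
// Parse tool-call arguments defensively: return the parsed object, or an
// error payload you can send back as the tool result so the model can
// correct itself on the next turn.
function safeParseArgs(
  raw: string
): { ok: true; args: Record<string, unknown> } | { ok: false; error: string } {
  try {
    const parsed = JSON.parse(raw)
    if (typeof parsed !== 'object' || parsed === null || Array.isArray(parsed)) {
      return { ok: false, error: 'Arguments must be a JSON object' }
    }
    return { ok: true, args: parsed }
  } catch {
    return { ok: false, error: `Invalid JSON arguments: ${raw}` }
  }
}
```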

Tool Design Best Practices

  1. One tool, one purpose -- don't combine unrelated operations
  2. Rich descriptions -- the model decides when to call a tool based solely on its name and description
  3. Return structured JSON -- not prose, so the model can reason about results
  4. Handle errors gracefully -- return error JSON instead of throwing, so the model can recover
  5. Limit tool count -- roughly 5-10 tools; beyond that, selection accuracy drops
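Practice 4 can be applied generically: wrap every tool handler so exceptions become structured error JSON instead of propagating and killing the agent loop. A sketch (`wrapTool` is an illustrative helper, not an SDK feature):

```typescript
// Wrap a tool handler so any thrown error is serialized as JSON the model
// can read and recover from, instead of crashing the loop.
function wrapTool<T>(
  handler: (input: T) => Promise<unknown>
): (input: T) => Promise<string> {
  return async (input) => {
    try {
      return JSON.stringify(await handler(input))
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err)
      return JSON.stringify({ error: message })
    }
  }
}
```

Usage looks like `const getWeather = wrapTool(async ({ location }) => fetchWeatherAPI(location))`; a rate-limit error then comes back as `{"error":"..."}` and the model can retry or apologize rather than the whole request failing.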

MCP: Standardized Tool Delivery

The Model Context Protocol (MCP) standardizes how tools are delivered to AI models. Instead of defining tools per-application, you build an MCP server once and any compatible client (Claude Desktop, Cursor, your app) can use it.

The Workflow Automator MCP gives Claude tools to trigger Make.com, Zapier, and n8n workflows from natural language. $15/mo -- the cheapest automation upgrade you'll make.

The AI SaaS Starter Kit ships with tool-use patterns pre-built for both Claude and OpenAI.

$99 one-time at whoffagents.com
