Large Language Models (LLMs) can generate human-like text, but what if you want your LLM-powered app to do more than chat? For example, extract structured data, trigger logic, or interact with databases and APIs?
Tool use (also called function calling) lets our LLMs do more than generate responses from their training data.
## 💡 What Is Tool Use / Function Calling in LLMs?
Function calling (tool use) is the pattern where the LLM decides when and how to invoke external capabilities (APIs, DB queries, search, calculators, code runtimes, and more) by returning a structured call. Your application executes that call, returns the result to the model, and the model produces the final user response.
Why use it?
LLMs are smart, but they have limitations:
- They hallucinate
- They don't fetch real-time data
- They can't execute backend logic directly
- They return freeform text, which is sometimes hard to parse
Using tools solves this. With tool use, the model can:
- ✅ Return structured outputs (like JSON)
- ✅ Fetch real-time information
- ✅ Integrate with APIs or your database
- ✅ Run backend logic (math, validation, scheduling, etc.)
- ✅ Trigger workflows or APIs
## 🌍 A currency converter tool-use example

### 1. Tool Definition

Define a function that converts currency:
```javascript
const convert_currency = {
  name: "convert_currency",
  description: "Converts an amount from one currency to another",
  parameters: {
    type: "object",
    properties: {
      amount: { type: "number" },
      from: { type: "string", description: "Currency code, e.g., USD" },
      to: { type: "string", description: "Currency code, e.g., EUR" },
    },
    required: ["amount", "from", "to"],
  },
};
```
### 2. User Prompt

> "How much is 100 dollars in euros?"
### 3. What Happens

- The LLM understands the request and calls the `convert_currency` tool with:

```json
{
  "amount": 100,
  "from": "USD",
  "to": "EUR"
}
```

- The tool returns: `91.23 EUR`
- The LLM responds: "100 USD is approximately 91.23 EUR."
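The tool-execution side of this example can be sketched like so. The rate table is hard-coded purely for illustration; a real `convert_currency` implementation would query a live FX API.

```javascript
// Illustrative, hard-coded rate — a real tool would call a live FX API.
const RATES = { "USD:EUR": 0.9123 };

// Backend implementation of the convert_currency tool defined above.
function convertCurrency({ amount, from, to }) {
  const rate = RATES[`${from}:${to}`];
  if (rate === undefined) throw new Error(`No rate for ${from} -> ${to}`);
  // Round to 2 decimal places for display.
  return Number((amount * rate).toFixed(2));
}

const converted = convertCurrency({ amount: 100, from: "USD", to: "EUR" });
console.log(`100 USD is approximately ${converted} EUR.`); // 91.23
```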
## 🚀 Function Calling with OpenAI: Job Description Analyzer

In my Job Application Assistant, I used function calling to extract job insights.
From the job description, the LLM pulls out:
- Required skills
- Responsibilities
- Experience or qualifications
### Step 1: Define the Schema
```javascript
const jobInsightFunction = {
  name: "extract_job_insights",
  description: "Extracts skills, responsibilities, and experience from a job description.",
  parameters: {
    type: "object",
    properties: {
      skills: {
        type: "array",
        items: { type: "string" },
        description: "List of skills required for the job",
      },
      responsibilities: {
        type: "array",
        items: { type: "string" },
        description: "Job responsibilities",
      },
      experience: {
        type: "array",
        items: { type: "string" },
        description: "Qualifications or experience needed",
      },
    },
    required: ["skills", "responsibilities", "experience"],
  },
};
```
### Step 2: Call the Model with the Tool
```javascript
const response = await openai.chat.completions.create({
  model: "gpt-4-0613",
  messages: [
    { role: "system", content: "You are a helpful AI job assistant." },
    {
      role: "user",
      content: `Extract the key skills, responsibilities, and required experience from the following job description:\n\n${jobDescription}`,
    },
  ],
  tools: [
    {
      type: "function",
      function: jobInsightFunction,
    },
  ],
  tool_choice: "auto",
});
```
### Step 3: Get and Use the Arguments
```javascript
const toolCall = response.choices?.[0]?.message?.tool_calls?.[0];
const args = JSON.parse(toolCall?.function?.arguments ?? "{}");
```

Output:

```
args = {
  skills: [...],
  responsibilities: [...],
  experience: [...],
}
```
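Before using the parsed arguments, it's worth a quick shape check, since the model can occasionally return malformed or incomplete arguments. A minimal sketch (the validator functions here are my own, not part of any SDK), matching the required fields of the `extract_job_insights` schema:

```javascript
// True if the value is an array containing only strings.
function isStringArray(v) {
  return Array.isArray(v) && v.every((s) => typeof s === "string");
}

// Checks the parsed args against the schema's required string-array fields.
function validateJobInsights(args) {
  return ["skills", "responsibilities", "experience"].every((key) =>
    isStringArray(args[key])
  );
}

validateJobInsights({ skills: ["SQL"], responsibilities: ["ETL"], experience: [] }); // true
validateJobInsights({ skills: "SQL" }); // false — reject, re-prompt, or fall back
```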
With this output, I can:
- ✅ Display it in the UI
- ✅ Match it with resumes
- ✅ Generate cover letters
## 📌 Quick Tips

- Use clear schema definitions
- Validate the output
- Use `tool_choice: "auto"` to let the model decide
- Chain tasks if needed: extract 👉 reason 👉 act
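The last tip, chaining extract, then reason, then act, can be sketched as three plain steps. Everything here is illustrative and not from the Job Application Assistant: `extractInsights` stands in for the function-calling request above, and the scoring rule is made up.

```javascript
// Stage 1 (extract): stand-in for a function-calling request.
function extractInsights(jobDescription) {
  return { skills: ["SQL", "Python"] };
}

// Stage 2 (reason): fraction of extracted skills covered by the resume.
function scoreMatch(insights, resumeSkills) {
  const hits = insights.skills.filter((s) => resumeSkills.includes(s));
  return hits.length / insights.skills.length;
}

// Stage 3 (act): choose the next workflow step from the score.
function act(score) {
  return score >= 0.5 ? "generate_cover_letter" : "suggest_upskilling";
}

const next = act(scoreMatch(extractInsights("a job posting"), ["Python"]));
console.log(next); // "generate_cover_letter"
```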
Happy coding!!!