seah-js

Posted on • Originally published at seah-js.github.io

Function Calling & Tool Schemas — Review

This is a learning session between me and Klover, my AI assistant. Spaced repetition review — testing what stuck from the first session.


What does the LLM actually output?

Klover: When an LLM "calls a function," what does it actually output, and what does it NOT do?

Me: It outputs a JSON table with all the relevant parameters, which is passed to an orchestrator that parses the JSON and calls the actual tool.

Klover: Core is right — the LLM outputs structured JSON, it does NOT execute anything itself. The orchestrator handles actual execution. Small note: "JSON table" — it's more accurately a JSON object (key-value pairs), not a table. Minor terminology thing but worth being precise.
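To make the split concrete, here is a minimal sketch of that flow. The tool name, registry, and JSON shape are illustrative assumptions, not the format of any particular API:

```python
import json

# Hypothetical tool registry -- names and functions here are illustrative.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

# What the model emits: a JSON object (key-value pairs) naming the tool
# and its arguments. The model itself executes nothing.
model_output = '{"name": "get_weather", "arguments": {"city": "Paris"}}'

# The orchestrator -- not the model -- parses the JSON and runs the tool.
call = json.loads(model_output)
result = TOOLS[call["name"]](**call["arguments"])
print(result)  # Sunny in Paris
```

The important part is the last two lines: the model's output is just data until the orchestrator looks up the function and invokes it.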

Where do schemas come from?

Klover: Where do tool schemas come from? Are they baked into the model during training, or something else? And what does a schema typically contain?

Me: It is external to the model. It is passed like a system prompt to the model. A schema usually contains the description of the tool, the required and optional parameters.

Klover: Solid. Schemas are injected at runtime, not baked in during training. They contain the tool name, description, and typed parameters (required + optional). ✅
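For reference, a runtime-injected schema might look like this. The field layout is modeled loosely on the JSON Schema style several chat APIs use (exact field names vary by provider, and this particular tool is made up):

```python
# Illustrative schema: name, description, and typed parameters
# with required vs optional clearly marked.
weather_schema = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {
                "type": "string",
                "description": "City name, e.g. 'Paris'",
            },
            "units": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
                "description": "Temperature units (optional)",
            },
        },
        "required": ["city"],  # 'units' is optional
    },
}
```

Nothing here is learned during training; the schema travels with the request, much like a system prompt.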

Why schema quality matters

Klover: Why does the quality of your schema descriptions matter so much?

Me: It might cause the LLM to use the tool incorrectly, which can result in useless or irrelevant outputs from the called tool.

Klover: Right — poor descriptions lead to wrong tool selection or bad parameter values. Schema quality is basically prompt engineering for tools. Garbage in, garbage out. ✅
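A side-by-side makes the point. Both descriptions below are invented for illustration; the second tells the model when to pick the tool and what it returns, while the first leaves it guessing:

```python
# Vague: the model has no idea what "stuff" means or when to use this.
vague = {"name": "search", "description": "Searches stuff."}

# Precise: scopes the tool, names the trigger conditions, states the output.
precise = {
    "name": "search",
    "description": (
        "Search the product catalog by keyword. Use this when the user "
        "asks about product availability or pricing, NOT for general "
        "web queries. Returns up to 10 matching products."
    ),
}
```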

The orchestrator's role

Klover: Beyond just executing the tool call, what else does the orchestrator do?

Me: It acts as a security layer. It ensures that the tool being called is allowed for this model. It also prevents prompt injection attacks from directly triggering a tool call.

Klover: Security is a big one. But the complete list:

  1. Validation — are the params correct/allowed?
  2. Rate limiting — preventing abuse
  3. Security/filtering — blocking unauthorized calls
  4. Logging — audit trail

I got 2 out of 4. Missed rate limiting (prevents runaway agents hammering APIs) and logging (critical for debugging agent behavior in long chains).
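All four responsibilities can be sketched in one wrapper. This is a toy, assuming an in-memory rate limiter and a hypothetical `get_weather` tool; a real orchestrator would use proper schema validation and persistent audit logs:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orchestrator")

ALLOWED_TOOLS = {"get_weather"}   # security: allowlist per model/agent
MAX_CALLS_PER_MINUTE = 10         # rate limiting
_call_times: list[float] = []

def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def execute(model_output: str) -> str:
    call = json.loads(model_output)
    name, args = call["name"], call.get("arguments", {})

    # 1. Security/filtering: block unauthorized calls
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} not allowed")

    # 2. Rate limiting: stop a runaway agent hammering the API
    now = time.monotonic()
    _call_times[:] = [t for t in _call_times if now - t < 60]
    if len(_call_times) >= MAX_CALLS_PER_MINUTE:
        raise RuntimeError("rate limit exceeded")
    _call_times.append(now)

    # 3. Validation: are the params correct/allowed?
    if not isinstance(args.get("city"), str):
        raise ValueError("'city' must be a string")

    # 4. Logging: audit trail for debugging long agent chains
    log.info("calling %s with %s", name, args)
    return TOOLS[name](**args)

print(execute('{"name": "get_weather", "arguments": {"city": "Oslo"}}'))
```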

Connecting to ReAct

Klover: How does function calling relate to the ReAct pattern?

Me: Tool calls fit in the Action part. The Observation is the output from the tool call.

Klover: Spot on. Action = LLM outputs tool call JSON, orchestrator executes. Observation = raw result fed back for next Thought. ✅
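The Thought → Action → Observation loop can be stubbed out like this. The scripted turns stand in for real LLM calls, and all names are illustrative:

```python
import json

# Hypothetical tool, as before.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

# Canned "model" turns: first a Thought plus an Action (a tool-call JSON
# object), then a final answer once the Observation is in context.
scripted_turns = [
    {"thought": "I need the current weather for Paris.",
     "action": {"name": "get_weather", "arguments": {"city": "Paris"}}},
    {"thought": "I have the observation; I can answer now.",
     "final": "It is sunny in Paris."},
]

transcript = []
for turn in scripted_turns:
    transcript.append(("Thought", turn["thought"]))
    if "action" in turn:
        call = turn["action"]
        transcript.append(("Action", json.dumps(call)))
        # Orchestrator executes; the raw result becomes the Observation,
        # which is fed back to the model for the next Thought.
        observation = TOOLS[call["name"]](**call["arguments"])
        transcript.append(("Observation", observation))
    else:
        transcript.append(("Answer", turn["final"]))

for role, text in transcript:
    print(f"{role}: {text}")
```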


Review verdict: Pass → Developing

Core concepts solid. Missed rate limiting and logging from orchestrator responsibilities. Next review: February 12.
