Sergey Inozemtsev
One Tool Calling Interface for OpenAI, Claude, and Gemini

llm-api-adapter is an open‑source Python library designed to simplify working with multiple LLM providers.

Many AI applications today need to support multiple LLM providers.

Common reasons include:

  • cost optimization
  • fallback when a provider is unavailable
  • access to different model capabilities
  • experimentation with new models

In practice, the moment you try to support OpenAI, Claude, and Gemini, the integration becomes messy.

Tool calling alone already breaks portability:

Provider        Tool format
OpenAI          tool_calls
Anthropic       tool_use blocks
Google Gemini   functionCall / functionResponse

These are not just syntax differences.
They require different request structures, response parsing, and execution loops.
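To make the difference concrete, here is a simplified sketch of how each provider represents the same tool call in its response. Field names follow the providers' public APIs, but IDs and surrounding response objects are abbreviated; this is an illustration, not a complete response.

```python
# OpenAI: tool calls live in a "tool_calls" list, and the
# arguments arrive as a JSON string that you must parse yourself.
openai_response = {
    "tool_calls": [{
        "id": "call_abc123",
        "type": "function",
        "function": {
            "name": "count_letter_in_word",
            "arguments": '{"word": "strawberry", "letter": "r"}',  # JSON string
        },
    }]
}

# Anthropic: the call is a "tool_use" block inside the content list,
# with arguments already parsed into an object under "input".
anthropic_response = {
    "content": [{
        "type": "tool_use",
        "id": "toolu_abc123",
        "name": "count_letter_in_word",
        "input": {"word": "strawberry", "letter": "r"},  # parsed object
    }]
}

# Gemini: a "functionCall" part with parsed arguments under "args".
gemini_response = {
    "functionCall": {
        "name": "count_letter_in_word",
        "args": {"word": "strawberry", "letter": "r"},  # parsed object
    }
}
```

Note that OpenAI delivers arguments as a JSON string while Anthropic and Gemini deliver parsed objects, so even argument handling needs provider-specific code.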

Supporting multiple providers usually leads to:

  • provider-specific integration logic
  • provider-specific request/response handling
  • duplicated tool execution flows
  • multiple SDK dependencies

The result is more code, more bugs, and a much harder time switching providers.

To simplify this, I built llm-api-adapter — a small Python library that provides one unified interface for multiple LLM APIs.

Define tools once and run the same application logic across OpenAI, Anthropic, and Gemini.


Architecture

The adapter acts as a translation layer between your application and LLM providers.

              Application Logic
                     │
                     ▼
           UniversalLLMAPIAdapter
                     │
                     ▼
          Provider Translation Layer
                     │
                     ▼
 ┌─────────────┬─────────────┬─────────────┐
 │   OpenAI    │  Anthropic  │   Gemini    │
 │ tool_calls  │  tool_use   │ functionCall│
 └─────────────┴─────────────┴─────────────┘

Your application communicates with one interface, while the adapter converts requests and responses to the provider-specific formats.


Installation

pip install llm-api-adapter

The "Strawberry" problem

A classic example showing why tool calling matters:

How many "r" letters are in "strawberry"?

The correct answer is 3, but models often fail because they reason over tokens, not characters.

Best practice is:

Let the LLM reason, but delegate deterministic tasks to code.

This is exactly what tool calling enables.
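The deterministic half of that split is trivial in plain Python, since counting characters never touches tokenization:

```python
# Counting a letter in a word is a pure string operation:
# it always gives the exact answer, regardless of how any
# model would tokenize the word.
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
```

Tool calling is the mechanism that lets the model hand this kind of exact computation off to your code.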


Defining a tool once

With llm-api-adapter, tools are defined using a provider-agnostic schema.

from llm_api_adapter.models.tools import ToolSpec

tools = [
    ToolSpec(
        name="count_letter_in_word",
        description="Count how many times a specific letter appears in a word",
        json_schema={
            "type": "object",
            "properties": {
                "word": {"type": "string"},
                "letter": {"type": "string", "minLength": 1, "maxLength": 1},
            },
            "required": ["word", "letter"],
            "additionalProperties": False,
        },
    )
]

The adapter automatically converts this schema to:

  • OpenAI tools
  • Anthropic tool_use
  • Gemini functionCall
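For illustration, here is roughly what those converted definitions look like, sketched from each provider's documented tool format. This is a hand-written sketch of the target formats, not the adapter's internal code.

```python
# The provider-agnostic JSON Schema from the ToolSpec above.
json_schema = {
    "type": "object",
    "properties": {
        "word": {"type": "string"},
        "letter": {"type": "string", "minLength": 1, "maxLength": 1},
    },
    "required": ["word", "letter"],
    "additionalProperties": False,
}

# OpenAI: wrapped in a "function" object, schema under "parameters".
openai_tool = {
    "type": "function",
    "function": {
        "name": "count_letter_in_word",
        "description": "Count how many times a specific letter appears in a word",
        "parameters": json_schema,
    },
}

# Anthropic: a flat object, schema under "input_schema".
anthropic_tool = {
    "name": "count_letter_in_word",
    "description": "Count how many times a specific letter appears in a word",
    "input_schema": json_schema,
}

# Gemini: a function declaration, schema under "parameters".
gemini_tool = {
    "name": "count_letter_in_word",
    "description": "Count how many times a specific letter appears in a word",
    "parameters": json_schema,
}
```

The schema content is identical in all three; only the wrapping differs — which is exactly the kind of mechanical translation worth automating.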

Running the same code across providers

The application logic remains identical.

Only the provider name, model, and API key change.

import json
from typing import Any, Dict

from llm_api_adapter.universal_adapter import UniversalLLMAPIAdapter
from llm_api_adapter.models.messages.chat_message import (
    UserMessage,
    AIMessage,
    ToolMessage,
)

def run_tool(name: str, args: Dict[str, Any]) -> Dict[str, Any]:
    if name == "count_letter_in_word":
        word, letter = args["word"], args["letter"]
        return {
            "word": word,
            "letter": letter,
            "count": word.lower().count(letter.lower()),
        }
    raise ValueError(f"Unknown tool: {name}")

# openai_api_key, anthropic_api_key, and google_api_key are your own
# provider keys, e.g. loaded from environment variables.
providers = [
    ("openai", "gpt-5.2", openai_api_key),
    ("anthropic", "claude-haiku-4-5", anthropic_api_key),
    ("google", "gemini-2.5-flash", google_api_key),
]

for org, model, key in providers:
    adapter = UniversalLLMAPIAdapter(
        organization=org,
        model=model,
        api_key=key,
    )

    messages = [
        UserMessage('How many "r" letters are in "strawberry"?')
    ]

    first = adapter.chat(
        messages=messages,
        tools=tools,
        tool_choice="auto",
        max_tokens=1000,
    )

    if first.tool_calls:
        messages.append(
            AIMessage(content="", tool_calls=first.tool_calls)
        )
        for tc in first.tool_calls:
            result = run_tool(tc.name, tc.arguments)
            messages.append(
                ToolMessage(
                    tool_call_id=tc.call_id,
                    content=json.dumps(result),
                )
            )
        final = adapter.chat(
            messages=messages,
            previous_response_id=first.response_id,
            max_tokens=1000,
        )

        print(f"--- {org} / {model} ---")
        print(final.content)
        print()

Example output

Even though the models use different tokenization internally, they all trigger the tool correctly.

--- openai / gpt-5.2 ---
There are 3 letters "r" in "strawberry".

--- anthropic / claude-haiku-4-5 ---
There are 3 "r" letters in "strawberry".

--- google / gemini-2.5-flash ---
There are three "r" letters in "strawberry".

Without vs with an adapter

Problem              Without adapter             With llm-api-adapter
Tool definitions     Provider-specific           One universal schema
Tool execution       Custom logic per provider   Unified interface
Response parsing     Different formats           Single response model
Provider switching   Rewrite code                Change the model string
Dependencies         Multiple SDKs               One library

Why this matters

Supporting multiple LLM providers normally requires separate integrations and duplicated logic.

A unified interface lets you:

  • keep application logic provider-agnostic
  • switch models without rewriting code
  • simplify agent architectures

Instead of adapting your code to each provider, you adapt the providers to your code.


GitHub

The project is open source.

👉 https://github.com/Inozem/llm_api_adapter

You will find full documentation, examples, and the source code in the repository.
