novatechnolab
🚀 ide_model_ext — One Extension to Chat with GPT, Gemini, and Claude

The Problem

If you're working with AI models, you've probably dealt with this:

  • Install openai for GPT
  • Install google-generativeai for Gemini
  • Install anthropic for Claude
  • Learn three different APIs
  • Manage three sets of authentication
  • Write adapter code every time you want to switch models

What if one package handled all of it?


Introducing ide_model_ext

ide_model_ext gives you a single, universal interface to OpenAI (GPT-4o), Google Gemini, and Anthropic Claude — from Python scripts, the command line, or Jupyter notebooks.

Install

pip install ide_model_ext

Use in 3 Lines

from ide_model_ext import ModelClient

client = ModelClient()  # auto-selects best available model
response = client.chat("Explain quantum computing")
print(f"{response.content}\n— {response.model} ({response.provider})")

That's it. The client auto-detects which API keys you've set and picks the best available model. Every response tells you exactly which model and provider answered.
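Under the hood, "auto-detect" most likely amounts to checking which provider keys are present in the environment and walking a priority list. Here is a minimal sketch of that selection logic; the `PREFERRED` ordering and the `pick_default_model` name are my assumptions, not the package's actual internals:

```python
import os

# Hypothetical priority order; the package's real ordering may differ.
PREFERRED = [
    ("OPENAI_API_KEY", "gpt-4o", "openai"),
    ("ANTHROPIC_API_KEY", "claude-3-5-sonnet", "anthropic"),
    ("GOOGLE_API_KEY", "gemini-2.0-flash", "gemini"),
]

def pick_default_model(env=None):
    """Return (model, provider) for the first provider with a configured key."""
    env = os.environ if env is None else env
    for key, model, provider in PREFERRED:
        if env.get(key):
            return model, provider
    raise RuntimeError("No API keys found: set at least one provider key.")
```

With only `GOOGLE_API_KEY` set, this sketch would fall through to Gemini; with multiple keys set, the first match in the priority list wins.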


✨ Features

| Feature | Description |
| --- | --- |
| 🔗 Unified API | One `ModelClient` for GPT-4o, Gemini, Claude Sonnet, and more |
| 🔀 Dynamic Selection | Auto-picks the best model from your configured API keys |
| 🔍 Transparent Routing | Every response includes `.model` and `.provider` |
| 🌊 Streaming | Real-time streamed responses from all providers |
| 📓 Jupyter Magics | `%ai` and `%%ai` magic commands |
| 💻 CLI Tool | Chat, interactive REPL, model listing |
| 🔌 Extensible | Add providers by subclassing `BaseProvider` |

Quick Setup

Set API keys for the providers you want (you only need one):

export OPENAI_API_KEY="sk-..."
export GOOGLE_API_KEY="AIza..."
export ANTHROPIC_API_KEY="sk-ant-..."

Python Usage

Explicit Model

from ide_model_ext import ModelClient

# Pick a specific model
client = ModelClient(model="claude-3-5-sonnet-20241022")
response = client.chat("Write a haiku about Python")
print(response.content)

Switch Models On-the-Fly

client = ModelClient(model="gpt-4o")
response1 = client.chat("Hello from GPT!")

client.set_model("gemini-2.0-flash")
response2 = client.chat("Hello from Gemini!")

print(response1.provider)  # "openai"
print(response2.provider)  # "gemini"

Streaming

for chunk in client.stream("Tell me a story"):
    print(chunk, end="", flush=True)
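To offer one `stream()` across providers, an adapter layer has to flatten each SDK's differently-shaped chunk objects into plain strings. A rough sketch of that normalization, using duck typing (the `normalize_stream` name and the handled shapes are illustrative assumptions, not the package's code):

```python
from typing import Iterable, Iterator

def normalize_stream(raw_chunks: Iterable[object]) -> Iterator[str]:
    """Flatten provider-specific stream chunks into plain text (sketch).

    Real SDK chunks differ in shape: some expose a `.text` attribute,
    others nest the text delta more deeply. This duck-types the
    common cases and skips empty chunks.
    """
    for chunk in raw_chunks:
        if isinstance(chunk, str):
            yield chunk
        else:
            text = getattr(chunk, "text", None)
            if text:
                yield text
```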

CLI Usage

# One-shot
ide-model-ext chat "What is AI?"

# Specific model
ide-model-ext chat --model gemini-1.5-pro "Summarize this"

# Interactive REPL
ide-model-ext interactive

# List all models
ide-model-ext models

Jupyter Notebook

%load_ext ide_model_ext

%ai Tell me a joke about programming

%ai_model gemini-2.0-flash

%%ai
Write a Python function that calculates
the Fibonacci sequence recursively.

Works in Jupyter Notebook, JupyterLab, Google Colab, and VS Code notebooks.


Architecture

Your Code / CLI / Jupyter
        │
   ModelClient          ← simple user-facing API
        │
   ModelRouter          ← resolves model → provider
      ┌──┼──┐
  OpenAI Gemini Claude  ← provider adapters
      └──┼──┘
   BaseProvider         ← abstract interface

The key design decision: every response includes .model and .provider so you always know which model answered — even under dynamic routing.
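The extension point in the diagram can be sketched in a few lines. The class and function names mirror the architecture above, but the bodies (prefix-based routing, the `chat` signature, the toy `EchoProvider`) are illustrative assumptions rather than the package's source:

```python
from abc import ABC, abstractmethod

class BaseProvider(ABC):
    """Abstract provider interface (a sketch; real signatures may differ)."""

    # Model-name prefixes this provider claims, e.g. ("gpt-", "o1-").
    prefixes: tuple = ()

    @abstractmethod
    def chat(self, model: str, prompt: str) -> str:
        ...

class EchoProvider(BaseProvider):
    """Toy provider used only to demonstrate the subclassing pattern."""
    prefixes = ("echo-",)

    def chat(self, model, prompt):
        return f"[{model}] {prompt}"

def route(model: str, providers: list) -> BaseProvider:
    """Resolve a model name to the first provider whose prefix matches."""
    for provider in providers:
        if provider.prefixes and model.startswith(provider.prefixes):
            return provider
    raise ValueError(f"No provider registered for model {model!r}")
```

Because routing keys off the model name, a new provider only needs to declare its prefixes and implement `chat`; the router and client never change.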


Supported Models

| Provider | Models |
| --- | --- |
| OpenAI | gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-4, gpt-3.5-turbo, o1-preview, o1-mini |
| Gemini | gemini-2.0-flash, gemini-1.5-pro, gemini-1.5-flash, gemini-1.0-pro |
| Claude | claude-3-5-sonnet, claude-3-5-haiku, claude-3-opus, claude-3-sonnet, claude-3-haiku |

Built by Novatechnolab

MIT Licensed • Contributions welcome!

If this saves you time, give it a ⭐ on GitHub!
