Hadil Ben Abdallah

How to Use Bifrost CLI with Any AI Model (Claude Code, Codex, Gemini & Opencode)

Most AI coding tools work well... until you try to use more than one model.

If you're using Claude Code, Codex CLI, or Gemini CLI, you've probably hit the same wall:
switching between models is slow, messy, and full of configuration overhead.

At first, everything feels simple. You install a tool, connect it to a model, and start building. But as your workflow grows, so does the complexity. Different tools require different setups, environment variables start piling up, and even small changes become frustrating.

Bifrost CLI solves this by letting you use any AI model with any coding agent from one interface.

No repeated setup. No environment variable chaos. No vendor lock-in.

In this guide, you'll learn exactly how Bifrost CLI works, why it matters for modern development workflows, and how to set it up in minutes.


What is Bifrost CLI?

Bifrost CLI is an interactive AI CLI tool that acts as a unified interface between coding agents (like Claude Code, Codex CLI, and Gemini CLI) and multiple AI providers.

Instead of manually configuring each tool, managing environment variables, or switching between provider-specific setups, Bifrost lets you launch everything from a single command. You simply choose your agent, select a model, and start working.

Under the hood, it connects your tools to a centralized AI gateway, handling authentication, routing, and configuration automatically.

This makes it possible to run a true multi-model workflow without changing how you work, just with far less friction.


What Bifrost CLI Actually Solves

The problem Bifrost CLI addresses isn’t just tool fragmentation; it’s workflow complexity.

When you start working with multiple AI models, things break down quickly.

Each coding agent has its own setup. Switching models means updating environment variables, reconfiguring providers, or even restarting sessions. What starts as a simple setup turns into constant friction.

Bifrost CLI removes that friction by centralizing everything behind a single interface.

Instead of configuring each tool separately, the CLI handles API keys, base URLs, model selection, and provider routing automatically.

In practical terms, this means you can use any model from any provider without interrupting your workflow.

You keep using your favorite tools, but eliminate the operational overhead that slows you down.



Why This Matters for Real Development Workflows

If you're already working with multiple AI models or experimenting with coding agents, you've probably started running into these limitations.

At some point, the problem stops being about "which model is better" and becomes an infrastructure question.

  • How do you route requests?
  • How do you switch providers without rewriting everything?
  • How do you keep visibility and control across all of it?

If you're building coding agents or developer tools, this pattern becomes even more important at scale.

I broke down the full architecture behind multi-provider AI gateways (and how tools like Bifrost fit into it) here: How to Build a Multi-Provider LLM Infrastructure with an AI Gateway (OpenAI, Claude, Azure & Vertex)

Different models excel at different things. For example, you might prefer one model for code generation, another for debugging, and another for long-context reasoning.

Without a unifying layer, switching between these models requires manual reconfiguration every time. That friction adds up quickly and slows down development.

Bifrost CLI turns model selection into a runtime decision instead of a setup problem.

This is where having a structured multi-model workflow becomes critical.


How Bifrost CLI Works Under the Hood

Bifrost is an open-source AI gateway that introduces a layer between your tools and AI providers.

Instead of your CLI tool talking directly to OpenAI, Anthropic, or Google, everything routes through Bifrost.

The architecture looks like this:

Your CLI Tool (Claude / Codex / Gemini / Opencode)
→ Bifrost CLI
→ Bifrost Gateway
→ Any AI Model (OpenAI, Anthropic, Google, etc.)

This design has an important implication.

[Figure: Bifrost acts as a control layer between developer tools and multiple AI providers]

Your tools no longer need to know anything about providers. They simply send requests, and Bifrost handles routing, authentication, and configuration.
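To make this concrete, here is a minimal Python sketch of what a request through the gateway could look like. It assumes the gateway exposes an OpenAI-compatible chat completions endpoint at `http://localhost:8080/v1/chat/completions` (check the Bifrost docs for your version); the point is that switching providers is just a different model string, while the rest of the request never changes.

```python
import json
from urllib.request import Request

# Local gateway endpoint (assumed OpenAI-compatible path).
GATEWAY_URL = "http://localhost:8080/v1/chat/completions"

def build_request(model: str, prompt: str) -> Request:
    """Build an OpenAI-style chat request; only the model string changes per provider."""
    payload = {
        "model": model,  # provider-prefixed, e.g. "openai/gpt-5" or "anthropic/claude-sonnet"
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        GATEWAY_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Switching providers is a one-string change; the tool never talks to a provider directly.
req_openai = build_request("openai/gpt-5", "Refactor this function")
req_claude = build_request("anthropic/claude-sonnet", "Refactor this function")
```

Both requests target the same gateway URL; Bifrost decides where each one actually goes.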


Getting Started with Bifrost CLI

One of the most impressive aspects of Bifrost CLI is how quickly you can get started.

First, you need to run the gateway:

npx -y @maximhq/bifrost

This starts the gateway locally, typically on http://localhost:8080.

Then, in a separate terminal, launch the CLI:

npx -y @maximhq/bifrost-cli

If you’ve already installed it, you can simply run:

bifrost

From there, everything happens interactively.

[Screenshot: launching the Bifrost CLI]


The Setup Flow (What Actually Happens)

Instead of forcing you to configure everything manually, the CLI walks you through a structured setup process.

First, you provide the base URL of your gateway. In most local setups, this will be:

http://localhost:8080

Next, you can optionally enter a virtual key. This is used for authentication and is stored securely in your operating system’s keyring, not in plaintext files.

Then comes one of the most important steps: choosing your coding agent.

You can select from Claude Code, Codex CLI, Gemini CLI, or Opencode. If the tool is not installed, Bifrost will install it automatically using npm. This removes another common source of friction in CLI-based workflows.

After selecting the agent, the CLI fetches available models directly from the gateway. You can search through them interactively or type a model identifier manually.

For example:

openai/gpt-5
gemini/gemini-2.5-pro
anthropic/claude-sonnet
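These identifiers follow a simple provider/model convention. As an illustration, a small helper (hypothetical, not part of Bifrost) can split one apart:

```python
def parse_model_id(model_id: str) -> tuple[str, str]:
    """Split a provider-prefixed model identifier into (provider, model)."""
    provider, _, model = model_id.partition("/")
    if not model:
        raise ValueError(f"expected 'provider/model', got {model_id!r}")
    return provider, model

print(parse_model_id("openai/gpt-5"))  # → ('openai', 'gpt-5')
print(parse_model_id("gemini/gemini-2.5-pro"))
```

The provider prefix is what lets the gateway route the request; the rest is the provider's own model name.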

Once everything is selected, you simply press Enter to launch your session.

At that moment, Bifrost configures everything for you, including API keys, environment variables, and provider-specific settings, without any manual intervention.


Running Multiple AI Agents in Parallel

One of the most powerful features of Bifrost CLI is its tabbed terminal interface.

Unlike traditional CLI tools that exit after a session, Bifrost keeps everything running inside a persistent environment. You can launch multiple agents at the same time and switch between them instantly.

For example, you might run:

  • Codex with GPT-5 for code generation
  • Gemini for reasoning-heavy tasks
  • Claude Code for file operations and tooling

Each session runs in its own tab, and you can switch between them using keyboard shortcuts.

This transforms your terminal into a multi-agent workspace instead of a single-threaded tool.


Why the Tabbed Interface Changes Everything

At first glance, the tab system might seem like a small feature. In practice, it completely changes how you work.

Instead of stopping one task to start another, you can keep multiple flows running in parallel. One agent can analyze a codebase while another generates a feature and a third debugs an issue.

This kind of workflow is difficult to achieve with standard CLI tools.

Bifrost makes it feel natural.


Advanced Features That Matter in Practice

Beyond the basic setup, Bifrost CLI includes several features that become increasingly valuable as your workflow grows.

First, it eliminates the need for environment variables. You no longer have to export API keys or manage provider-specific configurations manually. Everything is handled automatically.

Second, it stores sensitive data securely using your operating system’s keyring. This is especially important for teams or shared environments where security matters.

Third, it integrates with MCP when using Claude Code. This allows your agent to access tools and external systems without requiring additional setup commands.

Finally, it supports persistent sessions. Your last configuration is saved, so the next time you launch the CLI, you can resume instantly without repeating the setup process.


Configuration and Flexibility

Bifrost CLI stores its configuration in a local file:

~/.bifrost/config.json

A typical configuration might look like this:

{
  "base_url": "http://localhost:8080",
  "default_harness": "claude",
  "default_model": "anthropic/claude-sonnet"
}
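As a sketch of how such a file can be merged with defaults (an illustrative loader, not the CLI's actual code; the keys mirror the example above):

```python
import json
from pathlib import Path

# Defaults mirroring the sample config; the real CLI's defaults may differ.
DEFAULTS = {
    "base_url": "http://localhost:8080",
    "default_harness": "claude",
    "default_model": "anthropic/claude-sonnet",
}

def load_config(path: Path = Path.home() / ".bifrost" / "config.json") -> dict:
    """Load the config file, falling back to defaults for any missing keys."""
    try:
        user = json.loads(path.read_text())
    except FileNotFoundError:
        user = {}
    return {**DEFAULTS, **user}
```

Because user values are merged over defaults, overriding a single setting means writing only that one key; a missing file still yields a working configuration.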

This allows you to maintain consistent setups across sessions while still being able to override settings when needed.


Common Use Cases for Bifrost CLI

Here are a few real-world scenarios where Bifrost CLI becomes especially useful:

  • Multi-model development: Use GPT-5 for generation, Claude for tooling, and Gemini for reasoning
  • AI experimentation: Quickly compare outputs across providers without reconfiguration
  • Team environments: Standardize how developers interact with different AI models
  • Agent workflows: Run multiple coding agents in parallel with different models

These are the kinds of scenarios where Bifrost CLI starts to provide real leverage in your workflow.


When Bifrost CLI Becomes Essential

Not every project needs a tool like Bifrost.

If you are working on a small prototype with a single model, direct integration is usually enough.

However, the moment you start working with multiple models, multiple tools, or team environments, the complexity grows quickly.

That’s when a unified interface becomes valuable.

Bifrost CLI simplifies:

  • Switching between models
  • Managing multiple agents
  • Maintaining consistent configuration
  • Scaling workflows across projects

It turns what would normally be a fragmented setup into a cohesive system.


Final Thoughts

The real value of Bifrost CLI is not just convenience. It is the shift in how you think about AI tools.

Instead of choosing a tool based on its default model, you choose the best model for the task and use it through the tool you already like.

That separation between tools and models is what makes modern AI workflows scalable.

Once you experience that flexibility, going back to single-provider setups feels limiting.

If you're working with multiple AI models, Bifrost CLI is one of those tools that quickly becomes hard to live without.


Thanks for reading! 🙏🏻
I hope you found this useful ✅
Please react and follow for more 😍
Made with 💙 by Hadil Ben Abdallah
LinkedIn GitHub Daily.dev
