Danilo Assis

How to Integrate Multiple LLM Providers Without Turning Your Codebase Into a Mess - Provider Strategy in Practice

Most companies eventually hit the same wall.

They start with one external provider — payments, maps, messaging, or LLMs. Everything works fine… until the day they need a second provider.

  • The main vendor goes down
  • Pricing changes
  • A specific feature exists only in another provider
  • Latency or regional availability becomes a problem

And suddenly, what used to be a clean integration becomes a maintenance nightmare.

This post walks through a practical, production-ready pattern to integrate multiple LLM providers while keeping your core platform clean, scalable, and resilient.


The Common Pain of Multi-Provider Integrations

When teams integrate providers directly into the domain logic, two types of complexity tend to leak into the system.

1. Provider-specific data leaks into the platform

Each provider has its own:

  • request and response formats
  • error codes
  • retry behavior
  • streaming protocols
  • rate limits
  • latency characteristics

Soon your core logic starts looking like this:

if (provider === "openai") {
  // handle OpenAI logic
} else if (provider === "gemini") {
  // handle Gemini logic
}

At this point, your domain layer is no longer about business rules — it is about vendor behavior.


2. Provider behavior leaks into the product flow

Different providers behave differently under load:

  • good p75 but terrible p99
  • aggressive throttling
  • partial outages
  • inconsistent output quality

Without isolation, every new provider:

  • increases regression risk
  • slows down development
  • makes refactoring scary

The result is predictable: giant integration files, fragile logic, and painful maintenance.


The Solution: Provider Strategy

To solve this problem, I like to use what I call Provider Strategy.

It is not a single pattern, but a combination of well-known patterns working together:

  • Strategy – each provider implements the same interface
  • Adapter – providers translate external formats into internal contracts
  • Registry – a central place to resolve the active provider
  • Failover Policy – logic to switch providers when the primary is degraded

The core principle is simple:

Treat external systems as external.

Your platform should only work with its own internal contract.


A Real Example: Services Marketplace Using LLMs

Imagine a platform that connects customers with professionals within a geographic range.

User input:

  • ZIP code
  • service type (e.g., mason)
  • desired date
  • optional notes

The platform:

  1. Searches the database for available professionals
  2. Aggregates ratings, availability, and pricing signals
  3. Uses an LLM to generate a short, friendly proposal message

This is exactly the kind of flow that often starts with one provider and later needs multiple.
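
For concreteness in the snippets below, the user input could be modeled like this (the field names are assumptions for illustration, not a prescribed schema):

type ProposalInput = {
  zipCode: string;
  serviceType: string; // e.g., "mason"
  desiredDate: string; // e.g., an ISO date like "2025-06-01"
  notes?: string;
};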


Step 1: Define a Stable Internal Contract

Your domain must not care which provider is being used.

type ProviderId = "openai" | "gemini" | "claude";

type LLMGenerateRequest = {
  prompt: string;
  temperature?: number;
  maxTokens?: number;
};

type LLMGenerateResult = {
  text: string;
  providerId: ProviderId;
  usage?: {
    inputTokens: number;
    outputTokens: number;
  };
};

This contract becomes the backbone of your platform.


Step 2: Provider Strategy + Adapter

Each provider implements the same interface and adapts its SDK internally.

type LLMProvider = {
  id: ProviderId;
  generate: (req: LLMGenerateRequest) => Promise<LLMGenerateResult>;
};

Example: OpenAI Provider

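// openAIGenerate is a thin wrapper around the OpenAI SDK call (implementation omitted)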
const openAIProvider: LLMProvider = {
  id: "openai",
  generate: async (req) => {
    const res = await openAIGenerate({
      prompt: req.prompt,
      temperature: req.temperature ?? 0.3,
      maxTokens: req.maxTokens ?? 300,
    });

    return {
      text: res.text,
      providerId: "openai",
      usage: res.usage,
    };
  },
};

Example: Gemini Provider

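// geminiGenerate wraps the Gemini SDK; note the vendor-specific field names
// (input, temp, maxOutputTokens, outputText) being adapted to the internal contract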
const geminiProvider: LLMProvider = {
  id: "gemini",
  generate: async (req) => {
    const res = await geminiGenerate({
      input: req.prompt,
      temp: req.temperature ?? 0.3,
      maxOutputTokens: req.maxTokens ?? 300,
    });

    return {
      text: res.outputText,
      providerId: "gemini",
      usage: res.usage,
    };
  },
};

Step 3: Provider Registry

Providers are registered at startup and resolved dynamically.

const createProviderRegistry = (providers: LLMProvider[]) => {
  const map = new Map(providers.map((p) => [p.id, p]));

  const resolve = (id: ProviderId) => {
    const provider = map.get(id);
    if (!provider) throw new Error(`Provider not registered: ${id}`);
    return provider;
  };

  return { resolve };
};
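
Resolving a provider then looks like this (using the providers defined above):

const registry = createProviderRegistry([openAIProvider, geminiProvider]);
const provider = registry.resolve("openai"); // throws if the id was never registered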

Step 4: Failover Policy + Circuit Breaker

Provider selection becomes a policy decision, not scattered logic.

const pickProvider = ({ primary, fallbacks, health }: {
  primary: ProviderId;
  fallbacks: ProviderId[];
  health: (id: ProviderId) => "healthy" | "degraded";
}): ProviderId | undefined => {
  const candidates = [primary, ...fallbacks];
  return candidates.find((id) => health(id) === "healthy");
};

This allows:

  • automatic failover
  • controlled retries
  • predictable p99 behavior
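
The heading above also mentions a circuit breaker. Here is a minimal sketch of the health function behind that policy, assuming a simple per-provider failure counter (the threshold and reset window values are arbitrary); getProviderHealth is the callback wired into pickProvider in the next section:

type HealthStatus = "healthy" | "degraded";

const FAILURE_THRESHOLD = 5;
const RESET_WINDOW_MS = 60_000;

const failures = new Map<ProviderId, { count: number; lastFailureAt: number }>();

// Call this from the catch block around provider.generate
const recordFailure = (id: ProviderId) => {
  const entry = failures.get(id) ?? { count: 0, lastFailureAt: 0 };
  failures.set(id, { count: entry.count + 1, lastFailureAt: Date.now() });
};

const getProviderHealth = (id: ProviderId): HealthStatus => {
  const entry = failures.get(id);
  if (!entry) return "healthy";

  // Half-open: let traffic flow again once the reset window has passed
  if (Date.now() - entry.lastFailureAt > RESET_WINDOW_MS) {
    failures.delete(id);
    return "healthy";
  }

  return entry.count >= FAILURE_THRESHOLD ? "degraded" : "healthy";
};

Opening the circuit after repeated failures and probing again after a cool-down is what keeps one degraded provider from dragging down p99 for every request.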

Putting It All Together

const generateProposal = async (input: ProposalInput) => {
  const settings = await getPlatformSettings();
  const registry = createProviderRegistry([openAIProvider, geminiProvider]);

  const providerId =
    pickProvider({
      primary: settings.primaryProvider,
      fallbacks: settings.fallbackProviders,
      health: getProviderHealth,
    }) ?? settings.primaryProvider; // if nothing reports healthy, still try the primary

  const provider = registry.resolve(providerId);

  return provider.generate({
    prompt: buildProposalPrompt(input),
    temperature: 0.3,
    maxTokens: 220,
  });
};
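
Here, getPlatformSettings, getProviderHealth, and buildProposalPrompt are assumed helpers. For illustration, a hypothetical buildProposalPrompt could be as simple as:

// Hypothetical prompt builder; the exact wording is product-specific
const buildProposalPrompt = (input: ProposalInput): string =>
  [
    "Write a short, friendly proposal message for a customer.",
    `Service: ${input.serviceType}`,
    `Location (ZIP code): ${input.zipCode}`,
    `Desired date: ${input.desiredDate}`,
    input.notes ? `Notes: ${input.notes}` : "",
  ]
    .filter(Boolean)
    .join("\n");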

Performance, p75, p99, and Cost Control

LLM systems often show:

  • good p75
  • poor p99

To control this (a sketch of the first two follows the list):

  • aggressive timeouts
  • retry budgets
  • deterministic caching
  • async processing with queues
  • circuit breakers
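
A minimal sketch of timeouts plus a retry budget, assuming nothing beyond the LLMProvider contract above (the timeout and attempt numbers are placeholders):

// Reject slow calls; note this does not cancel the underlying request
const withTimeout = <T>(promise: Promise<T>, timeoutMs: number): Promise<T> =>
  Promise.race([
    promise,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error("LLM request timed out")), timeoutMs)
    ),
  ]);

const generateWithBudget = async (
  provider: LLMProvider,
  req: LLMGenerateRequest,
  { timeoutMs = 5_000, maxAttempts = 2 } = {}
): Promise<LLMGenerateResult> => {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await withTimeout(provider.generate(req), timeoutMs);
    } catch (err) {
      lastError = err; // a fuller retry budget would also cap retries across requests
    }
  }
  throw lastError;
};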

Measure (an instrumentation sketch follows the list):

  • provider latency percentiles
  • retry rate
  • cache hit rate
  • token usage and cost
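
A sketch of capturing the first and last of these at the contract boundary (recordMetric is a stand-in for whatever sink you use, e.g. StatsD or OpenTelemetry):

// Stand-in metrics sink; swap for your real telemetry client
const recordMetric = (name: string, value: number, tags: Record<string, string>) => {
  console.log(name, value, tags);
};

const instrument = (provider: LLMProvider): LLMProvider => ({
  id: provider.id,
  generate: async (req) => {
    const startedAt = Date.now();
    const result = await provider.generate(req);

    recordMetric("llm.latency_ms", Date.now() - startedAt, { provider: provider.id });
    if (result.usage) {
      recordMetric("llm.output_tokens", result.usage.outputTokens, { provider: provider.id });
    }

    return result;
  },
});

Since instrument returns another LLMProvider, it can wrap providers before registration: createProviderRegistry([instrument(openAIProvider), instrument(geminiProvider)]).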

Monolith First, Microservices Later

This pattern works well in:

  • a single monolith
  • background workers
  • fully distributed systems

The internal contract stays stable while the architecture evolves.


Final Thoughts

Provider Strategy allows teams to:

  • isolate external complexity
  • scale integrations safely
  • improve reliability and p99
  • avoid vendor lock-in

Instead of fighting providers, your platform becomes provider-aware but provider-independent.


Follow me on Twitter.
See more at https://linktr.ee/daniloab.

Photo by Denys Nevozhai on Unsplash
