DEV Community

John Medina
The hidden cost of GPT-4o: what every SaaS founder should know about per-user LLM spend

So you're running a SaaS that leans on an LLM. You check your OpenAI bill at the end of the month, it's a few hundred bucks, you shrug and move on. As long as it's not five figures, who cares, right?

Wrong. That total is hiding a nasty secret: you're probably losing money on some of your users.

I'm not talking about the obvious free-tier leeches. I'm talking about paying customers who are costing you more in API calls than they're giving you in subscription fees. You're literally paying for them to use your product.

The problem with averages

Let's do some quick and dirty math. GPT-4o pricing sits at around $3 per 1M input tokens and $10 per 1M output tokens. That's cheap, but it's not free.

Say you have a summarization feature. A user pastes in 50,000 tokens of text (around 37.5k words) and gets a 1,000 token summary back.

• Input cost: 50,000 / 1,000,000 * $3.00 = $0.15
• Output cost: 1,000 / 1,000,000 * $10.00 = $0.01
• Total cost for one summary: $0.16

If a user on a $19/mo plan does this just four times a day, every day, their usage looks like this:

• Daily cost: $0.16 * 4 = $0.64
• Monthly cost: $0.64 * 30 = $19.20
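The arithmetic above is easy to wrap in a helper so you can sanity-check what a pricing change does to your margins. A minimal sketch, hardcoding the approximate GPT-4o figures used in this example:

```javascript
// Ballpark per-request cost for GPT-4o, using the ~$3 / ~$10 per-1M-token
// figures from the example above. Adjust these when pricing changes.
const INPUT_PRICE_PER_M = 3.0;
const OUTPUT_PRICE_PER_M = 10.0;

function requestCost(promptTokens, completionTokens) {
  const inputCost = (promptTokens / 1_000_000) * INPUT_PRICE_PER_M;
  const outputCost = (completionTokens / 1_000_000) * OUTPUT_PRICE_PER_M;
  return inputCost + outputCost;
}

function monthlyCost(costPerRequest, requestsPerDay, days = 30) {
  return costPerRequest * requestsPerDay * days;
}

const perSummary = requestCost(50_000, 1_000); // ≈ $0.16
const perMonth = monthlyCost(perSummary, 4);   // ≈ $19.20
```

Swap in your own token counts and plan price to see where your break-even point actually sits.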

You just lost twenty cents on that customer. And that's one feature. What if your app is a chatbot? What if they're running complex agentic workflows? It's easy to see how a single "power user" can quietly burn through their subscription fee and start eating into your margins.

Your monthly bill averages this out. You see the total, you see your total MRR, and if one is bigger than the other, you think you're fine. But you're flying blind. You have no idea which customers are profitable and which are financial dead weight.

You can't fix what you can't see

The real issue is attribution. The OpenAI invoice is just a number. It doesn't tell you that customer-123 on the Pro plan cost you $45 last month while customer-456 cost you $1.50. Without that breakdown, you can't make smart decisions.

• You can't identify users who need to be moved to a higher tier.
• You can't set fair rate limits.
• You can't detect abuse.
• You can't accurately price your service.

You're just guessing.

To give you a clearer picture, let's look at how the main providers stack up. Prices are always in flux, but as of early 2026, here's the landscape for the flagship models per million tokens:

Model                         Input Cost / 1M tokens   Output Cost / 1M tokens
OpenAI GPT-4o                 ~$3.00                   ~$10.00
Anthropic Claude 3.5 Sonnet   ~$3.00                   ~$15.00
Google Gemini 1.5 Pro         ~$3.50                   ~$10.50

As you can see, output costs for a model like Claude 3.5 Sonnet are 50% higher than for GPT-4o. If your application is write-heavy (generating long reports, articles, etc.), that difference will show up on your bill. Without per-user tracking, you'd have no idea whether a user who is profitable on GPT-4o would become a money-loser on a different model.
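To see how the table translates into real spend, here's a quick sketch that prices the same summarization job (50,000 tokens in, 1,000 tokens out) on each model. The per-1M figures come from the table above and will drift, so treat them as placeholders:

```javascript
// Approximate prices per 1M tokens, from the comparison table above.
const MODELS = {
  'gpt-4o':            { input: 3.0, output: 10.0 },
  'claude-3.5-sonnet': { input: 3.0, output: 15.0 },
  'gemini-1.5-pro':    { input: 3.5, output: 10.5 },
};

function jobCost(model, promptTokens, completionTokens) {
  const p = MODELS[model];
  return (promptTokens / 1_000_000) * p.input +
         (completionTokens / 1_000_000) * p.output;
}

// Price the summarization example on each model.
for (const model of Object.keys(MODELS)) {
  console.log(model, '$' + jobCost(model, 50_000, 1_000).toFixed(4));
}
```

On this particular read-heavy job the gap is small, because input tokens dominate; flip the ratio toward output and the spread widens fast.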

4 ways to stop the bleeding

Okay, so tracking is the first step. But once you can see the problem, how do you fix it? Here are a few practical strategies. This isn't rocket science, but it's amazing how many startups ignore the basics.

  1. Strategic Rate Limiting
    This is the simplest tool in your arsenal. Don't offer an unlimited buffet. Set generous but firm limits based on your tiers. A free user might get 10 complex summaries per day, while a Pro user gets 100. This prevents a single user from running up a massive bill, accidentally or maliciously.

  2. Introduce Usage-Based Tiers
    Flat-rate subscriptions are simple, but they're a poor fit for variable costs like LLM APIs. A better model is to include a generous token allowance with each plan (e.g., 5 million tokens/month for $19) and then charge for overages. This ensures your power users pay for what they use, keeping your business profitable.

  3. Implement Smart Caching
    Is your tool summarizing popular articles? Are multiple users asking the same question to your chatbot? Cache the results. Hitting a database is orders of magnitude cheaper than hitting an LLM API. A simple Redis cache layer can save a surprising amount of money on redundant queries.

  4. Use Cheaper Models for Simpler Tasks
    Not every task needs a flagship model. For things like text classification, basic formatting, or simple Q&A, a cheaper and faster model like Claude 3 Haiku or Gemini 1.5 Flash can do the job for a fraction of the cost. Route tasks intelligently based on complexity, and save the expensive model for the work that actually needs it.

A simple logging wrapper (example)

You don't need a complex system to get started. Here’s a conceptual JavaScript snippet showing how you could wrap your OpenAI calls to log usage per customer.

// This is a simplified example, not production code.
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Prices per 1M tokens. Keep these in config — they change.
const GPT4O_INPUT_PRICE = 3.0;
const GPT4O_OUTPUT_PRICE = 10.0;

async function callOpenAIWithCostTracking(prompt, customerId) {
  // Your existing OpenAI API call logic
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: prompt }],
  });

  // usage looks like { prompt_tokens: 123, completion_tokens: 456, ... }
  const usage = response.usage;
  const inputCost = (usage.prompt_tokens / 1_000_000) * GPT4O_INPUT_PRICE;
  const outputCost = (usage.completion_tokens / 1_000_000) * GPT4O_OUTPUT_PRICE;
  const totalCost = inputCost + outputCost;

  // Log it to your database
  console.log(`LOGGING: Customer ${customerId} request cost $${totalCost.toFixed(4)}`);
  // await db.logLLMUsage({
  //   customerId,
  //   model: 'gpt-4o',
  //   promptTokens: usage.prompt_tokens,
  //   completionTokens: usage.completion_tokens,
  //   cost: totalCost,
  // });

  return response.choices[0].message.content;
}

// When a user makes a request:
// const result = await callOpenAIWithCostTracking("Summarize this for me...", "customer-123");
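Once rows like the ones logged by the wrapper are in your database, the per-customer breakdown is just a group-and-sum. A sketch over plain objects (in practice this would be a SQL GROUP BY; the row shape matches what the wrapper logs):

```javascript
// Roll up logged usage rows into total cost per customer.
function costByCustomer(rows) {
  const totals = {};
  for (const { customerId, cost } of rows) {
    totals[customerId] = (totals[customerId] || 0) + cost;
  }
  return totals;
}
```

Run that over a month of rows and you get exactly the view the invoice hides: which customer-123 cost you $45 and which customer-456 cost you $1.50.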

Start tracking today

Building your own logging wrapper is a solid first step, but maintaining it at scale gets annoying fast. FWIW, I use a simple open-source tool called LLMeter that does exactly this — it wraps the provider APIs and logs costs per user to a dashboard, no proxying required. It might be worth a look if you're in the same boat and don't want to build the tracking yourself.

But honestly, whether you build it, use a tool, or just run a script, the important thing is to start tracking your per-user LLM spend today. Your bottom line will thank you.
