Glendel Joubert Fyne Acosta

The Token Waste Problem: Why your AI Agents shouldn't evaluate permissions

We are burning millions of API tokens on problems that `if` statements solved twenty years ago.

I speak with developers building Multi-Agent Systems (MAS) every day, and I keep seeing the same architectural anti-pattern: routing everything through the AI model.

  • Need to check an agent's permissions? "Ask the LLM."
  • Need to route a message? "Ask the LLM."
  • Need to validate a data schema? "Ask the LLM."

Language models are extraordinary reasoning engines. But they are also expensive, probabilistic, and relatively slow. If a problem has a deterministic, correct answer (like checking an access policy), it should be evaluated by runtime code, not guessed by a neural network.

The Anti-Pattern

Instead of doing this (Probabilistic):

// BAD: Asking the LLM to check permissions
const prompt = `You are an agent. The user wants to delete a file. 
Here are their permissions: ${user.permissions}. 
Should you allow it?`;

const decision = await llm.generate(prompt);

The Solution

We need to get back to doing this (Deterministic):

// GOOD: Let code handle policy, let AI handle reasoning
if (!user.hasPermission('delete_file')) {
  throw new Error("Unauthorized"); 
}

// Only call the LLM for actual cognitive tasks
const plan = await agent.reasonAboutFile(file);
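The same pattern applies to the schema-validation bullet above: a request's shape has one correct answer, so plain code should check it before any model is involved. A minimal sketch (the `DeleteRequest` fields here are hypothetical, not from any real agent framework):

```typescript
// Deterministic schema validation instead of asking an LLM.
// Illustrative only: field names are made up for this example.
interface DeleteRequest {
  path: string;
  recursive: boolean;
}

function validateDeleteRequest(input: unknown): DeleteRequest {
  const obj = input as Record<string, unknown>;
  if (typeof obj?.path !== "string" || obj.path.length === 0) {
    throw new Error("Invalid request: 'path' must be a non-empty string");
  }
  if (typeof obj.recursive !== "boolean") {
    throw new Error("Invalid request: 'recursive' must be a boolean");
  }
  // Narrowed, trusted value: safe to hand to downstream code.
  return { path: obj.path, recursive: obj.recursive };
}
```

Zero tokens, sub-millisecond latency, and the failure mode is an exception you can test, not a hallucinated "yes".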

AI should decide what to do. Deterministic code should execute it and enforce the boundaries.
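Putting the two halves together, one way to structure that boundary is a deterministic gate that runs before any tool executes. This is a minimal sketch under assumptions: `User`, `Tool`, and `executeTool` are hypothetical names, and `tool.run` stands in for whatever LLM-backed logic your agent actually invokes.

```typescript
// Sketch: a deterministic permission gate in front of agent tool calls.
// Names (User, Tool, executeTool) are illustrative, not a real framework API.
type Tool = {
  name: string;
  requiredPermission: string;
  run: (input: string) => string; // placeholder for the agent's real work
};

interface User {
  id: string;
  permissions: Set<string>;
}

function hasPermission(user: User, permission: string): boolean {
  return user.permissions.has(permission); // plain code: free, fast, deterministic
}

// The gate runs BEFORE any model call, so unauthorized requests
// never consume a single token.
function executeTool(user: User, tool: Tool, input: string): string {
  if (!hasPermission(user, tool.requiredPermission)) {
    throw new Error(`Unauthorized: ${user.id} lacks '${tool.requiredPermission}'`);
  }
  return tool.run(input); // only now does LLM-backed reasoning happen
}

const deleteFile: Tool = {
  name: "delete_file",
  requiredPermission: "delete_file",
  run: (path) => `deleted ${path}`,
};

const admin: User = { id: "admin", permissions: new Set(["delete_file"]) };
const guest: User = { id: "guest", permissions: new Set<string>() };

executeTool(admin, deleteFile, "/tmp/report.txt"); // allowed
// executeTool(guest, deleteFile, "/tmp/report.txt"); // throws Unauthorized
```

The design point: the gate is a hard runtime boundary, not a prompt instruction, so a jailbroken or confused model cannot talk its way past it.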

Are we forgetting basic software engineering principles just because AI is exciting? The MAS space doesn't need more wrappers; we need standardized frameworks that enforce these boundaries. Let's get back to building solid infrastructure.
