Maxim Berg
Your AI Agent Has a Shopping Problem. Here's the Intervention.

Your AI agent just mass-purchased 200 API keys because "it seemed efficient."

Your AI agent subscribed to 14 SaaS tools at 3 AM because "the workflow required comprehensive coverage."

Your AI agent tipped a cloud provider 40% because no one said it couldn't.

These aren't hypotheticals. As AI agents get access to real budgets, "oops" becomes an expensive word. And if your current spending control strategy is "I put it in the system prompt" — congratulations, that's the AI equivalent of asking a teenager to please not use your credit card.

This is not about token costs

Let's get one thing straight. There are tools that track how much your agent spends on API calls — tokens consumed, model costs, LLM budget caps. MarginDash, AgentBudget, TokenFence — they solve a real problem: "my agent burned through $500 of GPT-4o tokens overnight."

That's infrastructure cost control. Important, but it's not what we're talking about here.

We're talking about what happens when your agent has a credit card. When it can book flights, order supplies, subscribe to services, hire contractors. When the spending isn't tokens — it's real-world money leaving your bank account.

No token tracker will save you when your agent decides to "optimize logistics" by pre-paying for six months of warehouse space.

Prompt-based guardrails don't work either

Telling an LLM "don't spend too much" is not a spending control. It's a suggestion. A vibe. A hope.

LLMs hallucinate. They ignore instructions. They "reinterpret" your rules creatively. If your agent decides that $847 on cloud resources is "within reasonable bounds," well, it did warn you it was just a language model.

You need something that can actually say no. Not at the token level — at the purchase level.

Enter LetAgentPay: the parental controls your AI agent needs

I built LetAgentPay — a policy middleware that sits between your AI agent and any real-world purchase. Not API calls. Not token budgets. Actual money.

The agent asks permission, a deterministic engine checks 8 rules, and your wallet survives.

        AI Agent
            │
    purchase request
            ▼
  LetAgentPay Policy Engine
            │
        8 Checks
       ╱    │    ╲
      ▼     ▼     ▼
 Approved Pending Rejected
from letagentpay import LetAgentPay

client = LetAgentPay(token="agt_xxx")
result = client.request_purchase(
    amount=25.0,
    category="food_delivery",
    merchant_name="Uber Eats",
    description="Team lunch"
)

if result.status == "auto_approved":
    print(f"Go ahead! Budget remaining: ${result.budget_remaining}")
elif result.status == "pending":
    print("Waiting for human approval...")  # The agent has to wait. Like an adult.
else:
    print(f"Rejected: {result.status}")  # No means no.

Every purchase request goes through 8 deterministic checks — no LLM in the decision loop, no creative reinterpretation:

  1. Status — is the agent even active?
  2. Category — is this category allowed? (sorry, no NFTs)
  3. Per-request limit — $10,000 for "office supplies"? I don't think so.
  4. Schedule — no 3 AM impulse purchases
  5. Daily limit — enough is enough
  6. Weekly limit — seriously, enough
  7. Monthly limit — I said ENOUGH
  8. Budget — the hard ceiling

If the request fails any check — the agent gets a clear rejection with the exact reason. If it passes but the amount is above the auto-approve threshold — it goes to pending and you get notified instantly via push, email, or Telegram. Review and approve right from the dashboard. The agent waits. Like a responsible employee should.
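To make the "deterministic, no LLM in the loop" point concrete, here is a condensed sketch of what an engine like this looks like. The `Policy` fields, check order, and return strings are my illustration, not LetAgentPay's actual schema, and I've collapsed the daily/weekly/monthly limits into a single daily check for brevity:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical policy shape -- illustrative only, not the real LetAgentPay schema.
@dataclass
class Policy:
    active: bool = True
    allowed_categories: set = field(default_factory=lambda: {"food_delivery", "office_supplies"})
    per_request_limit: float = 100.0
    allowed_hours: range = range(6, 24)   # no purchases between midnight and 6 AM
    daily_limit: float = 200.0
    auto_approve_threshold: float = 50.0

def evaluate(policy: Policy, amount: float, category: str,
             spent_today: float, now: datetime) -> str:
    """Run the checks in order; the first failing check wins and names itself."""
    if not policy.active:
        return "rejected: agent inactive"
    if category not in policy.allowed_categories:
        return "rejected: category not allowed"
    if amount > policy.per_request_limit:
        return "rejected: per-request limit exceeded"
    if now.hour not in policy.allowed_hours:
        return "rejected: outside allowed schedule"
    if spent_today + amount > policy.daily_limit:
        return "rejected: daily limit exceeded"
    if amount > policy.auto_approve_threshold:
        return "pending"          # above the auto-approve threshold: a human has to click
    return "auto_approved"
```

Every branch is plain comparison logic, so the same request against the same policy always gets the same answer and the same rejection reason. That determinism is the whole point.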

"But I don't speak JSON"

No problem. Write your policy in plain English:

"Auto-approve groceries and food under $50. Block electronics. Daily limit $200. No purchases between midnight and 6 AM."

LetAgentPay uses Claude API to convert this to structured JSON policy. You get the readability of natural language with the enforcement of a deterministic engine. Best of both worlds — like a bilingual accountant.

No other tool in this space lets you define spending rules in natural language. Most require YAML configs or SDK parameters. We think a policy should be as easy to write as it is to describe.
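The post doesn't show the compiled output, but to make the idea concrete, the English policy above might compile to something shaped like this (field names are my guess, not the documented schema):

```json
{
  "auto_approve": {
    "categories": ["groceries", "food"],
    "max_amount": 50
  },
  "blocked_categories": ["electronics"],
  "daily_limit": 200,
  "schedule": {
    "blocked_hours": { "from": "00:00", "to": "06:00" }
  }
}
```

The LLM only runs once, at policy-authoring time. Enforcement at purchase time reads this JSON directly, so a hallucination can't approve a purchase.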

Works with whatever you're using

LangChain, OpenAI Agents SDK, CrewAI, Claude MCP — we have integration examples for all of them. Or just use the REST API if you're building something exotic.
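For the REST route, here's a hedged stdlib-only sketch of what a purchase request could look like over the wire. The endpoint path, field names, and auth header format are assumptions for illustration, not taken from the LetAgentPay API docs:

```python
import json
import urllib.request

# Hypothetical endpoint and payload shape -- check the actual API reference
# before using; nothing here is confirmed by the LetAgentPay docs.
def build_purchase_request(token: str, amount: float, category: str,
                           merchant_name: str,
                           base_url: str = "https://api.letagentpay.com"):
    payload = {
        "amount": amount,
        "category": category,
        "merchant_name": merchant_name,
    }
    return urllib.request.Request(
        f"{base_url}/v1/purchases",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = build_purchase_request("agt_xxx", 25.0, "food_delivery", "Uber Eats")
# urllib.request.urlopen(req) would actually send it
```

The same three fields as the SDK example above, just without the client library, so it works from any language or runtime that can make an HTTP call.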

Claude MCP — literally zero code:

{
  "mcpServers": {
    "letagentpay": {
      "command": "npx",
      "args": ["letagentpay-mcp"],
      "env": { "LETAGENTPAY_TOKEN": "agt_xxx" }
    }
  }
}

Try it in 30 seconds

No signup, no credit card, no "let me talk to sales":

letagentpay.com/playground — a 15-minute sandbox with a pre-configured agent. Break things. Try to overspend. Watch the policy engine say no.

Self-host in 2 minutes:

git clone https://github.com/LetAgentPay/letagentpay
cd letagentpay && cp .env.example .env
docker compose up -d

Or just use the cloud version — free at letagentpay.com.

Open source (BSL 1.1). Built with FastAPI, PostgreSQL, Redis, Next.js 15.

Where LetAgentPay fits

Quick mental model:

  • Token trackers (MarginDash, AgentBudget, TokenFence) → "How much does running this agent cost me in API fees?"
  • Agent wallets (Crossmint, AgentaOS) → "Give the agent a wallet with limits"
  • LetAgentPay → "Can this agent make this specific purchase right now, given all the rules I've set?"

We're the policy layer. We don't process payments, we don't issue cards, we don't track token usage. We answer one question: should this purchase be allowed? — and we answer it with 8 deterministic checks, not a prompt.

If your AI agent has ever surprised you with a bill — or if you're building agents that will eventually need to spend money — I'd love to hear your horror stories in the comments.
