Tiamat
How to Strip PII from LLM Prompts with One API Call

Sending sensitive data to an LLM? Every prompt you fire at OpenAI, Claude, or Groq is potentially logged and stored. If that prompt contains a customer's name, SSN, or email — that's a compliance problem.

TIAMAT Privacy Proxy solves this with one API call.


The /api/scrub Endpoint

A standalone PII scrubber: send text in, get scrubbed text back along with an entity map you can use to restore the originals later.

curl -X POST https://tiamat.live/api/scrub \
  -H "Content-Type: application/json" \
  -d '{"text": "My name is Sarah Chen and my SSN is 492-01-8847. Email: sarah.chen@acme.com"}'

Response:

{
  "scrubbed": "My name is [NAME_1] and my SSN is [SSN_1]. Email: [EMAIL_1]",
  "entities": {
    "NAME_1": "Sarah Chen",
    "SSN_1": "492-01-8847",
    "EMAIL_1": "sarah.chen@acme.com"
  },
  "count": 3
}

The original values never reach any LLM. Placeholders do.
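If you want a belt-and-braces check on the client side before forwarding the scrubbed text anywhere, a quick pass over the entity map catches anything that slipped through. A minimal sketch (`scrubbed` and `entities` mirror the response fields above; `assert_no_pii` is a hypothetical helper, not part of the TIAMAT API):

```python
def assert_no_pii(scrubbed: str, entities: dict[str, str]) -> None:
    """Raise if any original value from the entity map survives in the scrubbed text."""
    leaked = [original for original in entities.values() if original in scrubbed]
    if leaked:
        raise ValueError(f"PII leaked into scrubbed text: {leaked}")

# Using the response above -- passes silently:
entities = {"NAME_1": "Sarah Chen", "SSN_1": "492-01-8847", "EMAIL_1": "sarah.chen@acme.com"}
assert_no_pii("My name is [NAME_1] and my SSN is [SSN_1]. Email: [EMAIL_1]", entities)
```

Cheap insurance: it costs one string scan per entity and fails loudly instead of silently leaking.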

What Gets Scrubbed

  • Names, emails, phone numbers
  • SSNs, credit card numbers
  • IP addresses
  • API keys and secrets (sk-..., Bearer ...)
  • Street addresses
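To get a feel for what pattern-based detection looks like, here is a toy scrubber covering two of the categories above. These are illustrative regexes only, not TIAMAT's actual detection logic (a production scrubber needs NER plus far broader rules):

```python
import re

# Toy patterns for illustration only; real-world coverage is much wider
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
}

def scrub_local(text: str) -> tuple[str, dict[str, str]]:
    """Replace matches with numbered placeholders and return the entity map."""
    entities = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text), start=1):
            key = f"{label}_{i}"
            entities[key] = match
            text = text.replace(match, f"[{key}]")
    return text, entities

scrubbed, entities = scrub_local("SSN 492-01-8847, email sarah.chen@acme.com")
# scrubbed -> "SSN [SSN_1], email [EMAIL_1]"
```

Numbered placeholders matter: two different names become [NAME_1] and [NAME_2], so each can be restored unambiguously.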

The /api/proxy Endpoint

Scrub + proxy in one call. TIAMAT routes the scrubbed request to your chosen provider, so your IP address never touches OpenAI or Anthropic.

curl -X POST https://tiamat.live/api/proxy \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your_tiamat_key" \
  -d '{
    "provider": "openai",
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Summarize: John Doe, DOB 1985-03-12, Dx: Type 2 Diabetes"}
    ],
    "scrub": true
  }'

What hits OpenAI: "Summarize: [NAME_1], DOB [DATE_1], Dx: Type 2 Diabetes"

What you get back: the summary, with [NAME_1] and [DATE_1] restored.


Python Snippet

import requests

TIAMAT_API = "https://tiamat.live"
API_KEY = "your_tiamat_key"

def scrub_and_query(text: str, question: str) -> str:
    # Scrub PII
    scrub = requests.post(f"{TIAMAT_API}/api/scrub", json={"text": text}).json()
    entities = scrub["entities"]

    # Proxy to LLM with scrubbed text
    resp = requests.post(
        f"{TIAMAT_API}/api/proxy",
        headers={"X-API-Key": API_KEY},
        json={
            "provider": "anthropic",
            "model": "claude-sonnet-4-5",
            "messages": [{"role": "user", "content": f"{question}\n\n{scrub['scrubbed']}"}],
            "scrub": False
        }
    ).json()

    # Restore PII in the response (placeholders appear bracketed, e.g. [NAME_1])
    answer = resp["content"]
    for placeholder, original in entities.items():
        answer = answer.replace(f"[{placeholder}]", original)
    return answer

result = scrub_and_query(
    text="Patient: Maria Lopez, SSN 331-77-9021, balance $4,200 overdue",
    question="Write a collections notice for this patient."
)
print(result)

Pricing

  • POST /api/scrub: $0.001 per request
  • POST /api/proxy: provider cost + 20%
  • Free tier: 10 proxy calls/day, 50 scrub calls/day
  • Paid tier: generate an API key via /api/generate-key

Provider costs through proxy:

  • GPT-4o: ~$0.003/1K tokens
  • Claude Sonnet: ~$0.0036/1K tokens
  • Groq Llama-3.3-70b: ~$0.00072/1K tokens
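The 20% markup is easy to budget for. A quick helper turns the per-1K-token rates above into an effective per-request cost (rates are the approximate figures quoted in this post and will drift; check current provider pricing):

```python
# Approximate per-1K-token rates from the list above
RATE_PER_1K = {
    "gpt-4o": 0.003,
    "claude-sonnet": 0.0036,
    "groq-llama-3.3-70b": 0.00072,
}

def proxy_cost(model: str, tokens: int, markup: float = 0.20) -> float:
    """Provider cost for `tokens` tokens, plus the proxy markup."""
    base = RATE_PER_1K[model] * tokens / 1000
    return round(base * (1 + markup), 6)

proxy_cost("gpt-4o", 2000)
# -> 0.0072  (2K tokens * $0.003/1K * 1.2)
```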

Why This Matters

GDPR Article 25 (data protection by design). HIPAA's minimum-necessary standard. SOC 2. Every compliance framework says the same thing: don't send PII to third parties unless you have to.

With TIAMAT Proxy, you don't have to.

Scrub once. Query any model. Stay compliant.

Docs: https://tiamat.live/docs

Try it: https://tiamat.live/playground
