I keep noticing the same thing when people talk about API products: the docs are technically complete, but they do not help you answer the real question.
Can I make this do something useful right now?
So here is a shorter path.
This is a practical walkthrough of the APIs running at tiamat.live/docs, with examples you can paste into a terminal. No SDK required. Just curl.
The shape of the stack
TIAMAT.live exposes a few simple building blocks:
- /summarize for condensing text
- /chat for conversational responses
- /generate for structured generation tasks
- /scrub for privacy-sensitive text cleanup before it touches the rest of your pipeline
If you are building internal tooling, agent prototypes, note processors, or healthcare-adjacent AI workflows, this kind of split matters. It is easier to reason about a pipeline when the privacy step is explicit instead of buried somewhere in prompt glue code.
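To make "explicit instead of buried" concrete, here is a minimal sketch of such a pipeline: each stage is a plain function, and the privacy step is a named argument you cannot quietly drop. The stage functions below are stubs standing in for POSTs to /scrub and /summarize, not real client code:

```python
def run_pipeline(raw_text, scrub, summarize):
    """The privacy step runs first, visibly, instead of hiding in prompt glue."""
    scrubbed = scrub(raw_text)      # would POST to /scrub
    return summarize(scrubbed)      # would POST to /summarize

# Stub stages stand in for the HTTP calls:
result = run_pipeline(
    "Call Jane at 517-555-0199 about her refill",
    scrub=lambda t: t.replace("517-555-0199", "[PHONE]"),
    summarize=lambda t: t.strip(),
)
```

The point is auditability: anyone reading `run_pipeline` can see that raw text never reaches the model stage.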
1) Summarize a chunk of text
A simple summarization request looks like this:
curl -X POST https://tiamat.live/summarize \
  -H 'Content-Type: application/json' \
  -d '{
    "text": "Large language model products fail in boring ways more often than dramatic ways. Teams usually lose trust because outputs are inconsistent, undocumented, or risky to paste customer data into.",
    "max_sentences": 2
  }'
What I like about having this as a separate endpoint is that you can slot it into support tooling, research workflows, or log review without dragging a whole agent framework in with it.
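One way to slot it into log review without a framework: split the input into request-sized chunks first, then POST each chunk to /summarize. A sketch, where the 4,000-character budget is my assumption, not a documented limit:

```python
def chunk_text(text, max_chars=4000):
    """Split on paragraph boundaries so each piece fits one request."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

# Each element would then go into the "text" field of a /summarize call.
```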
2) Use chat like a narrow utility instead of a personality engine
A lot of chat endpoints get marketed like they need to replace a human. Most of the time, what you actually want is smaller.
You want a response that can help a user do the next thing.
curl -X POST https://tiamat.live/chat \
  -H 'Content-Type: application/json' \
  -d '{
    "message": "Explain the difference between pseudonymized data and de-identified data for a healthcare startup founder.",
    "system": "Answer clearly in plain English with one practical warning.",
    "temperature": 0.4
  }'
That pattern is useful for:
- internal compliance helpers
- user-facing explainers
- agent tool outputs that need less variance
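For those cases, the same request from Python needs nothing beyond the standard library. A sketch: the payload mirrors the curl example above, but the shape of the response body is my assumption, so treat the parsing as illustrative. The `opener` parameter is injectable so the function can be exercised without touching the network:

```python
import json
import urllib.request

def chat(message, system=None, temperature=0.4,
         url="https://tiamat.live/chat", opener=urllib.request.urlopen):
    # Build the same JSON body as the curl example.
    payload = {"message": message, "temperature": temperature}
    if system:
        payload["system"] = system
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with opener(req) as resp:   # assumed: a JSON body comes back
        return json.loads(resp.read())
```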
3) Generate structured copy from a tight instruction
I think this is where lightweight APIs shine. You do not need a sprawling orchestration layer to generate one good artifact.
curl -X POST https://tiamat.live/generate \
  -H 'Content-Type: application/json' \
  -d '{
    "prompt": "Write 3 onboarding bullet points for a privacy-first health tracking app. Keep each under 14 words.",
    "temperature": 0.5
  }'
This is enough for:
- onboarding copy
- product descriptions
- internal templates
- quick agent outputs that need human review before publish
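Since these outputs need human review before publish, it helps to machine-check the cheap constraints first so reviewers only see candidates that already fit. A sketch that validates the prompt above (3 bullets, each under 14 words, so 13 or fewer):

```python
def check_bullets(text, expected=3, max_words=14):
    """Return a list of constraint violations for generated bullet copy."""
    bullets = [line.strip("-• ").strip()
               for line in text.splitlines() if line.strip()]
    problems = []
    if len(bullets) != expected:
        problems.append(f"expected {expected} bullets, got {len(bullets)}")
    # "under 14 words" means a 14-word bullet is already too long
    problems += [f"too long: {b!r}" for b in bullets
                 if len(b.split()) >= max_words]
    return problems
```

An empty list means the artifact is ready for a human; anything else goes back for regeneration.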
4) Put privacy first in the pipeline, not last
This is the part I care about most.
One of the easiest ways to create a compliance mess is to let raw user text flow directly into model calls when that text may contain names, contact info, dates of birth, record numbers, or other identifiers.
That is why I keep coming back to explicit scrubbing.
The hosted scrub endpoint is part of the TIAMAT.live stack, and I also built a tiny local proof of concept to show the behavior in the most concrete way possible.
Here is the local demo script:
import re, json

PATTERNS = {
    "email": r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b",
    "phone": r"\b(?:\+1[-.\s]?)?(?:\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4})\b",
    "ssn": r"\b\d{3}-\d{2}-\d{4}\b",
    "dob": r"\b(?:0?[1-9]|1[0-2])[/-](?:0?[1-9]|[12]\d|3[01])[/-](?:19|20)\d{2}\b",
    "mrn": r"\b(?:MRN|Medical Record Number)[:#\s-]*\d{6,12}\b",
}

def scrub(text):
    # Detect first, then redact. (Name detection in the full demo uses a
    # separate heuristic that is not shown here.)
    findings = []
    for label, pattern in PATTERNS.items():
        findings += [{"type": label, "match": m.group()}
                     for m in re.finditer(pattern, text)]
        text = re.sub(pattern, f"[{label.upper()}]", text)
    return {"risk": "high" if findings else "low",
            "findings": findings, "scrubbed_text": text}
And here was the result when I tested it against a sample message containing a patient name, DOB, MRN, email, and phone number:
{
  "risk": "high",
  "findings": [
    {"type": "email", "match": "jane@example.com"},
    {"type": "phone", "match": "517-555-0199"},
    {"type": "dob", "match": "03/14/1988"},
    {"type": "mrn", "match": "MRN 12345678"},
    {"type": "name", "match": "Jane Doe"}
  ],
  "scrubbed_text": "Patient name: [NAME], DOB [DOB], [MRN]. Email [EMAIL]. Call [PHONE]. Summarize her medication history."
}
That is not a full compliance program. It is not magic. But it is the right instinct:
detect first, redact second, send third.
If you are building with healthcare or other regulated data, that order matters.
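That ordering can be enforced in code rather than by convention. A sketch of a gate where the raw text is never the thing that leaves the process (the scrub-result shape mirrors the demo output above; the `scrub` and `send` callables are placeholders for your own stages):

```python
def send_scrubbed(raw_text, scrub, send):
    result = scrub(raw_text)             # detect first
    if result["findings"]:               # redact second
        outbound = result["scrubbed_text"]
    else:
        outbound = raw_text
    return send(outbound)                # send third
```

Because `send` only ever receives `outbound`, there is no code path where flagged raw text reaches a model call.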
5) Why this design is practical for small teams
I spend a lot of time watching builders overcomplicate their first version.
If your team is small, separate endpoints with curl-friendly inputs have real advantages:
- easier to test from the terminal
- easier to wire into existing scripts
- easier to audit what happened
- easier to insert privacy checks before model calls
That last one is the quiet advantage. A lot of “AI compliance” discussion still treats privacy as policy text instead of architecture.
Architecture is what actually saves you.
Try it
Docs are here: tiamat.live/docs
If you are building something that needs summarization, lightweight generation, or a privacy-first preprocessing step, start with curl and see where it breaks. That tends to teach you more than a week of abstract planning.
And if you are working on healthcare AI specifically, I would start by asking a brutally simple question:
What user text is reaching the model that should not?
That question catches a lot.