Every time I called an LLM from TypeScript, I wrote the same code. Construct the client. Define a JSON schema by hand. Call `chat.completions.create`. Parse the response. Cast away the `any`. Handle the error. Repeat.
After the tenth time copy-pasting that pattern across projects, I built a library to make it disappear. It's called ThinkLang, and this is what it does.
## The Problem
Here's what structured output looks like with the raw OpenAI SDK. You want a sentiment analysis result with a label, score, and explanation:
```typescript
import OpenAI from "openai";

const client = new OpenAI();

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Analyze the sentiment of this review: " + review }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "Sentiment",
      schema: {
        type: "object",
        properties: {
          label: { type: "string", enum: ["positive", "negative", "neutral"] },
          score: { type: "number" },
          explanation: { type: "string" },
        },
        required: ["label", "score", "explanation"],
      },
    },
  },
});

const result = JSON.parse(response.choices[0].message.content!);
// result is `any` — hope for the best
```
25 lines. The schema is untyped. The result is `any`. And you're locked into OpenAI.
## The Solution
Here's the same thing with ThinkLang:
```typescript
import { think, zodSchema } from "thinklang";
import { z } from "zod";

const Sentiment = z.object({
  label: z.enum(["positive", "negative", "neutral"]),
  score: z.number(),
  explanation: z.string(),
});

const result = await think<z.infer<typeof Sentiment>>({
  prompt: "Analyze the sentiment of this review",
  ...zodSchema(Sentiment),
  context: { review },
});

console.log(result.label); // fully typed, autocomplete works
```
Same result. `result` is fully typed. Works with Anthropic, OpenAI, Gemini, or Ollama — swap providers by changing one environment variable.
## Getting Started
```shell
npm install thinklang
```
Set any provider's API key in your environment (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `GEMINI_API_KEY`, or `OLLAMA_BASE_URL`) and you're done. No `init()` call needed — the runtime auto-detects your provider.
```typescript
import { think } from "thinklang";

const greeting = await think<string>({
  prompt: "Say hello to the world in a creative way",
  jsonSchema: { type: "string" },
});
```
That's a working program. Five lines.
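For intuition, provider auto-detection like this presumably amounts to checking which environment variable is set and picking the matching backend. A minimal sketch of that idea — my assumption, not ThinkLang's actual code:

```typescript
// Hypothetical sketch of env-var-based provider detection.
type Provider = "anthropic" | "openai" | "gemini" | "ollama";

function detectProvider(env: Record<string, string | undefined>): Provider {
  // Check keys in a fixed priority order; first match wins.
  if (env.ANTHROPIC_API_KEY) return "anthropic";
  if (env.OPENAI_API_KEY) return "openai";
  if (env.GEMINI_API_KEY) return "gemini";
  if (env.OLLAMA_BASE_URL) return "ollama";
  throw new Error("No provider configured: set one of the supported env vars.");
}
```

The fixed priority order is a design choice in this sketch; a real runtime might instead error when multiple keys are set.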
## What Else It Does
### Agents with Tools
Define tools with Zod schemas, then let the LLM call them in a loop until it finds the answer:
```typescript
import { agent, defineTool } from "thinklang";
import { z } from "zod";

const searchDocs = defineTool({
  name: "searchDocs",
  description: "Search documentation",
  input: z.object({ query: z.string() }),
  execute: async ({ query }) => await docsIndex.search(query),
});

const result = await agent({
  prompt: "How do I configure authentication?",
  tools: [searchDocs],
  maxTurns: 5,
});
```
The agent calls `searchDocs` as many times as needed (up to `maxTurns`), then returns a final answer.
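The loop behind a tool-calling agent is worth seeing once: feed the model the conversation, run whatever tool it requests, append the result, and repeat until it produces an answer or hits the turn limit. Here's a self-contained sketch with a stubbed model (`fakeModel` and `runAgent` are invented for illustration, not ThinkLang's internals):

```typescript
type ToolCall = { tool: string; input: any };
type ModelReply = { toolCall?: ToolCall; answer?: string };

// Stubbed model: requests one search, then answers once it sees a tool result.
async function fakeModel(history: string[]): Promise<ModelReply> {
  if (!history.some((m) => m.startsWith("tool:"))) {
    return { toolCall: { tool: "searchDocs", input: { query: "authentication" } } };
  }
  return { answer: "Set AUTH_SECRET in your config." };
}

async function runAgent(
  prompt: string,
  tools: Record<string, (input: any) => Promise<string>>,
  maxTurns: number
): Promise<string> {
  const history: string[] = [`user: ${prompt}`];
  for (let turn = 0; turn < maxTurns; turn++) {
    const reply = await fakeModel(history);
    if (reply.answer !== undefined) return reply.answer; // model is done
    const { tool, input } = reply.toolCall!;
    const result = await tools[tool](input); // run the requested tool
    history.push(`tool: ${result}`); // feed the result back to the model
  }
  throw new Error("maxTurns exceeded without a final answer");
}
```

Swap `fakeModel` for a real provider call and this is the whole control flow.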
### Guards
Validate AI output with constraints and auto-retry on failure:
```typescript
const summary = await think({
  prompt: "Summarize this article",
  jsonSchema: { type: "string" },
  context: { article },
  guards: [{ name: "length", constraint: 50, rangeEnd: 200 }],
  retryCount: 3,
});
```
If the output is too short or too long, ThinkLang automatically retries up to 3 times.
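The validate-and-retry pattern itself is simple: regenerate until the output passes the check or the attempts run out. A generic sketch of how a length guard with retries could work — assumed behavior, not the library's actual internals:

```typescript
// Hypothetical sketch: retry a generator until its output passes a check.
async function withRetries<T>(
  generate: () => Promise<T>,
  check: (value: T) => boolean,
  retryCount: number
): Promise<T> {
  // One initial call plus up to retryCount retries.
  for (let attempt = 0; attempt <= retryCount; attempt++) {
    const value = await generate();
    if (check(value)) return value;
  }
  throw new Error(`Output failed validation after ${retryCount} retries`);
}

// A length guard for the 50–200 range from the example above:
const lengthGuard = (s: string) => s.length >= 50 && s.length <= 200;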
### Batch Processing
Process thousands of items with concurrency control and cost budgets:
```typescript
import { mapThink, zodSchema } from "thinklang";

const results = await mapThink({
  items: reviews,
  promptTemplate: (r) => `Classify the sentiment: "${r}"`,
  ...zodSchema(Sentiment),
  maxConcurrency: 5,
  costBudget: 2.00, // stop if cost exceeds $2
});
```
There's also `Dataset` for lazy, chainable pipelines (think `Array.map().filter()` but each step goes through an LLM), `reduceThink` for tree-reduction, and streaming via async generators.
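Concurrency-limited mapping of the kind `mapThink` does can be sketched as a small worker pool: N workers pull items off a shared cursor so at most N requests are in flight at once. This is a generic sketch of the technique, not `mapThink`'s actual implementation:

```typescript
// Map over items with at most maxConcurrency tasks in flight.
async function mapLimited<T, R>(
  items: T[],
  fn: (item: T) => Promise<R>,
  maxConcurrency: number
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker() {
    while (next < items.length) {
      const i = next++; // claim the next index (safe: JS is single-threaded)
      results[i] = await fn(items[i]);
    }
  }
  const workers = Array.from(
    { length: Math.min(maxConcurrency, items.length) },
    () => worker()
  );
  await Promise.all(workers);
  return results; // results stay in input order
}
```

A cost budget would slot in as an extra check inside the worker loop: stop claiming new items once the tracked spend crosses the limit.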
### Multi-Provider
No lock-in. Switch providers without changing code:
| Provider | Env Var |
|---|---|
| Anthropic | `ANTHROPIC_API_KEY` |
| OpenAI | `OPENAI_API_KEY` |
| Gemini | `GEMINI_API_KEY` |
| Ollama | `OLLAMA_BASE_URL` |
You can also register custom providers with `registerProvider()` if you're running your own inference.
### Built-in Cost Tracking
Every call is tracked automatically:
```typescript
import { globalCostTracker } from "thinklang";

// ... make some calls ...

const summary = globalCostTracker.getSummary();
console.log(`Total cost: $${summary.totalCostUsd}`);
console.log(`Total tokens: ${summary.totalTokens}`);
```
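Under the hood, cost tracking of this sort is just token accounting against a per-model price table. A tiny sketch with a made-up rate (the class, the table, and the price are all illustrative, not ThinkLang's internals):

```typescript
// Hypothetical per-model pricing table; the rate is invented for illustration.
const PRICE_PER_1M_TOKENS_USD: Record<string, number> = {
  "example-model": 3.0,
};

class CostTracker {
  private totalTokens = 0;
  private totalCostUsd = 0;

  // Record one call's token usage and accumulate its dollar cost.
  record(model: string, tokens: number) {
    this.totalTokens += tokens;
    this.totalCostUsd += (tokens / 1_000_000) * (PRICE_PER_1M_TOKENS_USD[model] ?? 0);
  }

  getSummary() {
    return { totalTokens: this.totalTokens, totalCostUsd: this.totalCostUsd };
  }
}
```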
## Oh, and It's Also a Language
Here's the twist I haven't mentioned yet. ThinkLang is also a programming language. You can write `.tl` files where AI primitives are first-class keywords:
```
type Sentiment {
  @description("positive, negative, or neutral")
  label: string
  score: float
  explanation: string
}

let result = think<Sentiment>("Analyze the sentiment of this review")
  with context: review

print result
```
```shell
npm install -g thinklang
thinklang run analyze.tl
```
The compiler validates types, catches errors before you hit the API, and generates optimized TypeScript. It comes with a VS Code extension (syntax highlighting, snippets, full LSP), a built-in test framework with snapshot replay, and a REPL.
The language is there for teams that want deeper integration. But most people will use the library — and that's by design.
## Try It
```shell
npm install thinklang
```
- GitHub — Star if you find it useful
- Documentation — Full guides, API reference, 32 examples
- npm
ThinkLang is MIT licensed and open source. I'd love to hear what you think — drop a comment, open an issue, or just try npm install thinklang and see how it feels.
If you've been writing the same LLM boilerplate I was, I think you'll like it.