You take the Blue Pill, the story ends. You go back to writing
400‑line if/else blocks to handle every single JSON schema from
Stripe, Shopify, and GitHub. You keep paying OpenAI $0.50 every
time a user wants to see a bar chart of their own data. You stay in
the "Context Window" prison.
You take the Red Pill, you stay in Wonderland, and I show you how
deep the data profiling goes.
## The Problem: The "Generic Function" Trap
Every service---Stripe, GitHub, Jira, or your own internal
database---exposes data differently. As a developer, you usually face
two painful choices:
- Choice A: Write a custom parser for every single API provider. (Goodbye, weekend.)
- Choice B: Dump a 5MB JSON payload into an LLM and watch your API bill explode while the model hallucinates math that doesn't exist.
What I wanted was Choice C:
A plug‑and‑play SDK where the LLM acts as the Architect, but a
high‑performance engine (like Polars) acts as the Contractor.
## The Redpill Philosophy: Profile, Don't Dump
Most AI tools fail because they try to be too smart. They scan
everything, which is slow and expensive.
Redpillx takes a different approach.
A local Data Profiler inspects a small sample (default: 100
rows) of your data first.
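To make that concrete, here is a minimal sketch of what a profiling pass can look like. The `profile` function and `FieldProfile` shape are illustrative assumptions, not the actual Redpillx internals:

```typescript
// Hypothetical profiling step: look at a small sample, infer a compact
// per-field schema, and never ship raw row data to the LLM.
type FieldProfile = { name: string; type: string; examples: unknown[] };

function profile(rows: Record<string, unknown>[], sampleSize = 100): FieldProfile[] {
  const sample = rows.slice(0, sampleSize); // inspect a small sample only
  const fields = new Map<string, FieldProfile>();
  for (const row of sample) {
    for (const [name, value] of Object.entries(row)) {
      const f = fields.get(name) ?? { name, type: typeof value, examples: [] };
      // Keep a few distinct example values so the LLM sees the shape, not the data.
      if (f.examples.length < 3 && !f.examples.includes(value)) f.examples.push(value);
      fields.set(name, f);
    }
  }
  return [...fields.values()];
}

const schema = profile([
  { status: "open", priority: "high" },
  { status: "closed", priority: "low" },
]);
console.log(schema); // two FieldProfiles: "status" and "priority"
```

The key property: the size of `schema` depends on the number of fields, not the number of rows, which is what keeps the token cost flat.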
## Benefits

**Context Window Freedom**

We only send the shape (schema) of your data to the LLM.
Whether you have 10 rows or 1 million rows, the token cost stays
the same.

**Deterministic Math**

The LLM generates a ChartSpec (the instructions).
The actual calculation happens locally using Polars (Python)
or our optimized JavaScript execution engine.
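As a rough illustration of that split, here is a toy `ChartSpec` and a deterministic executor. The real spec format is richer; these field names are assumptions made for the sketch:

```typescript
// Hypothetical spec shape: the LLM writes this once, code runs it forever.
type ChartSpec = {
  groupBy: string;
  aggregate: "count"; // only "count" implemented in this sketch
  labelX: string;
  labelY: string;
};

function executeSpec(spec: ChartSpec, rows: Record<string, string>[]) {
  const counts = new Map<string, number>();
  for (const row of rows) {
    const key = row[spec.groupBy];
    counts.set(key, (counts.get(key) ?? 0) + 1); // plain counting, no LLM involved
  }
  return [...counts.entries()].map(([x, y]) => ({
    x, y, labelX: spec.labelX, labelY: spec.labelY,
  }));
}

const spec: ChartSpec = { groupBy: "status", aggregate: "count", labelX: "Status", labelY: "Count" };
const out = executeSpec(spec, [{ status: "open" }, { status: "open" }, { status: "closed" }]);
console.log(out); // [{ x: "open", y: 2, ... }, { x: "closed", y: 1, ... }]
```

Because the math is ordinary code, the same input always yields the same output; the LLM never gets a chance to hallucinate a number.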
## 🛠️ Tutorial: Building Your First Dynamic Chart
Let's see how easy it is to exit the simulation.
### 1. Installation

```shell
# For the JS/TS fans
npm install redpillx

# For the Python / Data Science crew
pip install redpillx
```
### 2. The "Bring Your Own LLM" Setup
Redpillx doesn't lock you into any specific provider.
You bring the brain (LLM).
Redpillx provides the muscle (execution engine).
```typescript
import { Redpill } from "redpillx";
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const rp = new Redpill()
  .setLlm(async (messages) => {
    const res = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages,
    });
    return {
      content: res.choices[0].message.content,
    };
  })
  .build();
```
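The only contract the adapter has to honor is "messages in, `{ content }` out", which is what keeps the SDK provider-agnostic. A hypothetical offline stub (the names here are invented for illustration) makes that contract visible without any API key:

```typescript
// Minimal message shape the adapter callback receives.
type Message = { role: string; content: string };

// Stub adapter: a real one would call Anthropic, OpenRouter, or a local
// model here; this canned reply just exercises the pipeline offline.
const stubLlm = async (messages: Message[]): Promise<{ content: string }> => {
  return { content: JSON.stringify({ groupBy: "status", aggregate: "count" }) };
};

stubLlm([{ role: "user", content: "Show ticket count by status" }]).then((reply) => {
  console.log(JSON.parse(reply.content).groupBy); // "status"
});
```

Swapping providers means swapping the body of that one function; nothing else in your app changes.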
### 3. Generate & Execute
Point Redpillx at any JSON data and ask a question in plain
English.
```typescript
const myData = {
  tickets: [
    { status: "open", priority: "high" },
    // ...
  ],
};

// 1. The Architect creates the plan (Chart Spec)
const { spec } = await rp.generateSpec(
  myData,
  "Show me ticket count by status"
);

// 2. The Executor performs the calculation locally
const result = rp.execute(spec, myData);

console.log(result.data);
/*
Output:
[
  { x: "open", y: 42, labelX: "Status", labelY: "Count" }
]
*/
```
## 🧬 Why This Works
Because the Spec is separate from the Data, it becomes reusable.
If your data updates every minute, you don't need to call the LLM
again.
Just run:

```typescript
rp.execute(spec, newData);
```

And you're done.
- ⚡ Sub‑millisecond execution
- 🧠 LLM used only once
- 💰 Zero extra tokens
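One way to picture the "LLM used only once" guarantee is a spec cache keyed by question and schema. The `getSpec` helper below is a sketch under that assumption, not part of the Redpillx API:

```typescript
// Cache "intelligence": one LLM call per (question, schema) pair, then
// pure local re-execution as the data refreshes.
const specCache = new Map<string, unknown>();

async function getSpec(
  question: string,
  schemaFingerprint: string,
  callLlm: () => Promise<unknown>
): Promise<unknown> {
  const key = `${question}::${schemaFingerprint}`;
  if (specCache.has(key)) return specCache.get(key); // zero extra tokens
  const spec = await callLlm(); // the only LLM round-trip for this key
  specCache.set(key, spec);
  return spec;
}

// Two refreshes, one LLM call:
let llmCalls = 0;
const fakeLlm = async () => { llmCalls++; return { groupBy: "status" }; };
getSpec("count by status", "schema-v1", fakeLlm)
  .then(() => getSpec("count by status", "schema-v1", fakeLlm))
  .then(() => console.log(llmCalls)); // 1
```

If the schema changes (the fingerprint differs), the cache misses and the LLM is consulted exactly once more, which matches the "spec is separate from data" idea above.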
## 🤝 Credits & Inspiration
This project stands on the shoulders of giants:
**PyGWalker**
A massive inspiration for how interactive data exploration should feel.

**OpenCode**
For supporting the spirit of open‑source collaboration.

**OpenRouter**
For making it easy to test the SDK against dozens of models (Llama,
Claude, GPT) with a single API.
## 📦 Get Started
The project is fully open‑source and ready for contributions.
**JavaScript SDK**: GitHub -- red-pill-js | NPM

**Python SDK**: GitHub -- red-pill-py | PyPI
## Final Question
Which pill will you take?
Let me know in the comments how you're dealing with the "Dashboard
Tax" in your apps 👇
## Top comments (3)
"The dashboard is lying to you" is one of those titles that forces you to click — and the underlying problem (simulated data that looks real during demos) is genuinely underappreciated in the indie dev world.
The trust problem with AI-generated demos is even worse: not only can the data be fake, but the reasoning the AI applies to that fake data is also optimistic. If you prompt an AI to analyze a dashboard, it'll find patterns whether they're real or not.
I've been building flompt (flompt.dev) — a visual prompt builder that structures prompts into 12 semantic blocks including an explicit constraints block. One of the things I use constraints for is exactly this: "do not infer trends from fewer than N data points", "flag when sample size is insufficient", etc. Structured constraints make AI analysis more honest than free-form prompts. Free, open-source, MCP server available:
`claude mcp add flompt https://flompt.dev/mcp/`

What's the core insight behind Redpillx — is it about better data sourcing, better visualization, or something else entirely?
I completely agree with your point on 'optimistic reasoning.' That’s actually a core reason why I built Redpillx to be LLM-minimalist.
I didn't want to build a tool that overuses the LLM just for the sake of saying 'AI is used.' In fact, Redpillx is designed to treat the LLM as a last resort for logic, not a continuous engine for data.
Here is how we minimize the 'AI noise':
- **Logic vs. Calculation:** The LLM is strictly the "Architect." It creates a ChartSpec (a reusable instruction set). Once that's done, the LLM is fired. The actual math and filtering are handled by our deterministic execution engine (Polars/JS). We don't ask the AI to count; we ask it to write the formula.
- **The "One-and-Done" Spec:** Because the logic lives in a reusable spec, you can apply that same logic across different data providers or updated datasets without ever calling the LLM API again. It's about caching intelligence.
- **Strict Profiling:** By profiling the data locally first, we give the LLM a tiny, high-fidelity "map" of the schema. This prevents the LLM from guessing or over-analyzing. We give it only what it needs to understand the structure, nothing more.
To your point about honesty: Redpillx acts as the middleware. It doesn't care about the visualization or the data sourcing—it just ensures that the bridge between the two is a reusable, prompt-based logic that doesn't burn tokens on things code can do better.