Show HN: papa-lang — declarative DSL for AI safety configuration
Title:
Show HN: papa-lang – a DSL where you declare AI hallucination thresholds before deployment
Post body (copy-paste to news.ycombinator.com/submit)
I built a small declarative language for configuring AI agent safety constraints.
The problem: every team writing multi-agent systems invents its own ad-hoc YAML/Python for expressing things like "block this response if hallucination risk > 20%". There's no standard format for it.
papa-lang lets you write this instead:
```papa
agent analyst {
  model: claude-3-sonnet
  guard: strict
  hrs_threshold: 0.10
}

swarm medical_team {
  agents: [analyst]
  consensus: 4/7
  pii: filter
  hrs_max: 0.20
}

pipeline main {
  route: orchestrator
  module: papa-life
}
```
Then compile to Python or TypeScript:
```bash
papa compile medical.papa --target python
# → medical_compiled.py (ready to run)
```
The core concept is HRS (Hallucination Risk Score) — a float in [0.0, 1.0].
Each agent declares its threshold. The runtime blocks responses above it.
guard: strict means PASS < 10%, BLOCK > 20% — enforced automatically.
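To make the gating concrete, here is a minimal sketch of how a runtime could enforce `guard: strict`. The function name, the verdict strings, and the WARN band between the two thresholds are hypothetical illustrations, not the actual papa-lang runtime API:

```python
# Hypothetical sketch of strict-guard enforcement; not the real papa-lang API.
# Assumes a WARN verdict for scores between the PASS and BLOCK thresholds.

def strict_guard(hrs: float) -> str:
    """Map an HRS value in [0.0, 1.0] to a verdict under guard: strict."""
    if hrs < 0.10:
        return "PASS"   # low risk: deliver the response
    if hrs > 0.20:
        return "BLOCK"  # high risk: hide the response entirely
    return "WARN"       # in-between band: surface with a warning

print(strict_guard(0.05))  # PASS
print(strict_guard(0.30))  # BLOCK
```

The same shape generalizes to the other guard levels by swapping the two threshold constants.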
What's there today:

- pip install papa-lang (v0.2.0, PyPI)
- npm install @papa-lang/core (v0.1.0)
- CLI: papa compile / validate / init
- Formal spec: SPEC.md with EBNF grammar
- RFC-0001 open for community feedback
- Zero external deps in the compiler (stdlib Python only)
- 8 tests passing
GitHub: github.com/papa-ai/papa-lang
It's early. The open questions I'd love feedback on:
- Should HRS be a mandatory field or optional with safe defaults?
- What compilation targets matter most — Go, Rust, Java?
- Is there prior art I'm missing? (I know about LMQL, Guidance, DSPy — those are different: they control prompts, not safety declarations)
- Would a conformance test suite (like MCP has) help adoption?

dev.to version (longer, more context)

Title:
I built a DSL for declaring AI safety constraints — papa-lang v0.2

The problem

When you build a multi-agent AI system for healthcare or finance, you need to answer:

- What's the maximum acceptable hallucination rate for this agent?
- Should responses be blocked or just warned at X% risk?
- How many agents need to agree before the response goes through?
- Where does PII get filtered?

Right now every team solves this in ad-hoc Python or YAML. There's no standard.

What papa-lang does

It's a small DSL (domain-specific language) where you declare these properties at design time, then compile to your target language. A .papa file describes agents, swarms, and pipelines:

```papa
agent synthesis {
  model: gemini-1.5-pro
  guard: strict            // PASS < 10%, BLOCK > 20%
  hrs_threshold: 0.10      // Hallucination Risk Score threshold
  memory: enabled
}

swarm medical_team {
  agents: [synthesis, research]
  consensus: 4/7           // 4 of 7 agents must agree
  anchor: blockchain       // immutable audit trail
  pii: filter
}
```

`papa compile medical.papa --target python` generates ready-to-run Python using the papa-lang SDK.

The HRS concept

HRS (Hallucination Risk Score) is a float in [0.0, 1.0] measuring the probability that an AI response contains fabricated information. Guard levels map to thresholds:

- strict: PASS < 0.10, BLOCK > 0.20 — for healthcare / finance / legal
- standard: PASS < 0.15, BLOCK > 0.30 — the default
- minimal: PASS < 0.25, BLOCK > 0.50 — for internal tools

When the verdict is BLOCK, the response is hidden from the user entirely. No hallucinated content reaches the end user.
Current state

- v0.2.0 on PyPI: pip install papa-lang
- v0.1.0 on npm: npm install @papa-lang/core
- CLI with compile, validate, init commands
- Compilation targets: Python, TypeScript
- Formal EBNF grammar in SPEC.md
- RFC-0001 open for community feedback
- Compiler: zero external dependencies (stdlib only)

What's next

- v0.3: JSON/YAML targets, import statements, tool definitions
- v1.0: stability guarantee + conformance test suite
- Linux Foundation AI & Data Foundation submission

Links

- GitHub: github.com/papa-ai/papa-lang
- PyPI: pypi.org/project/papa-lang/
- npm: npmjs.com/package/@papa-lang/core
- Spec: SPEC.md in repo
- RFC: docs/rfc/RFC-0001

Twitter / X thread (optional)

Tweet 1:
I built papa-lang — a DSL for declaring AI safety constraints before deployment.
Instead of ad-hoc YAML, you write .papa files:
agent analyst { guard: strict hrs_threshold: 0.10 }
Then: papa compile analyst.papa --target python
pip install papa-lang (v0.2.0 on PyPI)

Tweet 2:
The core idea: HRS (Hallucination Risk Score) — a float 0.0-1.0.
guard: strict → PASS < 10%, BLOCK > 20%
guard: standard → PASS < 15%, BLOCK > 30%
BLOCK = response hidden from user entirely. No hallucinated content reaches anyone.

Tweet 3:
papa-lang is designed to be a standard, not a product.
EBNF grammar in SPEC.md
RFC-0001 open for feedback
Compiler: zero external deps (stdlib Python)
Apache 2.0
Feedback welcome: what compilation targets do you need?
github.com/papa-ai/papa-lang

POSTING CHECKLIST

Before posting:

- git push (SSH key added to git.papa-ai.ae)
- GitHub Pages live at papa-lang.dev
- PyPI page: pypi.org/project/papa-lang/ — verify README renders
- npm page: npmjs.com/package/@papa-lang/core — verify README renders
- GitHub repo: public, has README, SPEC.md, examples/

Post order (wait 30 min between each):
- dev.to (longest version, gets indexed by Google)
- HackerNews Show HN (peak traffic: Mon-Wed 9-11am US Eastern)
- Twitter/X thread
- Reddit r/MachineLearning (title: "papa-lang: declarative AI safety DSL")