<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Юрий Рипперт</title>
    <description>The latest articles on DEV Community by Юрий Рипперт (@__9e8b043a985a).</description>
    <link>https://dev.to/__9e8b043a985a</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3794330%2F2c95659f-de8f-4efd-9ed3-a7b2f77e880c.jpg</url>
      <title>DEV Community: Юрий Рипперт</title>
      <link>https://dev.to/__9e8b043a985a</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/__9e8b043a985a"/>
    <language>en</language>
    <item>
      <title>I built a DSL for declaring AI safety constraints — papa-lang v0.2</title>
      <dc:creator>Юрий Рипперт</dc:creator>
      <pubDate>Thu, 26 Feb 2026 10:23:02 +0000</pubDate>
      <link>https://dev.to/__9e8b043a985a/i-built-a-dsl-for-declaring-ai-safety-constraints-papa-lang-v02-4815</link>
      <guid>https://dev.to/__9e8b043a985a/i-built-a-dsl-for-declaring-ai-safety-constraints-papa-lang-v02-4815</guid>
      <description>&lt;p&gt;Show HN: papa-lang — declarative DSL for AI safety configuration&lt;br&gt;
Title:&lt;br&gt;
Show HN: papa-lang – a DSL where you declare AI hallucination thresholds before deployment&lt;br&gt;
Post body (copy-paste to news.ycombinator.com/submit)&lt;br&gt;
I built a small declarative language for configuring AI agent safety constraints.&lt;br&gt;
The problem: every team writing multi-agent systems invents their own ad-hoc YAML/Python&lt;br&gt;
for expressing things like "block this response if hallucination risk &amp;gt; 20%".&lt;br&gt;
There's no standard format for it.&lt;br&gt;
papa-lang lets you write this instead:&lt;br&gt;
agent analyst {&lt;br&gt;
model: claude-3-sonnet&lt;br&gt;
guard: strict&lt;br&gt;
hrs_threshold: 0.10&lt;br&gt;
}&lt;br&gt;
swarm medical_team {&lt;br&gt;
agents: [analyst]&lt;br&gt;
consensus: 4/7&lt;br&gt;
pii: filter&lt;br&gt;
hrs_max: 0.20&lt;br&gt;
}&lt;br&gt;
pipeline main {&lt;br&gt;
route: orchestrator&lt;br&gt;
module: papa-life&lt;br&gt;
}&lt;br&gt;
&lt;p&gt;Then compile to Python or TypeScript:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;papa compile medical.papa --target python
# → medical_compiled.py (ready to run)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The core concept is HRS (Hallucination Risk Score) — a float in [0.0, 1.0].&lt;br&gt;
Each agent declares its threshold; the runtime blocks responses above it.&lt;br&gt;
guard: strict means PASS &amp;lt; 10%, BLOCK &amp;gt; 20% — enforced automatically.&lt;/p&gt;
&lt;p&gt;What's there today:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;pip install papa-lang (v0.2.0, PyPI)&lt;/li&gt;
&lt;li&gt;npm install @papa-lang/core (v0.1.0)&lt;/li&gt;
&lt;li&gt;CLI: papa compile / validate / init&lt;/li&gt;
&lt;li&gt;Formal spec: SPEC.md with EBNF grammar&lt;/li&gt;
&lt;li&gt;RFC-0001 open for community feedback&lt;/li&gt;
&lt;li&gt;Zero external deps in the compiler (stdlib Python only)&lt;/li&gt;
&lt;li&gt;8 tests passing&lt;/li&gt;
&lt;li&gt;GitHub: github.com/papa-ai/papa-lang&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's early. The open questions I'd love feedback on:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Should HRS be a mandatory field or optional with safe defaults?&lt;/li&gt;
&lt;li&gt;What compilation targets matter most — Go, Rust, Java?&lt;/li&gt;
&lt;li&gt;Is there prior art I'm missing? (I know about LMQL, Guidance, DSPy — those are different: they control prompts, not safety declarations.)&lt;/li&gt;
&lt;li&gt;Would a conformance test suite (like MCP has) help adoption?&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;dev.to version (longer, more context)&lt;/h2&gt;

&lt;p&gt;Title: I built a DSL for declaring AI safety constraints — papa-lang v0.2&lt;/p&gt;
&lt;h3&gt;The problem&lt;/h3&gt;

&lt;p&gt;When you build a multi-agent AI system for healthcare or finance, you need to answer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What's the maximum acceptable hallucination rate for this agent?&lt;/li&gt;
&lt;li&gt;Should responses be blocked or just warned at X% risk?&lt;/li&gt;
&lt;li&gt;How many agents need to agree before the response goes through?&lt;/li&gt;
&lt;li&gt;Where does PII get filtered?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Right now every team solves this in ad-hoc Python or YAML. There's no standard.&lt;/p&gt;

&lt;h3&gt;What papa-lang does&lt;/h3&gt;

&lt;p&gt;It's a small DSL (domain-specific language) where you declare these properties at design time, then compile to your target language. A .papa file describes agents, swarms, and pipelines:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;agent synthesis {
  model: gemini-1.5-pro
  guard: strict          // PASS &amp;lt; 10%, BLOCK &amp;gt; 20%
  hrs_threshold: 0.10    // Hallucination Risk Score threshold
  memory: enabled
}

swarm medical_team {
  agents: [synthesis, research]
  consensus: 4/7         // 4 of 7 agents must agree
  anchor: blockchain     // immutable audit trail
  pii: filter
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;papa compile medical.papa --target python generates ready-to-run Python using the papa-lang SDK.&lt;/p&gt;

&lt;h3&gt;The HRS concept&lt;/h3&gt;

&lt;p&gt;HRS (Hallucination Risk Score) is a float in 0.0–1.0 measuring the probability that an AI response contains fabricated information. Guard levels map to thresholds:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;strict: PASS &amp;lt; 0.10, BLOCK &amp;gt; 0.20 — for healthcare / finance / legal&lt;/li&gt;
&lt;li&gt;standard: PASS &amp;lt; 0.15, BLOCK &amp;gt; 0.30 — default&lt;/li&gt;
&lt;li&gt;minimal: PASS &amp;lt; 0.25, BLOCK &amp;gt; 0.50 — internal tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When the verdict is BLOCK, the response is hidden from the user entirely. No hallucinated content reaches the end user.&lt;/p&gt;
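&lt;p&gt;The guard levels above reduce to a pair of thresholds per level. A minimal sketch of the mapping — the threshold values come from the table above, but the function name and the middle WARN band between PASS and BLOCK are my assumptions:&lt;/p&gt;

```python
# Thresholds from the post: (pass_below, block_above) per guard level.
GUARD_LEVELS = {
    "strict":   (0.10, 0.20),  # healthcare / finance / legal
    "standard": (0.15, 0.30),  # default
    "minimal":  (0.25, 0.50),  # internal tools
}

def verdict(guard: str, hrs: float) -> str:
    """Map an HRS value to PASS / WARN / BLOCK under a guard level."""
    pass_below, block_above = GUARD_LEVELS[guard]
    if hrs < pass_below:
        return "PASS"
    if hrs > block_above:
        return "BLOCK"  # response is hidden from the user entirely
    return "WARN"       # between the two thresholds (assumed behavior)

print(verdict("strict", 0.05))    # PASS
print(verdict("strict", 0.15))    # WARN
print(verdict("standard", 0.35))  # BLOCK
```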
&lt;h3&gt;Current state&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;v0.2.0 on PyPI: pip install papa-lang&lt;/li&gt;
&lt;li&gt;v0.1.0 on npm: npm install @papa-lang/core&lt;/li&gt;
&lt;li&gt;CLI with compile, validate, init commands&lt;/li&gt;
&lt;li&gt;Compilation targets: Python, TypeScript&lt;/li&gt;
&lt;li&gt;Formal EBNF grammar in SPEC.md&lt;/li&gt;
&lt;li&gt;RFC-0001 open for community feedback&lt;/li&gt;
&lt;li&gt;Compiler: zero external dependencies (stdlib only)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;What's next&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;v0.3: JSON/YAML targets, import statements, tool definitions&lt;/li&gt;
&lt;li&gt;v1.0: stability guarantee + conformance test suite&lt;/li&gt;
&lt;li&gt;Linux Foundation AI &amp;amp; Data Foundation submission&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Links&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;GitHub: github.com/papa-ai/papa-lang&lt;/li&gt;
&lt;li&gt;PyPI: pypi.org/project/papa-lang/&lt;/li&gt;
&lt;li&gt;npm: npmjs.com/package/@papa-lang/core&lt;/li&gt;
&lt;li&gt;Spec: SPEC.md in repo&lt;/li&gt;
&lt;li&gt;RFC: docs/rfc/RFC-0001&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Twitter / X thread (optional)&lt;/h2&gt;

&lt;p&gt;Tweet 1:&lt;br&gt;
I built papa-lang — a DSL for declaring AI safety constraints before deployment.&lt;br&gt;
Instead of ad-hoc YAML, you write .papa files:&lt;br&gt;
agent analyst { guard: strict hrs_threshold: 0.10 }&lt;br&gt;
Then: papa compile analyst.papa --target python&lt;br&gt;
pip install papa-lang (v0.2.0 on PyPI)&lt;/p&gt;

&lt;p&gt;Tweet 2:&lt;br&gt;
The core idea: HRS (Hallucination Risk Score) — a float 0.0–1.0.&lt;br&gt;
guard: strict → PASS &amp;lt; 10%, BLOCK &amp;gt; 20%&lt;br&gt;
guard: standard → PASS &amp;lt; 15%, BLOCK &amp;gt; 30%&lt;br&gt;
BLOCK = response hidden from the user entirely.&lt;br&gt;
No hallucinated content reaches anyone.&lt;/p&gt;

&lt;p&gt;Tweet 3:&lt;br&gt;
papa-lang is designed to be a standard, not a product.&lt;br&gt;
EBNF grammar in SPEC.md&lt;br&gt;
RFC-0001 open for feedback&lt;br&gt;
Compiler: zero external deps (stdlib Python)&lt;br&gt;
Apache 2.0&lt;br&gt;
Feedback welcome: what compilation targets do you need?&lt;br&gt;
github.com/papa-ai/papa-lang&lt;/p&gt;
&lt;h2&gt;Posting checklist&lt;/h2&gt;

&lt;p&gt;Before posting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;git push (SSH key added to git.papa-ai.ae)&lt;/li&gt;
&lt;li&gt;GitHub Pages live at papa-lang.dev&lt;/li&gt;
&lt;li&gt;PyPI page: pypi.org/project/papa-lang/ — verify the README renders&lt;/li&gt;
&lt;li&gt;npm page: npmjs.com/package/@papa-lang/core — verify the README renders&lt;/li&gt;
&lt;li&gt;GitHub repo: public, with README, SPEC.md, examples/&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Post order (wait 30 min between each):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;dev.to (longest version, gets indexed by Google)&lt;/li&gt;
&lt;li&gt;Hacker News Show HN (peak traffic: Mon–Wed 9–11am US Eastern)&lt;/li&gt;
&lt;li&gt;Twitter/X thread&lt;/li&gt;
&lt;li&gt;Reddit r/MachineLearning (title: "papa-lang: declarative AI safety DSL")&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>opensource</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
