EGA: Runtime Enforcement for LLM Outputs (v1.0.0)

I built EGA, a runtime enforcement layer for LLM outputs.

The problem: eval tools usually score outputs after something has already gone wrong.

They do not stop bad outputs from going downstream.

EGA sits in the runtime path and checks the model output against the source material before letting it pass through.

Anything without support gets dropped or flagged.
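To make that concrete, here is a rough sketch of the gating idea. The function names and the token-overlap heuristic are mine for illustration only, not the actual EGA API; the package does the support check properly.

```python
# Illustrative sketch of an evidence gate, NOT the EGA API.
# Each generated claim is checked against retrieved source passages;
# unsupported claims are flagged instead of passed downstream.
from dataclasses import dataclass


@dataclass
class GateResult:
    kept: list[str]      # claims with supporting evidence
    flagged: list[str]   # claims with no matching source text


def _overlaps(claim: str, passage: str, threshold: float = 0.5) -> bool:
    # Crude token-overlap heuristic; a real gate would use entailment
    # or citation checks rather than word matching.
    claim_tokens = set(claim.lower().split())
    if not claim_tokens:
        return False
    hits = sum(1 for tok in claim_tokens if tok in passage.lower())
    return hits / len(claim_tokens) >= threshold


def gate_output(claims: list[str], sources: list[str]) -> GateResult:
    kept, flagged = [], []
    for claim in claims:
        supported = any(_overlaps(claim, passage) for passage in sources)
        (kept if supported else flagged).append(claim)
    return GateResult(kept=kept, flagged=flagged)
```

The point is where the check runs: inline, before the output reaches the user or the next stage, rather than in an offline eval afterwards.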

v1.0.0 is live on PyPI today.

This is still early:

  • not benchmarked yet
  • not production-grade calibration yet
  • needs real RAG pipeline feedback

I am looking for engineers building RAG pipelines who are willing to plug this in and tell me where it breaks.

pip install ega
GitHub: https://github.com/bh3r1th/llm-evidence-gated-generation
PyPI: https://pypi.org/project/ega/1.0.0/
