DEV Community

Anton Fedotov
Anton Fedotov

Posted on

I built an open-source trust boundary for RAG and AI agent pipelines

A pattern I keep seeing in AI agent workflows:

they work on clean demo data,
then become fragile on real-world content.

Not because of some dramatic “AI jailbreak” story.

More often because the pipeline has no clear boundary between:

  • trusted instructions
  • retrieved content
  • emails / PDFs / tickets
  • memory
  • tool outputs

For the model, all of this can become one context stream.

That means untrusted content can quietly start influencing the workflow.

I built Omega Walls as an open-source Python library for this problem.

The idea is to put a runtime trust boundary between untrusted content, model context, memory, and tools.

The first version focuses on:

  • RAG / agent prompt injection
  • cross-document or cross-step attack pressure
  • secret-exfiltration pressure
  • tool/action abuse
  • deterministic evidence and auditability

Install:

pip install omega-walls
Enter fullscreen mode Exit fullscreen mode


`

GitHub:
https://github.com/synqratech/omega-walls

PyPI:
https://pypi.org/project/omega-walls/

I’d be interested in feedback from people building RAG or internal agent systems: where would you expect this layer to sit in your stack?

Top comments (0)