Hi everyone, I’m building StillMe—a RAG system focused on absolute transparency and zero-anthropomorphism.
I recently received an invite to the Algolia Agent Studio Challenge. With a $3,000 prize pool, it was tempting. Algolia is a world-class search engine, and integrating it could technically solve my retrieval latency in Stage 2.
But after a deep technical audit, I’ve decided to say "No-Go" (for now). Here is why, and why it matters for Ethical AI.
The Transparency Paradox
StillMe is built on the promise that every piece of information can be audited. Algolia is a SaaS with a proprietary, "black-box" ranking system. Moving my retrieval layer there means I can no longer guarantee 100% transparency to my users. Given StillMe's commitment to "Intellectual Humility," I can't justify a proprietary middleman.

Overkill for "Deep Research"
StillMe learns from arXiv and RSS feeds six times a day. It needs high-precision retrieval to feed a 19-layer validation chain, not the millisecond search-as-you-type experience built for e-commerce.

The Challenge for the Community
I’m stuck. I want the precision of keyword search combined with the auditability of open source.
Vector DBs (Chroma/FAISS) are great at semantic matching but struggle with exact technical terms.
Proprietary Search APIs are fast but opaque.
I’m looking for contributors who care about this: How can we build a Hybrid Search layer that remains 100% local and auditable? Should I look into Meilisearch? Or stick with a refined BM25 on top of Chroma?
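For the "refined BM25 on top of Chroma" option, one common pattern is to run the lexical and vector retrievers independently and merge their outputs with Reciprocal Rank Fusion (RRF). A minimal, stdlib-only sketch below shows just the fusion step; the document IDs and the two rankings are hypothetical stand-ins for what BM25 and Chroma would actually return, and the scorers themselves are out of scope.

```python
def rrf_fuse(rankings, k=60):
    """Fuse multiple ranked lists of doc IDs into one ranking.

    Each document's fused score is sum(1 / (k + rank)) over every
    ranking it appears in. k=60 is the constant proposed in the
    original RRF paper (Cormack et al., 2009).
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical outputs from the two independent retrievers:
bm25_ranking = ["doc_exact_term", "doc_shared", "doc_rare_term"]     # lexical (BM25)
vector_ranking = ["doc_semantic", "doc_shared", "doc_exact_term"]    # vector (e.g. Chroma)

fused = rrf_fuse([bm25_ranking, vector_ranking])
# Documents rewarded by both retrievers float to the top.
```

The appeal for an auditability-first system is that RRF is a dozen lines of pure arithmetic: every fused rank can be traced back to the two input rankings, with no opaque scoring model in between.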
If you believe AI should be a tool we can trust, not an illusion we fall for, I’d love your feedback on my repo.
GitHub: https://github.com/anhmtk/StillMe-Learning-AI-System-RAG-Foundation