“RAG is dead” is lazy. What’s dead is cosine-N without a retrieval plan. In my latest post I include a hands-on Colab notebook and explore Tensorlake with RAG. The demo: compare the claims made in news articles about Tesla against Tesla’s actual SEC filings.
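To make the target concrete, here is roughly what the naive baseline looks like: embed everything, rank by cosine, take the top N, and hope. A minimal sketch assuming sentence-transformers — the chunks, the claim, and the model choice are all illustrative, and this is the baseline the post argues against, not the Tensorlake pipeline from the notebook:

```python
# Naive "cosine-N": embed chunks, rank by cosine similarity, take top N.
# Everything here is illustrative -- placeholder chunks, placeholder model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Pretend these were parsed out of a Tesla 10-K filing.
chunks = [
    "Total revenues increased year over year, driven by vehicle deliveries.",
    "Automotive regulatory credits are recognized as revenue when earned.",
    "Energy generation and storage revenue grew versus the prior year.",
]
claim = "Tesla's revenue declined last year."

chunk_emb = model.encode(chunks, convert_to_tensor=True)
claim_emb = model.encode(claim, convert_to_tensor=True)

# No retrieval plan: just take the N nearest neighbors and pass them on.
hits = util.semantic_search(claim_emb, chunk_emb, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 3), chunks[hit["corpus_id"]])
```

Note that a chunk saying revenue increased can rank first for a claim that revenue declined: similarity is not agreement, which is exactly the gap a retrieval plan has to close.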
Top comments (3)

I built WFGY: a reasoning engine rewiring how language, logic & science interact. Open-source. Verifiable. Semantic decompression for new modes of thought.
https://github.com/onestardao/WFGY
Really like the way you framed this: “cosine-N without a retrieval plan” nails a failure pattern we see over and over. In our own tracking we call this Problem Map No. 5 (semantic ≠ embedding): cosine matches don’t guarantee semantic fidelity, so retrieval drifts unless you enforce a plan.
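Here’s a tiny repro of that point — a minimal sketch assuming sentence-transformers; the sentences are made up and exact scores vary by model:

```python
# "Semantic != embedding" in one measurement: a claim and its direct
# negation can score almost as high as a genuine paraphrase under cosine.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

claim      = "Tesla's revenue grew in 2023."
paraphrase = "Tesla's 2023 revenue increased."
negation   = "Tesla's revenue shrank in 2023."

emb = model.encode([claim, paraphrase, negation], convert_to_tensor=True)
print("claim vs paraphrase:", float(util.cos_sim(emb[0], emb[1])))
print("claim vs negation:  ", float(util.cos_sim(emb[0], emb[2])))
# Both scores are typically high, so top-N cosine alone can surface
# chunks that contradict the query, not just ones that support it.
```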
We’ve been cataloguing these failure modes as part of a “semantic firewall” approach (no infra changes needed). If you’d like, I can share the checklist we use — might be interesting alongside your Tensorlake demo. Just let me know.
Oh! Yes, I am definitely curious about your "semantic firewall" approach!
expand on this please