Astra Bertelli

everything-rag: LLMs with your data, locally

Always wanted to chat with your PDFs in a simple and easily accessible way?

Now you can! I am thrilled to introduce everything-rag, your fully customizable, fully local chatbot assistant! 🤖

TL;DR:
Don't want to read the whole post and would rather get your hands dirty right away?

Check out the online Hugging Face Space built for everything-rag!

With everything-rag, you can:

  • Use virtually any LLM you want
  • Use your own data: everything-rag can work with any PDF you provide, whether it's about data science or Pallas's cats! 🐈
  • Enjoy 100% local and 100% free functionality: no hosted APIs or pay-as-you-go services.
  • Leverage Retrieval-Augmented Generation (RAG) to ground answers in your documents (see the sketch right after this list)
  • Install easily through a Docker image
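
Here is a minimal sketch of the retrieval flow described above, assuming a LangChain + Chroma stack; it is illustrative, not everything-rag's actual source. A PDF is split into chunks, the chunks are embedded into a local Chroma store, and a local Hugging Face model answers questions using the retrieved chunks as context. The file path, model names, and chunk sizes are assumptions you would adapt.

```python
# Hedged sketch of a local PDF RAG pipeline; names and parameters are illustrative.
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_community.llms import HuggingFacePipeline
from langchain.chains import RetrievalQA

# 1. Load the user-provided PDF and split it into overlapping chunks
docs = PyPDFLoader("my_document.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# 2. Embed the chunks into a local, persistent Chroma vector store
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectorstore = Chroma.from_documents(chunks, embeddings, persist_directory="./chroma_db")

# 3. Wrap a locally downloaded Hugging Face model as the LLM (swap in virtually any model)
llm = HuggingFacePipeline.from_model_id(
    model_id="google/flan-t5-base",
    task="text2text-generation",
    pipeline_kwargs={"max_new_tokens": 256},
)

# 4. Answer questions grounded in the chunks retrieved from your PDF
qa = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever(search_kwargs={"k": 3}))
print(qa.invoke({"query": "What is this document about?"})["result"])
```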

This is completely free to use and runs on your local computer!

Make sure to check out (and leave a little ⭐ if you like) the GitHub repo, have a look at my blog post about the importance of open-source LLMs, and, if you please, do not forget to sponsor me on GitHub!

Inspired by Jan.ai (for local LLM development) and Cheshire Cat AI (for the elegant dockerized installation workflow).

Shout-outs to Hugging Face (LLM support and the online Space), Gradio (app interface), LangChain (retrieval architecture), Chroma (vector database), and Docker and GitHub (Docker image distribution and hosting) for making everything-rag possible!

And stay tuned: more features are coming soon! 🚀
