Chandrani Mukherjee

Teach your RAG to learn from its mistakes — the smart way

🔁 Building a Feedback Loop for RAG with LangChain and Docker

Retrieval-Augmented Generation (RAG) is great — until your LLM starts hallucinating or retrieving outdated context. That’s where a feedback loop comes in.

In this post, we’ll build a simple RAG pipeline with LangChain, containerize it using Docker, and add a feedback mechanism to make it smarter over time.


🧠 Why Feedback Matters in RAG

A RAG system has two parts:

  1. Retriever — fetches relevant documents from a vector store.
  2. Generator — produces an answer using the retrieved context.

Without feedback, your model never learns from mistakes.

A feedback loop lets you:

  • Re-rank documents that users find more useful (see the sketch after this list).
  • Fine-tune retrievers based on query–document relevance.
  • Measure response quality (faithfulness, groundedness, etc.).
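
As a taste of the first point, here's a minimal re-ranking sketch. Nothing here is a LangChain API; feedback_scores is a hypothetical dict mapping a document's source ID to its net thumbs-up count:

def rerank(docs, feedback_scores):
    # Hypothetical helper: surface documents users rated as helpful
    return sorted(docs,
                  key=lambda d: feedback_scores.get(d.metadata.get("source"), 0),
                  reverse=True)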

⚙️ Step 1: Build a Minimal RAG Pipeline

Let’s start with a simple LangChain setup:

# Classic (pre-0.1) langchain imports; newer releases move these
# into langchain_community / langchain_openai
from langchain.chains import RetrievalQA
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import TextLoader

# Load documents
loader = TextLoader("data/policies.txt")
docs = loader.load()

# Create embeddings and vector store
embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(docs, embeddings)
retriever = db.as_retriever(search_kwargs={"k": 3})

# Define RAG pipeline
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-4o-mini"),
    retriever=retriever,
    return_source_documents=True
)

query = "What is the latest leave policy?"
response = qa({"query": query})
print(response["result"])

💬 Step 2: Add a Feedback Collector

After displaying the result, log user feedback (thumbs up/down) to a simple JSON-lines file or a database.

import json, datetime

def log_feedback(query, response, rating):
    """Append one feedback record as a JSON line (rating: +1 thumbs up, -1 thumbs down)."""
    entry = {
        "timestamp": datetime.datetime.now().isoformat(),
        "query": query,
        "response": response,
        "rating": rating
    }
    with open("feedback.json", "a") as f:
        json.dump(entry, f)
        f.write("\n")

You can later parse this feedback file to improve your retriever — e.g., re-weighting embeddings or filtering irrelevant sources.
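
Here's a minimal sketch of that parsing step. The field names match the log_feedback entries above; the per-query aggregation is just one possible signal:

import json
from collections import defaultdict

def load_feedback_scores(path="feedback.json"):
    """Aggregate net thumbs-up/down per query from the JSON-lines log."""
    scores = defaultdict(int)
    with open(path) as f:
        for line in f:
            entry = json.loads(line)
            scores[entry["query"]] += entry["rating"]  # e.g. +1 / -1
    return scores

Queries with strongly negative totals point at retrieval gaps worth fixing, whether by better source documents or a re-index.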


🔄 Step 3: Close the Feedback Loop

Use libraries like TruLens or Ragas to evaluate responses automatically and feed those scores back into your loop. For example, with trulens_eval (this follows the 0.x-style quickstart API, so double-check against the version you install):

from trulens_eval import Feedback, TruChain
from trulens_eval.feedback.provider import OpenAI as OpenAIProvider

# LLM-based feedback function: how relevant is the answer to the question?
provider = OpenAIProvider()
f_relevance = Feedback(provider.relevance).on_input_output()

# Wrap the chain so every call is recorded and scored
tru_qa = TruChain(qa, app_id="rag-feedback-demo", feedbacks=[f_relevance])

with tru_qa as recording:
    qa({"query": query})
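
If you prefer Ragas, a rough equivalent looks like this. It assumes the ragas 0.1-style evaluate API and reuses query and response from Step 1:

from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

# Score the answer against the contexts actually retrieved
data = Dataset.from_dict({
    "question": [query],
    "answer": [response["result"]],
    "contexts": [[d.page_content for d in response["source_documents"]]],
})
print(evaluate(data, metrics=[faithfulness, answer_relevancy]))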

🐳 Step 4: Containerize with Docker

Create a simple Dockerfile:

FROM python:3.10-slim
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir langchain openai faiss-cpu trulens-eval
# Don't bake secrets into the image; pass OPENAI_API_KEY at runtime (see below)
CMD ["python", "rag_feedback.py"]

Then build and run:

docker build -t rag-feedback .
docker run -e OPENAI_API_KEY=$OPENAI_API_KEY rag-feedback

🚀 Step 5: Scale & Iterate

  • Deploy your RAG system as a microservice behind an API.
  • Stream feedback data to a shared database (Postgres, MongoDB).
  • Periodically retrain or re-index your vector store based on positive/negative signals (see the sketch below).
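
As one illustration, a periodic job might drop sources that keep getting downvoted before rebuilding the index. This assumes you've extended log_feedback to also record which sources were retrieved and aggregated those ratings into a per-source dict; all names here are hypothetical:

def rebuild_index(docs, source_scores, threshold=-3):
    """Re-embed everything except sources whose net feedback fell below the threshold."""
    kept = [d for d in docs
            if source_scores.get(d.metadata.get("source"), 0) > threshold]
    return FAISS.from_documents(kept, OpenAIEmbeddings())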

🧩 Summary

By integrating LangChain, Docker, and a feedback loop, you get a self-improving RAG system that learns what “good” looks like from real usage.

This loop not only boosts retrieval precision but also reduces hallucination and improves trust in your AI answers.


💡 Next Steps

  • Add automated evaluation with Ragas
  • Serve your feedback endpoint via FastAPI (a sketch follows this list)
  • Store embeddings and feedback in a persistent vector DB like Weaviate or Pinecone
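
For the FastAPI item, here's a minimal sketch of the feedback endpoint. It reuses log_feedback from Step 2; the route and model names are illustrative:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class FeedbackIn(BaseModel):
    query: str
    response: str
    rating: int  # +1 or -1

@app.post("/feedback")
def submit_feedback(fb: FeedbackIn):
    log_feedback(fb.query, fb.response, fb.rating)  # from Step 2
    return {"status": "logged"}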
