
💬 Part 5: Designing Conversational Logic

Now that we have our order data embedded and searchable with Pinecone, it's time to design how the chatbot understands and responds to customer queries. This part focuses on crafting prompts, setting up retrieval-based QA, and handling multi-turn conversations using LangChain's Chains and Memory.


✅ What We'll Cover

  • Creating prompt templates for order-related questions
  • Setting up chains using LangChain
  • Managing context and conversation history
  • Using memory to support multi-turn conversations

🧾 1. Create Prompt Templates

LangChain allows you to create flexible prompt templates for guiding the LLM. Here's an example:

// backend/langchain/prompts.js
const { PromptTemplate } = require('@langchain/core/prompts');

const orderQueryPrompt = new PromptTemplate({
  template: `
You are a helpful support assistant for an e-commerce store. Use the context to answer the user's question.
If the answer isn't in the context, respond with "Sorry, I couldn't find the information."

Context:
{context}

Question:
{question}
`,
  inputVariables: ['context', 'question'],
});

module.exports = { orderQueryPrompt };
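To sanity-check the template, you can render it with sample values. Note that format() is async in @langchain/core, so await the result. The context and question below are made up purely for illustration:

// quick check (sample values are illustrative)
const { orderQueryPrompt } = require('./prompts');

(async () => {
  const rendered = await orderQueryPrompt.format({
    context: 'Order #1234 shipped on 2024-05-01 via UPS.',
    question: 'When will order #1234 arrive?',
  });
  console.log(rendered);
})();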

🔄 2. Create a QA Chain

We'll use a RetrievalQAChain to combine the vector search and the LLM.

// backend/langchain/qaChain.js
const { RetrievalQAChain } = require('langchain/chains');
const { ChatOpenAI, OpenAIEmbeddings } = require('@langchain/openai');
const { orderQueryPrompt } = require('./prompts');
const { initPinecone } = require('./config');
const { PineconeStore } = require('@langchain/pinecone');

async function createQAChain() {
  // gpt-3.5-turbo is a chat model, so use ChatOpenAI rather than OpenAI
  const llm = new ChatOpenAI({
    openAIApiKey: process.env.OPENAI_API_KEY,
    temperature: 0.2,
    modelName: 'gpt-3.5-turbo',
  });

  const pinecone = await initPinecone();
  const index = pinecone.Index('ecommerce-orders');

  const vectorStore = await PineconeStore.fromExistingIndex(
    new OpenAIEmbeddings({ openAIApiKey: process.env.OPENAI_API_KEY }),
    { pineconeIndex: index }
  );

  return RetrievalQAChain.fromLLM(llm, vectorStore.asRetriever(), {
    prompt: orderQueryPrompt,
  });
}

module.exports = { createQAChain };
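Before exposing this over HTTP, it helps to run a quick sanity check from a scratch script (this assumes the ecommerce-orders index from the previous part is already populated; the file name and order number are just for illustration):

// scratch/testChain.js (hypothetical local test script)
const { createQAChain } = require('./backend/langchain/qaChain');

(async () => {
  const chain = await createQAChain();
  // RetrievalQAChain reads its input from the `query` key by default
  const result = await chain.call({ query: 'What is the status of order #1234?' });
  console.log(result.text);
})();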

🧠 3. Add Conversation Memory (Multi-Turn)

We'll use BufferMemory to allow the model to remember the chat context.

// backend/langchain/memory.js
const { BufferMemory } = require('langchain/memory');

const memory = new BufferMemory({
  returnMessages: true,
  memoryKey: 'chat_history',
});

module.exports = { memory };

You can pass this memory into a conversational chain if you want the bot to remember previous queries, as sketched below.
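Here's a rough sketch of what that could look like with ConversationalRetrievalQAChain. It reuses the llm and vectorStore setup from qaChain.js; treat the file layout and function name as suggestions rather than a fixed API:

// backend/langchain/conversationalChain.js (sketch)
const { ConversationalRetrievalQAChain } = require('langchain/chains');
const { memory } = require('./memory');

function createConversationalChain(llm, vectorStore) {
  // The chain first condenses the chat history plus the new question into a
  // standalone query, retrieves matching documents, then answers from them.
  return ConversationalRetrievalQAChain.fromLLM(llm, vectorStore.asRetriever(), {
    memory,
  });
}

module.exports = { createConversationalChain };

This chain expects its input under the question key and stores each turn under chat_history, which matches the memoryKey we configured above.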


🚀 4. Handle Incoming Queries

Finally, let's wire this up to handle incoming API requests.

// backend/routes/chat.js
const express = require('express');
const router = express.Router();
const { createQAChain } = require('../langchain/qaChain');

router.post('/', async (req, res) => {
  const { question } = req.body;

  if (!question) {
    return res.status(400).json({ error: 'Missing "question" in request body' });
  }

  try {
    // Note: for production, create the chain once at startup and reuse it
    // instead of rebuilding it on every request.
    const qaChain = await createQAChain();
    // RetrievalQAChain expects its input under the `query` key
    const response = await qaChain.call({ query: question });
    res.json({ answer: response.text });
  } catch (err) {
    console.error(err);
    res.status(500).json({ error: 'Failed to process question' });
  }
});

module.exports = router;

Then plug the route into your server.js:

// backend/server.js
const chatRoute = require('./routes/chat');

app.use(express.json()); // parse JSON bodies so req.body works in the route
app.use('/api/chat', chatRoute);
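With the route mounted, you can hit the endpoint from a small test script (this assumes your server listens on port 3000 and you're on Node 18+, where fetch is built in; adjust as needed):

// scratch/testEndpoint.js (hypothetical test script)
(async () => {
  const res = await fetch('http://localhost:3000/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question: 'Where is my order #1234?' }),
  });
  console.log(await res.json());
})();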

✅ Next Steps (Part 6)

In the next part, we will:

  • Build the frontend chat interface in React
  • Connect the UI to the chat API
  • Style and refine the user experience

🎨 Let's make it look good and feel real!
