DEV Community

Praveen
🧭 Part 3: Implementing Vector Search with Pinecone


In this part, we’ll integrate Pinecone, a vector database that enables semantic search. This allows the chatbot to understand user queries and fetch relevant order information—even if the phrasing varies.


✅ What We'll Cover

  • Introduction to vector databases and embeddings
  • Setting up Pinecone
  • Creating embeddings from order data
  • Storing and retrieving vectors from Pinecone

🧠 1. Why Vector Search?

Traditional keyword search has limits: it only matches exact words. With vector search, we embed data (like order summaries) into high-dimensional vectors using an embedding model (here, OpenAI's). The user's query is embedded the same way, and the two vectors are compared to find semantically similar matches, even when the wording differs.
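To make "semantically similar" concrete, here is a toy sketch (not from the original series) of cosine similarity, the metric Pinecone uses by default to compare vectors. The three-dimensional vectors below are made up for illustration; real OpenAI embeddings have on the order of 1,536 dimensions.

```javascript
// Cosine similarity: 1 means "same direction" (very similar meaning),
// values near 0 mean unrelated.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Hypothetical embeddings of three pieces of text:
const orderShipped = [0.9, 0.1, 0.2];      // "Your order has shipped"
const whereIsMyPackage = [0.8, 0.2, 0.3];  // "Where is my package?"
const unrelatedQuery = [0.1, 0.9, 0.1];    // "What's your return policy?"

console.log(cosineSimilarity(orderShipped, whereIsMyPackage)); // high (~0.98)
console.log(cosineSimilarity(orderShipped, unrelatedQuery));   // low (~0.24)
```

The point is that "Where is my package?" never mentions the word "shipped", yet its vector sits close to the shipping-status text, which is exactly what keyword search cannot do.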


🔧 2. Set Up Pinecone

Install the Pinecone client (already installed in Part 1; run it again just in case):

npm install @pinecone-database/pinecone

Update your .env if not already done (standalone scripts also need to load it, e.g. with require('dotenv').config()):

PINECONE_API_KEY=your_pinecone_api_key
PINECONE_ENVIRONMENT=your_pinecone_environment

Initialize Pinecone in your config:

// backend/langchain/config.js
const { Pinecone } = require('@pinecone-database/pinecone');

// Recent versions of the Pinecone SDK only need the API key;
// PINECONE_ENVIRONMENT was only required by the legacy PineconeClient.
const initPinecone = async () => {
  return new Pinecone({
    apiKey: process.env.PINECONE_API_KEY,
  });
};

module.exports = { initPinecone };

🧬 3. Generate Embeddings

We’ll create vector representations of order data using LangChain’s OpenAI embeddings.

// backend/langchain/embedOrders.js
const { OpenAIEmbeddings } = require('@langchain/openai');
const { initPinecone } = require('./config');
const { connectToDatabase } = require('../database/connection');

async function embedOrders() {
  const db = await connectToDatabase();
  const orders = await db.collection('orders').find().toArray();

  const embeddings = new OpenAIEmbeddings({
    openAIApiKey: process.env.OPENAI_API_KEY,
  });

  const pinecone = await initPinecone();
  const index = pinecone.Index('ecommerce-orders');

  for (const order of orders) {
    const orderSummary = `
      Order ID: ${order.orderId}
      Customer: ${order.customerName}
      Items: ${order.items.map((i) => i.productName).join(', ')}
      Status: ${order.status}
    `;

    // embedQuery returns a single vector (an array of numbers),
    // so there is nothing to destructure here
    const embedding = await embeddings.embedQuery(orderSummary);

    await index.upsert([
      {
        id: String(order.orderId), // Pinecone IDs must be strings
        values: embedding,
        metadata: {
          orderId: order.orderId,
          customerName: order.customerName,
          status: order.status,
        },
      },
    ]);
  }

  console.log('All orders embedded and stored in Pinecone.');
}

// Exit explicitly so the open MongoDB connection doesn't keep the script alive
embedOrders()
  .then(() => process.exit(0))
  .catch((err) => {
    console.error(err);
    process.exit(1);
  });

Run the script:

node backend/langchain/embedOrders.js
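The loop above upserts one vector per request, which is fine for a demo, but Pinecone also accepts a batch of records in a single upsert call. A minimal chunking helper (my sketch, not part of the original script) could look like this:

```javascript
// Split an array into fixed-size chunks, e.g. to upsert 100 vectors per request.
function chunk(array, size) {
  const chunks = [];
  for (let i = 0; i < array.length; i += size) {
    chunks.push(array.slice(i, i + size));
  }
  return chunks;
}

console.log(chunk([1, 2, 3, 4, 5], 2)); // [[1, 2], [3, 4], [5]]
```

Inside embedOrders you would build all the records first, then run something like `for (const batch of chunk(records, 100)) await index.upsert(batch);` to cut the number of round trips.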

🔍 4. Perform a Semantic Search

We’ll later use this to match user queries to order information.

// backend/langchain/searchOrder.js
const { OpenAIEmbeddings } = require('@langchain/openai');
const { initPinecone } = require('./config');

async function searchOrders(userQuery) {
  const embeddings = new OpenAIEmbeddings({
    openAIApiKey: process.env.OPENAI_API_KEY,
  });

  const pinecone = await initPinecone();
  const index = pinecone.Index('ecommerce-orders');

  // embedQuery returns the query vector directly, not an array of vectors
  const queryVector = await embeddings.embedQuery(userQuery);

  const results = await index.query({
    vector: queryVector,
    topK: 3,
    includeMetadata: true,
  });

  return results.matches;
}

module.exports = { searchOrders };
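A quick sketch of how the matches returned by searchOrders might be consumed. Each match carries the metadata fields we upserted in embedOrders.js; the sample data below is illustrative, not real query output:

```javascript
// Format Pinecone matches into a human-readable answer for the chatbot.
// Each match has the shape { id, score, metadata }.
function formatMatches(matches) {
  if (!matches || matches.length === 0) {
    return 'No matching orders found.';
  }
  return matches
    .map(
      (m) =>
        `Order ${m.metadata.orderId} for ${m.metadata.customerName} ` +
        `is ${m.metadata.status} (relevance: ${m.score.toFixed(2)})`
    )
    .join('\n');
}

// Illustrative data shaped like a Pinecone query response:
const sampleMatches = [
  {
    id: 'ORD-1001',
    score: 0.91,
    metadata: { orderId: 'ORD-1001', customerName: 'Asha', status: 'shipped' },
  },
];
console.log(formatMatches(sampleMatches));
```

In Part 4 this kind of formatting will move into the LLM's hands: the matches become retrieval context, and the model phrases the answer itself.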

✅ Next Steps (Part 4)

In the next part, we will:

  • Connect LangChain components
  • Load data from MongoDB
  • Split and embed text dynamically
  • Retrieve and generate responses

🚀 Stay tuned for Part 4!

