
Atul Tripathi

πŸ›‘ Stop Re-Writing RAG Pipelines: My Next.js + Pinecone Architecture ⚑️

Let's be real for a second. β˜•οΈ

I have built the "Chat with your PDF" feature for clients about five times in the last two months.

The frontend? Fun. Streaming UI, Tailwind, those fancy typing effects... I love it. 🎨
The backend? Absolute headache. 🀯

Every single time, I found myself staring at VS Code, copying and pasting the same boring boilerplate to handle:

❌ The PDF Loader: Chunking text without breaking sentences mid-thought.
❌ The Embeddings: Batching data to OpenAI so I don't hit rate limits (quick sketch after this list).
❌ The Vector Store: Upserting into Pinecone/Supabase without errors.
❌ The Context Window: Calculating tokens so the AI doesn't crash.
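
That Embeddings one bites the hardest. Here's roughly the shape of the batching helper I keep rewriting. A sketch, not gospel: the batch size of 100 is an assumption, tune it for your OpenAI tier.

import { OpenAIEmbeddings } from "@langchain/openai";

// Embed chunks in batches so one giant document doesn't trip the rate limiter.
// batchSize = 100 is a guess; adjust for your OpenAI tier.
export async function embedInBatches(chunks: string[], batchSize = 100) {
  const embeddings = new OpenAIEmbeddings();
  const vectors: number[][] = [];

  for (let i = 0; i < chunks.length; i += batchSize) {
    const batch = chunks.slice(i, i + batchSize);
    vectors.push(...(await embeddings.embedDocuments(batch)));
  }

  return vectors;
}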

After the 5th time, I realized I was wasting 40+ hours per project just setting up the "plumbing" before I could actually build the cool stuff.

So, I decided to fix it. Forever. πŸ› οΈ

πŸ—οΈ The Architecture

Here is the stack I finally settled on for a production-ready RAG (Retrieval Augmented Generation) app. It's typed, it scales, and it just works.

  • β–² Framework: Next.js (App Router)
  • 🦜 Orchestration: LangChain.js
  • 🌲 Vector DB: Pinecone (My go-to for speed)
  • ⚑️ Database/Auth: Supabase
  • πŸ’… Styling: Tailwind CSS

🧠 The Hard Part: Handling Vectors

The biggest pain point isn't the chat; it's the Ingestion Pipeline. You can't just dump a PDF into ChatGPT. You have to slice and dice it first. πŸ”ͺ

Here is the logic I standardized:

  1. User uploads file πŸ“‚
  2. Server reads buffer πŸ‘“
  3. RecursiveCharacterTextSplitter breaks it into chunks 🧩
  4. Upsert to Pinecone with metadata πŸ’Ύ
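
Steps 1 and 2 are just a Next.js route handler feeding the helper we standardize below. A rough sketch, assuming pdf-parse and skipping the validation the real thing does (file type, size limits):

import pdf from "pdf-parse";

// Steps 1–2: accept the upload and read the buffer into raw text
export async function POST(req: Request) {
  const formData = await req.formData();
  const file = formData.get("file") as File;

  const buffer = Buffer.from(await file.arrayBuffer());
  const { text } = await pdf(buffer); // pdf-parse extracts the plain text

  // hypothetical ID here; in practice it comes from the database row
  await addDocumentsToVectorStore(text, crypto.randomUUID());

  return Response.json({ ok: true });
}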

From there, steps 3 and 4 look something like this (simplified for sanity):

import { Pinecone } from "@pinecone-database/pinecone";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

// 🪄 The logic that usually takes 3 hours to debug
export async function addDocumentsToVectorStore(text: string, fileId: string) {

  // 1. Split the text intelligently (respects paragraph/sentence boundaries)
  const splitter = new RecursiveCharacterTextSplitter({
    chunkSize: 1000,
    chunkOverlap: 200,
  });

  const docs = await splitter.createDocuments([text]);

  // 2. Grab the Pinecone index (forgetting this is the classic "undefined index" bug)
  const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
  const index = pinecone.index(process.env.PINECONE_INDEX!);

  // 3. Embed and Upsert to the Cloud ☁️
  const embeddings = new OpenAIEmbeddings();

  await PineconeStore.fromDocuments(docs, embeddings, {
    pineconeIndex: index,
    namespace: `file_${fileId}`, // 🔒 Isolate vectors per file
  });

  console.log("Vectors Upserted. We are ready to chat. ⚡️");
}
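
Ingestion is only half the job. At query time you run the flow in reverse: embed the question, pull the closest chunks from the same namespace, and stuff them into the prompt. A minimal sketch (the top-k of 4 is an arbitrary pick):

import { Pinecone } from "@pinecone-database/pinecone";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";

// 🔎 Pull the chunks most relevant to the user's question
export async function getRelevantContext(question: string, fileId: string) {
  const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
  const index = pinecone.index(process.env.PINECONE_INDEX!);

  const store = await PineconeStore.fromExistingIndex(new OpenAIEmbeddings(), {
    pineconeIndex: index,
    namespace: `file_${fileId}`, // 🔒 Same per-file isolation as ingestion
  });

  // similaritySearch returns Documents with pageContent + metadata
  const matches = await store.similaritySearch(question, 4);

  // Stitch the chunks into one context block for the prompt
  return matches.map((doc) => doc.pageContent).join("\n---\n");
}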

πŸ“¦ So, I Productized It.

I got tired of setting this up from scratch.

I took my personal repo, cleaned it up, added a polished UI, and turned it into a reusable Starter Kit.

What's inside the box?

βœ… Pre-configured LangChain Setup

βœ… Built-in Vector Ingestion (PDF/TXT/MD)

βœ… Streaming Chat Components (ChatGPT style)

βœ… Rate Limiting (Save your API credits!)

πŸ§ͺ The Experiment (Smoke Test)

I'm running a little experiment this weekend.

Instead of charging the full launch price ($149), I set up a $9 Early Bird Deposit.

Why $9? It acts as a "Skin in the Game" filter. If the pain of building RAG pipelines isn't worth the price of a coffee β˜•οΈ, then I know this isn't worth building further.

But if you want to save 40 hours of work?

πŸ‘‰ Grab the RAG Starter Kit (Early Access)

(Even if you don't buy it, feel free to roast my landing page in the comments! I'm building this in public and feedback is gold. πŸ₯‡)

Happy Coding! πŸš€
