Rutam Bhagat

Langchain: Vectorstores and Embeddings

In this blog post, we'll explore vectorstores and embeddings, two of the most important components for building chatbots and performing semantic search over a corpus of data. We'll revisit these concepts from the previous blog post and dive deeper into edge cases and potential failure modes. Don't worry, I'll also discuss how to address these issues later on.

Workflow Overview

Recall the overall workflow for retrieval augmented generation (RAG):


We start with documents, create smaller splits of those documents, generate embeddings for those splits, and then store them in a vector store. A vector store is a database where you can easily look up similar vectors later on, which becomes useful when trying to find documents relevant to a given question.


Setting Up

First, let's set up the appropriate environment variables and load the documents we'll be working with - the CS229 lectures. We'll intentionally duplicate the first lecture to simulate some dirty data:

from dotenv import load_dotenv, find_dotenv
from langchain_community.document_loaders.pdf import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

_ = load_dotenv(find_dotenv())  # loads OPENAI_API_KEY from .env; the OpenAI classes below pick it up
loaders = [
    PyPDFLoader("docs/cs229_lectures/MachineLearning-Lecture01.pdf"), # Duplicate documents on purpose - messy data
    PyPDFLoader("docs/cs229_lectures/MachineLearning-Lecture01.pdf"),
    PyPDFLoader("docs/cs229_lectures/MachineLearning-Lecture02.pdf"),
    PyPDFLoader("docs/cs229_lectures/MachineLearning-Lecture03.pdf"),
]
docs = []
for loader in loaders:
    docs.extend(loader.load())
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1500, chunk_overlap=150)
splits = text_splitter.split_documents(docs)
print("Length of splits: ", len(splits)) # Length of splits:  209

Embeddings

Now that we have our documents split into smaller, semantically meaningful chunks, it's time to create embeddings for them.

Embeddings take a piece of text and create a numerical representation of it, such that text with similar content will have similar vectors in this numeric space. This allows us to compare those vectors and find similar pieces of text.


To illustrate this, let's try a few toy examples:

from langchain_openai import OpenAIEmbeddings
import numpy as np

embedding = OpenAIEmbeddings()
sentence1 = "i like dogs"
sentence2 = "i like canines"
sentence3 = "the weather is ugly outside"
embedding1 = embedding.embed_query(sentence1)
embedding2 = embedding.embed_query(sentence2)
embedding3 = embedding.embed_query(sentence3)
print(np.dot(embedding1, embedding2))  # 0.9631227500523609
print(np.dot(embedding1, embedding3))  # 0.7703257495981695
print(np.dot(embedding2, embedding3))  # 0.7591627401108028

As expected, the first two sentences about pets have very similar embeddings (dot product of 0.96), while the sentence about the weather is less similar to both pet-related sentences (dot products of 0.77 and 0.76).
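A quick aside: OpenAI's embedding vectors come back unit-normalized (as far as I'm aware), which is why a raw dot product works as a similarity measure here. If you'd rather not rely on that assumption, you can compute cosine similarity explicitly. A minimal sketch with numpy:

import numpy as np

def cosine_similarity(a, b):
    # cosine similarity = dot(a, b) / (||a|| * ||b||); for unit vectors this equals the dot product
    a, b = np.array(a), np.array(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embedding1, embedding2))  # ~0.96, matching the dot product above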

Vector Stores

Next, we'll store these embeddings in a vector store, which will allow us to easily look up similar vectors later on when trying to find relevant documents for a given question.


We'll use the Chroma vector store for this post, as it's lightweight and can run fully in-memory or persist to disk, making it easy to get started:

from langchain_community.vectorstores import Chroma

persist_directory = "docs/chroma/"
# In a notebook, remove any old database files first:
# !rm -rf ./docs/chroma
vectordb = Chroma.from_documents(
    documents=splits, embedding=embedding, persist_directory=persist_directory
)
print(vectordb._collection.count())  # 209

Similarity Search

Now, let's try a question and see how the similarity search works:


question = "is there an email i can ask for help"
docs = vectordb.similarity_search(question, k=3)
print("Length of context docs: ", len(docs))  # 3
print(docs[0].page_content)
cs229-qa@cs.stanford.edu. This goes to an acc ount that's read by all the TAs and me. So 
rather than sending us email individually, if you send email to this account, it will 
actually let us get back to you maximally quickly with answers to your questions.  
If you're asking questions about homework probl ems, please say in the subject line which 
assignment and which question the email refers to, since that will also help us to route 
your question to the appropriate TA or to me  appropriately and get the response back to 
you quickly.  
Let's see. Skipping ahead — let's see — for homework, one midterm, one open and term 
project. Notice on the honor code. So one thi ng that I think will help you to succeed and 
do well in this class and even help you to enjoy this cla ss more is if you form a study 
group.  
So start looking around where you' re sitting now or at the end of class today, mingle a 
little bit and get to know your classmates. I strongly encourage you to form study groups 
and sort of have a group of people to study with and have a group of your fellow students 
to talk over these concepts with. You can also  post on the class news group if you want to 
use that to try to form a study group.  
But some of the problems sets in this cla ss are reasonably difficult.  People that have 
taken the class before may tell you they were very difficult. And just I bet it would be 
more fun for you, and you'd probably have a be tter learning experience if you form a

This returns the relevant chunk mentioning the cs229-qa@cs.stanford.edu email address for asking questions about the course material.
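If you want to inspect how close each match actually is, Chroma also exposes a scored variant of this search. Note that with Chroma's default setup the score is a distance, so lower means more similar. A minimal sketch:

docs_and_scores = vectordb.similarity_search_with_score(question, k=3)
for doc, score in docs_and_scores:
    # distance score plus a short preview of the matched chunk
    print(round(score, 3), doc.page_content[:80])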

After this, let's persist the vector database for future use:

vectordb.persist()
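One note: in more recent versions of the Chroma integration, writes are persisted automatically whenever a persist_directory is set, so the explicit persist() call may be unnecessary there. Either way, in a new session you can reload the saved collection instead of re-embedding everything. A minimal sketch:

vectordb = Chroma(
    persist_directory=persist_directory,
    embedding_function=embedding,  # must be the same embedding model used at index time
)
print(vectordb._collection.count())  # should print 209, matching the count above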

Failure Modes

While the basic semantic search works well, there are some edge cases and failure modes that can arise. Let's explore a few of them.

Duplicate Documents

question = "what did they say about matlab?"
docs = vectordb.similarity_search(question, k=5)
print(docs[0].page_content)
# Document(page_content='those homeworks will be done in either MATLA B or in Octave, which is sort of — I \nknow some people call it a free ve rsion of MATLAB, which it sort  of is, sort of isn\'t.  \nSo I guess for those of you that haven\'t s een MATLAB before, and I know most of you \nhave, MATLAB is I guess part of the programming language that makes it very easy to write codes using matrices, to write code for numerical routines, to move data around, to \nplot data. And it\'s sort of an extremely easy to  learn tool to use for implementing a lot of \nlearning algorithms.  \nAnd in case some of you want to work on your  own home computer or something if you \ndon\'t have a MATLAB license, for the purposes of  this class, there\'s also — [inaudible] \nwrite that down [inaudible] MATLAB — there\' s also a software package called Octave \nthat you can download for free off the Internet. And it has somewhat fewer features than MATLAB, but it\'s free, and for the purposes of  this class, it will work for just about \neverything.  \nSo actually I, well, so yeah, just a side comment for those of you that haven\'t seen \nMATLAB before I guess, once a colleague of mine at a different university, not at \nStanford, actually teaches another machine l earning course. He\'s taught it for many years. \nSo one day, he was in his office, and an old student of his from, lik e, ten years ago came \ninto his office and he said, "Oh, professo r, professor, thank you so much for your', metadata={'page': 8, 'source': 'docs/cs229_lectures/MachineLearning-Lecture01.pdf'})
print(docs[1].page_content)
# Document(page_content='those homeworks will be done in either MATLA B or in Octave, which is sort of — I \nknow some people call it a free ve rsion of MATLAB, which it sort  of is, sort of isn\'t.  \nSo I guess for those of you that haven\'t s een MATLAB before, and I know most of you \nhave, MATLAB is I guess part of the programming language that makes it very easy to write codes using matrices, to write code for numerical routines, to move data around, to \nplot data. And it\'s sort of an extremely easy to  learn tool to use for implementing a lot of \nlearning algorithms.  \nAnd in case some of you want to work on your  own home computer or something if you \ndon\'t have a MATLAB license, for the purposes of  this class, there\'s also — [inaudible] \nwrite that down [inaudible] MATLAB — there\' s also a software package called Octave \nthat you can download for free off the Internet. And it has somewhat fewer features than MATLAB, but it\'s free, and for the purposes of  this class, it will work for just about \neverything.  \nSo actually I, well, so yeah, just a side comment for those of you that haven\'t seen \nMATLAB before I guess, once a colleague of mine at a different university, not at \nStanford, actually teaches another machine l earning course. He\'s taught it for many years. \nSo one day, he was in his office, and an old student of his from, lik e, ten years ago came \ninto his office and he said, "Oh, professo r, professor, thank you so much for your', metadata={'page': 8, 'source': 'docs/cs229_lectures/MachineLearning-Lecture01.pdf'})

Notice that the first two results are identical. This is because we intentionally duplicated the first lecture PDF earlier, so the same information is present in two different chunks. Ideally, we'd want the retriever to return distinct chunks.
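As a quick preview of one fix we'll dig into in the next post: maximum marginal relevance (MMR) search trades pure similarity off against diversity among the results, which tends to weed out near-duplicate chunks. A sketch using the Chroma vectorstore's MMR method (the k and fetch_k values here are just illustrative choices):

# retrieve fetch_k candidates by similarity, then pick k results that are both relevant and diverse
docs_mmr = vectordb.max_marginal_relevance_search(question, k=3, fetch_k=5)
print(docs_mmr[0].page_content[:100])
print(docs_mmr[1].page_content[:100])  # with MMR, this should no longer duplicate the first result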

Structured Information Not Captured

question = "what did they say about regression in the third lecture?"
docs = vectordb.similarity_search(question, k=5)

for doc in docs:
    print(doc.metadata)
# {'page': 0, 'source': 'docs/cs229_lectures/MachineLearning-Lecture03.pdf'}
# {'page': 14, 'source': 'docs/cs229_lectures/MachineLearning-Lecture03.pdf'}
# {'page': 0, 'source': 'docs/cs229_lectures/MachineLearning-Lecture02.pdf'}
# {'page': 6, 'source': 'docs/cs229_lectures/MachineLearning-Lecture03.pdf'}
# {'page': 8, 'source': 'docs/cs229_lectures/MachineLearning-Lecture01.pdf'}
print(docs[4].page_content)
into his office and he said, "Oh, professo r, professor, thank you so much for your 
machine learning class. I learned so much from it. There's this stuff that I learned in your 
class, and I now use every day. And it's help ed me make lots of money, and here's a 
picture of my big house."  
So my friend was very excited. He said, "W ow. That's great. I'm glad to hear this 
machine learning stuff was actually useful. So what was it that you learned? Was it 
logistic regression? Was it the PCA? Was it the data ne tworks? What was it that you 
learned that was so helpful?" And the student said, "Oh, it was the MATLAB."  
So for those of you that don't know MATLAB yet, I hope you do learn it. It's not hard, 
and we'll actually have a short MATLAB tutori al in one of the discussion sections for 
those of you that don't know it.  
Okay. The very last piece of logistical th ing is the discussion s ections. So discussion 
sections will be taught by the TAs, and atte ndance at discussion sections is optional, 
although they'll also be recorded and televi sed. And we'll use the discussion sections 
mainly for two things. For the next two or th ree weeks, we'll use the discussion sections 
to go over the prerequisites to this class or if some of you haven't seen probability or 
statistics for a while or maybe algebra, we'll go over those in the discussion sections as a 
refresher for those of you that want one.

In this case, we expected all the retrieved documents to be from the third lecture, as specified in the question. However, we see that the results include chunks from other lectures as well.

The intuition here is that the structured part of the question, namely that we only want documents from the third lecture, is not captured in the semantic embeddings, which focus on the concept of regression itself.
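Again as a preview: one way to handle this is to combine semantic search with a filter over document metadata, which Chroma supports directly. A minimal sketch (the source path matches the metadata printed above):

docs = vectordb.similarity_search(
    question,
    k=3,
    filter={"source": "docs/cs229_lectures/MachineLearning-Lecture03.pdf"},
)
for doc in docs:
    print(doc.metadata)  # every chunk should now come from Lecture03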

Conclusion

In this blog post, we covered the basics of semantic search using vectorstores and embeddings, as well as some edge cases and failure modes that can arise. While this approach works well in many cases, we saw instances where duplicate documents were retrieved, and structured information was not properly captured.

In the next blog post, we'll discuss how to address these failure modes and enhance our retrieval capabilities, ensuring that we retrieve both relevant and distinct chunks while also incorporating structured information into the search process.
