Nikola Mitic
Making your CV talk 🤖 How to have the OpenAI API answer based on your custom data?

I used https://ts.llamaindex.ai/.

As somebody who is just starting out with generative AI, I have found LlamaIndex to be a huge help.

It helps you cut corners and reduces the need for a deep understanding of how LLMs work in order to be productive, which is exactly what I need as a beginner.

One thing to note: the TypeScript version's API is still unstable at the time of writing. New releases keep changing function and class signatures, and the docs sometimes fail to keep up.

So be patient; I believe the TS version will reach maturity soon. The team is doing a great job.

The code is rather simple:

import {
  ContextChatEngine,
  Document,
  Groq,
  Settings,
  VectorStoreIndex,
  storageContextFromDefaults,
} from "llamaindex";

export const getAnswerChunks = async (
  source: string,
  question: string,
  useGroq: boolean = false
) => {
  if (useGroq) {
    // Update llm to use Groq
    Settings.llm = new Groq({
      apiKey: process.env.GROQ_API_KEY,
      model: "llama3-8b-8192",
    });
  }
  // Create Document object
  const document = new Document({
    text: `Nikola Mitic - life story: ${JSON.stringify(source)}`,
  });
  // Create storage from local file
  const storageContext = await storageContextFromDefaults({
    persistDir: "./index-storage",
  });
  // Split text and create embeddings. Store them in a VectorStoreIndex
  const index = await VectorStoreIndex.fromDocuments([document], {
    storageContext,
  });
  // Get the retriever
  const retriever = index.asRetriever({ similarityTopK: 5 });

  const chatEngine = new ContextChatEngine({
    retriever,
    chatModel: Settings.llm,
  });
  // Get stream chunks
  const chunks = await chatEngine.chat({
    message: `
      You are Nikola Mitic's AI clone.
      You answer the question as if you were Nikola Mitic.
      If the question is related to work experience, the correct and complete answer can be found under "nikola_mitic_resume_cv_work_experience".

      Below is the question:
      -------------------------------------------------
       ${question}
      -------------------------------------------------
    `,
    stream: true,
  });

  return chunks;
};


Let's dissect it a bit:

  if (useGroq) {
    // Update llm to use Groq
    Settings.llm = new Groq({
      apiKey: process.env.GROQ_API_KEY,
      model: "llama3-8b-8192",
    });
  }

Here we add a flag that instructs LlamaIndex to use Groq instead of OpenAI, which is the default. Important to note: regardless of whether Groq is used, we will still need OpenAI, because Groq does not offer embedding models at the time of writing.

However, the Groq API seems to be free at the time of writing, so one can save a lot.
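If you want to make this split explicit, you can pin the chat model to Groq while keeping the embedding model on OpenAI. This is a hedged sketch: it assumes `Settings.embedModel` and the `OpenAIEmbedding` class are exported by the `llamaindex` version you have installed, so check it against your release.

```typescript
import { Groq, OpenAIEmbedding, Settings } from "llamaindex";

// Assumption: chat completions go to Groq, while embeddings (which Groq
// does not provide) stay on OpenAI. Both API keys must be set in the env.
Settings.llm = new Groq({
  apiKey: process.env.GROQ_API_KEY,
  model: "llama3-8b-8192",
});
Settings.embedModel = new OpenAIEmbedding({
  apiKey: process.env.OPENAI_API_KEY,
  model: "text-embedding-3-small",
});
```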

  // Create Document object
  const document = new Document({
    text: `Nikola Mitic - life story: ${JSON.stringify(source)}`,
  });
  // Create storage from local file
  const storageContext = await storageContextFromDefaults({
    persistDir: "./index-storage",
  });
  // Split text and create embeddings. Store them in a VectorStoreIndex
  const index = await VectorStoreIndex.fromDocuments([document], {
    storageContext,
  });
  // Get the retriever
  const retriever = index.asRetriever({ similarityTopK: 5 });

Here we create our document and a file-system storage context for future retrievals, then build our vector store index and finally the retriever. Please note that tuning similarityTopK matters here: it is a trade-off between faster response time and a more accurate, content-rich answer.
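To build intuition for what similarityTopK trades off, here is a toy, self-contained sketch of top-k retrieval by cosine similarity. This is not LlamaIndex internals, just the underlying idea: a larger k hands more (and progressively less similar) chunks to the LLM.

```typescript
// Rank stored vectors by cosine similarity to the query and keep the top k.
type StoredVector = { id: string; vec: number[] };

const cosine = (a: number[], b: number[]): number => {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
};

const topKSimilar = (query: number[], store: StoredVector[], k: number) =>
  store
    .map((s) => ({ ...s, score: cosine(query, s.vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
```

With k = 5, as in the snippet above, the five closest chunks of the CV are pulled into the chat context.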

 const chatEngine = new ContextChatEngine({
    retriever,
    chatModel: Settings.llm,
  });
  // Get stream chunks
  const chunks = await chatEngine.chat({
    message: `
      You are Nikola Mitic's AI clone.
      You answer the question as if you were Nikola Mitic.
      If the question is related to work experience, the correct and complete answer can be found under "nikola_mitic_resume_cv_work_experience".

      Below is the question:
      -------------------------------------------------
       ${question}
      -------------------------------------------------
    `,
    stream: true,
  });

Finally, we make use of the LlamaIndex chat interface and construct our prompt. You should play with the prompt; I found this to be very tricky, and results vary depending on which LLM is used.
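For completeness, here is how the returned stream might be consumed. The real chat engine is stubbed here with a fake async generator, and the assumption (check your llamaindex version) is that each chunk exposes its partial text as `.response`.

```typescript
// Shape assumed for each streamed chunk.
type ChatChunk = { response: string };

// Stand-in for the real chat engine's streaming response.
async function* fakeChunks(): AsyncGenerator<ChatChunk> {
  for (const part of ["Hi, ", "I'm ", "Nikola."]) {
    yield { response: part };
  }
}

// Consume the async iterable chunk by chunk.
const collectAnswer = async (
  chunks: AsyncIterable<ChatChunk>
): Promise<string> => {
  let answer = "";
  for await (const chunk of chunks) {
    // In a server route you would flush each chunk to the client instead.
    answer += chunk.response;
  }
  return answer;
};
```

The same `for await...of` loop works on the real `chunks` returned by `getAnswerChunks`, which is what makes the streaming typing effect on the front end possible.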

So have fun with it. ✌️


❤️If you would like to stay in touch please feel free to connect❤️

  1. X
  2. LinkedIn
  3. nikola.mitic.dev@gmail.com
