Yash Desai
PDF Q&A Automation using LLaMA-3 Model via Groq API

Introduction

Imagine having a vast library of PDF documents and needing to extract answers to specific questions from these files. Manual processing can be tedious and time-consuming. With the advancements in AI, particularly in natural language processing (NLP), we can automate this process. In this article, we'll explore how to use the LLaMA-3 model via the Groq API to create a Python script that automates Q&A from PDF files.

Setting Up the Environment

Before diving into the script, ensure you have the following:

  • Groq API Key: Obtain a valid API key from Groq.
  • Python Environment: Set up a Python environment with the necessary libraries. You'll need requests for API calls and PyPDF2 for handling PDF files.
  • Install Libraries: Run pip install requests PyPDF2 to install the required libraries.
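A typical setup uses a virtual environment to isolate the dependencies. The environment-variable export is a suggested practice for keeping the key out of source code, not something the script below strictly requires:

```shell
# Create and activate an isolated environment
python -m venv venv
source venv/bin/activate        # on Windows: venv\Scripts\activate

# Install the two libraries the script uses
pip install requests PyPDF2

# Keep the API key out of source code
export GROQ_API_KEY="your-key-here"
```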

Creating the Python Script

The script will involve the following steps:

  1. Read PDF Content: Extract text from the PDF file.
  2. Send Question to API: Use the Groq API to send the question and the extracted text to the LLaMA-3 model.
  3. Get Answer: Receive the answer from the API and print it.

Step 1: Read PDF Content

First, we'll write a function to extract text from a PDF file using PyPDF2.

import PyPDF2

def extract_text_from_pdf(file_path):
    # Use the modern PdfReader API; PdfFileReader/getPage/extractText
    # were deprecated and removed in PyPDF2 3.0.
    text = ''
    with open(file_path, 'rb') as pdf_file_obj:
        pdf_reader = PyPDF2.PdfReader(pdf_file_obj)
        for page in pdf_reader.pages:
            # extract_text() can return None for pages with no extractable text
            text += page.extract_text() or ''
    return text

Step 2: Send Question to API

Next, we'll create a function to send the question and the PDF content to the Groq API.

import requests

def send_question_to_api(question, pdf_content, groq_api_key):
    url = 'https://api.groq.com/openai/v1/chat/completions'
    headers = {
        'Authorization': f'Bearer {groq_api_key}'
    }
    data = {
        "model": "llama-3.3-70b-versatile",
        "messages": [
            {
                "role": "user",
                "content": f"Answer the following question based on the provided text: {question}\n\nText: {pdf_content}"
            }
        ]
    }
    # json= serializes the payload and sets the Content-Type header for us
    response = requests.post(url, headers=headers, json=data)
    response.raise_for_status()
    return response.json()
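One practical caveat: a long PDF can exceed the model's context window. Before sending, you may want to cap (or chunk) the extracted text. A minimal sketch, assuming a rough ~4-characters-per-token heuristic; the helper name and budget are illustrative:

```python
def truncate_to_char_budget(text, max_chars=24000):
    # Rough heuristic: ~4 characters per token for English text, so
    # 24,000 characters stays well inside an 8k-token prompt budget.
    return text if len(text) <= max_chars else text[:max_chars]
```

You would call this on `pdf_content` before passing it to `send_question_to_api`; splitting into overlapping chunks and asking the question per chunk is the next step up for very large documents.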

Step 3: Get Answer

Finally, we'll parse the API response to get the answer.

def get_answer_from_response(response):
    try:
        return response['choices'][0]['message']['content']
    except (KeyError, IndexError) as e:
        # The response did not have the expected chat-completion shape
        return f"Failed to retrieve answer: {str(e)}"
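For reference, the parser expects the standard OpenAI-compatible response shape. The dictionary below illustrates that shape with made-up values, not real API output:

```python
# Illustrative shape of a chat-completion response (values are made up)
mock_response = {
    "choices": [
        {
            "message": {"role": "assistant", "content": "The report covers Q3 revenue."},
            "finish_reason": "stop"
        }
    ]
}

# Drilling into the same path the parser uses:
answer = mock_response['choices'][0]['message']['content']
```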

Putting It All Together

Now, let's combine these functions into a single executable script.

def main():
    groq_api_key = 'YOUR_GROQ_API_KEY'
    pdf_file_path = 'path_to_your_pdf_file.pdf'
    question = 'Your question here'

    pdf_content = extract_text_from_pdf(pdf_file_path)
    response = send_question_to_api(question, pdf_content, groq_api_key)
    answer = get_answer_from_response(response)

    print(f"Question: {question}")
    print(f"Answer: {answer}")

if __name__ == "__main__":
    main()
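With the three functions in place, the same pipeline extends naturally to a whole folder of PDFs. A sketch, assuming the functions defined above are in scope (the folder layout and function name are hypothetical):

```python
import os

def answer_for_each_pdf(folder, question, groq_api_key):
    # Map each PDF file name in the folder to the model's answer
    results = {}
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith('.pdf'):
            path = os.path.join(folder, name)
            content = extract_text_from_pdf(path)          # defined in Step 1
            response = send_question_to_api(question, content, groq_api_key)
            results[name] = get_answer_from_response(response)
    return results
```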

Key Takeaways

  • Automation: We've automated the process of extracting answers from PDF files using the LLaMA-3 model via the Groq API.
  • Flexibility: This script can be adapted for various PDF files and questions.
  • Accuracy: The quality of the answers depends on how cleanly the text extracts from the PDF, how the question is phrased, and whether the document fits within the model's context window.

Conclusion

In this article, we've demonstrated how to leverage the LLaMA-3 model via the Groq API to create a Python script for automating Q&A from PDF files. This approach not only saves time but also opens up possibilities for more complex document analysis tasks. As AI models continue to evolve, we can expect even more sophisticated automation capabilities in the future.
