Ever leave an important online meeting with tasks assigned and ideas discussed, but can't quite remember who said what? It almost feels like you need a dedicated note-taker to keep track of everything and generate reports. A better solution is to automate this with a script, which is exactly what we'll do.
In this tutorial, I'll show you how to create an application that automatically analyzes meetings and generates reports using the BotHub API (Whisper-1 + Claude 3.5 Sonnet). This application will transcribe audio recordings, identify key information—who said what and which tasks were discussed—and compile a report, including a PDF version.
Setting up Dependencies and Project Configuration
Before we begin, let's ensure we have all the necessary components installed, including Python and the required libraries for working with the API and audio files. We'll install the following:
- `os` and `pathlib.Path`: for working with environment variables and file paths;
- `dotenv`: for loading sensitive data from a `.env` file;
- `fpdf`: for generating PDF files;
- `openai`: for interacting with the BotHub API.

Install these packages using `pip`:
pip install openai python-dotenv fpdf
We'll also use `logging` to track the program's execution and record errors and important messages. We'll set up basic logging at the `INFO` level:
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
To interact with the BotHub API, you first need to register on the BotHub platform and obtain an API key. This key is used to authenticate the requests we'll be sending.
For secure key storage, create a `.env` file in the root directory of your project and add your generated API key:
BOTHUB_API_KEY=your_api_key
Next, use the dotenv library's `load_dotenv()` function to load the data from the `.env` file, making it accessible to our code:
from dotenv import load_dotenv
load_dotenv()
To work with the BotHub API, create an `OpenAI` client instance, providing the `api_key` and `base_url` for the BotHub service. The API key is loaded from the environment using `os.getenv('BOTHUB_API_KEY')`:
import os
from openai import OpenAI
client = OpenAI(
    api_key=os.getenv('BOTHUB_API_KEY'),
    base_url='https://bothub.chat/api/v2/openai/v1'
)
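A missing or empty key only surfaces later as a confusing authentication error from the API. One way to fail fast is a small startup check; this helper is a sketch of my own, not part of the BotHub API:

```python
import os

def require_api_key() -> str:
    # Fail early with a clear message instead of an HTTP 401 later.
    key = os.getenv('BOTHUB_API_KEY')
    if not key:
        raise RuntimeError("BOTHUB_API_KEY is not set; check your .env file.")
    return key
```

Calling `require_api_key()` once before constructing the client makes configuration mistakes obvious immediately.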
Core Function for Audio Processing
This step involves creating a function that transcribes an audio file into text. We'll use the BotHub API and Whisper-1 for speech recognition. The audio file is opened in binary read mode (`rb`), and we call the `client.audio.transcriptions.create` method to send it to the server for processing. The response contains the text transcription. On success, a "Transcription complete" message is logged and the text is returned for further processing; on error, the error message is logged.
def transcribe_audio(audio_file_path):
    try:
        with open(audio_file_path, "rb") as audio_file:
            transcript = client.audio.transcriptions.create(
                model="whisper-1",
                file=audio_file
            )
        logger.info("Transcription complete.")
        return transcript.text
    except Exception as e:
        logger.error(f"Error during audio transcription: {e}")
        return None
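Network calls like this one can fail transiently (timeouts, rate limits), so a retry with backoff is often worth adding. The BotHub client doesn't provide this itself; the wrapper below is a minimal sketch of my own:

```python
import time
import logging

logger = logging.getLogger(__name__)

def with_retries(func, attempts=3, delay=1.0):
    """Call func(); on failure, retry with exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except Exception as e:
            if attempt == attempts:
                raise  # out of attempts: let the caller handle it
            logger.warning(f"Attempt {attempt} failed ({e}); retrying in {delay:.1f}s")
            time.sleep(delay)
            delay *= 2
```

It could then wrap any of the API calls, e.g. `with_retries(lambda: transcribe_audio("meeting.mp3"))`.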
Extracting Key Insights
After transcription, we have the text of our meeting. Now our goal is to extract key insights: discussed tasks, decisions made, and any identified problems. Using `client.chat.completions.create`, we build a request that specifies the model and a `messages` payload containing the meeting text and an instruction to identify the main tasks and problems. The function returns a string with the key insights on success.
def extract_key_points(meeting_text):
    try:
        response = client.chat.completions.create(
            model="claude-3.5-sonnet",
            messages=[
                {
                    "role": "user",
                    "content": f"Analyze the following meeting transcript and extract key insights, such as tasks, decisions, and discussed problems:\n\n{meeting_text}"
                }
            ]
        )
        logger.info("Key insight extraction complete.")
        return response.choices[0].message.content
    except Exception as e:
        logger.error(f"Error extracting key insights: {e}")
        return None
Sentiment Analysis
We can also analyze the overall sentiment of the meeting text. As in `extract_key_points`, we use `client.chat.completions.create` to request a sentiment analysis of the provided text. The function returns the sentiment analysis result, or `None` if an error occurs.
def analyze_sentiment(meeting_text):
    try:
        response = client.chat.completions.create(
            model="claude-3.5-sonnet",
            messages=[
                {
                    "role": "user",
                    "content": f"Analyze the sentiment of the following text:\n\n{meeting_text}"
                }
            ]
        )
        logger.info("Sentiment analysis complete.")
        return response.choices[0].message.content
    except Exception as e:
        logger.error(f"Error during sentiment analysis: {e}")
        return None
Report Generation
Once the key insights and sentiment analysis are complete, we compile them into a report that should be logical, coherent, and concise. We use `client.chat.completions.create`, passing the key points and sentiment analysis in the prompt so the API can generate the final report text. The function returns the report text on success.
def generate_report(key_points, sentiment):
    try:
        content = f"Compile a meeting report considering the following key points and sentiment analysis:\n\nKey Points:\n{key_points}\n\nSentiment:\n{sentiment}"
        report = client.chat.completions.create(
            model="claude-3.5-sonnet",
            messages=[
                {
                    "role": "user",
                    "content": content
                }
            ]
        )
        logger.info("Report generation complete.")
        return report.choices[0].message.content
    except Exception as e:
        logger.error(f"Error generating report: {e}")
        return None
To make the report easy to store and share, we save it as a PDF using the `FPDF` library. We add a page, set a font, and write the report text with `multi_cell`, which wraps long lines automatically. Finally, we save the file with `output(file_path)`.
from fpdf import FPDF

def save_report_as_pdf(report_text, file_path="meeting_report.pdf"):
    pdf = FPDF()
    pdf.add_page()
    pdf.set_auto_page_break(auto=True, margin=15)
    pdf.set_font("Arial", size=12)
    pdf.multi_cell(0, 10, report_text)
    pdf.output(file_path)
    logger.info(f"Report saved as {file_path}")
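One caveat: the classic `fpdf` package's built-in fonts only cover Latin-1, so a transcript containing curly quotes, em dashes, or emoji will make `output()` raise a `UnicodeEncodeError`. A minimal workaround is to sanitize the text first (the helper name is my own); for full Unicode support, the `fpdf2` fork with `add_font` and a TTF file is an alternative:

```python
def to_latin1(text: str) -> str:
    # Replace characters outside Latin-1 with '?' so classic FPDF's
    # built-in fonts can render the text without raising.
    return text.encode("latin-1", errors="replace").decode("latin-1")
```

With this in place, the write becomes `pdf.multi_cell(0, 10, to_latin1(report_text))`.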
Main Function
This function orchestrates all the previous steps. It begins by transcribing the audio; if transcription fails, an error message is returned and the function stops. Next, it extracts key insights, then runs sentiment analysis, returning an appropriate error message if either step fails. If all steps succeed, the report text is generated and `save_report_as_pdf` is called to save it in PDF format. Finally, the function returns the generated report text.
def analyze_meeting(audio_file_path):
    meeting_text = transcribe_audio(audio_file_path)
    if not meeting_text:
        return "Error during audio transcription."
    key_points = extract_key_points(meeting_text)
    if not key_points:
        return "Error extracting key insights."
    sentiment = analyze_sentiment(meeting_text)
    if not sentiment:
        return "Error during sentiment analysis."
    report_text = generate_report(key_points, sentiment)
    if not report_text:
        return "Error generating report."
    save_report_as_pdf(report_text)
    return report_text
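To run the pipeline end to end, a small entry point that validates the input path first avoids spending API calls on a file that doesn't exist. This is a sketch assuming the functions above are defined; the extension list is my own guess at common formats:

```python
from pathlib import Path

AUDIO_EXTENSIONS = {".mp3", ".wav", ".m4a", ".ogg", ".webm"}

def run(audio_path: str) -> str:
    # Check the file before doing any (billable) API work.
    path = Path(audio_path)
    if not path.is_file():
        return f"File not found: {audio_path}"
    if path.suffix.lower() not in AUDIO_EXTENSIONS:
        return f"Unsupported audio format: {path.suffix}"
    return analyze_meeting(str(path))
```

For example, `print(run("meeting.mp3"))` would transcribe the recording, analyze it, save the PDF, and print the report text.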
In conclusion, we've built a small application that can boost your productivity and help you manage your time more effectively. We implemented a series of core functions, including audio transcription, key insight extraction, report generation, and saving the report in PDF format. This tool will help you keep track of important ideas and tasks, saving you time and effort.
Hope this helped! If so, any support is welcome, thanks for reading! 😊