El Bruno for Microsoft Azure

Posted on • Originally published at elbruno.com


#AI – Using OpenAI Whisper to convert audio to text, in Python, from my podcast episode (in Spanish!)

Hi!

Today’s post shows how to use Whisper to get a text transcription of my podcast episode audio. Based on the Podcast Copilot sample, I decided to use Whisper for this. Let’s start!

OpenAI – Whisper

Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multitasking model that can perform multilingual speech recognition, speech translation, and language identification.

Whisper GitHub

You can get more information about Whisper in its GitHub repository.

And, once installed (it’s a simple Python package, `pip install openai-whisper`), it’s super easy to use:



import whisper

model = whisper.load_model("base")
result = model.transcribe("audio.mp3")
print(result["text"])




Super easy! However, this approach may trigger some memory problems with large audio files.
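To get a sense of why memory becomes a concern: audio decoded to raw PCM grows quickly. As a back-of-the-envelope calculation (not a measurement of Whisper itself), 10 minutes of 16-bit stereo audio at 44.1 kHz already works out to roughly 106 MB:

```python
# Rough size of 10 minutes of decoded audio (16-bit stereo, 44.1 kHz)
seconds = 10 * 60
sample_rate = 44_100   # samples per second
channels = 2           # stereo
bytes_per_sample = 2   # 16-bit

raw_bytes = seconds * sample_rate * channels * bytes_per_sample
print(f"{raw_bytes / 1_000_000:.0f} MB")  # 106 MB
```

A full one-hour episode is six times that, before any of the model’s own working memory is counted.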

The approach in the original Podcast Copilot is to split the podcast audio into small chunks and get the transcript for each of the small pieces. For example:



# Chunk up the audio file 
sound_file = AudioSegment.from_mp3(podcast_audio_file)
audio_chunks = split_on_silence(sound_file, min_silence_len=1000, silence_thresh=-40)
count = len(audio_chunks)
print("Audio split into " + str(count) + " audio chunks \n")

# Call Whisper to transcribe audio
model = whisper.load_model("base")
transcript = ""
for i, chunk in enumerate(audio_chunks):    
    if i < 10 or i > count - 10:
        out_file = "chunk{0}.wav".format(i)
        print("\r\nExporting >>", out_file, " - ", i, "/", count)
        chunk.export(out_file, format="wav")
        result = model.transcribe(out_file)
        transcriptChunk = result["text"]
        print(transcriptChunk)

        transcript += " " + transcriptChunk

# Print transcript
print("Transcript: \n")
print(transcript)
print("\n")



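Note that the `if i < 10 or i > count - 10` guard in the loop only transcribes the first and last ten chunks. As a standalone predicate (a hypothetical `should_transcribe` helper, not part of the original sample), the selection logic can be sketched like this:

```python
def should_transcribe(i: int, count: int, edge: int = 10) -> bool:
    """Keep only the first `edge` and last `edge` chunks, matching
    the `i < 10 or i > count - 10` guard in the sample."""
    return i < edge or i > count - edge

# For 30 chunks, indices 0-9 and 21-29 are kept
kept = [i for i in range(30) if should_transcribe(i, 30)]
```

Dropping the guard (or raising `edge`) transcribes the whole episode at the cost of longer processing time.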

And that’s it! Depending on your machine, this may take some time. I added a start/end time check, so you can get an idea of how much time you’ll need to process an episode.

For example, for a 10-minute audio file, the transcript for the last chunk was done with an elapsed time of 06:05.
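That 06:05 figure is minutes and seconds, while the `time.time()` delta printed at the end of the script below is raw seconds; a small (hypothetical) `format_elapsed` helper bridges the two:

```python
def format_elapsed(seconds: float) -> str:
    """Render an elapsed time in seconds as MM:SS."""
    minutes, secs = divmod(int(seconds), 60)
    return f"{minutes:02d}:{secs:02d}"

print(format_elapsed(365))  # 06:05
```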

The full source code is here:

# Copyright (c) 2023
# Author : Bruno Capuano
# Change Log :
#
# The MIT License (MIT)
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
from pydub import AudioSegment
from pydub.silence import split_on_silence
import whisper
import time
# Call Whisper to transcribe audio
print("Calling Whisper to transcribe audio...\n")
# add a start time flag
start_time = time.time()
print(f"Start time: {start_time} seconds\n")
# Inputs about the podcast
podcast_name = "No Tiene Nombre"
podcast_episode_name = "NTN160"
podcast_author = "Bruno Capuano"
podcast_url = "https://go.ivoox.com/sq/277993"
podcast_audio_file = ".\\NTN160.mp3"
# Chunk up the audio file
sound_file = AudioSegment.from_mp3(podcast_audio_file)
audio_chunks = split_on_silence(sound_file, min_silence_len=1000, silence_thresh=-40)
count = len(audio_chunks)
print("Audio split into " + str(count) + " audio chunks \n")
# Call Whisper to transcribe audio
model = whisper.load_model("base")
transcript = ""
for i, chunk in enumerate(audio_chunks):
    # For a long audio file, this guard only transcribes a subset of chunks
    if i < 10 or i > count - 10:
        out_file = "chunk{0}.wav".format(i)
        print("\r\nExporting >>", out_file, " - ", i, "/", count)
        chunk.export(out_file, format="wav")
        result = model.transcribe(out_file)
        transcriptChunk = result["text"]
        print(transcriptChunk)
        # Append the chunk transcript in memory if you have sufficient memory
        transcript += " " + transcriptChunk

# Print the transcript
print("Transcript: \n")
print(transcript)
print("\n")

# Write the transcript to disk, for future exercises
transcript_filename = f"{podcast_episode_name}.txt"
with open(transcript_filename, "w", encoding="utf-8") as textfile:
    textfile.write(transcript)
print(f"Transcript saved to {transcript_filename} \n")
# calculate the elapsed time
end_time = time.time()
elapsed_time = end_time - start_time
# print the elapsed time
print(f"Elapsed time: {elapsed_time} seconds")
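Since the transcript is written to disk for future exercises, reading it back later is plain file I/O. A quick sketch (the `load_transcript` helper is hypothetical; only the `{podcast_episode_name}.txt` naming convention comes from the script above):

```python
def load_transcript(episode_name: str) -> str:
    """Read a previously saved episode transcript back from disk."""
    with open(f"{episode_name}.txt", encoding="utf-8") as f:
        return f.read()

# e.g. text = load_transcript("NTN160")
```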

Happy coding!

Greetings

El Bruno

More posts on my blog, ElBruno.com.

More info at https://beacons.ai/elbruno

