This is my discussion of version 1 of a project I'm working on called animatemusic.
My blog post on version 0 of this project can be found here.
In version 0 of this project, in my pursuit to render text on a canvas element (which would eventually become a video frame), I chose to design around the essential question:
which video frame is being rendered?
which was a reasonable question to ask since I was rendering a finite and known quantity of frames (based on the framerate and duration of the video).
However, it ended up having a not-so-reasonable solution, because it required converting a float (the start time in seconds) to an integer (the frame number) for each set of words to be rendered. The repeated rounding caused the text to lag behind the vocal audio. Here's a codepen to articulate the plight:
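To make the rounding problem concrete, here's a minimal sketch (function and constant names are mine, not from the project) of the version-0 mapping from a word's start time to a frame number:

```javascript
// Hypothetical sketch of the version-0 approach: map each word's
// start time (a float, in seconds) to an integer frame number.
const FPS = 30;

function timeToFrame(startTimeSeconds) {
  // The rounding here is where the drift comes from: the text can only
  // appear on a whole frame, so it misses the true start time by up to
  // half a frame interval, and the error repeats for every word.
  return Math.round(startTimeSeconds * FPS);
}

// Two words starting ~2 ms apart land on different frames, each
// roughly 16 ms away from the true start time at 30 fps.
console.log(timeToFrame(1.349)); // 40
console.log(timeToFrame(1.351)); // 41
```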
For version 1, I've chosen to steer clear of this issue by designing around a new essential question:
at what time is the word rendered?
I found two resources that reassured me that my new essential question was worth pursuing:
- Controlling frame rate with requestAnimationFrame
- How to throttle requestAnimationFrame to a specific frame rate
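The throttling pattern those articles describe can be sketched like this (a minimal version with my own names; `drawFrame` is a placeholder for the actual canvas rendering):

```javascript
const targetFps = 24;
const frameInterval = 1000 / targetFps; // ms per frame at the target rate
let lastFrameTime = 0;

// Pure helper: has enough time passed since the last drawn frame?
function shouldDraw(timestamp, lastTime, interval) {
  return timestamp - lastTime >= interval;
}

function drawFrame(timestamp) {
  // placeholder for the actual canvas rendering
}

function loop(timestamp) {
  requestAnimationFrame(loop);
  if (!shouldDraw(timestamp, lastFrameTime, frameInterval)) return;
  // Subtract the overshoot so timing error doesn't accumulate tick to tick.
  lastFrameTime = timestamp - ((timestamp - lastFrameTime) % frameInterval);
  drawFrame(timestamp);
}

// Guarded so the sketch can also load outside a browser.
if (typeof requestAnimationFrame === "function") {
  requestAnimationFrame(loop);
}
```

The key point for this project is that the loop is driven by the timestamp `requestAnimationFrame` hands you, not by counting frames.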
I would eventually like to use a library such as anime.js or three.js, and their documentation and APIs also cater to a time-based animation approach.
I took this opportunity to refactor my original script, in addition to adding functions that render the text on the canvas when the current (elapsed) time of the video falls between the start and end times of a word. Here's a codesandbox and a sample transcript JSON for upload:
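The core of the new approach can be sketched as a pure lookup (my own function name; the word objects follow the shape of gentle's transcript output, with per-word `start` and `end` times in seconds):

```javascript
// Sample of the aligned-transcript shape produced by gentle.
const transcript = [
  { word: "hello", start: 0.32, end: 0.71 },
  { word: "world", start: 0.74, end: 1.2 },
];

// On each animation tick, find exactly the words whose [start, end]
// window contains the video's current time, and draw those.
function wordsAtTime(words, currentTime) {
  return words.filter((w) => currentTime >= w.start && currentTime <= w.end);
}

console.log(wordsAtTime(transcript, 0.5).map((w) => w.word)); // ["hello"]
```

Because the comparison is done directly in seconds against `video.currentTime`, there's no float-to-frame-number conversion left to accumulate rounding error.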
I am trembling with excitement as I present to you the new and improved resulting video!! The lyrics sync with the audio so much better than in version 0! I don't see many issues with the text (although there are some blanks and one <unk>, which is gentle's token for a word it couldn't recognize).
Thanks for reading! Please comment below if you have something you can share to help me improve this project. Going to try and sleep now amongst the adrenaline rush...