Introduction
I have been working on my media converter project this week. Last week I left off with the frontend displaying the data from the video streams and the backend working with a hard-coded solution. This week, I wanted to connect both sides and get a basic version working that copies the video file over to a new file, renames it and places it in the location the user selected.
Project functionality
Here is a basic rundown of phase 1 of the project. The project allows users to select a video locally and upload it to the application, which chunks the file and sends it to a second server running FFMPEG. That server uses FFPROBE to extract the video file's information, which is returned and displayed on the client.
It also generates a list of the streams in the video file, which the user can update or remove. Updates include changing the video container (e.g. mp4, mov, avi...), the frames per second, the video dimensions and the audio channels. Once the user has made their choices, they can set a location for the new file; if they don't, it defaults to the folder provided in the application. They can also rename the new file before uploading it.
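To make the probing step concrete, here is a minimal sketch of how the FFPROBE call might look on the FFMPEG server. The helper name and result shape are my own assumptions rather than the project's actual code; the -print_format json, -show_format and -show_streams flags are standard ffprobe options that return the container and per-stream details as JSON.

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

// Hypothetical helper: run ffprobe against an uploaded file and return the
// parsed JSON describing the container ("format") and each individual stream.
async function probeVideo(filePath: string): Promise<unknown> {
  const { stdout } = await execFileAsync("ffprobe", [
    "-v", "error",             // keep the output clean
    "-print_format", "json",   // emit JSON instead of the default text layout
    "-show_format",            // container-level info (duration, size, format name)
    "-show_streams",           // one entry per video/audio/subtitle/attachment stream
    filePath,
  ]);
  return JSON.parse(stdout);
}
```

The parsed stream list is what feeds the editable list of streams shown to the user.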
When the user uploads the file, it is sent to a server running in a Docker container. This is because FFMPEG uses as much CPU and RAM as it needs to encode files, and if this is going to run locally, we need to cap what it can use by limiting the Docker container. Once all the chunks are sent over, they are streamed back into a single file and added to a queue for processing. When the queue has an item, the queue system starts, generates an FFMPEG command and spawns a child process to run it, which either copies the file as-is or re-encodes it with the new settings. When the file is complete, it is sent back to the main server, which checks for a target location and streams the chunks into the new file. The user then gets a notification via web sockets telling them about the new file and where it is located.
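As a rough illustration of the encode step, the sketch below shows how a queue worker might spawn FFMPEG as a child process, either copying the streams or re-encoding with the user's settings. The EncodeJob shape and the extraArgs field are hypothetical; the spawn call and the -c copy flag are standard Node/FFMPEG usage.

```typescript
import { spawn } from "node:child_process";

// Hypothetical job shape: where the reassembled file lives, where the output
// should go, and whether the user's choices require a full re-encode.
interface EncodeJob {
  inputPath: string;
  outputPath: string;
  reencode: boolean;
  extraArgs: string[]; // e.g. ["-r", "30", "-s", "1280x720"] when re-encoding
}

function runFfmpeg(job: EncodeJob): Promise<void> {
  const args = job.reencode
    ? ["-i", job.inputPath, ...job.extraArgs, job.outputPath]
    : ["-i", job.inputPath, "-c", "copy", job.outputPath]; // straight copy, no re-encode

  return new Promise((resolve, reject) => {
    const child = spawn("ffmpeg", args);
    child.on("error", reject);
    child.on("close", (code) =>
      code === 0 ? resolve() : reject(new Error(`ffmpeg exited with code ${code}`))
    );
  });
}
```

On the Docker side, the container's limits (for example the --cpus and --memory flags on docker run, or the equivalent compose settings) are what stop FFMPEG from taking over the host machine.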
Day 1
Day 1, I worked on removing the hard-coded command I had been using to test with a single video and make sure everything worked. I planned out what the command needed from the frontend and created an information object that stores the user's choices along with the details I need about the video file. To pass this information around the application without threading it through every function, I created a Map that stores it temporarily while the video is being encoded, and to track each entry I added the UUID library to generate an ID for each object.
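A minimal sketch of that temporary storage, assuming the uuid package's v4 generator and a made-up JobInfo shape (the real object almost certainly holds more detail):

```typescript
import { v4 as uuidv4 } from "uuid";

// Hypothetical shape for the information object described above.
interface JobInfo {
  originalName: string;
  newName: string;
  outputFolder: string;
  userChoices: Record<string, unknown>; // container, fps, dimensions, audio channels...
}

// Temporary in-memory storage while a video is being encoded.
const jobs = new Map<string, JobInfo>();

function storeJob(info: JobInfo): string {
  const id = uuidv4(); // ID used to track the job across the system
  jobs.set(id, info);
  return id;
}

function getJob(id: string): JobInfo | undefined {
  return jobs.get(id);
}

function removeJob(id: string): void {
  jobs.delete(id); // clean up once the encode has finished
}
```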
Day 2
Day 2, I was going to work on the command generation, but the night before I realised a flaw in my design that would leave the first video stuck whenever other videos were added to the queue. I was awaiting each function to complete, which is a problem: when the queue is empty and an item is added, the queue system kicks off and runs until the queue is empty, so the first upload was effectively waiting on the whole queue. Later uploads just added their files and returned a 200 to the user, but each new video added to the wait time of the first one.
So I spent most of the day refactoring the flow of the server and cleaning up some unnecessary code that was bloating functions. It wasn't too bad; I had already broken parts up into functions, so it was mostly about fixing the flow of the system.
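The refactor boiled down to not awaiting the whole queue from the upload request. Here is a rough sketch of the pattern, with hypothetical names: enqueue returns straight away so the 200 can go back to the user, and a single background loop drains the queue.

```typescript
const queue: string[] = [];
let processing = false;

// Called from the upload handler: add the job and return immediately, so the
// response to the user is not tied to earlier videos finishing first.
function enqueue(jobId: string): void {
  queue.push(jobId);
  if (!processing) {
    void processQueue(); // fire and forget; deliberately NOT awaited here
  }
}

// Drains the queue one job at a time until it is empty, then goes idle.
async function processQueue(): Promise<void> {
  processing = true;
  while (queue.length > 0) {
    const jobId = queue.shift()!;
    try {
      await encodeJob(jobId);
    } catch (err) {
      console.error(`job ${jobId} failed`, err);
    }
  }
  processing = false;
}

// Stand-in for the real work: build the FFMPEG command and run it.
async function encodeJob(jobId: string): Promise<void> {
  // ...generate the command from the stored job info and spawn ffmpeg...
}
```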
Day 3
Day 3, I got to work on the command generation, and it was going smoothly until I hit a hurdle I never knew was there. I was testing out commands and everything worked at a basic level, with no problems until I used an MKV file. These files can have an extra stream type called attachments that most other video file formats don't have. These streams are actually binary blobs embedded in the container's metadata, holding things like fonts, cover art and licenses. When I tried to encode the file, I kept getting an error about analyzeduration and probesize, and bumping those values to help FFMPEG didn't fix it. I then did some research into these attachment streams, and from what I found, FFMPEG can't encode them, as they are just raw files embedded inside the container.
So my only options were to strip these streams and encode the file without that data, or to dump the raw attachments, encode the video and then re-embed them into the new file. Until I have researched the best way to dump and re-embed the data, I am stripping the attachments from MKV files, and in the meantime I worked on another section of the application to change how I handle the location feature.
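As a hedged sketch of that stopgap: FFMPEG's stream specifiers let you exclude attachments with a negative mapping, where -map 0 selects every stream and -map -0:t then removes the attachment streams (t is the attachment stream type). Something along these lines when the input is an MKV:

```typescript
// Build the mapping arguments, dropping attachment streams for MKV inputs
// until dumping and re-embedding the attachments is implemented.
function mapArgs(inputPath: string): string[] {
  const isMkv = inputPath.toLowerCase().endsWith(".mkv");
  return isMkv
    ? ["-map", "0", "-map", "-0:t"] // keep all streams, then exclude attachments
    : ["-map", "0"];                // other containers: keep everything
}

// Resulting command shape: ffmpeg -i input.mkv -map 0 -map -0:t -c copy output.mp4
```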
Day 4 & Day 5
For these days, I mainly did research and refactored the code. The application now works end to end as a basic pass-through, but I needed to start handling errors and tidying up messy code so the flow of the system is easier to follow without digging through every file to find what I am looking for. I also worked on my logging, as I need clean logs to see what works and where things fail. This wasn't too bad, as the logging system from my previous application was a helpful starting point.
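Nothing fancy on the logging side. A minimal sketch of the kind of helper I mean, with made-up names: each line carries a timestamp, a level and the job ID, so one video's path through the system can be filtered out of the logs.

```typescript
type Level = "info" | "warn" | "error";

// Minimal structured logger keyed by job ID.
function log(level: Level, jobId: string, message: string): void {
  const line = `${new Date().toISOString()} [${level.toUpperCase()}] [job:${jobId}] ${message}`;
  if (level === "error") {
    console.error(line);
  } else {
    console.log(line);
  }
}

// Usage: log("info", jobId, "chunks reassembled, added to queue");
```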
Conclusion
My application is on the right track. It is working at a basic level with a few bumps in the road, but once I complete this section, I can move on to handling images with the same system.