When I started coding, the first thing I did to put my new knowledge into practice was to automate a process that was done every day at my job.
I worked with video editing and, at least once a day, I had to transfer the recorded media to a backup server and then pull the files to my machine to start editing. So, I decided to create a script that would do this for me.
By automating everyday processes, it was as if I was “gaining” time. If a daily task that takes 5 minutes is automated, in 5 business days you gain 25 minutes to do something else. In a month, 1 hour and 40 minutes; in a year, 20 hours… Multiply that by more tasks or users and the numbers become considerable.
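As a quick sanity check, the arithmetic above works out like this (assuming 5 business days per week, about 20 per month, and about 240 per year):

```python
# Time saved by automating a 5-minute daily task.
# Business-day counts per period are assumptions: 5/week, 20/month, 240/year.
MINUTES_PER_TASK = 5

per_week = MINUTES_PER_TASK * 5     # 25 minutes
per_month = MINUTES_PER_TASK * 20   # 100 minutes = 1 h 40 min
per_year = MINUTES_PER_TASK * 240   # 1200 minutes = 20 hours

print(per_week, "min/week |", per_month // 60, "h", per_month % 60, "min/month |",
      per_year // 60, "h/year")
```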
In short, the project accesses a Wikipedia article to get the information that will be used in the video; uses IBM's Watson artificial intelligence to “break” the text into sentences; the Google API to fetch images to illustrate the video; ImageMagick to process the photos and create the thumbnail; Adobe After Effects to edit the video; and, finally, the YouTube API to upload it.
I followed the series and built the project myself. At the end, I added some extra features, like generating audio for each sentence with the Text-to-Speech API (also from IBM's Watson) and adding it to the video.
The idea was to create a YouTube channel with replays of matches of a very popular game called “League of Legends”, fully automated.
Some sites make matches of professional players available for download. Replays are executable files that run the game.
To automate the entire creation process, a few steps were necessary:
- Enter the website, choose a match, get the match information and download the replay
- Run replay and record screen
- Edit the video
- Create the thumbnail
- Upload the video and thumbnail to YouTube and fill in information like title, description, and keywords
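The steps above can be sketched as a single pipeline. Every function here is a hypothetical stub standing in for one of the real steps, just to show the control flow end to end:

```python
# High-level sketch of the automation pipeline.
# All function bodies are stubs; the names are placeholders, not the real API.

def scrape_match_and_download_replay():
    # Step 1: scrape the site, pick a match, download the replay (stubbed).
    return {"title": "T1 vs G2 - Game 1", "keywords": ["lol", "replay"]}

def record_replay(match_data):
    # Steps 2-3: run the replay, record the screen, edit the video (stubbed).
    return "match.mp4"

def create_thumbnail(match_data):
    # Step 4: render the thumbnail (stubbed).
    return "thumbnail.png"

def upload_to_youtube(video, thumbnail, match_data):
    # Step 5: upload video + thumbnail with the match metadata (stubbed).
    return {"video": video, "thumbnail": thumbnail, "title": match_data["title"]}

def run_pipeline():
    match_data = scrape_match_and_download_replay()
    video = record_replay(match_data)
    thumbnail = create_thumbnail(match_data)
    return upload_to_youtube(video, thumbnail, match_data)

result = run_pipeline()
```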
Web scraping is a technique for programmatically accessing a website and extracting its information.
The idea here was to take the first game and extract the information from this table.
Among the many Python options for this task, I chose Selenium because it can also interact with the page: the library lets you click the button that downloads the game.
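A minimal sketch of that scraping step is below. The URL and CSS selectors are hypothetical, since the real site's markup will differ; the pure parsing helper is separated out so it can run without a browser:

```python
# Sketch of the Selenium scraping step. The URL and selectors are
# placeholders, not the real site's markup.

def extract_match_info(cell_texts):
    """Turn the text of a table row's cells into a match-info dict."""
    return {"teams": cell_texts[0], "result": cell_texts[1]}

def main():
    # Selenium is imported here so the helper above works without a browser.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/pro-matches")  # placeholder URL

    # Read the first match (first row) from the table.
    first_row = driver.find_element(By.CSS_SELECTOR, "table tbody tr")
    cells = first_row.find_elements(By.TAG_NAME, "td")
    match_info = extract_match_info([c.text for c in cells])

    # Selenium can also interact with the page: click the download link.
    first_row.find_element(By.CSS_SELECTOR, "a.download").click()
    driver.quit()
    return match_info

if __name__ == "__main__":
    print(main())
```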
I created a Python dictionary with all the information I needed and saved it in a JSON file, in a folder called “assets”, at the root of the project.
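That step can look like this; the dictionary keys are illustrative, not the project's exact schema:

```python
# Save the scraped match information as JSON in the "assets" folder.
# The keys below are illustrative examples.
import json
from pathlib import Path

match_data = {
    "title": "T1 vs G2 - Game 1",
    "players": ["Faker", "Caps"],
    "keywords": ["lol", "replay"],
}

assets = Path("assets")
assets.mkdir(exist_ok=True)
with open(assets / "match_data.json", "w", encoding="utf-8") as f:
    json.dump(match_data, f, ensure_ascii=False, indent=2)
```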
To create the thumbnail, I developed a template with HTML and CSS. With the JSON information, the data is dynamically filled and an HTML file is saved in the “assets” folder:
```
./assets
├── match_data.json
└── thumbnail.html
```
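Filling the template can be sketched like this; the HTML and the placeholder names are illustrative, not the project's real template:

```python
# Fill a thumbnail HTML template with the match data and save it to "assets".
# The template markup and field names are hypothetical.
from pathlib import Path
from string import Template

TEMPLATE = Template("""<!DOCTYPE html>
<html>
  <head><style>h1 { color: white; background: #111; font-size: 72px; }</style></head>
  <body><h1>$title</h1><p>$teams</p></body>
</html>
""")

match_data = {"title": "Grand Final - Game 1", "teams": "T1 vs G2"}  # from the JSON

html = TEMPLATE.substitute(match_data)
Path("assets").mkdir(exist_ok=True)
Path("assets/thumbnail.html").write_text(html, encoding="utf-8")
```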
After that, I use Selenium again to open that HTML file and take a screenshot of the page. The image is saved as a PNG in a folder in my local directory.
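A sketch of that screenshot step, using a headless browser; the 1280×720 window size is an assumption (YouTube's recommended thumbnail resolution):

```python
# Open the generated HTML in headless Chrome and save the page as a PNG.
# The window size (1280x720) is an assumption, not from the original project.
from pathlib import Path

def capture_thumbnail(html_path, png_path):
    # Selenium is imported here so this module loads without a browser installed.
    from selenium import webdriver

    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    options.add_argument("--window-size=1280,720")
    driver = webdriver.Chrome(options=options)
    driver.get(Path(html_path).resolve().as_uri())  # file:// URL to the template
    driver.save_screenshot(png_path)                # Selenium writes the PNG
    driver.quit()
```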
The result is this:
To simplify the creation of the video, I decided to record the game with OBS Studio. That way I can add on-screen elements as overlays at the start of the recording, without needing to edit or post-produce the video afterward.
With Python's subprocess module, I run the .bat file that opens the replay of the match.
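That launch step is short; the path below is a placeholder:

```python
# Launch the replay's .bat file. shell=True lets Windows resolve the .bat
# through cmd.exe, and Popen returns immediately so the script can move on
# to starting the recording.
import subprocess

def launch_replay(bat_path):
    return subprocess.Popen(bat_path, shell=True)

# Usage (the path is a hypothetical example):
# launch_replay(r"replays\match_1234.bat")
```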
PyAutoGUI is used to open OBS Studio and start recording the match.
When the game is over, recording stops, and a .mp4 video file is saved to my local disk, ready to use.
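A sketch of driving OBS with PyAutoGUI is below. The assumptions are loud ones: OBS installed at its default Windows path, hotkeys configured as F9 to start and F10 to stop recording, and a fixed duration standing in for detecting the end of the match:

```python
# Drive OBS Studio via keyboard automation.
# Assumptions: default OBS install path; F9/F10 configured in OBS as the
# start/stop-recording hotkeys; fixed duration instead of end-of-game detection.
import subprocess
import time

OBS_PATH = r"C:\Program Files\obs-studio\bin\64bit\obs64.exe"  # default install path

def record_match(duration_seconds):
    import pyautogui  # imported here so the module loads without a display

    subprocess.Popen(OBS_PATH)
    time.sleep(10)                 # give OBS time to open
    pyautogui.press("f9")          # start recording (configured hotkey)
    time.sleep(duration_seconds)   # wait for the match to play out
    pyautogui.press("f10")         # stop recording; OBS saves the .mp4
```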
With the JSON information, I make a request to the YouTube API with the title, description, keywords, and the video file.
When the video upload finishes, I make another request to add the thumbnail to the video.
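Those two requests can be sketched with the YouTube Data API client (`google-api-python-client`). Authentication is omitted here: `credentials` is assumed to be an already-authorized OAuth2 credentials object, and the metadata fields are illustrative:

```python
# Upload a video, then attach a custom thumbnail, via the YouTube Data API.
# `credentials` is assumed to be an authorized OAuth2 credentials object.

def upload_video(credentials, match_data, video_file, thumbnail_file):
    from googleapiclient.discovery import build
    from googleapiclient.http import MediaFileUpload

    youtube = build("youtube", "v3", credentials=credentials)

    # First request: upload the video with its metadata.
    response = youtube.videos().insert(
        part="snippet,status",
        body={
            "snippet": {
                "title": match_data["title"],
                "description": match_data["description"],
                "tags": match_data["keywords"],
            },
            "status": {"privacyStatus": "public"},
        },
        media_body=MediaFileUpload(video_file, resumable=True),
    ).execute()

    # Second request: set the custom thumbnail on the uploaded video.
    youtube.thumbnails().set(
        videoId=response["id"],
        media_body=MediaFileUpload(thumbnail_file),
    ).execute()

    return response["id"]
```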
This project allowed me to use different technologies and methods to create content programmatically. With every step automated, a single command populates the channel with a new, up-to-date video, complete with thumbnail and custom information.
You can check the channel here.
You can check the code on my Github.
Follow me on Twitter!
Feel free to ask questions, make suggestions and contribute to the project.