My main project for the past month or so has been Open Recommender, an open source LLM-powered recommendation system for YouTube videos. It works by taking your Twitter data (tweets, likes, retweets and quotes) and analysing it to infer which topics you are currently interested in. It then searches YouTube for relevant videos and narrows them down to the clips most likely to interest you.
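The end-to-end flow can be sketched as a few composable stages. The names, types and heuristics below are illustrative only - they are not the actual functions in src/pipeline/main.ts, and the trivial keyword logic stands in for the LLM calls the real pipeline makes:

```typescript
// Illustrative pipeline sketch. In the real project, stages 1 and 3 are
// LLM-backed and stage 2 calls the YouTube search API.

interface Tweet { text: string; likedByUser: boolean; }
interface Topic { name: string; }
interface VideoClip { videoId: string; title: string; startSec: number; endSec: number; }

// Stage 1: infer topics from tweets (crude keyword stub in place of an LLM).
function inferTopics(tweets: Tweet[]): Topic[] {
  const names = new Set<string>();
  for (const tweet of tweets) {
    for (const word of tweet.text.toLowerCase().split(/\W+/)) {
      if (word.length > 6) names.add(word); // pretend long words are "topics"
    }
  }
  return [...names].map((name) => ({ name }));
}

// Stage 2: search YouTube for each topic (stubbed with canned results).
function searchYouTube(topics: Topic[]): VideoClip[] {
  return topics.map((topic, i) => ({
    videoId: `vid-${i}`,
    title: `A talk about ${topic.name}`,
    startSec: 0,
    endSec: 3600,
  }));
}

// Stage 3: narrow each video down to the most relevant clip
// (stubbed: always picks the same window).
function selectClips(videos: VideoClip[]): VideoClip[] {
  return videos.map((video) => ({ ...video, startSec: 120, endSec: 300 }));
}

function runPipeline(tweets: Tweet[]): VideoClip[] {
  return selectClips(searchYouTube(inferTopics(tweets)));
}
```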
I have been curious about the idea of open recommendation systems for years. I've always found it unnerving how much control third parties have over the content I see. And I'm frustrated by the misalignment between the objective function of most platforms' recommendation algorithms and my personal reason for using the platform - platforms want to keep me scrolling to sell my attention to advertisers. But I want my recommendation system to be built with the purpose of improving my life and helping me make progress towards my goals.
When we go on our Facebook feed or that of any other social media site, we are at their recommendation algorithms' mercy. They presumably optimize for clicks, time spent, and endless scrolling. That's what they want us to do, but is that what we want out of Facebook? - The Importance of Open Recommender Systems - Erik Bjäreholt
What I dreamt of back in university was a system that could infer my interests from my daily activities, like flashcard reviews, reading behaviour and browsing habits, and use that information to recommend videos and podcasts from YouTube that I could watch in the evening after school. I never got anything off the ground until a couple of weeks ago, when I revisited Erik Bjäreholt's blog post on Open Recommender Systems and realised that LLMs have made sophisticated, customisable and explainable recommendation systems easier than ever to build!
On top of that, massive price decreases and the increased performance of cheap, fine-tunable open source models have made this economically viable too. I reached out to a company called OpenPipe who specialise in helping companies incrementally replace expensive OpenAI GPT-4 prompts with faster, cheaper fine-tuned models and they were kind enough to sponsor all of the OpenAI calls and fine-tuning costs for this project!
They have a super simple drop-in replacement for OpenAI's library which records your requests into a simple web interface to help you curate a dataset and fine-tune a model. I am extremely grateful for their support.
Open Recommender
Goals
Here are some of the goals of Open Recommender.
Understand the user's interests
We will use users' public Twitter feeds as a proxy for their current interests. This is ethical because it relies purely on public information and it's an effective data source because people interact with tweets that are related in some way to their current interests. Of course, not everyone has an active Twitter account, but it's a good place to start. For the time being you can consider Open Recommender to be "the recommendation system for the terminally online".
Customizable and Explainable
No more black box mystery algorithms - LLMs are the perfect replacement! Users can provide custom instructions in natural language, and the LLM can give understandable explanations for its recommendations. For example, it can explain which Twitter posts influenced its decision to recommend you a certain podcast or interview.
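As a sketch of what this could look like in practice, here is a hypothetical prompt builder that asks the model to cite the tweets behind a recommendation. The function and its wording are illustrative only, not the project's actual prompts:

```typescript
// Hypothetical explainability prompt: number the tweets so the model can
// reference them explicitly in its explanation.
function buildExplanationPrompt(tweets: string[], videoTitle: string): string {
  const numberedTweets = tweets.map((t, i) => `${i + 1}. ${t}`).join("\n");
  return [
    "The user recently engaged with these tweets:",
    numberedTweets,
    "",
    `We recommended the video "${videoTitle}".`,
    "Explain why this video fits the user's interests, citing the tweet numbers that influenced the recommendation.",
  ].join("\n");
}
```

The numbering is the important design choice: giving each tweet a stable index lets the model (and the user reading its answer) point at concrete evidence instead of vague summaries.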
Recommend interesting clips from videos
I want to experiment with recommending smaller units of content, similar to YouTube shorts. There's so much great information buried in 4 hour long podcasts that I don't have time to watch. I want the recommendation system to show me the specific clip I'll find most interesting. Then I can decide whether to continue watching the whole thing.
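One simple way to find such a clip is to slide a fixed-size window over the video's transcript and keep the highest-scoring chunk. The sketch below takes a caller-supplied scoring function; in the real pipeline an LLM would score each chunk against the user's inferred interests. All names here are hypothetical:

```typescript
interface TranscriptCue { startSec: number; endSec: number; text: string; }

// Slide a window of `windowSize` cues over the transcript and return the
// span whose concatenated text scores highest.
function bestClip(
  cues: TranscriptCue[],
  score: (text: string) => number,
  windowSize = 3
): { startSec: number; endSec: number; text: string } {
  let best = { startSec: 0, endSec: 0, text: "", s: -Infinity };
  for (let i = 0; i + windowSize <= cues.length; i++) {
    const chunk = cues.slice(i, i + windowSize);
    const text = chunk.map((c) => c.text).join(" ");
    const s = score(text);
    if (s > best.s) {
      best = {
        startSec: chunk[0].startSec,
        endSec: chunk[chunk.length - 1].endSec,
        text,
        s,
      };
    }
  }
  return { startSec: best.startSec, endSec: best.endSec, text: best.text };
}
```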
Recommend "timeless" content
It's not always the case that newer videos are better. Current recommendation algorithms are biased towards trends and virality. In addition to the latest and greatest, I also want to be able to recommend old videos which have stood the test of time.
Biased towards learning as opposed to entertainment
I love the user interface of YouTube shorts. It reminds me a lot of incremental reading which I'm a huge fan of. I just wish the content wasn't so sensationalist, clickbaity and trashy.
Current State
I've already finished the MVP of the data processing pipeline. Here's a diagram, or you can take a scroll through src/pipeline/main.ts. It's actually quite a simple set of steps to go from Twitter data to YouTube video recommendations!
Since Open Recommender is open source, you can even run the current version yourself right now by following the installation guide, but be warned - it can get expensive!
The next steps for the project are to continue iterating on the GPT-4 prompts to improve the quality of the recommendations. So far for each pipeline run, roughly half of the recommendations are good and half are kinda meh. The goal is to improve this ratio to the point where 80% of the recommended videos are good. At that point I will transition to fine-tuning to bring down the cost. Then we can start getting some users!
To quote Kyle, one of the founders of OpenPipe:
In the next few weeks I'll also be writing more articles with code snippets and technical details about the lessons I learned building the data processing pipeline for Open Recommender, especially regarding prompt engineering and iteration. Looking forward to any ideas you have and can't wait to share the progress with you!