I'm Matt, and I created Safe Streaming in April. The API service gives developers a way to deliver video content recommendations to users.
In simple terms, the service gives parents and survivors of trauma more information about video content, empowering them to make a safe choice before clicking play.
Our newest feature set is foundational to the continued development of our Alexa Skill.
Our new tagging system detects and covers sensitivities across these categories:
- Bad Language
- Sexual Assault (implied or explicit)
- Substance Abuse
We now have 68 thousand sensitivity tags covering 46 thousand movies and TV shows.
It goes deeper though.
With the API you can set up individual consumers with their unique needs and deliver instant recommendations. A given recommendation will show exactly what issues a piece of content contains and how it connects to a consumer's preferences.
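As a rough sketch of what that per-consumer check could look like, here is a minimal example. The data shapes, titles, and function names are invented for illustration and are not the real API's schema:

```python
# Hypothetical sketch: surface only the tags on a title that intersect a
# consumer's saved sensitivities. Catalog contents are illustrative only.

CATALOG = {
    "Example Movie": {"bad_language", "substance_abuse"},
}

def check_title(title, consumer_sensitivities):
    """Return the title's tags that match this consumer's sensitivities."""
    tags = CATALOG.get(title, set())
    return sorted(tags & consumer_sensitivities)

# A consumer sensitive to two categories sees only the issues this title has.
flagged = check_title("Example Movie", {"sexual_assault", "substance_abuse"})
print(flagged)  # → ['substance_abuse']
```

The key design point the post describes is that intersection: a recommendation shows exactly which of a title's issues connect to that consumer's preferences, not the full tag list.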
This is the missing piece of the puzzle: clearer labels for video, so viewers can avoid sensitive content.
It's probably the most data-rich content labeling system in the industry #boldaf 😎
I mentioned earlier that we are working on an Alexa Skill to complement the API. After the release and some marketing, I will be running a paid hackathon and looking for innovative solutions.
Happy to answer any questions.
Fancy a chat with me? 👇
Top comments (5)
Thank you for your reply. The API doesn't rely too much on AI/ML at the moment, although we are exploring a couple of avenues in that direction, which is exciting.
The API is driven by a lot of intelligent data from a diverse set of sources. If you google a film along with "parents guide", you'll find a range of websites that effectively provide reviews of movies. The benefit is that this data was written for the purpose of helping people: it is keyword-rich and enables us to detect whether an issue is implicit rather than explicit. The problem with many of these services is that each has its own biases. Long story short, we have aggregated some of the sources, done a lot of data cleaning, and applied keyword analysis. We also consume a number of other sources to further strengthen our dataset.
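As a toy illustration of that keyword-analysis step, here is one way implicit versus explicit mentions could be distinguished in review text. The keyword lists and category names are invented for the example, not Safe Streaming's actual rules:

```python
# Toy keyword analysis over parents-guide-style review text: tag a category
# and guess whether the mention is explicit or implied. Lists are illustrative.

EXPLICIT_KEYWORDS = {"substance_abuse": ["drug use", "injecting"]}
IMPLICIT_KEYWORDS = {"substance_abuse": ["implied drug", "references to drugs"]}

def tag_review(text):
    """Return (category, kind) pairs detected in a piece of review text."""
    text = text.lower()
    tags = []
    for category, words in EXPLICIT_KEYWORDS.items():
        if any(w in text for w in words):
            tags.append((category, "explicit"))
        elif any(w in text for w in IMPLICIT_KEYWORDS.get(category, [])):
            tags.append((category, "implicit"))
    return tags

print(tag_review("There are references to drugs but nothing is shown."))
# → [('substance_abuse', 'implicit')]
```

In practice you'd aggregate hits across multiple sources to dampen any single site's bias, as the post describes.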
It would be trivial to apply these techniques to film and TV show scripts, and possibly to audio.
This is interesting, I've worked on a similar project before at my former workplace, but on text data. Are the 68 thousand tags organized into some kind of hierarchical taxonomy? Seems like a whole lot of tags! When you say you're looking to provide "clearer labels" for video, does that mean there's an existing set of tags, but they're messy, or does it mean you're looking to improve the API's ability to tag videos in general (i.e. finding new sources of labeled data, automating labeling using ML/AI, etc.)?
So, a couple of things: we have a core set of tags; consider them categories. At the moment we have 8 or so, and our 46k pieces of content have been tagged 68k times with those categories. New categories/tags will be added going forward.
We are primarily using text at this time, but we are exploring other media and more ML-like predictive techniques, taking the approach of finding new sources of data and tagging accordingly. I imagine the service will become more sophisticated over time.
The best way to think about it is what film ratings provide, but taken to a degree where we utilize raw community data, review data, and anything else we can get our hands on.
So for the Alexa skill we are building, imagine being able to save content preferences for your friends and each member of your family, then ask whether a film is suitable based on the people sitting on the sofa with you.
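A minimal sketch of that group check, assuming each viewer has a saved set of sensitivities and a film is "suitable" only if its tags clash with nobody present (all names and data here are hypothetical, not the skill's actual logic):

```python
# Hypothetical group-suitability check for the Alexa-skill scenario:
# a film passes only if its tags avoid every present viewer's sensitivities.

def suitable_for_group(film_tags, viewers):
    """viewers: dict of name -> set of sensitivity tags that viewer avoids."""
    combined = set().union(*viewers.values()) if viewers else set()
    return not (film_tags & combined)

family = {
    "me": {"sexual_assault"},
    "kid": {"bad_language", "substance_abuse"},
}
print(suitable_for_group({"bad_language"}, family))  # → False (clashes with "kid")
print(suitable_for_group(set(), family))             # → True
```

Taking the union of everyone's sensitivities means the strictest viewer in the room sets the bar, which matches the "based on who's on the sofa" idea.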
With this feedback in mind, I probably need to make the copy on my landing page a touch clearer.