I joined #100DaysofCode because it provides structure and reinforces commitments, something that's important if you're like me and always starting (but not finishing) side projects 😅
My goal with this round is to work on Handsfree.js - a wrapper library around computer vision models for interacting with pages handsfree. My mission is to help you leverage computer vision without needing to know how each model works, and my vision is a fully handsfree web!
The following is a look back at what I did over my first week of the challenge.
Made a game called "Smile Tiles" to test the accuracy of Handsfree.js over time: handsfree.js.org/#/smile-tiles
- Click "Start Webcam" on the top right
- Smile/smirk to click
- Black tiles increase your score/time
- White tiles reset everything
Day 4 of #100DaysofMLCode and #100DaysofCode · 23:31 · 07 Nov 2019
Most of my focus was on getting a simple "face pointer" working. The idea is that by knowing your head pose, we can project an imaginary laser beam from between your eyes onto the screen. Then, through different face gestures, we can treat the pointer like a mouse by dispatching native mouse events.

This means that any mouse-based listeners you have on your existing page will work out of the box without any extra coding! To test that the idea works, I created a simple game where you have to click the black tiles within a certain amount of time.
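The projection itself can be sketched with a little trigonometry. The snippet below is a hypothetical illustration, not Handsfree.js's actual implementation: it assumes pitch/yaw angles in radians (0 meaning "facing the screen") and a made-up `distance` constant for how far the screen plane sits in front of the face.

```javascript
// Hypothetical sketch: project a "laser beam" from the head onto the screen.
// pitch/yaw are head rotation in radians (assumed convention: 0 = facing screen).
const clamp = (v, min, max) => Math.min(max, Math.max(min, v))

function projectPointer (pitch, yaw, screenWidth, screenHeight, distance = 600) {
  // Treat the screen as a plane `distance` pixels in front of the face,
  // intersect the gaze ray with it, then clamp to the viewport
  const x = screenWidth / 2 + Math.tan(yaw) * distance
  const y = screenHeight / 2 - Math.tan(pitch) * distance
  return {
    x: clamp(Math.round(x), 0, screenWidth),
    y: clamp(Math.round(y), 0, screenHeight)
  }
}

// Looking straight ahead lands in the middle of a 1280x720 viewport:
// projectPointer(0, 0, 1280, 720) → { x: 640, y: 360 }
```

Once you have `{ x, y }`, making existing listeners "just work" is a matter of dispatching a standard DOM event at that point, e.g. `el.dispatchEvent(new MouseEvent('click', { clientX: x, clientY: y, bubbles: true }))`.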
I hope to make it more accurate over time 😅
Added a more exciting 360 video so that you can really check out the handsfree controls: handsfree-youtube-360.glitch.me
Also wrote a step-by-step tutorial on how to set this demo up with Handsfree.js: dev.to/heyozramos/con…
Day 6 of #100DaysofMLCode and #100DaysofCode · 03:57 AM · 10 Nov 2019
Because I'm already using handsfree.head.rotation to calculate where to place the cursor, we can use that same data to control other things on the page. To test this, I made a simple demo that controls a YouTube 360 video with your head.
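The glue between the two could look something like the sketch below. It assumes handsfree.head.rotation is a `[pitch, yaw, roll]` tuple in radians (an assumption for illustration), and converts it into the degree ranges the YouTube IFrame Player API expects for 360 videos (yaw 0–360, pitch −90–90):

```javascript
// Hypothetical sketch: convert an assumed [pitch, yaw, roll] rotation (radians)
// into the spherical properties a 360 video player expects (degrees)
const toDeg = rad => rad * 180 / Math.PI

function rotationToSpherical ([pitch, yaw, roll], sensitivity = 2) {
  return {
    // Amplify small head movements so you don't have to crane your neck,
    // then normalize yaw into [0, 360)
    yaw: ((toDeg(yaw) * sensitivity) % 360 + 360) % 360,
    pitch: Math.max(-90, Math.min(90, toDeg(pitch) * sensitivity)),
    roll: toDeg(roll)
  }
}
```

On each frame you would then feed the result to the player, e.g. `player.setSphericalProperties(rotationToSpherical(handsfree.head.rotation))` with the YouTube IFrame API.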
One of the final things to test with handsfree.head are the morph values, which are floats between 0 and 1 that measure how much a part of your face is being activated. To test this, I started working on a demo that matches an emoji to your face!
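One way to do the matching is nearest-neighbor over a set of hand-tuned "recipes". The morph names and recipe values below are invented for illustration; they are not Handsfree.js's actual schema:

```javascript
// Hypothetical sketch: pick the emoji whose "recipe" of morph activations
// (each a float in [0, 1]) is closest to the current face
const EMOJIS = [
  { emoji: '😐', recipe: { smile: 0.0, browsUp: 0.0, mouthOpen: 0.0 } },
  { emoji: '😄', recipe: { smile: 1.0, browsUp: 0.2, mouthOpen: 0.6 } },
  { emoji: '😮', recipe: { smile: 0.0, browsUp: 0.8, mouthOpen: 1.0 } }
]

function matchEmoji (morphs) {
  let best = { emoji: '😐', distance: Infinity }
  for (const { emoji, recipe } of EMOJIS) {
    // Euclidean distance between the face's morphs and this recipe
    const distance = Math.sqrt(
      Object.keys(recipe)
        .reduce((sum, key) => sum + ((morphs[key] || 0) - recipe[key]) ** 2, 0)
    )
    if (distance < best.distance) best = { emoji, distance }
  }
  return best.emoji
}

// A wide smile with an open mouth matches the grin:
// matchEmoji({ smile: 0.9, browsUp: 0.1, mouthOpen: 0.5 }) → '😄'
```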
On this day, I also created a "Pitch Deck" which I'll be using to help me get grants and funding for the project over the next few months. You can take a peek at the slides to see the metrics I'm using to hopefully justify funding.
I hope to finish the Emoji demo by tomorrow, which will add several new events to Handsfree.js, like isWinking. I really love the process of iterating on the library by first creating a demo to see what needs to be done and then adding those features.
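A gesture event like isWinking could be derived from a morph value with a bit of hysteresis, so the flag doesn't flicker when the value hovers around a single cutoff. The thresholds below are guesses for illustration, not values from Handsfree.js:

```javascript
// Hypothetical sketch: turn a continuous morph value into a stable boolean
// gesture flag using two thresholds (hysteresis)
function createGestureFlag (onThreshold = 0.8, offThreshold = 0.5) {
  let active = false
  return function update (morph) {
    if (!active && morph >= onThreshold) active = true
    else if (active && morph <= offThreshold) active = false
    return active
  }
}

const isWinking = createGestureFlag()
isWinking(0.9) // → true  (crossed the "on" threshold)
isWinking(0.6) // → true  (still above the "off" threshold)
isWinking(0.4) // → false (released)
```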
The next handsfree.head features will be based on the head's position in 3D space, which I'll demo by creating a simple 3D face-painting app (smile to draw, move your head to move the brush). I also hope to start working on a plugin for P5.js and to start applying for grants.
Thanks for reading! Here are some other topics I created this week:
Have fun coding!