DEV Community

Oz Ramos


Puppeteering Emojis with face morphs through Handsfree.js


Face morphs are powerful properties of many computer vision models that represent how much a given facial feature is activated. They can be used to puppeteer 3D models and 2D cartoons, or to trigger events, like sounding an alert when a driver becomes drowsy or snapping a photo when you make a perfect smile 😊

Usually these numbers fall within either [0, 1] or [-1, 1], and with Weboji through Handsfree.js I've abstracted them into a few new properties I call "morph activations", like instance.head.state.eyesClosed and instance.head.state.pursed. We'll explore these properties by making a simple "emoji puppeteering" app.
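To make the relationship concrete, here's a minimal sketch of how a boolean activation could relate to a raw morph value. This is for illustration only, not the library's actual internals: the activate helper and the 0.7 default are assumptions.

```javascript
// Hypothetical helper: a boolean "activation" is just the raw
// morph value crossing a threshold
function activate(raw, threshold = 0.7) {
  return raw > threshold
}

console.log(activate(0.9)) // strongly pursed lips → true
console.log(activate(0.3)) // slightly pursed lips → false
```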

Setting up

As usual, we'll include our dependencies. These new properties are available as of version 5.0.5:

    <!-- Handsfree dependencies -->
    <link rel="stylesheet" href="" />
    <script src=""></script>

Next, we'll add a start/stop button and an element to hold our Emoji:

    <button onclick="handsfree.start()">Start Webcam</button>
    <button onclick="handsfree.stop()">Stop Webcam</button>

    <div id="emoji">😀</div>

Finally, we'll grab a reference to the emoji element and instantiate Handsfree:

    const $emoji = document.querySelector('#emoji')
    window.handsfree = new Handsfree()

Adding our Emoji plugin

Now we'll add a plugin named "emojify". If you recall from our getting started tutorial, a plugin is simply a labeled callback that runs on every webcam frame: Handsfree.use('nameOfPlugin', ({head}) => {}). Let's start the emoji plugin:

    Handsfree.use('emojify', ({head}) => {
      let emoji = '😐'
      let isFlipped = false

      // head.state contains the list of activated morphs;
      // we alias it here so we don't have to type it out each time
      const state = head.state

      // Let's start with some easy ones
      if (state.mouthOpen) emoji = '😃'
      if (state.browsUp) emoji = '🙄'
      if (state.smile) emoji = '🙂'

      // Some emojis can be made by combining activations
      if (state.eyesClosed && state.pursed) emoji = '😙'
      if (state.mouthOpen && state.eyesClosed && state.browsUp) emoji = '😂'
      if (!state.mouthClosed && state.pursed && state.browsUp) emoji = '😲'

      // Flip the emoji if the user smirks the other way
      // (a smirk happens if and only if the user smiles to one side)
      if (state.smirk && state.browsUp) {
        if (state.smileLeft) isFlipped = true
        emoji = '😏'
      }

      // Apply transforms
      $emoji.style.transform = `perspective(1000px)
            scale(${isFlipped ? -1 : 1}, 1)`

      // Show the emoji
      $emoji.innerText = emoji
    })

As you can see, it's quite easy to mix and match activations! For a complete list of head activations, check out the wiki page for head properties.

Check out my demo to see what other emojis I've mapped. Order can sometimes matter!
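Why does order matter? Each `if` statement overwrites the previous assignment, so the last matching condition wins, and broader single checks should come before the more specific combinations. Here's a standalone sketch of that idea (the emojiFor helper is hypothetical, not part of Handsfree.js):

```javascript
// Later `if` statements overwrite earlier ones, so put the most
// specific combinations last
function emojiFor(state) {
  let emoji = '😐'
  if (state.mouthOpen) emoji = '😃'
  if (state.eyesClosed) emoji = '😌'
  // If this combined check came first, the single checks above
  // would overwrite its result
  if (state.mouthOpen && state.eyesClosed) emoji = '😂'
  return emoji
}

console.log(emojiFor({ mouthOpen: true }))                   // 😃
console.log(emojiFor({ mouthOpen: true, eyesClosed: true })) // 😂
```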

Configuring activation thresholds

To configure the thresholds for these activations, that is, how pursed your lips need to be to activate head.state.pursed, check out the config section.

For instance, let's say that you're building an accessibility tool for someone who can't quite smile to the right all the way. In this case, you might lower the activation threshold:

    handsfree = new Handsfree({
        head: {
            threshold: {
                // The default is 0.7
                smileRight: 0.2
            }
        }
    })

Going further

This post explained how to use the new morph activation properties of Handsfree.js, but you can take this much further! Remember, handsfree.head.morphs contains a set of raw morph values in [0, 1] which you can use to morph 3D models in real time and so much more.
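For instance, instead of relying on the boolean activations you could map a raw morph value directly onto a CSS transform. Here's a sketch assuming morph is a single raw value in [0, 1]; the morphToScale helper is made up for illustration:

```javascript
// Map a raw morph value in [0, 1] onto a scale factor in [min, max]
function morphToScale(morph, min = 1, max = 2) {
  // Clamp to [0, 1] to guard against out-of-range values
  const clamped = Math.min(1, Math.max(0, morph))
  return min + clamped * (max - min)
}

// Inside a plugin you might then write something like:
// $emoji.style.transform = `scale(${morphToScale(someRawMorph)})`

console.log(morphToScale(0))   // 1
console.log(morphToScale(0.5)) // 1.5
console.log(morphToScale(1))   // 2
```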

Thanks for reading!

We have one more head property to learn, position, which tells us where in 3D space the head is. This can be useful for things like AR apps, "zooming" gestures, and other things we'll explore soon.

Here are some other links to keep you going:

Have fun coding πŸ‘‹


  • 11/23/19 - Made updates to reflect new v6 API
