Daniel Elkington
Vue Camera Gestures - add AI powered gesture controls to your app in 1 line of HTML

Consider a basic Vue application: there's a ball, and clicking the "Left" or "Right" button makes it move in that direction.
Red ball in a box with "Left" and "Right" buttons underneath. Clicking the buttons moves the ball in that direction.
Suppose we have a component called MovingBall that handles displaying and moving the ball, and that we use it as follows:

```html
<moving-ball :direction="direction"></moving-ball>
<button @click="left">Left</button>
<button @click="right">Right</button>
```

```js
import MovingBall from './MovingBall'

export default {
  components: { MovingBall },
  data: function () {
    return {
      direction: null
    }
  },
  methods: {
    left () {
      this.direction = 'left'
    },
    right () {
      this.direction = 'right'
    }
  }
}
```

Controlling the ball using buttons is a bit boring. What if you could instead control it by gesturing a direction to your device's camera?

If you think that would take a lot of code to achieve, think again! I've just released a library, Vue Camera Gestures, that will let you do that in just one line in your component's template. We just need to install the library,

```shell
npm install vue-camera-gestures
```

import and register the component,

```js
import CameraGestures from 'vue-camera-gestures'

export default {
  components: { CameraGestures }
}
```

and put the following line into our template, under the buttons.

```html
<camera-gestures @left="left" @right="right"></camera-gestures>
```

And that's it! When the user opens the page, they'll be prompted to train a left and a right gesture, verify them, and then they can control the ball using the trained gestures.
In a Chrome window, a camera feed is displayed with instructions to maintain a neutral position, and perform "left" and "right" gestures. The user does so. After this the user can use the trained gestures to control the movement of a ball.

Note that the gestures don't have to involve pointing; the user can perform basically anything they're comfortable with. Under the hood, transfer learning is used to adapt an existing image recognition model: each frame of video is converted into an embedding, and a KNN (k-nearest-neighbours) classifier finds the 10 frames from the training phase that are closest to the current frame. We assume the user is performing whichever gesture accounts for the most of those matching frames.
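To make the KNN idea concrete, here's a minimal sketch of nearest-neighbour voting over embeddings. This is not the library's actual code: the function names are made up, the "embeddings" are toy 1-D arrays, and in the real library they would come from a pretrained image model.

```javascript
// Distance between two embedding vectors.
function euclidean (a, b) {
  let sum = 0
  for (let i = 0; i < a.length; i++) {
    const d = a[i] - b[i]
    sum += d * d
  }
  return Math.sqrt(sum)
}

// Classify `frame` by majority vote among its k nearest training examples.
function knnClassify (trainingSet, frame, k = 10) {
  const neighbours = trainingSet
    .map(ex => ({ label: ex.label, dist: euclidean(ex.embedding, frame) }))
    .sort((a, b) => a.dist - b.dist)
    .slice(0, k)
  const votes = {}
  for (const n of neighbours) {
    votes[n.label] = (votes[n.label] || 0) + 1
  }
  // Return the label with the most votes among the k neighbours.
  return Object.keys(votes).reduce((best, label) =>
    votes[label] > votes[best] ? label : best)
}

// Toy training data: two "left" frames and two "right" frames.
const training = [
  { label: 'left', embedding: [0.1] },
  { label: 'left', embedding: [0.2] },
  { label: 'right', embedding: [0.9] },
  { label: 'right', embedding: [0.8] }
]
console.log(knnClassify(training, [0.15], 3)) // logs 'left'
```

The same voting step runs on every frame of video once training is done, which is why classification stays fast even though the underlying image model is large.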

But you don't need to know any of that; you can just use the component and have all the AI complexity abstracted away. The camera-gestures component's events are completely configurable, and you can have as many of them as you like. In a blogging app, for example, you could listen for five custom events: the user would be prompted to train five gestures, and after training, the specified events fire whenever the user performs the associated gesture.
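As a sketch, that might look like the following. These event names are purely illustrative (any event names work; they aren't part of the library's API):

```html
<camera-gestures
  @previousPost="previousPost"
  @nextPost="nextPost"
  @like="likePost"
  @comment="openComments"
  @share="sharePost"
></camera-gestures>
```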

Give it a go! Easily make your app more accessible/fun! Find a demo and full documentation at and explore the source code here.
