Team XenoX

JavaScript Quickies: Controlling 3D Objects with Hands 🤯

Sarthak Sharma ・ 6 min read

Hey guys, what's up? We at Team XenoX are really excited to announce that we're starting a new series of articles called JavaScript Quickies. These will be quick experiments that you can do in JavaScript to explore something new in technology. Thanks to JavaScript, we can just plug in various modules and create anything. The only limit is your imagination.

The Idea 💡

We all have our favorite moments from Sci-Fi movies. These moments are extra special for us developers because we can't help but wonder how all the cool sci-fi tricks that we see on the screen could be turned into reality. Whenever I see something like that, my mind immediately jumps into top gear and I start thinking about all the technical possibilities. There's a child-like fascination attached to it that I absolutely love.

I remember watching Iron Man as a teen and being completely amazed by the scene where he interacts with holographic objects in his lab. As I recalled that scene, I got to thinking whether I could create something similar, something that sparked the same kind of joy.

Of course, we don't have all that juicy tech to create the exact same effect, at least not yet. But we can surely try to make something almost as cool from what we already have. So I made this cool little project over the weekend to share with you guys.

Buckle up, Avengers! Let's create The Hand Trick.

🤓 TRY NOW 🤓

Requirements 🧺

I made this using vanilla JavaScript, so you should have a basic understanding of JavaScript to follow this tutorial. Other than that, I have used two libraries:
1. Three.js 👉🏼 link
2. Handtrack.js 👉🏼 link

That's it.

Let's code now 👩🏽‍💻

The HTML side of the code is very simple. We are just including the libraries here and adding the div to render the camera feed in the browser:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <meta http-equiv="X-UA-Compatible" content="ie=edge" />
    <title>The Hand Trick</title>
    <link rel="stylesheet" href="css/style.css" />
  </head>
  <body>
    <!-- Video for handtracker -->
    <div class="tracker">
      <video id="myvideo"></video>
      <canvas id="canvas" class="border"></canvas>
      <button id="trackbutton" disabled onclick="toggleVideo()">Button</button>
      <div id="updatenote">hello</div>
    </div>
    <div class="data">
      <div class="hand-1">
        <p id="hand-x">X: <span>0</span></p>
        <p id="hand-y">Y: <span>0</span></p>
      </div>
    </div>
    <script src="js/three.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/handtrackjs/dist/handtrack.min.js"></script>
    <script src="js/scene.js"></script>
  </body>
</html>

Once that is done, let's quickly jump back to the JavaScript side of things. If you know Three.js, you can skip this part. For everyone else: here we create a scene and set up its basic details.

// Setting scene for 3D Object
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(
  75,
  window.innerWidth / window.innerHeight,
  0.1,
  1000
);
var vector = new THREE.Vector3();
var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

After that, let's create a 3D object to render in the scene. Here we define the geometry of the box and the type of material for its mesh.

// Creating 3D object
var geometry = new THREE.BoxGeometry(1, 2, 1);
var material = new THREE.MeshBasicMaterial({
  color: "rgba(3, 197, 221, 0.81)",
  wireframe: true,
  wireframeLinewidth: 1
});

var cube = new THREE.Mesh(geometry, material);

scene.add(cube);
camera.position.z = 5;

This step is optional, for when you want the object to rotate in 3D. It just looks cooler this way.

// Optional animation to rotate the element
var animate = function() {
  requestAnimationFrame(animate);
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
};

animate();

That's all we need to do with Three.js. Now let's play around with Handtrack.js.

// Grabbing the DOM elements for the video input and tracker canvas
const video = document.getElementById("myvideo");
const canvas = document.getElementById("canvas");
const context = canvas.getContext("2d");
let trackButton = document.getElementById("trackbutton");
let updateNote = document.getElementById("updatenote");

let isVideo = false;
let model = null;

// Params to initialize Handtrack.js
const modelParams = {
  flipHorizontal: true,
  maxNumBoxes: 1,
  iouThreshold: 0.5,
  scoreThreshold: 0.7
};

handTrack.load(modelParams).then(lmodel => {
  model = lmodel;
  updateNote.innerText = "Loaded Model!";
  trackButton.disabled = false;
});

We define the parameters here before loading Handtrack.js, but this step is optional; you can also pass an empty object. The handTrack.load() method loads the model. Once Handtrack.js has loaded it, let's write the functions that load the video stream into the canvas we defined in the HTML. For that, we use the handTrack.startVideo() method.

// Method to start a video
function startVideo() {
  handTrack.startVideo(video).then(function(status) {
    if (status) {
      updateNote.innerText = "Video started. Now tracking";
      isVideo = true;
      runDetection();
    } else {
      updateNote.innerText = "Please enable video";
    }
  });
}

// Method to toggle a video
function toggleVideo() {
  if (!isVideo) {
    updateNote.innerText = "Starting video";
    startVideo();
  } else {
    updateNote.innerText = "Stopping video";
    handTrack.stopVideo(video);
    isVideo = false;
    updateNote.innerText = "Video stopped";
  }
}

Now we can write the code to get prediction data from Handtrack.js:

//Method to detect movement
function runDetection() {
  model.detect(video).then(predictions => {
    model.renderPredictions(predictions, canvas, context, video);
    if (isVideo) {
      requestAnimationFrame(runDetection);
    }
  });
}

The Real Trick 🧙🏼‍♂️

All the above code can basically be copy-pasted from the documentation of the two libraries. The real challenge was integrating them to get the desired result.

The trick is to track the coordinates of the hand on the video canvas and apply corresponding changes to the Three.js object.

The model.detect() method returns predictions that look like this:

{
  "bbox": [x, y, width, height],
  "class": "hand",
  "score": 0.8380282521247864
}

bbox gives you the x and y coordinates of the top-left corner of the box drawn around the hand, along with its width and height. But those coordinates are not the center point. To calculate the center point, we use this simple formula:

 let midvalX = value[0] + value[2] / 2;
 let midvalY = value[1] + value[3] / 2;

Another problem is that the object's canvas and the tracker's canvas use very different scales. Also, neither source measures coordinates from its center: the video canvas has its origin at the top-left corner, while Three.js places its origin at the center of the scene. To take care of that, we first shift the coordinates so that the origin of the video canvas sits at its center.

Once that's done, handling the scale is just a matter of normalizing. The final result looks something like this:

//Method to detect movement
function runDetection() {
  model.detect(video).then(predictions => {
    model.renderPredictions(predictions, canvas, context, video);
    if (isVideo) {
      requestAnimationFrame(runDetection);
    }
    if (predictions.length > 0) {
      changeData(predictions[0].bbox);
    }
  });
}

//Method to change prediction data into useful information
function changeData(value) {
  // Center point of the hand's bounding box
  let midvalX = value[0] + value[2] / 2;
  let midvalY = value[1] + value[3] / 2;

  document.querySelector(".hand-1 #hand-x span").innerHTML = midvalX;
  document.querySelector(".hand-1 #hand-y span").innerHTML = midvalY;

  // Shift the origin to the center of the video canvas (600x500 here),
  // then normalize to the range [-0.5, 0.5]
  moveTheBox({ x: (midvalX - 300) / 600, y: (midvalY - 250) / 500 });
}

//Method to use prediction data to render cube accordingly
function moveTheBox(value) {
  // Scale the normalized coordinates into world units; canvas y grows
  // downward while Three.js y grows upward, hence the sign flip
  cube.position.x = ((window.innerWidth * value.x) / window.innerWidth) * 5;
  cube.position.y = -((window.innerHeight * value.y) / window.innerHeight) * 5;
  renderer.render(scene, camera);
}
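The hard-coded 300/600 and 250/500 above assume a 600x500 tracker canvas. As a sketch of the same idea (the helper name and default scale here are my own, not from the post), the whole mapping can be written as one pure function that takes the canvas size explicitly:

```javascript
// Map a handTrack bbox ([x, y, width, height], origin at the canvas
// top-left) to Three.js world coordinates (origin at the scene center).
function bboxToWorld(bbox, canvasWidth, canvasHeight, scale = 5) {
  // Center point of the hand's bounding box on the video canvas
  const midX = bbox[0] + bbox[2] / 2;
  const midY = bbox[1] + bbox[3] / 2;

  // Shift the origin to the canvas center, then normalize to [-0.5, 0.5]
  const nx = (midX - canvasWidth / 2) / canvasWidth;
  const ny = (midY - canvasHeight / 2) / canvasHeight;

  // Scale into world units; canvas y grows downward, world y grows
  // upward, hence the sign flip
  return { x: nx * scale, y: -ny * scale };
}
```

With this, changeData could simply call `moveTheBox(bboxToWorld(value, canvas.width, canvas.height))`, and the code keeps working if you resize the tracker canvas.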

Well, that's it. You can now control the 3D object with your hand. I have made the code public on GitHub, so go check it out. Clone it, run it, and have fun with it.

GitHub: sarthology/thehandtrick

🖐🏼 Controlling 3D object with your hand


Wrap Up 👋🏻

The story has just begun. This was the first tutorial in the series, and I have plans to take this experiment one step further. I'd love to have some contributors. If you'd like to contribute to the project, just generate a pull request at XenoX Multiverse and I'll contact you.

Team XenoX started as a small team of devs working on open-source projects for the fun of it. But over the months, it has grown bigger and stronger. This is why I've created XenoX Multiverse, the home of all open-source initiatives by Team XenoX. If you want to be one of us, just write your name and start contributing!

Before I Go

We have a Telegram channel for Dev.to now! Get the best of Dev.to on the go, along with external articles, videos, and polls that we send daily!
๐Ÿ‘‰๐Ÿผ Link

Time for me to go now. That's all, folks! Remember that this is just a quick experiment to help you get your creative juices flowing. You can add more cool features to this, and if you feel like it's running sluggishly, you can optimize the code later. The point is to learn something new as quickly as possible. Hope you liked this post.

See ya later!


Discussion

 

I have a Leap Motion, which is a quality IR camera with a driver that can read hands and fingers with high precision. Sadly, they have deprecated browser support. Via JavaScript you could easily read the hands and fingers and respond to them.

I can tell you one thing. Waving your hands around for a while is quite tiring.

 

I did try these hand-tracking devices in the browser by writing a WebSocket server in C++ (or whatever language they provide an API for), then forwarding the skeletal coordinates as a JSON string. That was a pretty terrible experience: the motion path is not linear and comes out zig-zag shaped, and doing interpolation afterward resulted in huge delays.

Any suggestions? Thanks in advance :)

 
 

Well, that can also increase arm strength 😬

 

This is so cool, just imagine the possibilities of using the body as input.
You don't need any fancy sensors or hardware, just a tiny camera, which you can find almost everywhere. I'm not completely sure, but this could also be used for converting sign language to text or voice output. We could also build a hand-gesture-based game; my younger brother would like that a lot.

Cool and fascinating stuff.

 

Let's make something then 😊

 

Wow, so cool to do this with just a webcam and a browser :)

I made something similar in 2014 with a Leap Motion and a webcam in C++:

youtube.com/watch?v=8b99VDvN64M

 
 
 
 

Here

GitHub: sarthology/XenoXMultiverse

🔥🚀 XenoX Assemble!



 

Woahhh!! Seems like a cool one, but, but, but, Iron Man's way is best :p

 
 

This looks super cool. It would be nice if Handtrack.js could track hand gestures.
Thanks for the Telegram channel 🙏

 

I'm sure someone will build something on top of Handtrack.js to get that too. 😊😊

 

It was at this moment when all the jaws in the world dropped to the floor. 😮

 
 

Wow, that's impressive...

How about creating a full-body gaming interface? It could be a replacement for costly equipment, and it could also be extended to give gesture commands to a PC.

 

That's a great idea too. 🤓

 

Not bad. I tried your demo and it's interesting; it reminds me of something similar I saw using the Leap Motion device.