Pratik Pathak

Posted on • Originally published at pratikpathak.com

Age Calculation/Prediction App using HTML, CSS & JavaScript (TensorFlow.js)

Age Prediction JavaScript is an app that uses OpenCV.js and a TensorFlow.js machine learning model to predict a person's age from their face via face detection. It is built entirely on an open-source TF.js model.

Code for Age Prediction using JS

This project is a web-based application that uses OpenCV.js for real-time face detection and TensorFlow.js for age prediction from a webcam video stream.

Live Preview | Source Code
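
Based on the file paths referenced in the markup and script below, the project layout is roughly the following (the weight shard filename is an assumption; TensorFlow.js typically stores weights in one or more .bin files alongside model.json):

project/
├── index.html
├── script.js
├── haarcascade_frontalface_default.xml
├── js/
│   ├── utils.js
│   └── opencv.js
└── model/
    ├── model.json
    └── group1-shard1of1.bin   (assumed weight shard)

index.html: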

<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Opencv JS</title>
    <script src="js/utils.js"></script>
    <script async src="js/opencv.js" onload="openCvReady();"></script>

    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@2.0.0/dist/tf.min.js"></script>

    <style>
      canvas {
        position: absolute;
      }
      h2 {
        position: relative;
        top: -250px;
        right: 350px;
      }
      body {
        margin: 0;
        background-color: whitesmoke;
        padding: 0;
        width: 100vw;
        height: 100vh;
        display: flex;
        justify-content: center;
        align-items: center;
      }
    </style>
  </head>
  <body>
    <video id="cam_input" height="680" width="900"></video>
    <canvas id="canvas_output"></canvas>
    <script type="text/JavaScript" src="script.js"></script>
    <!-- <h2 id="output">Initializing</h2> -->
  </body>
</html>

script.js:

let model;

//openCvReady is the function that will be executed when the opencv.js file is loaded
function openCvReady() {
  cv["onRuntimeInitialized"] = () => {
    // The variable video holds a reference to the video element
    let video = document.getElementById("cam_input"); // video is the id of video tag
    // navigator.mediaDevices.getUserMedia is used to access the webcam
    navigator.mediaDevices
      .getUserMedia({ video: true, audio: false })
      .then(function (stream) {
        video.srcObject = stream;
        video.play();
      })
      .catch(function (err) {
        console.log("An error occurred! " + err);
      });

    //src and dst hold the source and destination image matrices
    let src = new cv.Mat(video.height, video.width, cv.CV_8UC4);
    let dst = new cv.Mat(video.height, video.width, cv.CV_8UC1);
    //gray holds the grayscale image of the src
    let gray = new cv.Mat();
    //cap is the VideoCapture object used to read frames from the video element
    let cap = new cv.VideoCapture(video);
    //faces is a RectVector that holds the rectangles of the detected faces
    let faces = new cv.RectVector();
    let predictions = "Detecting...";
    //classifier holds the classifier object
    let classifier = new cv.CascadeClassifier();
    let utils = new Utils("errorMessage");
    //crop holds the ROI of face
    let crop = new cv.Mat(video.height, video.width, cv.CV_8UC1);
    let dsize = new cv.Size(48, 48);

    // Loading the haar cascade face detector
    let faceCascadeFile = "haarcascade_frontalface_default.xml"; // path to xml
    utils.createFileFromUrl(faceCascadeFile, faceCascadeFile, () => {
      classifier.load(faceCascadeFile); // in the callback, load the cascade from file
    });

    //Loading the model asynchronously, since loading it may take a few milliseconds
    //The async IIFE takes no arguments and returns nothing;
    //the loaded model is stored in the global variable `model`
    (async () => {
      model = await tf.loadLayersModel("./model/model.json");
      console.log(model);
    })();

    const FPS = 30;
    // processVideo will be executed recursively via setTimeout
    function processVideo() {
      let begin = Date.now();
      cap.read(src);
      src.copyTo(dst);
      cv.cvtColor(dst, gray, cv.COLOR_RGBA2GRAY, 0); // converting to grayscale
      try {
        classifier.detectMultiScale(gray, faces, 1.1, 3, 0); // detecting the face
        console.log(faces.size());
      } catch (err) {
        console.log(err);
      }
      //iterating over all the detected faces
      for (let i = 0; i < faces.size(); ++i) {
        let face = faces.get(i);
        // filtering out boxes with an area of less than 40000 pixels
        if (face.width * face.height < 40000) {
          continue;
        }
        let point1 = new cv.Point(face.x, face.y);
        let point2 = new cv.Point(face.x + face.width, face.y + face.height);
        // creating the bounding box
        cv.rectangle(dst, point1, point2, [51, 255, 255, 255], 3);
        //creating a rect object that can be used to extract the face region
        let cutrect = new cv.Rect(face.x, face.y, face.width, face.height);
        //extracting the ROI
        crop = gray.roi(cutrect);

        cv.resize(crop, crop, dsize, 0, 0, cv.INTER_AREA);

        //converting the image matrix to a 4d tensor
        const input = tf.tensor4d(crop.data, [1, 48, 48, 1]).div(255);

        //console.log(input)
        //making the prediction and drawing it on the output canvas
        predictions = model.predict(input).dataSync();

        console.log(predictions);
        //adding the text above the bounding boxes
        cv.putText(
          dst,
          String(parseInt(predictions)),
          { x: face.x, y: face.y - 20 },
          1,
          3,
          [255, 128, 0, 255],
          4,
        );
      }

      // showing the final output
      cv.imshow("canvas_output", dst);

      let delay = 1000 / FPS - (Date.now() - begin);
      setTimeout(processVideo, delay);
    }
    // schedule first one.
    setTimeout(processVideo, 0);
  };
}


25+ more JavaScript projects for beginners, click here to learn more: https://pratikpathak.com/top-javascript-projects-with-source-code-github/

More about index.html

The HTML structure includes a <video> element (id “cam_input”) that displays the webcam stream and a <canvas> element (id “canvas_output”) that displays the processed output.

The project uses several JavaScript files:

  • utils.js is a utility script that provides helper functions used in the project; here it supplies the Utils class and createFileFromUrl, which script.js uses to fetch the Haar cascade XML (a sketch of this helper follows after this section).
  • opencv.js is the main OpenCV.js library, which provides a wide range of image and video processing functions.
  • tf.min.js is the TensorFlow.js library, which provides machine learning capabilities.
  • script.js is the main script of the application, which uses the above libraries to implement face detection and age prediction.

The CSS within the <style> tags positions the canvas and video elements and styles the body of the page, centering the content and setting a whitesmoke background.
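
For reference, script.js relies on utils.createFileFromUrl to download the cascade XML and write it into OpenCV.js's in-memory filesystem before classifier.load can read it. Below is a minimal sketch of what that helper typically looks like, modeled on the OpenCV.js tutorial utils; the exact implementation in this repository's js/utils.js may differ.

class Utils {
  constructor(errorOutputId) {
    this.errorOutputId = errorOutputId; // id of an element used to show errors
  }
  // Fetches `url` and writes its bytes to `path` in OpenCV.js's Emscripten
  // filesystem, then invokes `callback` so the caller can load the file.
  createFileFromUrl(path, url, callback) {
    const request = new XMLHttpRequest();
    request.open("GET", url, true);
    request.responseType = "arraybuffer";
    request.onload = () => {
      if (request.status === 200) {
        const data = new Uint8Array(request.response);
        cv.FS_createDataFile("/", path, data, true, false, false);
        callback();
      } else {
        console.error("Failed to load " + url + " (status " + request.status + ")");
      }
    };
    request.send();
  }
}

With a helper like this in place, script.js can call utils.createFileFromUrl(faceCascadeFile, faceCascadeFile, ...) exactly as shown above.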

More about script.js

The script.js file provides the functionality for a web-based application that uses OpenCV.js and TensorFlow.js for real-time face detection and age prediction from a webcam feed.

Here’s a breakdown of the JavaScript code:

  • openCvReady() is the function that will be executed when the OpenCV.js library is loaded.
  • Inside openCvReady(), the cv["onRuntimeInitialized"] function is defined to run once the OpenCV.js runtime is initialized.
  • The video from the webcam is accessed using navigator.mediaDevices.getUserMedia and displayed in the “cam_input” video element.
  • Several OpenCV.js objects are created to hold the video frames (src, dst, gray, cap), detected faces (faces), and the face classifier (classifier).
  • The Haar cascade face detector is loaded from an XML file using utils.createFileFromUrl and classifier.load.
  • The TensorFlow.js model is loaded from a JSON file using tf.loadLayersModel.
  • processVideo() is a function that is called repeatedly to process each frame of the video. It reads a frame from the video, converts it to grayscale, and detects faces in the frame using the Haar cascade classifier. For each detected face, it extracts the region of interest (ROI), resizes it to 48×48, and feeds it to the TensorFlow.js model for age prediction. The predicted age is then drawn above the face's bounding box, and the processed frame is displayed in the “canvas_output” canvas element (a memory-safe variant of this step is sketched after this list).
  • processVideo() schedules itself with setTimeout so that it runs at roughly 30 frames per second (FPS).
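
One thing the prediction step above does not do is release the tensors and Mats it creates on every frame, which can gradually exhaust WebGL and heap memory. Here is a hedged sketch of how the per-face prediction could be wrapped to clean up after itself; predictAge is a hypothetical helper, not part of the original script.

// Hypothetical helper: runs the model on a 48x48 grayscale face ROI and
// releases the intermediate tensors (tf.tidy) and the OpenCV Mat afterwards.
function predictAge(model, faceRoi) {
  const age = tf.tidy(() => {
    const input = tf.tensor4d(faceRoi.data, [1, 48, 48, 1]).div(255);
    return model.predict(input).dataSync()[0]; // single-value output assumed
  });
  faceRoi.delete(); // free the Mat returned by gray.roi(...)
  return Math.round(age);
}

Inside the loop, the existing cv.resize call would still run, and the drawing code would then use String(predictAge(model, crop)) in place of the current predict/dataSync lines.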

This script enables the application to perform real-time face detection and age prediction from a webcam feed.
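
Note that navigator.mediaDevices.getUserMedia is only available in a secure context (HTTPS or localhost), so it is worth failing early with a clear message. A small, optional pre-flight check:

// Optional guard before the camera setup in openCvReady(): warn if the
// browser will not expose the webcam API in this context.
if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
  console.error(
    "Camera access is unavailable. Serve the page over HTTPS or from localhost."
  );
}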
