Berge Maxim


How to automate attendance record with face recognition, Python and React


Taking attendance is a rather boring task. Let's see how we can automate it with artificial intelligence in Python and a nice user interface in React.

What do we want?

A system that checks whether an employee is on time or has left early, and that records their arrival and departure times.
We also want to be able to add or delete an employee.

How to do it?

We can place a camera in front of the door that recognizes employees and saves the time of their arrival and departure.
With that data we apply some simple conditions to determine whether they are late or left early.
We store this information in an online database to make it accessible from anywhere.
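
For example, here is a minimal sketch of such a condition. The 09:00 and 17:00 schedule below is an assumption for illustration, not something imposed by the project:

from datetime import time

# Assumed work schedule (placeholders, adjust to your own rules)
WORK_START = time(9, 0)
WORK_END = time(17, 0)

def is_late(arrival_time):
    # An employee is late if the first detection of the day is after the official start time
    return arrival_time > WORK_START

def left_early(departure_time):
    # An employee left early if the last detection of the day is before the official end time
    return departure_time < WORK_END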

How will the user be able to use this data?

With a simple web interface. We just want to add and delete an employee and check all the data we have about them.

Screenshot of the website:

With which technologies?

Logo of python, react and flask

To create the front end, we use React, which is perfect for displaying information in real time.

For the back end, we use Python with Flask to create an API that can receive requests and data, then send back an answer. For example, the API will receive a name, query the database for all the data about this person, and send that data back.

For the database, we use PostgreSQL, but any database engine would do the job.
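
The API we build later queries a users table with columns such as id, name, date, arrival_time, arrival_picture, departure_time and departure_picture. Here is a minimal schema sketch consistent with those queries; the column types are assumptions, adapt them as you like:

import psycopg2

# Connect with your own credentials (the values below are placeholders)
connection = psycopg2.connect(user="USER_NAME",
                              password="PASSWORD",
                              host="DB_HOST",
                              port="PORT",
                              database="DATABASE_NAME")
cursor = connection.cursor()

# A table layout consistent with the queries used by the API below
cursor.execute("""
    CREATE TABLE IF NOT EXISTS users (
        id SERIAL PRIMARY KEY,
        name TEXT NOT NULL,
        date TEXT NOT NULL,
        arrival_time TEXT,
        arrival_picture TEXT,
        departure_time TEXT,
        departure_picture TEXT
    );
""")
connection.commit()
cursor.close()
connection.close()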

For the face recognition, we use a python library called "face_recognition".

How will it work?

Let's describe the data processing flow of our web application.

As soon as the camera detects a face, it checks whether the person is in the system. If so, it retrieves the date, the person's name and the time of detection. The first time an employee is detected on a given day, an arrival time is assigned; every subsequent detection on the same day updates their departure time.

Let's detail how the data will travel.

A first script gets the video feed from the camera, detects people, records the time of detection and sends that information to our API. The API then asks the DB whether the employee has already been seen today, to determine whether the detected time is the arrival time or the departure time. Finally, it checks whether the employee meets the conditions to be on time and writes all of this data back to the DB.

From the front end, the user will be able to request the data about an employee, add one, or delete one.

The front end sends the request to the API, which queries the DB, retrieves the information and sends it back to the front end.
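
For instance, once the API described below is running locally, the front end's request for an employee is equivalent to this small Python sketch ('john' is a placeholder name):

import requests

# Placeholder employee name; the real front end reads it from an input field
response = requests.get('http://127.0.0.1:5000/get_employee/john')
print(response.json())  # the employee's rows, or {'error': 'User not found...'}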

UML of the project

Let's dive into the code!

Good. Now that we know what we want and how it will be structured, it's time to code!

This section will be divided into 3 parts.

  1. Facial recognition
  2. The API
  3. The front-end

1. Facial recognition

As mentioned above, for facial recognition we will use the Python face_recognition library.
Let's take a quick look at how it works.

We give a picture of a user to record his "facial identity".

A first model detects whether there is a face or not and determines its location in the photo.

A second model will calculate the facial parameters. (distance between the eyes, shape of the chin,…)

We save this so-called "encoded" data by linking them to a name so that they can be compared with a future picture.

Then we give a new nameless photo and the same process will be repeated except that this time, a third model will compare the parameters of the face with those it already knows.

For more information, please refer to the official documentation.
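
A small detail worth knowing before we dive in: the comparison function accepts a tolerance parameter (0.6 by default; lower values are stricter), which you can tune if you get false matches. A minimal sketch, with placeholder image paths:

import face_recognition

# Load a reference picture and a picture to check (paths are placeholders)
known_image = face_recognition.load_image_file("assets/img/users/user-one.jpg")
unknown_image = face_recognition.load_image_file("assets/img/unknown.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# tolerance=0.6 is the library default; lower it (e.g. 0.5) to be stricter
results = face_recognition.compare_faces([known_encoding], unknown_encoding, tolerance=0.6)
print(results)  # [True] if the faces match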

Code:

To add a single user with a picture:

# Import the library
import face_recognition

# Select an image to teach to the machine how to recognize

# * ---------- User 1 ---------- *
# Load the image 
user_one_face = face_recognition.load_image_file("assets/img/user-one.jpg")
# Encode the face parameters
user_one_face_encoding = face_recognition.face_encodings(user_one_face)[0]

# * ---------- User 2 ---------- *
# Load the image 
user_two_face = face_recognition.load_image_file("assets/img/user-two.jpg")
# Encode the face parameters
user_two_face_encoding = face_recognition.face_encodings(user_two_face)[0]


# Create a list of known face encodings and their names
known_face_encodings = [
    user_one_face_encoding,
    user_two_face_encoding
]

# Create list of the name matching with the position of the known_face_encodings
known_face_names = [
    "User One",
    "User Two"
]

If we want to add more users, we have to repeat those steps for each one.

To stay DRY, let's automate the "add a face" process by creating a folder in which we store the portrait pictures of our employees.

The script below then automatically encodes every photo in the folder, linking each encoding to its file name.


# Import the libraries
import os
import re

import face_recognition

# Declare all the lists
known_face_encodings = []
known_face_names = []
known_faces_filenames = []

# Walk in the folder to add every file name to known_faces_filenames
for (dirpath, dirnames, filenames) in os.walk('assets/img/users/'):
    known_faces_filenames.extend(filenames)
    break

# Walk in the folder
for filename in known_faces_filenames:
    # Load each file
    face = face_recognition.load_image_file('assets/img/users/' + filename)
    # Extract the name of each employee and add it to known_face_names
    known_face_names.append(re.sub("[0-9]",'', filename[:-4]))
    # Encode the face of every employee
    known_face_encodings.append(face_recognition.face_encodings(face)[0])

There we go! All our employees are now encoded and we can recognize them.

To compare with a picture:

# * --------- IMPORTS --------- *
import numpy as np
import face_recognition

# * ---------- Encode the nameless picture --------- *
# Load picture
face_picture = face_recognition.load_image_file("assets/img/user-one.jpg")
# Detect faces
face_locations = face_recognition.face_locations(face_picture)
# Encode faces
face_encodings = face_recognition.face_encodings(face_picture, face_locations)

# Loop in all detected faces
for face_encoding in face_encodings:
    # See if the face is a match for the known faces (that we saved in the previous step)
    matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
    # name that we will give if the employee is not in the system
    name = "Unknown"
    # check the known face with the smallest distance to the new face
    face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
    # Take the best one
    best_match_index = np.argmin(face_distances)
    # if we have a match:
    if matches[best_match_index]:
        # Give the detected face the name of the employee that match
        name = known_face_names[best_match_index]

At the end, the value of "name" will either be "Unknown" or hold the name of the matching employee.

Good, we know how to compare two pictures. But we want to apply it to a video feed, right?

So let's apply this to every frame of the video feed and, if there is a match, send the data to the API (which we will build next)!

# * --------- IMPORTS --------- *
import time

import cv2
import face_recognition
import numpy as np
import requests

# Select the webcam of the computer (0 by default for a laptop)
video_capture = cv2.VideoCapture(0)

# Initialize some variables
face_locations = []
face_names = []
# Flag used to process only every other frame and save CPU
process_this_frame = True

# Apply it until you stop the file's execution
while True:
    # Grab a single frame of video
    ret, frame = video_capture.read()

    # Process every other frame only
    if process_this_frame:
        # face_recognition expects RGB images, while OpenCV captures in BGR
        rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        # Find all the faces and face encodings in the current frame of video
        face_locations = face_recognition.face_locations(rgb_frame)
        face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)
        # Reset the array for the names of the detected users
        face_names = []

        # * ---------- Initialize JSON to EXPORT --------- *
        json_to_export = {}
        # Loop over every detected face
        for face_encoding in face_encodings:
            # See if the face is a match for the known face(s)
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
            name = "Unknown"
            # Check the known face with the smallest distance to the new face
            face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
            # Take the best one
            best_match_index = np.argmin(face_distances)
            # If we have a match
            if matches[best_match_index]:
                # Save the name of the best match
                name = known_face_names[best_match_index]

                # * ---------- SAVE data to send to the API -------- *
                # Save the name
                json_to_export['name'] = name
                # Save the time
                json_to_export['hour'] = f'{time.localtime().tm_hour}:{time.localtime().tm_min}'
                # Save the date
                json_to_export['date'] = \
                    f'{time.localtime().tm_year}-{time.localtime().tm_mon}-{time.localtime().tm_mday}'
                # If you need to save a screenshot:
                json_to_export['picture_array'] = frame.tolist()

                # * ---------- SEND data to API --------- *
                # Make a POST request to the API
                r = requests.post(url='http://127.0.0.1:5000/receive_data', json=json_to_export)
                # Print the status of the request:
                print("Status: ", r.status_code)

            # Store the name in an array to display it later
            face_names.append(name)

    # Toggle the flag so that we only process every other frame
    process_this_frame = not process_this_frame

    # * --------- Display the results ---------- *
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
        # Define the font of the name
        font = cv2.FONT_HERSHEY_DUPLEX
        # Display the name
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)

    # Display the resulting image
    cv2.imshow('Video', frame)

    # Press 'q' to stop the script and release the webcam
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()

We now have a script that recognizes who is in front of the camera and sends that information to the API.

2. The API

As mentioned above, for the API we use Flask. The purpose here is to receive the data from our face recognition model and redistribute it to the front end when requested, but also to be able to add a new employee with their name and photo, and to delete one given only their name.

Let's create the app:

# * --------- IMPORTS --------- *
# All the imports that we will need in our API
from flask import Flask, request, jsonify
from flask_cors import CORS, cross_origin
import os
import psycopg2
import cv2
import numpy as np
import re

# We define the path of the current file, we will use it later
FILE_PATH = os.path.dirname(os.path.realpath(__file__))


# * ---------- Create App --------- *
# Init the app
app = Flask(__name__)
# To avoid CORS errors
CORS(app, support_credentials=True)


# * -------------------- Run Server -------------------- *
if __name__ == '__main__':
    # * --- DEBUG MODE: --- *
    app.run(host='127.0.0.1', port=5000, debug=True)
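
Each route below opens its own PostgreSQL connection with hard-coded credentials. To avoid repeating that block, you could factor it into a small helper; this is just a sketch, with the same credential placeholders as in the routes:

# * ---------- DB helper (optional) --------- *
def get_db_connection():
    # Central place for the DB credentials (placeholders, replace with your own)
    return psycopg2.connect(user="USER_NAME",
                            password="PASSWORD",
                            host="DB_HOST",
                            port="PORT",
                            database="DATABASE_NAME")

Each route could then simply call connection = get_db_connection() instead of duplicating the psycopg2.connect(...) call.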

Create a route that will receive data from our face recognition model:

Note: this code should be placed *between* the "Create App" section and the "Run Server" section.

# * --------------------  ROUTES ------------------- *
# * ---------- Get data from the face recognition ---------- *
@app.route('/receive_data', methods=['POST'])
def get_receive_data():
    if request.method == 'POST':
        # Get the data
        json_data = request.get_json()

        # Check if the user is already in the DB
        connection = None
        try:
            # Connect to the DB
            connection = psycopg2.connect(user="USER_NAME",
                                          password="PASSWORD",
                                          host="DB_HOST",
                                          port="PORT",
                                          database="DATABBASE_NAME")
            # Open a cursor
            cursor = connection.cursor()

            # Query to check if the user has been seen by the camera today
            is_user_is_there_today =\
                f"SELECT * FROM users WHERE date = '{json_data['date']}' AND name = '{json_data['name']}'"

            cursor.execute(is_user_is_there_today)
            # Store the result
            result = cursor.fetchall()
            # Send the request
            connection.commit()

            # If the user is already in the DB for today:
            if result:
                # Update the user's row in the DB
                update_user_query = f"UPDATE users SET departure_time = '{json_data['hour']}', departure_picture = '{json_data['picture_path']}' WHERE name = '{json_data['name']}' AND date = '{json_data['date']}'"
                cursor.execute(update_user_query)

            else:
                # Create a new row for the user today:
                insert_user_query = f"INSERT INTO users (name, date, arrival_time, arrival_picture) VALUES ('{json_data['name']}', '{json_data['date']}', '{json_data['hour']}', '{json_data['picture_path']}')"
                cursor.execute(insert_user_query)

        except (Exception, psycopg2.DatabaseError) as error:
            print("ERROR DB: ", error)
        finally:
            # Commit the pending query and close the database connection
            if connection:
                connection.commit()
                cursor.close()
                connection.close()
                print("PostgreSQL connection is closed")

        # Return user's data to the front
        return jsonify(json_data)
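
Note that building SQL queries with f-strings, as above, exposes the API to SQL injection if a name ever contains a quote. A safer sketch of the same check, using psycopg2's parameter substitution, would look like this:

# Same check as above, but with placeholders instead of f-string interpolation
is_user_there_today = "SELECT * FROM users WHERE date = %s AND name = %s"
cursor.execute(is_user_there_today, (json_data['date'], json_data['name']))
result = cursor.fetchall()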

Create a route that will get the data of an employee from the database using their name

We receive a name as a string from a GET request made by the front end, query the database and return the data we get as JSON.

# * ---------- Get all the data of an employee ---------- *
@app.route('/get_employee/<string:name>', methods=['GET'])
def get_employee(name):
    answer_to_send = {}
    # Check if the user is already in the DB
    try:
        # Connect to DB
        connection = psycopg2.connect(user="USER",
                                      password="PASSWORD",
                                      host="DB_HOST",
                                      port="PORT",
                                      database="DATABASE_NAME")

        cursor = connection.cursor()
        # Query the DB to get all the data of a user:
        user_information = f"SELECT * FROM users WHERE name = '{name}'"

        cursor.execute(user_information)
        result = cursor.fetchall()
        connection.commit()

        # if the user exists in the db:
        if result:
            print('RESULT: ',result)
            # Structure the data and put the dates in string for the front
            for k,v in enumerate(result):
                answer_to_send[k] = {}
                for ko,vo in enumerate(result[k]):
                    answer_to_send[k][ko] = str(vo)
            print('answer_to_send: ', answer_to_send)
        else:
            answer_to_send = {'error': 'User not found...'}

    except (Exception, psycopg2.DatabaseError) as error:
        print("ERROR DB: ", error)
    finally:
        # closing database connection:
        if (connection):
            cursor.close()
            connection.close()

    # Return the user's data to the front
    return jsonify(answer_to_send)

Create a route that will get the data of the 5 last employees detected by the camera

We receive a GET request from the front end, query the DB for the 5 last entries and send the answer back to the front end as JSON.

# * --------- Get the 5 last users seen by the camera --------- *
@app.route('/get_5_last_entries', methods=['GET'])
def get_5_last_entries():
    # Create a dict that will contain the answer to give to the front
    answer_to_send = {}
    # Check if the user is already in the DB
    try:
        # Connect to DB
        connection = psycopg2.connect(user="USER_NAME",
                                      password="PASSWORD",
                                      host="HOST_NAME",
                                      port="PORT",
                                      database="DATABASE_NAME")

        cursor = connection.cursor()
        # Query the DB to get the 5 last entries ordered by ID:
        last_entries = "SELECT * FROM users ORDER BY id DESC LIMIT 5;"
        cursor.execute(last_entries)
        # Store the result
        result = cursor.fetchall()
        # Send the request
        connection.commit()

        # if DB is not empty:
        if result:
            # Structure the data and put the dates in dict for the front
            for k, v in enumerate(result):
                answer_to_send[k] = {}
                for ko, vo in enumerate(result[k]):
                    answer_to_send[k][ko] = str(vo)
        else:
            answer_to_send = {'error': 'DB is not connected or empty'}

    except (Exception, psycopg2.DatabaseError) as error:
        print("ERROR DB: ", error)
    finally:
        # closing database connection:
        if (connection):
            cursor.close()
            connection.close()

    # Return the user's data to the front as a json
    return jsonify(answer_to_send)

Create a route that will add an employee in the system

We receive a POST request with a picture and a name from the front end, save the picture in the users folder and send back a success message to the front end.

# * ---------- Add new employee ---------- *
@app.route('/add_employee', methods=['POST'])
@cross_origin(supports_credentials=True)
def add_employee():
    try:
        # Get the picture from the request
        image_file = request.files['image']

        # Store it in the folder of the know faces:
        file_path = os.path.join(f"assets/img/users/{request.form['nameOfEmployee']}.jpg")
        image_file.save(file_path)
        answer = 'New employee successfully added'
    except:
        answer = 'Error while adding new employee. Please try later...'
    return jsonify(answer)

Create a route that will get a list of the names of all the employees in the system

We receive a GET request from the front end, walk through the users folder to get the name of every employee and send this list back to the front end as JSON.

# * ---------- Get employee list ---------- *
@app.route('/get_employee_list', methods=['GET'])
def get_employee_list():
    # Create a dict that will store the list of employee's name
    employee_list = {}

    # Walk in the user's folder to get the user list
    walk_count = 0
    for file_name in os.listdir(f"{FILE_PATH}/assets/img/users/"):
        # Capture the employee's name with the file's name
        name = re.findall(r"(.*)\.jpg", file_name)
        if name:
            employee_list[walk_count] = name[0]
        walk_count += 1

    return jsonify(employee_list)

Create a route that will delete an employee using their name

We receive a GET request from the front end with the name of the employee to delete as a string. The API then accesses the users folder and deletes the picture with the corresponding name.

# * ---------- Delete employee ---------- *
@app.route('/delete_employee/<string:name>', methods=['GET'])
def delete_employee(name):
    try:
        # Select the path
        file_path = os.path.join(f'assets/img/users/{name}.jpg')
        # Remove the picture of the employee from the users folder:
        os.remove(file_path)
        answer = 'Employee successfully removed'
    except:
        answer = 'Error while deleting the employee. Please try later'

    return jsonify(answer)

Here we go! We have a fully functional face recognition script and an API that kicks some ass! Let's build a nice user interface now.

3. The front end

For the front end, I divided each panel into a component. We won't go into detail on each component; we will just explain how to send the requests and receive the answers as JSON. We let you be creative with how you use the data. If you want an example, here is a link to the GitHub of the project.

Request to get an employee's data:

// Define a state to store the employee's data
const [employeeList, setEmployeeList] = useState([]);
// Define a state to store the error message if there is one
const [errorMessage, setErrorMessage] = useState(null);


// Function to send the employee's name (value of an input field) and get back their data
const searchForEmployee = () => {
    // Value of the employee's name input
    const name = document.getElementById('searchForEmployee').value.toLowerCase()
    if(name){
        fetch(`http://127.0.0.1:5000/get_employee/${name}`)
        .then(response => response.json())
        .then(response => {
            if(response.error){
                // Set errorMessage state with the error returned by the API
                setErrorMessage(response.error)
            } else {
                // Set employeeList state with the response as a json
                setEmployeeList(response)
            }
        })
    }
    else{
       setEmployeeList(['No name found...'])
    }
}

Request to get the 5 last arrivals or departures:

// Define a state to store the 5 last entries
const [employeeList, setEmployeeList] = useState([]);

// Make the request to the API and get the 5 last entries as a json
const searchForLastEntries = () => {
    fetch('http://127.0.0.1:5000/get_5_last_entries')
    .then(response => response.json())
    .then(response => {
        if(response) {
            // Set the value of the employeeList state with the response
            setEmployeeList(response)
        }
    })
}

Request to add an employee:

// Create a state to check if the user has been added
const [isUserWellAdded, setIsUserWellAdded] = useState(false);
// Create a state to check if there is an error while adding the user
const [errorWhileAddingUser, seterrorWhileAddingUser] = useState(false);

const addEmployeeToDb = e => {
        e.preventDefault()
        // Send it to backend -> add_employee as a POST request
        let name = document.getElementById("nameOfEmployee").value
        let picture = document.getElementById('employeePictureToSend')

        let formData  = new FormData();

        formData.append("nameOfEmployee", name)
        formData.append("image", picture.files[0])

        fetch('http://127.0.0.1:5000/add_employee',{
            method: 'POST',
            body:  formData,
        })
            .then(response => response.json())
            .then(response => {
                console.log(response)
                setIsUserWellAdded(true)
            })
            .catch(error => seterrorWhileAddingUser(true))
    }

Request to get the employee list and delete an employee:

// Create a state to store the list of all the employees' names
const [nameList, setNameList] = useState({});
// Create a state to know whether the employee list has already been loaded
const [isEmployeeListLoaded, setIsEmployeeListLoaded] = useState(false);

// Get the list of all the employees in the folder
const getEmployeeList = () => {
    fetch('http://127.0.0.1:5000/get_employee_list')
        .then(response => response.json())
        .then (response => {
            if(!isEmployeeListLoaded){
                setNameList(response)
                setIsEmployeeListLoaded(true)
            }
        })
}

// A component with a button that deletes the employee:
const EmployeeItem = props => {
    // Function that send the employee's name to delete
    const deleteEmployee = name => {
        fetch(`http://127.0.0.1:5000/delete_employee/${name}`)
            .then(response => response.json())
            .then(() => setIsEmployeeListLoaded(false))
    }
    return(
        <li> { props.name } <ItemButton onClick={ () => deleteEmployee(props.name) }>DELETE</ItemButton></li>
    )
}

Now you can put a camera in front of the door and peacefully drink your coffee!

Disclaimer

If you want to use this in production, be sure to respect the laws of your country. And please, ask people for their consent before using and storing their image.

GitHub of the project

You can find the repo here.

Team that made the project:

I hope I was clear.

If you have any question or suggestion, don't hesitate to put it in the comments, or you can contact me directly on LinkedIn!

Top comments (42)

Ali Thompson

You really ought to put a disclaimer about how this shouldn't actually be used yet in production. Facial recognition systems just are not accurate and disproportionately affect people of color, especially black people, in harmful ways. I also didn't see anything about getting consent from people about having their faces scanned and placed into a database. Please think about including this information so that people can make ethical decisions.

Berge Maxim

Yes, I understand your point, you're right, I'll do it.
I didn't know about the issue for people of color, I'll double-check that too!

Haider Khalil

Hello sir! can you share your code with me only for educational purpose as i'm student learning Artificial Intelligence i will only use the source code for learning my email is "haiderkhalil0000@gmail.com" i shall be very thankful to you on this favor.

Maximilian Burszley

Is there consent somewhere if this system is in use on registration for an event? Feels like a major invasion of privacy to be storing images of people associated with their identity.

Berge Maxim

You can't store images of people without their consent. One use case could be a school automating attendance records for the teacher in a classroom, for example. But in this case, parents have to consent (at least in Belgium).
If you want to know more about it, I advise you to check your country's law, it can vary.

Renato Byrro

I believe they actually can.

What you're saying is they should not or are not legally allowed. That's different. When the government itself is the offender, it's kinda difficult to think citizens can rely on... err... the Government to enforce correction.

But if developers won't implement it, politicians and bureaucrats can't wreak havoc a nation's privacy.

shebl Albarazi

Dear Maxim...
Highly appreciate what you have done you and your colleagues...
I am trying to finilize a robust attendance system, frankly I am new to programming from networking background...,mcse,Cisco.....
I built a system following your codes ,but I am facing two major things
1- How to display the video on the local host Reactjs....the connection is refused (I am using raspberry pi with its camera)
2-users data base needs to be built with postures...I didn't successed to do it ..I don't have the exact columns and their names, like time,date ,arrival time ...
Appreciate your help on those two major point ......all the best
Shebl albarazi
Shebl.barazi@gmail.com

Srinivasa V • Edited

Hi Berge Maxim,

Your project was awesome

while i tried to run code by running app.py i am getting an error is key Error : Database USer
app.py line 21 in and and line 679

could you please help me how to run the code

Thanks in advance

Srinivasa V

coding-hamster

hey, great project! but I have one question, are you sure this model is accurate, like I tried creating my project the same way, but like when I use the webcam to identify multiple people in one frame, it keeps changing the bounding boxes labels, and that way attendance for more than one people gets marked, because the labels of boxes keep changing. i tried using more than one image, like 30 images for each person but still the same problem, can you help me understand why?

Sphe

Hi Maxim,

I'd like to talk to you about this project of yours; if you have time, please send me an Email at : sphe.kay@gmail.com
It'd really assist me in the project I'm currently doing, I hope to hear from you. Thank you.

Regards,
Sphe

I am Budi

Thanks for the post. I am wondering if i have selfie photo on my phone and show it to the attendance camera will it recognize my face?

Berge Maxim

Yes. To address this problem, when a face is detected, we take a screenshot of the face so the admin can verify manually.

CCERocks

Hi, how hard to make double authenticity with this script.. i mean.. face recognition is 1 then accesscard or fingerprint is 1 too..

anyhow shoould be easier to insert API from other services ?

Chiayen0503

Hi Berge,

Do you know how to separate and list prediction result for every detecting face in multiple frames?

I'd to create a list given detecting face; the list stores all possible candidate names after processing multiple frames.

The reason I do this is the predicting results from the API is not always correct; especially when we look on predicting result from a single frame but ignore from other frames.

jamia yahya madni

Hello sir! can you share your code with me only for educational purposes as I'm a student learning Artificial Intelligence I will only use the source code for learning my email is "digitalmolvi0@gmail.com" I shall be very thankful to you on this favor.

hari Chandan

Hey Berge Maxim,
I am getting the below error while running the get_name_from_camera_feed.py file.
Could you please help me out why am I facing this problem.

File "get_name_from_camera_feed.py", line 49, in
known_face_encodings.append(face_recognition.face_encodings(face)[0])
IndexError: list index out of range

Aashay Bane

Hi How do I reject the picture from mobile , In case someone tries to cheat

ehsan-info

This is a web-based application. I want to use it on a mobile phone. Then what should I do? Please help me out by giving your suggestion.

ABDULQAYOOM91

How to connect ip camera and store videos
like cctv cameras ??
how to make complete hardware basis project using your project>

Berge Maxim

For the ip camera, it fully depends on the IP camera you use.
"how to make complete hardware basis project using your project"
I don't understand your question.
