Introduction
The ability to detect and analyze human faces is a core AI capability. In this exercise, you’ll explore the Azure AI Face service and use it to detect and analyze faces in images.
Provision an Azure AI Face API resource
Open the Azure portal at https://portal.azure.com, and sign in using your Azure credentials. Close any welcome messages or tips that are displayed.
- Select Create a resource.
- In the search bar, search for Face, select Face, and create the resource with the following settings:
- Subscription: Your Azure subscription
- Resource group: Create or select a resource group
- Region: Choose any available region
- Name: A valid name for your Face resource
- Pricing tier: Free F0
- Create the resource, wait for deployment to complete, and then view the deployment details.
- When the resource has been deployed, go to it and under the Resource management node in the navigation pane, view its Keys and Endpoint page. You will need the endpoint and one of the keys from this page in the next procedure.
Develop a facial analysis app with the Face SDK
- Open VS Code
- In a terminal, enter the following command to clone the GitHub repo containing the code files for this exercise:
git clone https://github.com/MicrosoftLearning/mslearn-ai-vision
- After the repo has been cloned, use the following command to navigate to the application code files:
cd mslearn-ai-vision/Labfiles/face/python/face-api
The folder contains application configuration and code files for your app. It also contains an /images subfolder, which contains some image files for your app to analyze.
- Install the Azure AI Face SDK package and other required packages by running the following command:
pip install -r requirements.txt azure-ai-vision-face==1.0.0b2
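Optionally, you can confirm that the Face SDK installed correctly with a quick import check (this one-liner is just a sanity check, not part of the lab code):
python3 -c "import azure.ai.vision.face; print('Face SDK OK')"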
- Open the .env file in VS Code and update the configuration values it contains to reflect the endpoint and an authentication key for your Face resource (copied from its Keys and Endpoint page in the Azure portal).
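When you’re done, the file should look something like the following (the variable names here are illustrative; keep whatever names the provided file already uses):
AI_SERVICE_ENDPOINT=https://your_resource.cognitiveservices.azure.com/
AI_SERVICE_KEY=your_key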
- After you’ve replaced the placeholders, use the CTRL+S command to save your changes.
Add code to create a Face API client
- Open analyze-faces.py in VS Code.
- In the code file, find the comment Import namespaces, and add the following code to import the namespaces you will need to use the Azure AI Face SDK:
# Import namespaces
from azure.ai.vision.face import FaceClient
from azure.ai.vision.face.models import FaceDetectionModel, FaceRecognitionModel, FaceAttributeTypeDetection01
from azure.core.credentials import AzureKeyCredential
- In the Main function, note that the code to load the configuration settings and determine the image to be analyzed has been provided. Then find the comment Authenticate Face client and add the following code to create and authenticate a FaceClient object:
# Authenticate Face client
face_client = FaceClient(
    endpoint=cog_endpoint,
    credential=AzureKeyCredential(cog_key))
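For context, the configuration-loading code already provided near the top of the Main function likely resembles this sketch (assuming the python-dotenv package and the variable names from the .env file; your copy may differ):
# Sketch only: load settings from the .env file using python-dotenv
# (the variable names AI_SERVICE_ENDPOINT / AI_SERVICE_KEY are assumptions)
import os
from dotenv import load_dotenv

load_dotenv()
cog_endpoint = os.getenv('AI_SERVICE_ENDPOINT')
cog_key = os.getenv('AI_SERVICE_KEY')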
Add code to detect and analyze faces
- In the code file for your application, in the Main function, find the comment Specify facial features to be retrieved and add the following code:
# Specify facial features to be retrieved
features = [FaceAttributeTypeDetection01.HEAD_POSE,
            FaceAttributeTypeDetection01.OCCLUSION,
            FaceAttributeTypeDetection01.ACCESSORIES]
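Detection model 1 can return a few other attributes besides these. If you want to experiment later, you could extend the list along these lines (a sketch only; verify these enum members exist in your installed SDK version before using them):
# Optional extras (sketch): glasses type and blur level; check that these
# members are available in your version of azure-ai-vision-face
features = [FaceAttributeTypeDetection01.HEAD_POSE,
            FaceAttributeTypeDetection01.OCCLUSION,
            FaceAttributeTypeDetection01.ACCESSORIES,
            FaceAttributeTypeDetection01.GLASSES,
            FaceAttributeTypeDetection01.BLUR]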
- In the Main function, under the code you just added, find the comment Get faces and add the following code to print the facial feature information and call a function that annotates the image with the bounding box for each detected face (based on the face_rectangle property of each face):
# Get faces
with open(image_file, mode="rb") as image_data:
    detected_faces = face_client.detect(
        image_content=image_data.read(),
        detection_model=FaceDetectionModel.DETECTION01,
        recognition_model=FaceRecognitionModel.RECOGNITION01,
        return_face_id=False,
        return_face_attributes=features,
    )

face_count = 0
if len(detected_faces) > 0:
    print(len(detected_faces), 'faces detected.')
    for face in detected_faces:
        # Get face properties
        face_count += 1
        print('\nFace number {}'.format(face_count))
        print(' - Head Pose (Yaw): {}'.format(face.face_attributes.head_pose.yaw))
        print(' - Head Pose (Pitch): {}'.format(face.face_attributes.head_pose.pitch))
        print(' - Head Pose (Roll): {}'.format(face.face_attributes.head_pose.roll))
        print(' - Forehead occluded?: {}'.format(face.face_attributes.occlusion["foreheadOccluded"]))
        print(' - Eye occluded?: {}'.format(face.face_attributes.occlusion["eyeOccluded"]))
        print(' - Mouth occluded?: {}'.format(face.face_attributes.occlusion["mouthOccluded"]))
        print(' - Accessories:')
        for accessory in face.face_attributes.accessories:
            print(' - {}'.format(accessory.type))

    # Annotate faces in the image
    annotate_faces(image_file, detected_faces)
- Examine the code you added to the Main function. It analyzes an image file and detects any faces it contains, including attributes for head pose, occlusion, and the presence of accessories such as glasses. Additionally, a function is called to annotate the original image with a bounding box for each detected face.
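The annotate_faces function itself is already provided in the code file. Conceptually, it works something like this sketch, which uses Pillow to draw each face’s bounding box (the provided implementation may differ, for example by using matplotlib instead):
# Hypothetical sketch of an annotation helper like annotate_faces;
# the version in the lab's code file may be implemented differently
from PIL import Image, ImageDraw

def annotate_faces(image_file, detected_faces):
    # Open the original image and prepare to draw on it
    image = Image.open(image_file)
    draw = ImageDraw.Draw(image)

    # Draw a bounding box around each detected face using its face_rectangle
    for face in detected_faces:
        r = face.face_rectangle
        bounding_box = ((r.left, r.top), (r.left + r.width, r.top + r.height))
        draw.rectangle(bounding_box, outline='lightgreen', width=3)

    # Save the annotated copy alongside the original
    image.save('detected_faces.jpg')
    print('\nResults saved in detected_faces.jpg')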
- Save your changes (CTRL+S) but keep the code editor open in case you need to fix any typos.
- Resize the panes so you can see more of the console, then enter the following command to run the program with the argument images/face1.jpg:
python3 analyze-faces.py images/face1.jpg
The app runs and analyzes the image file you specified.
- Observe the output, which should include the attributes of each face detected.
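Based on the print statements you added, the output for each face should follow this general shape (the placeholder values depend on the image you analyze):
<count> faces detected.

Face number 1
 - Head Pose (Yaw): <yaw>
 - Head Pose (Pitch): <pitch>
 - Head Pose (Roll): <roll>
 - Forehead occluded?: <True/False>
 - Eye occluded?: <True/False>
 - Mouth occluded?: <True/False>
 - Accessories:
 - <accessory type, if any>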
- Note that an image file named detected_faces.jpg is also generated. Open detected_faces.jpg to view the annotated faces.
- Run the program again, this time specifying the argument images/face2.jpg to detect and analyze the face in that image.
- Run the program one more time, this time specifying the argument images/faces.jpg to detect and analyze all of the faces in that image.
You’ve just harnessed the power of Azure’s Face API to detect and analyze human faces with remarkable precision—from identifying accessories to measuring head pose. This technology opens doors to transformative applications: smarter security systems, personalized retail experiences, and accessible UI design that adapts to users’ expressions and focus.
Project guide link: https://microsoftlearning.github.io/mslearn-ai-vision/Instructions/Labs/03-face-service.html