This week on my Twitch live stream, I extended my tool/library smiler
to use a neural network to detect smiling faces:
*Using a Convolutional Neural Network (CNN) to Detect Smiling Faces* ・ Matt Hamilton ・ Jul 17 '20 ・ 4 min read
After the show I did some refactoring of the code and tidied it all up for a release. It is now available to install/download from PyPI:
https://pypi.org/project/choirless-smiler/
You can use it either as a CLI tool or as a library. We use it as a library in Choirless, and wrap it up as an Apache OpenWhisk function for our render pipeline built on IBM Cloud Functions.
This morning I added an extra feature: a progress bar that is shown when you use it as a CLI and specify the `--verbose` flag.
The progress bar is implemented using a great little Python library called tqdm, which makes it really easy to add progress bars to your code.
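If you haven't come across it before, here is a minimal, standalone sketch of how tqdm is used (nothing smiler-specific):

```python
from tqdm import tqdm

# Wrap any iterable in tqdm() and iterate as normal; the bar
# updates itself. set_description() puts a label before the bar.
items = tqdm(range(1000))
items.set_description("Processing")
for item in items:
    pass  # do some work per item here
```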
The changes I made in smiler are to the method that calculates the threshold needed to keep just the 5% "most different" frames:
```diff
 def calc_threshold(self, frames, q=0.95):
     prev_frame = next(frames)
     counts = []
+
+    if self.verbose:
+        if self.total_frames is not None:
+            frames = tqdm(frames, total=self.total_frames)
+        else:
+            frames = tqdm(frames)
+        frames.set_description("Calculating threshold")
+
     for frame in frames:
         # Calculate the pixel difference between the current
         # frame and the previous one
```
and the method that actually analyses each frame:
```diff
 best_smile_score = 0
 best_frame = next(frames)
+if self.verbose:
+    if self.total_frames is not None:
+        frames = tqdm(frames, total=self.total_frames)
+    else:
+        frames = tqdm(frames)
+    frames.set_description("Finding smiliest face")
+
 for frame in frames:
     # Convert the frame to grayscale
     gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
```
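As an aside on `calc_threshold`: the `q=0.95` default is what gives the "top 5%" behaviour mentioned above. The method collects a difference score per frame and then, in code not shown in the diff, picks the threshold at that quantile. A rough sketch of the idea (this is my illustration, not smiler's actual code, assuming plain numeric scores and using numpy):

```python
import numpy as np

def threshold_from_scores(scores, q=0.95):
    # Hypothetical helper: the value below which 95% of the per-frame
    # difference scores fall. Frames scoring above it are the ~5%
    # "most different" ones that go on to be analysed.
    return float(np.quantile(scores, q))
```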
In both cases it was as simple as wrapping the iterator in a `tqdm()` call. If I know the total number of frames, I pass that in as a parameter.
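The reason the total matters is that `frames` here is an iterator, so tqdm can't call `len()` on it. Without a total you still get a running count and rate; with one, tqdm can draw a proper percentage bar and estimate the time remaining. A small illustration (the generator is just a stand-in for a real frame source):

```python
from tqdm import tqdm

# A generator has no len(), just like a stream of video frames.
frames = (f for f in range(500))

# With total, tqdm shows a percentage bar and ETA;
# without it, you only get the iteration count and rate.
for frame in tqdm(frames, total=500):
    pass  # process the frame
```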