Did you know you can make machine learning models with Wolfram Mathematica, which is bundled with the full version of Raspberry Pi OS (Raspbian)? Well, you can.
To give you some background, I have been implementing a security camera system using imagenode, imagehub, and imagezmq, developed by Jeff Bass, which in my humble opinion is just awesome. Jeff has written a very good installation guide, so go have a look.
Once you have the nodes and the hub set up, what you get is a detailed activity log on both the nodes and the hub that is easy to parse and is rotated daily. The image folders are rotated daily as well. And that's it: the desktop app or web front end is on you. I chose a web front end and developed a quick and dirty page using Svelte to show the last 25 pics along with some activity stats.
Now, this setup does a wonderful job of detecting motion and responding to events. I have one monitoring my driveway and garage door, but what I needed was a way to tell whether the garage door was open or closed. I looked at the pre-built models that Google offers but found them lacking; they would have needed to be retrained.
So, while sifting through the mountains of machine learning tutorials out there, I decided to take a break. I have recently started using the RPi 4 as my primary desktop, so I figured I would finally mess with this Wolfram Mathematica. I have seen it bundled with the RPi for years but never touched it. I came across their project page and found this facial recognition how-to. Whaaaaaat, is it really that easy? It just so happens I already had about 500 photos of the garage door open and closed, so I separated them into two folders and followed the tutorial. About an hour later it spat out a model that I could put up on the imagehub to classify images.
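For the curious, the heart of that tutorial is a single Classify call over the labelled folders of pictures, plus an Export of the resulting classifier. You can also drive that step from Python through the Wolfram engine bindings covered below; here is a rough sketch of the idea, where the folder paths, the "open"/"closed" labels, and the model path are placeholders of mine, not the tutorial's exact code:

from wolframclient.evaluation import WolframLanguageSession
from wolframclient.language import wlexpr

session = WolframLanguageSession('/usr/bin/wolfram')

# Train a classifier from two folders of labelled pictures and save it
# so it can be imported later on the hub. Paths and labels are hypothetical.
session.evaluate(wlexpr('''
    gdClassifier = Classify[<|
        "open"   -> (Import /@ FileNames["*.jpg", "/path/to/training/open"]),
        "closed" -> (Import /@ FileNames["*.jpg", "/path/to/training/closed"])
    |>];
    Export["/path/to/model.wls", gdClassifier]
'''))

session.terminate()

So, with a trained model in hand, how do you actually run it on the hub?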
Wolfram provides Python bindings to the Wolfram engine,
pip3 install wolframclient
so running your new model on your RPi is really easy.
from wolframclient.evaluation import WolframLanguageSession
from wolframclient.language import wl

# Start a session with the Wolfram kernel
session = WolframLanguageSession('/usr/bin/wolfram')

# Expression that loads the trained classifier when evaluated
gdstatus = wl.Import("/path/to/model.wls")

status = ""
for gdpic in gdpiclist:            # gdpiclist: list of image file paths
    picimport = wl.Import(gdpic)
    status = session.evaluate(gdstatus(picimport))  # the model's label for this picture

session.terminate()
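gdpiclist above is just a list of image file paths. In my setup those came from the hub's image folders; a minimal sketch of building that list with the standard library (the folder path is a placeholder for wherever your hub writes its images, not something imagehub dictates):

import glob

# Hypothetical location of one day's images on the hub
image_dir = "/path/to/imagehub/images/today"
gdpiclist = sorted(glob.glob(image_dir + "/*.jpg"))

Feed that list into the loop above and status ends up holding the model's label (in my case "open" or "closed") for each picture.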
Conclusion
While this is not the most optimal approach to machine learning, it is an easy, low-install way of creating simple machine learning models. Everything you need is already installed except for one package that is pip-installable. I hope this helps someone.
Best Regards
Charlie
Top comments (2)
Thanks for this. I am Jeff Bass, the developer of imageZMQ, imagenode and imagehub. This post shows a great way to analyze the images in imagehub that had never occurred to me. I'm going to try it out. Thanks for sharing it.
What version of Raspberry OS are you using on your imagenode RPi that is gathering the images? I am still running the old Buster version of RPi OS and haven't tried the latest one on my imagenodes yet.
Hi Jeff,
Thanks for imageZMQ, imagenode and imagehub, great pieces of software you've got there. :) I was using the stock Buster image (along with doing regular updates), which has Wolfram preinstalled. IIRC there is an additional Python/Wolfram bindings package you have to install. The project I was working on was a simple one of detecting whether the garage door was open or closed; it took approx 500 pics of the garage door open and closed to train the model, with an approx 97% prediction accuracy rate. I have since abandoned that particular project as it is no longer needed. However, I do have plans of possibly experimenting with replacing parts of the stack with Rust (to make things a little faster). In my current project (which is running on the RPi 3B+) I can see a noticeable speed increase over Python on that particular device. But yeah, hopefully one day I will get around to it. I will be sure to reach out to you if and when it happens. :)
Once again thank you for imageZMQ, imagenode and imagehub :)