Advances in AI have made voice interfaces available even on extremely resource-constrained devices, such as Arduino.
On day 32, we’re going to add a voice assistant to the Arduino Nano 33 BLE Sense via the [Picovoice SDK](https://picovoice.ai/docs/picovoice/).
By the end of this tutorial, the Arduino will be able to recognize voice commands such as
“Porcupine, make the red light blink eight times quickly.”
Porcupine Wake Word detects the wake word Porcupine, then Rhino Speech-to-Intent determines the intent from the follow-up command (make the red light blink eight times quickly):
```json
{
  "intent": "blinkLight",
  "slots": {
    "color": "red",
    "iteration": "8",
    "speed": "quickly"
  }
}
```
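This JSON mirrors the `pv_inference_t` object that the Picovoice SDK hands to the inference callback in the Arduino sketch. As a rough sketch only (field names such as `is_understood`, `intent`, `num_slots`, `slots`, and `values` are taken from the Picovoice C API and may differ slightly between SDK versions; check the demo that ships with the library), the callback in `Control_LED_Picovoice.ino` can walk the structure like this:

```cpp
// Sketch only: assumes the Picovoice Arduino library header (already included
// by the demo sketch) defines pv_inference_t and pv_inference_delete().
// Prints the recognized intent and its slots to the serial monitor.
static void inference_callback(pv_inference_t *inference) {
    if (inference->is_understood) {
        Serial.print("intent: ");
        Serial.println(inference->intent);                 // e.g., "blinkLight"
        for (int32_t i = 0; i < inference->num_slots; i++) {
            Serial.print(inference->slots[i]);             // e.g., "color"
            Serial.print(" : ");
            Serial.println(inference->values[i]);          // e.g., "red"
        }
    } else {
        Serial.println("Command not understood.");
    }
    pv_inference_delete(inference);                        // free the inference object
}
```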
Grab the open-source code and try it yourself:
1. Download the project (both `Control_LED_Picovoice.ino` and `param.h`).
2. Open the Library Manager in the Arduino IDE and search for the `Picovoice` package.
3. Install the `Picovoice` package.
4. Grab your `AccessKey` from the Picovoice Console for free.
5. Set `ACCESS_KEY` in the sketch to the AccessKey you grabbed from the Console and run it:

    ```cpp
    static const char* ACCESS_KEY = "${YOUR_ACCESSKEY}";
    ```

6. Open the serial monitor.
7. You will see the grammar for the context, describing all of the follow-on voice commands it understands.
8. Say, for example: “Picovoice, make red flash”
9. Create a custom context model on the Picovoice Console. Picovoice Console is a web-based UI for easily designing and training the voice-interaction models that run on the Picovoice SDK.

Now let's add the context model to the project.

10. Replace `CONTEXT_ARRAY` in the `param.h` file with the new context model by following the steps explained in the Picovoice SDK for Arduino documentation.
11. Update the `inference_callback` so that it takes appropriate actions (e.g., blinking an LED) once an intent is recognized; see the sketch after this list.
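For step 11, here is a minimal sketch of what that update could look like for the `blinkLight` example above. The slot names (`color`, `iteration`, `speed`), the speed-to-delay mapping, and the use of the Nano 33 BLE Sense's on-board RGB LED pins (`LEDR`, `LEDG`, `LEDB`, which are active-low) are assumptions; adapt them to your own context model:

```cpp
// Sketch only: reacts to the "blinkLight" intent by blinking the on-board RGB
// LED. Slot names, pin choices, and timings are assumptions; adjust them to
// match your context model and board.
static void inference_callback(pv_inference_t *inference) {
    if (inference->is_understood && (strcmp(inference->intent, "blinkLight") == 0)) {
        int pin = LEDR;       // default color: red
        int iterations = 1;   // default: blink once
        int delay_ms = 500;   // default: normal speed

        for (int32_t i = 0; i < inference->num_slots; i++) {
            if (strcmp(inference->slots[i], "color") == 0) {
                if (strcmp(inference->values[i], "green") == 0) pin = LEDG;
                else if (strcmp(inference->values[i], "blue") == 0) pin = LEDB;
            } else if (strcmp(inference->slots[i], "iteration") == 0) {
                iterations = atoi(inference->values[i]);   // e.g., "8" -> 8
            } else if (strcmp(inference->slots[i], "speed") == 0) {
                if (strcmp(inference->values[i], "quickly") == 0) delay_ms = 150;
                else if (strcmp(inference->values[i], "slowly") == 0) delay_ms = 1000;
            }
        }

        pinMode(pin, OUTPUT);
        for (int i = 0; i < iterations; i++) {
            digitalWrite(pin, LOW);    // on-board RGB LED is active-low: LOW = on
            delay(delay_ms);
            digitalWrite(pin, HIGH);   // off
            delay(delay_ms);
        }
    }
    pv_inference_delete(inference);
}
```

With a change along these lines flashed to the board, a follow-on command like the one at the top of the post should blink the red LED eight times with a short delay.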