In my last post, Willard the Smart Car was undergoing surgery for ghost states and memory leaks. While I continue to debug the "Reflex Layer," I wanted to work more on the emotional response layer.
As a recap, the plan is for the Raspberry Pi to be the Cognition Layer (high-level thinking) and the Arduino Uno (or in Willard's case the Elegoo board) to be the Reflex Layer (movement). That leaves a critical piece for interaction and sensing. I decided to experiment with the ESP32 as the Interaction Layer—the "Feeling Brain" that gives my droid its personality.
The Hardware Pivot: SparkFun & Qwiic
This layer uses the SparkFun ESP32 Thing Plus. I couldn't be happier with the ease of experimentation here, primarily because of the board’s design and the Qwiic (I2C) ecosystem. (I will publish more of my experiments to my uno-sketch-box repo on GitHub, although I'm kind of regretting the name choice as I'm experimenting with more than just the Arduino Uno at this point.)
One of the biggest challenges with Willard was the bird’s nest of jumper wires taking over the chassis.
The Thing Plus uses the modern USB-C standard, has built-in wireless, and features a Qwiic connector. This allows me to daisy-chain Arduino Modulino sensors with a single cable. I’m spending less time fighting with breadboards and more time imagining what the droid can actually do.
My current interaction stack includes (see the setup sketch after this list):
- Modulino Distance: The droid’s "social eyes."
- Modulino Pixels: For visual "eye" expressions.
- Modulino Buzzer: The droid’s "voice box."
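For anyone wiring up something similar, here's a minimal setup sketch based on the official Arduino Modulino examples. Treat it as a sketch under assumptions: the class names and the Modulino.begin() call match the library docs I've seen, but verify them against your installed version.

#include <Modulino.h>

// The three Qwiic-chained modules in the interaction stack
ModulinoDistance distance;  // the droid's "social eyes"
ModulinoPixels pixels;      // LED "eye" expressions
ModulinoBuzzer buzzer;      // the droid's "voice box"

void setup() {
  Modulino.begin();   // bring up the Qwiic (I2C) bus
  distance.begin();   // then each module initializes individually
  pixels.begin();
  buzzer.begin();
}

void loop() {
  // interaction logic lives here
}

The single-cable daisy-chain means adding a module is one plug and one begin() call, which is most of why the breadboard clutter disappeared.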
Sound Engine
I didn't want my droid to sound like a microwave or a digital alarm. I wanted it to have a distinct personality. This led me into a deep dive on Granular Synthesis and Vibrato.
Instead of playing a static frequency, I wrote a sound engine that uses micro-slides in pitch. By rapidly oscillating the frequency with a sin() function and inserting tiny gaps between tone bursts, the buzzer produces a "croaky," mechanical texture.
// Vibrato croak: sweep 800Hz -> 1200Hz while a sine wave bends the pitch
for (int i = 800; i < 1200; i += 20) {
  int vibrato = sin(millis() / 50.0) * 15;  // wobble the pitch by +/-15Hz
  buzzer.tone(i + vibrato, 20);             // 20ms burst at the bent frequency
  delay(15);                                // tiny gap between bursts
}
By starting a sound at a low 300Hz "croak" and sliding it up to a 900Hz "chirp," the droid goes from sounding like a telephone to sounding like it’s asking a question.
// An inquisitive chirp
for (int i = 300; i < 900; i += 10) {
  buzzer.tone(i, 15);
  delay(25); // The gap creates the mechanical "clicky" texture
}
To simulate active awareness when it isn't interacting, I used the same mathematical logic to create a pulse for its eyes. By mapping millis() to a sine wave, the LEDs oscillate in brightness smoothly rather than just staying on.
void idleBlink() {
  // Slow sine "breathing": brightness drifts between roughly 5 and 35
  int b = (sin(millis() / 2000.0) * 15) + 20;
  pixels.set(0, 0, 100, 255, b);  // left eye, blue-ish
  pixels.set(7, 0, 100, 255, b);  // right eye, blue-ish
  pixels.show();
}
Bridging the Social Gap
The goal for this droid is to nudge my son when it's time for school in a way that is fun (but not too much fun) and gets him engaged and aware of the time.
Currently, I’m using distance as the primary input. In the future, the droid will use vision to categorize what he's doing (e.g., "Is he wearing shoes yet?").
During testing, I discovered that with a single distance trigger, the droid "stutters" when my hand sits statically near the sensor. To fix this, I implemented a 250mm "buffer zone" (simple hysteresis): the ESP32 greets you when you enter the 20cm range, but "locks" that interaction until you move back past 45cm. Only then does it return to its idle state (pulsing two blue LEDs).
And when I get a bit too close, the eyes shift to a nervous red.
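If you want to borrow the buffer-zone idea, here's a minimal sketch of the logic, reusing the distance, buzzer, and idleBlink() pieces from earlier. The threshold constants and the greet() helper are illustrative placeholders, not Willard's actual firmware.

// Hysteresis thresholds: engage inside 200mm, release past 450mm
const int ENTER_MM = 200;
const int EXIT_MM  = 450;

bool engaged = false;  // true while an interaction is "locked"

void greet() {
  // reuse the inquisitive chirp from the sound engine section
  for (int i = 300; i < 900; i += 10) {
    buzzer.tone(i, 15);
    delay(25);
  }
}

void loop() {
  if (distance.available()) {
    int mm = distance.get();  // reading in millimeters, per the library docs

    if (!engaged && mm < ENTER_MM) {
      engaged = true;   // entered the 20cm greeting zone
      greet();
    } else if (engaged && mm > EXIT_MM) {
      engaged = false;  // only unlock after backing past 45cm
    }
  }

  if (!engaged) {
    idleBlink();  // pulse the two blue "eyes" while nobody is close
  }
}

The nervous red flash would just be a third, tighter threshold checked the same way inside the engaged branch.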

What's Next?
I'm working on image recognition on the Raspberry Pi, using Google's MediaPipe and OpenCV libraries to train a facial recognition model. The Pi will then identify who is in the room, and the ESP32 will decide which personality profile to use in response.
I’d love to hear from you: how do you handle "personality" in your builds? Have you found a way to make simple buzzers sound more organic?

