
Open Michi 1 - rewarding a robot cat

The project

I can't have a kitty, but I love animals. And robotics. So I decided to spend part of my free time designing a robot kitty... I know there's an amazing project for doing exactly this, with instructions, but I hate instructions and love DIY, so I decided to try it from scratch; besides, I was looking for the fun in the design process. I got really excited about it and started watching even more kitty videos on the internet, studying the movement. After a week I started sketching things in my magnetic notebook on the subway to and from the office. I was kind of excited, and after a couple of weeks I made the first attempts at the paws. It was going pretty well but, for reasons,
I didn't have access to the lab where I had all my robot kitty stuff, so for now I'm focusing on the mind, which I will later put into the kitty. I decided to write about the evolution of this project on my blog! I hope you guys enjoy it.


I want it to think in its own way and learn a bit about its surroundings. I don't expect it to be perfect, but I would be so happy if it could experiment and learn in ways I don't expect. Of course, anything I try on the computer will later need revisiting, since the experience won't be the same inside its tiny body, but I can still test different ideas. This kitty will have a camera, so it will do image processing. When I was at university I built a sumo robot alongside a couple of other students, and since we had a lot of freedom in our choices, we decided to add some OpenCV image processing to better understand who the enemy was. That's where the idea of adding similar logic to my kitty came from, alongside other options.

About logic and contextual learning

I've been thinking about the learning process, trying to pin down the common triggers that make us remember and enjoy learning. I think much of this process is tied to emotions, which is impossible for me to "add" to my project. We all dream of electric sheep being able to feel and all that, but in real life I won't be able to do this. But then an idea came along: I really wanted to "reward" my kitty with "treats". It won't be able to eat (duh), but it will be able to process image information, or in general take advantage of electronic sensors. Assuming the learning logic assigns "weights" to the processed information, I could build objects that trigger higher weights, meaning "treats". Imagine that you feel excited by looking at a green cube or a fuchsia cone instead of a yummy candy. That's the general idea, though of course the excitement is just some code, not... real excitement.

The code

Since it's widely used on the Raspberry Pi and I have experience with it, my first thought was OpenCV. I'd already grown familiar with face recognition and surely had a lot of fun with it. This time I was aiming at object recognition as well. My other option is an Arduino colour detector. Why? Well:

  • It externalizes the colour detection, so it's not part of the image processing.
  • It requires purposely interacting with the cat, so it doesn't get "treats" by accident from external sources.
  • Some image processing (such as face recognition) requires a grey-scale conversion for efficiency reasons, as I explained some time ago.
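For reference, that grey-scale conversion is just a weighted sum of the three channels (the standard ITU-R BT.601 luma weights), which is why the colour information is long gone by the time face recognition runs:

```cpp
#include <cstdint>

// Standard luma conversion (ITU-R BT.601 weights): collapses the three
// colour channels into a single brightness value, so colour data is
// discarded once the image pipeline goes grey-scale.
uint8_t toGrey(uint8_t r, uint8_t g, uint8_t b) {
    return static_cast<uint8_t>(0.299 * r + 0.587 * g + 0.114 * b + 0.5);
}
```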

I spoke about colour detection here when I used it for my tattooing gadget ideas, but this time it's a little different: it needs to respond to an external source. Anyway, a first attempt at the colour processing would be:

// Include I2C Library
#include <Wire.h>

// Include Sparkfun ISL29125 Library
#include "SFE_ISL29125.h"

// Declare sensor object
SFE_ISL29125 RGB_sensor;

// Calibration values

// Replace the *high values with readings from your own sensor;
// leaving high == low makes map() divide by zero below
unsigned int redlow = 0;
unsigned int redhigh = 65535;
unsigned int greenlow = 0;
unsigned int greenhigh = 65535;
unsigned int bluelow = 0;
unsigned int bluehigh = 65535;

// Declare RGB Values
int redVal = 0;
int greenVal = 0;
int blueVal = 0;

void setup()
{
  // Initialize serial communication
  Serial.begin(115200);

  // Initialize the ISL29125 with simple configuration so it starts sampling
  if (RGB_sensor.init())
  {
    Serial.println("Sensor Initialization Successful\n\r");
  }
}

void loop()
{
  // Read sensor values (16 bit integers)
  unsigned int red = RGB_sensor.readRed();
  unsigned int green = RGB_sensor.readGreen();
  unsigned int blue = RGB_sensor.readBlue();

  // Convert to RGB values
  int redV = map(red, redlow, redhigh, 0, 255);
  int greenV = map(green, greenlow, greenhigh, 0, 255);
  int blueV = map(blue, bluelow, bluehigh, 0, 255);

  // Constrain to values of 0-255
  redVal = constrain(redV, 0, 255);
  greenVal = constrain(greenV, 0, 255);
  blueVal = constrain(blueV, 0, 255);

  if (redVal > 100) {
    Serial.println("Red candy detected");
    // do whatever, learn this is veery good
  } else if (greenVal > 100) {
    Serial.println("Green candy detected");
    // do whatever, learn this is nice!
  } else if (blueVal > 100) {
    Serial.println("Blue candy detected");
    // do whatever, learn this is ok
  } else {
    // no candy :(
  }

  // Delay 2 seconds for sensor to stabilize
  delay(2000);
}
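If you're not familiar with the Arduino helpers used above: `map()` linearly rescales a raw reading from the calibration range into 0–255, and `constrain()` clips any stray values. Plain C++ equivalents (the names `rescale` and `clip` are mine) behave like this:

```cpp
#include <algorithm>

// Equivalent of Arduino's map(): linear rescaling with integer maths,
// so results are truncated towards zero.
long rescale(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Equivalent of Arduino's constrain(): clip a value into [lo, hi].
int clip(int v, int lo, int hi) {
    return std::max(lo, std::min(v, hi));
}
```

Note that `map()` does not clip on its own, which is why the sketch runs the result through `constrain()` afterwards.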

Anyway, this way I can process the "treat" separately from other things, such as object or face recognition. If I want my kitty to recognize my face, it will need to take the image, convert it to grey scale, calculate the vectors of each part to find where the darker and lighter areas are, turn that into data, compare it with the data about me it already learned, and say "hey, that's Paula!" And I can, at any time, bring a physical treat close to the sensor without interrupting that process, which is already running as fast as it can. The same goes for object recognition: processing colours for the reward response would just slow things down. By the way, I also bought touch sensors, so I can tap it on the head and it will feel it as a treat as well. It looks like a lot to process, but reading these external sensors with an Arduino Nano/Metro/Mega is actually quite fast and doesn't require much processing power. After all, the most difficult part is learning and storing the information in an efficient way, which is what I'm working on right now. For that I'm trying PyTorch (I'm kind of avoiding TensorFlow here?), but if you can guide me, I'd be glad to hear options.

Next time I'll tell you guys about the paws and muscles. I tried to design a movement that emulates a cat's. Not a very fast one, though! I also installed voice recognition but... I want it to randomly ignore me... After all, it is a cat.
