
Interactive Machine Learning Experiments

Oleksii Trekhleb · 9 min read


Hey readers!

I've open-sourced a new Interactive Machine Learning Experiments project on GitHub. Each experiment consists of a Jupyter/Colab notebook (to see how the model was trained) and a demo page (to see the model in action right in your browser).

Although the models may be a little dumb (remember, these are just experiments, not production-ready code), they will do their best to:

  • Recognize digits or sketches you draw in your browser
  • Detect and recognize the objects you show to your camera
  • Classify your uploaded image
  • Write a Shakespeare poem with you
  • Play Rock-Paper-Scissors with you
  • etc.

I trained the models in Python using TensorFlow 2 with Keras support, and then consumed them for in-browser demos using React and the JavaScript version of TensorFlow.

Model performance

First, let's set our expectations. The repository contains machine learning experiments and not production-ready, reusable, optimised and fine-tuned code and models. It is rather a sandbox or playground for learning and trying different machine learning approaches, algorithms and data sets. The models might not perform well, and there is room for overfitting/underfitting.

Therefore, sometimes you might see things like this:

Dumb model

But be patient, sometimes the model might get smarter and give you this:

Smart model


I'm a software engineer, and for the last several years I've been doing mostly frontend and backend programming. In my spare time, as a hobby, I decided to dig into machine learning topics to make them less like magic and more like math to myself.

  1. Since Python might be a good choice to start experimenting with machine learning, I decided to learn its basic syntax first. As a result, the Playground and Cheatsheet for Learning Python project came out. This was just to practice Python and, at the same time, to have a cheatsheet of basic syntax at hand whenever I need it (for things like dict_via_comprehension = {x: x**2 for x in (2, 4, 6)} etc.).

  2. After learning a bit of Python, I wanted to dig into the basic math behind machine learning. So after passing the awesome Machine Learning course by Andrew Ng on Coursera, the Homemade Machine Learning project came out. This time it was about creating a cheatsheet for basic machine learning math algorithms like linear regression, logistic regression, k-means, multilayer perceptron, etc.

  3. The next attempt to play around with basic machine learning math was NanoNeuron: 7 simple JavaScript functions that were supposed to give you a feeling of how machines can actually "learn".

  4. After finishing yet another awesome course, the Deep Learning Specialization by Andrew Ng on Coursera, I decided to practice a bit more with multilayer perceptrons, convolutional and recurrent neural networks (CNNs and RNNs). This time, instead of implementing everything from scratch, I decided to start using a machine learning framework, and I ended up with TensorFlow 2 and Keras. I also didn't want to focus too much on math (letting the framework do it for me); instead, I wanted to come up with something practical and applicable, something I could play with right in my browser. As a result, the new Interactive Machine Learning Experiments project came out, and I want to describe it a bit more here.
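
The core idea behind NanoNeuron (step 3 above) can be sketched in a few lines. NanoNeuron itself is written in JavaScript and, as far as I recall, learns the Celsius-to-Fahrenheit conversion; the Python sketch below illustrates the same idea (a single "neuron" `w*x + b` fitted by gradient descent) and is my own illustration, not the project's actual code:

```python
# A single "neuron" w*x + b learns the Celsius-to-Fahrenheit rule
# f = 1.8*c + 32 by plain gradient descent on the mean squared error.
# (Illustrative sketch; NanoNeuron's real code is in JavaScript.)
def train(epochs=20000, lr=0.01):
    xs = list(range(10))              # Celsius inputs
    ys = [1.8 * x + 32 for x in xs]   # Fahrenheit targets
    w, b = 0.0, 0.0                   # the neuron's two parameters
    n = len(xs)
    for _ in range(epochs):
        # Gradients of the mean squared error with respect to w and b.
        dw = sum((w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum((w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

w, b = train()   # w approaches 1.8, b approaches 32
```

Even this toy loop shows the whole training story in miniature: compute predictions, measure the error, nudge the parameters against the gradient, repeat.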


Model training

  • I used Keras inside TensorFlow 2 for modelling and training. Since I had zero experience with machine learning frameworks, I needed to start with something. One of the selling points in favor of TensorFlow was that it has both Python and JavaScript flavors of the library with a similar API. So eventually I used the Python version for training and the JavaScript version for demos.

  • I trained the TensorFlow models in Python inside Jupyter notebooks locally, and sometimes used Colab to make the training faster on a GPU.

  • Most of the models were trained on a good old MacBook Pro's CPU (2.9 GHz Dual-Core Intel Core i5).

  • Of course, there is no way to run away from NumPy for matrix/tensor operations.
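
To show what those NumPy operations amount to, here is the bread-and-butter one: a batch of input vectors multiplied by a weight matrix, plus a broadcast bias. This is the core computation inside every dense layer (the shapes and values below are arbitrary illustrations):

```python
import numpy as np

# A batch of 2 input vectors (shape 2x3) times a weight matrix (3x4):
# the core operation inside every dense layer.
x = np.array([[1., 2., 3.],
              [4., 5., 6.]])
w = np.ones((3, 4)) * 0.5   # toy weights
b = np.zeros(4)             # toy biases

out = x @ w + b             # shape (2, 4); b is broadcast to each row
```

Broadcasting is what makes the `+ b` work here: the bias vector of shape `(4,)` is added to every row of the `(2, 4)` product without any explicit loop.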

Model demos

  • I used TensorFlow.js to make predictions with the previously trained models.

  • To convert Keras HDF5 models to the TensorFlow.js Layers format, I used the TensorFlow.js converter. Transferring the whole model (megabytes of data) to the browser might be inefficient compared to making predictions through HTTP requests, but again, remember that these are just experiments and not production-ready code and architecture. I wanted to avoid having a dedicated back-end service to keep the architecture simpler.

  • The demo application was created with React using the create-react-app starter, with the default Flow flavour for type checking.

  • For styling, I used Material UI. It was, as they say, a way to kill two birds with one stone and try out a new styling framework (sorry, Bootstrap).
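
For reference, the Keras-to-TensorFlow.js conversion mentioned above is done from the command line with the `tensorflowjs_converter` tool (installed via `pip install tensorflowjs`). A typical invocation looks like this; the paths here are illustrative, not the repository's actual ones:

```bash
# Convert a Keras HDF5 model into the TensorFlow.js Layers format.
# (Paths are illustrative; requires `pip install tensorflowjs`.)
tensorflowjs_converter --input_format keras \
    ./models/digits_recognition_mlp.h5 \
    ./demo/public/models/digits_recognition_mlp
```

The output directory then contains a `model.json` topology file plus binary weight shards that TensorFlow.js can load directly in the browser.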


So, in short, you may access the demo pages and Jupyter notebooks by these links:

Experiments with Multilayer Perceptron (MLP)

A multilayer perceptron (MLP) is a class of feedforward artificial neural network (ANN), composed of multiple layers of perceptrons. Multilayer perceptrons are sometimes referred to as "vanilla" neural networks, especially when they have a single hidden layer.
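
To make the definition concrete, here is a forward pass through a tiny single-hidden-layer MLP in NumPy. The layer sizes (784 inputs, as in a flattened 28x28 digit image, 32 hidden units, 10 classes) are illustrative and not the exact architecture of the experiments:

```python
import numpy as np

def relu(z):
    # Elementwise max(0, z): the hidden-layer nonlinearity.
    return np.maximum(0.0, z)

def softmax(z):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(z - z.max())
    return e / e.sum()

# 784 inputs (a flattened 28x28 image) -> 32 hidden units -> 10 classes.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.05, (784, 32)), np.zeros(32)
W2, b2 = rng.normal(0, 0.05, (32, 10)), np.zeros(10)

x = rng.random(784)                 # a fake flattened "image"
hidden = relu(x @ W1 + b1)          # hidden layer activations
probs = softmax(hidden @ W2 + b2)   # class probabilities, sum to 1
```

The trained models do exactly this at prediction time, just with learned weights instead of random ones (and Keras/TensorFlow.js doing the matrix math).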

Handwritten Digits Recognition

You draw a digit, and the model tries to recognize it.

Handwritten Digits Recognition

Handwritten Sketch Recognition

You draw a sketch, and the model tries to recognize it.

Handwritten Sketch Recognition

Experiments with Convolutional Neural Networks (CNN)

A convolutional neural network (CNN, or ConvNet) is a class of deep neural networks, most commonly applied to analyzing visual imagery (photos, videos). They are used for detecting and classifying objects in photos and videos, style transfer, face recognition, pose estimation, etc.
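
What makes a CNN "convolutional" is that it slides small kernels over the image instead of connecting every pixel to every unit. A minimal sketch of that sliding-window operation (real frameworks use far faster implementations, and this toy kernel is my own illustration):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D sliding-window convolution (cross-correlation),
    the core operation of a CNN layer, written out naively."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

# A vertical-edge detector on a tiny image: left half dark, right half bright.
image = np.array([[0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.]])
kernel = np.array([[-1., 1.],
                   [-1., 1.]])
edges = conv2d(image, kernel)   # strongest response at the dark-to-bright edge
```

A trained CNN learns hundreds of such kernels automatically, with early layers picking up edges like this one and deeper layers combining them into shapes and objects.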

Handwritten Digits Recognition (CNN)

You draw a digit, and the model tries to recognize it. This experiment is similar to the one from the MLP section, but it uses a CNN under the hood.

Handwritten Digits Recognition (CNN)

Handwritten Sketch Recognition (CNN)

You draw a sketch, and the model tries to recognize it. This experiment is similar to the one from the MLP section, but it uses a CNN under the hood.

Handwritten Sketch Recognition (CNN)

Rock Paper Scissors (CNN)

You play a Rock-Paper-Scissors game with the model. This experiment uses a CNN trained from scratch.

Rock Paper Scissors (CNN)

Rock Paper Scissors (MobileNetV2)

You play a Rock-Paper-Scissors game with the model. This model uses transfer learning and is based on MobileNetV2.

Rock Paper Scissors (MobileNetV2)

Objects Detection (MobileNetV2)

You show the model your environment through your camera, and it will try to detect and recognize the objects. This model uses transfer learning and is based on MobileNetV2.

Objects Detection (MobileNetV2)

Image Classification (MobileNetV2)

You upload a picture, and the model tries to classify it depending on what it "sees" in the picture. This model uses transfer learning and is based on MobileNetV2.

Image Classification (MobileNetV2)

Experiments with Recurrent Neural Networks (RNN)

A recurrent neural network (RNN) is a class of deep neural networks, most commonly applied to sequence-based data like speech, voice, text or music. They are used for machine translation, speech recognition, voice synthesis, etc.
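
The defining trick of an RNN is a hidden state that is carried from one element of the sequence to the next, with the same weights reused at every step. A minimal sketch of one vanilla RNN cell (sizes and weights are arbitrary illustrations):

```python
import numpy as np

def rnn_step(x_t, h_prev, Wx, Wh, b):
    # One step of a vanilla RNN cell: the new hidden state mixes the
    # current input with the previous hidden state.
    return np.tanh(x_t @ Wx + h_prev @ Wh + b)

rng = np.random.default_rng(42)
Wx = rng.normal(0, 0.1, (3, 5))   # input -> hidden weights
Wh = rng.normal(0, 0.1, (5, 5))   # hidden -> hidden (the "recurrent" part)
b = np.zeros(5)

h = np.zeros(5)                   # initial hidden state
sequence = rng.random((4, 3))     # 4 time steps, 3 features each
for x_t in sequence:              # same weights reused at every step
    h = rnn_step(x_t, h, Wx, Wh, b)
# h now summarizes the whole sequence seen so far.
```

Practical models (including the experiments below) use fancier cells like LSTM or GRU, but the carry-the-state-forward loop is the same.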

Numbers Summation

You type a summation expression (e.g. 17+38), and the model predicts the result (i.e. 55). The interesting part here is that the model treats the input as a sequence: it has learned that when you type the sequence 1 → 17 → 17+ → 17+3 → 17+38, it "translates" it to another sequence, 55. You may think of it as translating the Spanish sequence Hola to the English Hello.
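
Before such a model can "translate" anything, the expression has to become a sequence of token ids. A sketch of character-level encoding for this task (the vocabulary and encoding here are my own illustration, not necessarily what the notebook uses):

```python
# How "17+38" becomes a sequence of token ids a sequence model can consume.
# (The vocabulary here is illustrative, not necessarily the notebook's.)
vocab = sorted('0123456789+')            # 11 characters: '+' and the digits
char_to_id = {ch: i for i, ch in enumerate(vocab)}
id_to_char = {i: ch for ch, i in char_to_id.items()}

def encode(expr):
    # Map each character to its integer id.
    return [char_to_id[ch] for ch in expr]

def decode(ids):
    # Inverse mapping: ids back to the original string.
    return ''.join(id_to_char[i] for i in ids)

encoded = encode('17+38')
```

The model then maps one id sequence (the expression) to another id sequence (the digits of the answer), which is exactly the sequence-to-sequence setup used for machine translation.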

Numbers Summation

Shakespeare Text Generation

You start typing a poem like Shakespeare, and the model will continue it like Shakespeare. At least, it will try to.

Shakespeare Text Generation

Wikipedia Text Generation

You start typing a Wiki article, and the model tries to continue it.

Wikipedia Text Generation

Future plans

As I've mentioned above, the main purpose of the repository is to be a playground for learning rather than a home for production-ready models. Therefore, the main plan is to continue learning and experimenting with deep-learning challenges and approaches. The next interesting challenges to play with might be:

  • Emotions detection
  • Style transfer
  • Language translation
  • Generating images (e.g. handwritten digits)
  • etc.

Another interesting opportunity would be to tune the existing models to make them more performant. I believe it might give a better understanding of how to overcome overfitting and underfitting, and what to do when a model is simply stuck at a 60% accuracy level for both training and validation sets and doesn't want to improve anymore.

Anyway, I hope you find some useful insights into model training in the repository, or at least have some fun playing around with the demos!

Happy learning!

Posted on May 5 by:


Oleksii Trekhleb


Software engineer @ Uber. Author of the 70k-star javascript-algorithms repository on GitHub. Currently in Amsterdam.



Great way to present your experiments and share the knowledge!

My team just completed an open-sourced Content Moderation Service built with Node.js, TensorFlowJS, and ReactJS that we have been working on over the past weeks. We have now released the first part of a series of three tutorials - How to create an NSFW Image Classification REST API - and we would love to hear your feedback. Any comments & suggestions are more than welcome. Thanks in advance!

Again, big kudos for the awesome work you've done!


This is really cool! Especially as a recent undergraduate student in Data Science, this is truly inspiring. I think I might take those Coursera courses too! Keep up the awesome work, I'd definitely love to see future updates on this.


Thanks for the kind words, Jake! Good luck with Coursera!


Greetings Oleksii,
Wow, quite an amazing piece of work here! I can see that this presentation would have taken you some time to piece together. The Python notebooks look very 'Pro.' Are you using ML now? Or is this a field you would like to enter?

Funny enough, I was considering putting some of my own ML work on dev.to. BUT, I was not sure IF this was/is the right forum. I have been using R & ML in grad. school for the past two years. But I have to wonder how much interest there is in others learning R/RStudio? What do you think?


Hi Matt!

Thanks for the kind words! Yeah, going live with these 11 experiments actually took me several (2-3) months (of course not full-time work, but rather 1-1.5 morning hours, 3-4 days a week) :D For me, ML is just a hobby at the moment, something I learn and try for fun. Therefore the performance of the models is far from the desired one. But, yeah, eventually in the future I guess it might be possible to work with ML more closely and professionally.

About adding your article here on dev.to, I'd say that's a good idea. At least there is a #machinelearning tag available here on the platform, so it should be pretty valid to have such articles here :)


Thank you for your roadmap. I couldn't find a good roadmap for starting ML. What's your opinion about the Hands-On Machine Learning book: is it good for starting or not? And is Rust good for ML or not?


I'm new to machine learning, to be honest, so I don't have too much experience to share, but I would suggest starting with Andrew Ng's courses on Coursera (e.g. coursera.org/learn/machine-learning). He explains things really well and makes them easy to understand. Regarding the language, I would prefer Python because of all those libraries (Keras, TensorFlow, NumPy, Matplotlib, etc.) that make life easier. But that's my personal choice.


Thanks again. Okay, I'll add this to my to-do list.


Wow, that was soooooo cool and amazing.
Love the way you write.
Keep it up.