Zuri Hunter

How I Built My First Twitch Bot with Natural Language Processing

Introduction

Over the past couple of months I have been experimenting with different areas of Machine Learning and Artificial Intelligence. Recently I tried building a Convolutional Neural Network that could identify hair types from images. To learn more about that experiment you can check out my video here. Despite not having a successful outcome exploring Convolutional Neural Networks, I was still inspired to learn more about AI/ML. This led me to a subset of the space called Natural Language Understanding, specifically "Question and Answer." For my new project I decided to build a Twitch bot that answers questions about three Black women who are trailblazers in the AI/ML industry: Timnit Gebru, Rediet Abebe, and Joy Buolamwini.

What is Natural Language Understanding?

In the text world of Machine Learning and Artificial Intelligence there are two subtopics: Natural Language Processing and Natural Language Understanding. Natural Language Processing is when I teach the machine to extract, categorize, and break down sentence structures. For example, take the sentence "The quick brown fox jumps over the lazy dog." Using a method called "named entity recognition," the machine would identify "fox" and "dog" as "things," and then using "parts-of-speech tagging" it would parse out "brown," "quick," and "lazy" as adjectives and "jumps" as a verb. On their own, these two NLP methods help the machine identify components of natural language, but they do not give the machine the ability to understand the sentence.
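To see these two methods in action, here is a minimal sketch in Node.js using the compromise library. This is my own choice for illustration and is not part of the bot; any NLP toolkit would work.

// Minimal NLP sketch using the "compromise" library (illustration only).
const nlp = require('compromise')

const doc = nlp('The quick brown fox jumps over the lazy dog.')

// Extract the "things" (noun phrases), similar to named entity recognition.
console.log(doc.nouns().out('array'))      // e.g. [ 'the quick brown fox', 'the lazy dog' ]

// Parts-of-speech tagging: pull out the adjectives and verbs.
console.log(doc.adjectives().out('array')) // e.g. [ 'quick', 'brown', 'lazy' ]
console.log(doc.verbs().out('array'))      // e.g. [ 'jumps' ]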

This is where the subtopic Natural Language Understanding comes in. NLU trains the machine to have reading comprehension over the natural language that it processes. A method in NLU called "Question and Answer" teaches the machine how to respond to questions based on the data it receives. In the context of my example sentence, if I asked the machine "What color is the fox?", it would answer back "The fox is brown." Q&A models can answer "definition" questions, "how and why" questions, and semantically constrained questions. A Q&A model can also be trained on everything, which is called "open domain," or on a particular subject, which is called "closed domain." The model I am building is closed domain, since it will only answer questions about the three women in the AI/ML industry, and it will handle "definition" questions and "how and why" questions.

Building the Bot

To build my Twitch bot there are three major components I have to piece together: connecting to my own personal Twitch channel, setting up a cache database for the data, and building the "Question and Answer" model. Twitch has a very flexible API that allows me to read and post messages to my channel. The library I chose to interface with the API is tmi.js, a Twitch Messaging Interface library. My data is several text files, each less than 100KB. Each time my model runs, it retrieves the data and conducts its analysis. To speed up this processing I am going to create an in-memory cache database. A cache database speeds up processing by decreasing the time it takes to retrieve data. The library I chose is memory-cache, a simple in-memory cache for Node.js.
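Here is a minimal sketch of how memory-cache is used; the real keys and values come from my data files later on.

// Minimal sketch of the memory-cache API.
const cache = require('memory-cache')

const memCache = new cache.Cache()

// Store a value under a key; the optional third argument is a
// time-to-live in milliseconds.
memCache.put('greeting', 'hello chat', 60000)

// Retrieve it later; get() returns null if the key is missing or expired.
console.log(memCache.get('greeting')) // 'hello chat'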

Finally, I will be using the popular open-source machine-learning library TensorFlow. TensorFlow was originally written for Python, but it has grown to other programming languages. TensorFlow.js brings it to JavaScript, including Node.js, and ships with pre-trained models. For Node.js there are two backends, CPU and GPU, meaning the model will run on either the CPU or the GPU of my machine. Running models on a GPU is generally faster. For my model I am going to use the CPU. The libraries I will use to support my model are @tensorflow-models/qna, @tensorflow/tfjs, and @tensorflow/tfjs-node.
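The choice between CPU and GPU comes down to which backend package gets required. A minimal sketch, assuming the standard npm package names:

// CPU backend: works on any machine that runs Node.js.
require('@tensorflow/tfjs-node')

// GPU backend (alternative): requires a CUDA-enabled NVIDIA GPU.
// require('@tensorflow/tfjs-node-gpu')

const tf = require('@tensorflow/tfjs')

tf.ready().then(() => {
    console.log('Backend in use:', tf.getBackend()) // 'tensorflow' for the native backend
})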

Set Up Twitch

The first step is to initialize my Twitch client. tmi.js requires me to provide my username, password, and channel in order to view and respond to messages in my Twitch channel.

// src/index.js
require('dotenv').config()
const tmi = require('tmi.js')

const client = new tmi.Client({
  connection: {
    secure: true,
    reconnect: true
  },
  identity: {
    username: process.env.TWITCH_USERNAME,
    password: process.env.TWITCH_AUTH
  },
  channels: [ process.env.TWITCH_CHANNEL ]
});

Since those three items are sensitive information, I place them in a .env file. dotenv is a library that loads environment variables from a .env file into the process.env object. It keeps credentials out of the source code and is useful for configuring servers for different environments. Another configuration I can set on my Twitch client is the connection type. To follow best practices I am going to set my connection to be secure (over SSL). I am also going to set my Twitch client to automatically reconnect if the connection drops.
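For reference, the .env file looks like this, with placeholder values; the OAuth token for TWITCH_AUTH comes from Twitch and is prefixed with "oauth:".

# .env (keep this file out of version control)
TWITCH_USERNAME=my_bot_account
TWITCH_AUTH=oauth:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
TWITCH_CHANNEL=my_channel_name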

Build Out Model

After initializing my Twitch client, the next thing to do is build out my model. Earlier I mentioned that TensorFlow.js ships with pre-trained models. A pre-trained model is a model developed by another party that serves as a foundation to expand on. For example, say I want to build a model that can detect cat breeds in pictures. I have the option to build the model from scratch, which means I have to train the model to know the difference between cats and other objects. That requires millions of images of all types, including cats. If I use a pre-trained model instead, it already knows the difference between cats and other objects. This gives me the opportunity to build models faster. For my use case, I am going to use a pre-trained model for "Question and Answer."

Before I can use the model, I need to feed it my own data. The data is stored in text files. The setback with doing that is that every time the model runs, the server has to read from the file system.

// src/model/nlu.js
const fs = require('fs')
const path = require('path')
const cache = require('memory-cache')

const memCache = new cache.Cache();

// Self-invoking function: read each text file once at startup and
// store its contents in the in-memory cache.
(async function () {
    [
        path.join(__dirname, '..', 'data', 'joy-buolamwini.txt'),
        path.join(__dirname, '..', 'data', 'rediet-abebe.txt'),
        path.join(__dirname, '..', 'data', 'timnit-gebru.txt')
    ].forEach(function (file) {
        fs.readFile(file, 'utf-8', function (err, data) {
            if (err) {
                console.error('Error', err)
            } else {
                // Use the file name without its extension as the cache key,
                // e.g. 'joy-buolamwini'.
                const key = path.basename(file, '.txt')
                memCache.put(key, data)
            }
        });
    });
})()

To bypass this I can store the data in memory and retrieve it from the memory cache. I do this by creating a self-invoking function that reads the text files and places them in the memory cache. I then retrieve the information using each file's name, without its extension, as the key.
 
Now that I have my data in place, I can piece together the model. My model is going to answer questions on three topics. Instead of tailoring the code to each topic, I am going to write one one-size-fits-all function. The function takes the name of the topic and the question being asked.

// src/model/nlu.js
require('@tensorflow/tfjs-node'); // registers the native CPU backend
const tf = require('@tensorflow/tfjs');
const QNA = require('@tensorflow-models/qna');

async function qnaModel (name, question) {
    try {
        await tf.ready();

        // Note: loading the model on every call is slow; it could be
        // loaded once and reused.
        const model = await QNA.load();
        const text = memCache.get(name);
        const answers = await model.findAnswers(question, text);

        if (answers.length < 1) {
            console.log('Can you rephrase the question?');
            return {};
        }
        return answers[0];
    } catch (e) {
        console.log('Model encountered an error', e);
        return {};
    }
}

module.exports = {
   qnaModel
}

I name the function "qnaModel." The first thing it does is initialize the Q&A pre-trained model from TensorFlow. Then, based on the name it was given, it retrieves the associated text from the memory cache. That text, along with the question, is fed into the model's "findAnswers" function. This function takes two parameters, the question and the passage of text to search, and returns an array of objects. Here is an example below:

[{
  text: "August 20, 1908",
  startIndex: 135,
  endIndex: 147,
  score: 0.0941282522248868
}]

The object has four properties: text, startIndex, endIndex, and score. "Text" is the answer the model produced. "StartIndex" and "endIndex" point to where the model found the answer in the text. "Score" measures how confident the model is in its answer; the closer the number is to 1, the more confident the model is. The first entry in the array has the highest score. In the code I placed a conditional statement: if the array is empty, I log a prompt asking the user to rephrase the question; otherwise I return the first element of the array. Finally, I export this function for my Twitch client to use.
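As a quick sanity check, the exported function can be exercised on its own, outside of Twitch. This is a hypothetical usage sketch; it assumes the text files have already been read into the cache (the file reads are asynchronous) and that the data can answer the question.

// Hypothetical test drive of qnaModel from another script in src/.
const { qnaModel } = require('./model/nlu')

qnaModel('joy-buolamwini', 'Where did Joy Buolamwini study?')
    .then(answer => {
        if (answer && answer.text) {
            console.log('Answer:', answer.text, '| score:', answer.score)
        }
    })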

Connect Model to Twitch Bot

The Twitch API has several events I can use to interact with my Twitch channel. I need the channel event that lets me listen for "message" events. From there I can analyze each message and determine whether a user is trying to interact with my bot. I am going to look for messages that contain !joybuolamwini, !redietabebe, or !timnitgebru, accompanied by a question.

// src/index.js
const qnaModel = require('./model/nlu').qnaModel;

client.on('message', (channel, tags, message, self) => {
    // Ignore messages sent by the bot itself.
    if (self) return;

    if (message.includes('!redietabebe')) {
        const cleanQuestion = message.replace('!redietabebe', '');
    }

    if (message.includes('!joybuolamwini')) {
        const cleanQuestion = message.replace('!joybuolamwini', '');
    }

    if (message.includes('!timnitgebru')) {
        const cleanQuestion = message.replace('!timnitgebru', '');
    }
});


Finally, I pull in the function that does the analysis for my questions. In the code snippet I drop all properties except "text," because I only want to display the answer in my Twitch channel.

// src/index.js
const response = async () => {
    try {
        const answer = await qnaModel('rediet-abebe', cleanQuestion);
        if (!answer || !answer.text) {
            return 'I am sorry, we could not find the answer';
        }
        return answer.text;
    } catch (e) {
        return e.toString();
    }
}


After that I send the answer to my Twitch channel. To do this I use the "say" method, which takes two parameters: the channel name and the message.

response().then(data => {
   client.say(channel, data);
});

The following snippet shows everything together in the src/index.js file.

// src/index.js
require('dotenv').config()
const tmi = require('tmi.js')
const qnaModel = require('./model/nlu').qnaModel;

const client = new tmi.Client({
    connection: {
        secure: true,
        reconnect: true
    },
    identity: {
        username: process.env.TWITCH_USERNAME,
        password: process.env.TWITCH_AUTH
    },
    channels: [process.env.TWITCH_CHANNEL]
})

client.connect();

client.on('message', (channel, tags, message, self) => {
    // Ignore the bot's own messages.
    if (self) return;

    if (message.includes('!redietabebe')) {
        const cleanQuestion = message.replace('!redietabebe', '');
        const response = async () => {
            try {
                const answer = await qnaModel('rediet-abebe', cleanQuestion);
                if (!answer || !answer.text) {
                    return 'I am sorry, we could not find the answer'
                }
                return answer.text;
            } catch (e) {
                return e.toString()
            }
        }
        response().then(data => {
            client.say(channel, data);
        });
    }

    if (message.includes('!timnitgebru')) {
        const cleanQuestion = message.replace('!timnitgebru', '');
        const response = async () => {
            try {
                const answer = await qnaModel('timnit-gebru', cleanQuestion);
                if (!answer || !answer.text) {
                    return 'I am sorry, we could not find the answer'
                }
                return answer.text;
            } catch (e) {
                return e.toString()
            }
        }
        response().then(data => {
            client.say(channel, data);
        });
    }

    if (message.includes('!joybuolamwini')) {
        const cleanQuestion = message.replace('!joybuolamwini', '');
        const response = async () => {
            try {
                const answer = await qnaModel('joy-buolamwini', cleanQuestion);
                if (!answer || !answer.text) {
                    return 'I am sorry, we could not find the answer'
                }
                return answer.text;
            } catch (e) {
                return e.toString()
            }
        }
        response().then(data => {
            client.say(channel, data);
        });
    }
})


Now here is a short clip of my Twitch bot working in my Twitch channel.

Conclusion

The code above is my first iteration of combining NLU and the Twitch API. There are several improvements I can make, such as caching answers to common questions, which would cut down on the server's and the model's response time. Nevertheless, I wanted to share my journey of experimenting with Natural Language Understanding. I hope this tutorial inspires you to dabble with TensorFlow using JavaScript. You can view the full code here. If you enjoyed this tutorial, check out my technology streams at www.twitch.tv/thestrugglingblack.
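For instance, caching answers could be a thin wrapper around qnaModel, reusing memory-cache with a key built from the topic and the question. A rough sketch, not part of the current code:

// Rough sketch: cache model answers so repeat questions skip the model.
const cache = require('memory-cache')
const answerCache = new cache.Cache()

async function cachedQnaModel (name, question) {
    const key = `${name}:${question.trim().toLowerCase()}`

    // Serve a previously computed answer if we have one.
    const cached = answerCache.get(key)
    if (cached) return cached

    const answer = await qnaModel(name, question)
    if (answer && answer.text) {
        // Keep answers for an hour (3600000 ms).
        answerCache.put(key, answer, 3600000)
    }
    return answer
}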
