Ramakrushna Mohapatra

Chatbot: Building From Scratch

Please read from start to end. If you follow what I have laid out here, you will be able to build a chatbot without anyone's help. Believe me, it's going to be a fun ride.
Be patient!

What is a Chatbot?

Perhaps you have heard this term and wondered: what is this chatbot, what is it used for, do I really need one, how can I create one?
A chatbot is a computer program designed to simulate human conversation, offering automated responses and assistance. It uses natural language processing to understand and generate text-based interactions. Chatbots are used for various tasks, from customer support to information retrieval, providing instant communication and problem-solving in diverse applications.


Preliminaries:

  • Python
  • Flask Framework
  • ML & its Algorithms
  • HTML & CSS Basics
  • JSON Basics

Where is it used?

Chatbots find versatile applications across industries, making it impractical to detail all potential uses. However, they are commonly employed in: customer service desks, facilitating transactions, offering customer support, managing bookings, and ensuring round-the-clock real-time interactions with clients. These applications showcase the wide-reaching utility of chatbots in enhancing efficiency and user experiences.

Is it necessary to create one?

Certainly, determining the viability of such a venture involves a personalized assessment of costs and benefits. In today's technological landscape, numerous enterprises are progressively adopting chatbots for essential everyday operations. Prominent instances include Google Assistant, Apple Siri, Samsung Bixby, and Amazon Alexa.
Within this article, we'll delve into crafting a chatbot using Python, leveraging TensorFlow for model training, and employing Natural Language Processing (nltk) to enhance the machine's comprehension of user inquiries.

Types of chatbots:

  1. Rule-Based approach - Here the bot is trained on a set of predefined rules. It can process simple queries from these rules but may fail on complex ones (a toy sketch of this idea follows this list).
  2. Self-Learning approach - Here the bot uses machine learning algorithms and techniques to chat. It is further subcategorized into two:
     • Retrieval-Based models - In this model, the bot retrieves the best response from a list depending on the user input.
     • Generative models - This model comes up with an answer rather than searching a given list. These are the intelligent bots.
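
To make the rule-based idea concrete, here is a toy sketch in Python (the rules and replies below are made up for illustration; a real rule-based bot would have far more patterns):

# a toy rule-based bot: replies come from a fixed lookup table,
# so anything outside the rules falls back to a default answer
rules = {
    "hi": "Hello! How can I help you?",
    "bye": "Goodbye! Have a nice day.",
}

def rule_based_reply(message):
    # normalize the input, then look for an exact rule match
    return rules.get(message.lower().strip(), "Sorry, I don't understand that yet.")

print(rule_based_reply("Hi"))    # Hello! How can I help you?
print(rule_based_reply("what?")) # Sorry, I don't understand that yet.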

Terminology You Will Interact With

  • Natural Language Toolkit (NLTK): A powerful Python library widely used for working with human language data, particularly in the field of natural language processing (NLP). NLTK provides tools, resources, and algorithms for various NLP tasks, such as tokenization, stemming, part-of-speech tagging, parsing, and more.
  • Lemmatization: This is the process of grouping together the different inflected forms of a word so they can be analyzed as a single item and is a variation of stemming. For example "feet" and "foot" are both recognized as "foot".
  • Stemming: This is the process of reducing inflected words to their word stem, base, or root form. For example, if we were to stem the words "eat", "eating", and "eats", the result would be the single word "eat".
  • Tokenization: Tokenization in NLP refers to the process of breaking down a text or a sequence of characters into individual units, typically words or sub-words, known as tokens. These tokens are the basic building blocks for further analysis in natural language processing tasks.
  • Bag of Words: This is an NLP technique of text modeling for representing text data for machine learning algorithms. It is a way of extracting features from the text for use in machine learning algorithms.
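
To see these terms in action before we use them for real, here is a small self-contained sketch with NLTK (the sentence is an arbitrary example; the resource downloads match the ones we run later in train.py):

import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("punkt")
nltk.download("wordnet")

sentence = "The children are eating their meals"

# Tokenization: split the sentence into individual word tokens
tokens = nltk.word_tokenize(sentence)
print(tokens)  # ['The', 'children', 'are', 'eating', 'their', 'meals']

# Stemming: chop tokens down to a crude root form
stemmer = PorterStemmer()
print([stemmer.stem(t) for t in tokens])  # e.g. 'eating' -> 'eat', 'meals' -> 'meal'

# Lemmatization: map tokens to dictionary base forms
lemmatizer = WordNetLemmatizer()
lemmas = [lemmatizer.lemmatize(t.lower()) for t in tokens]
print(lemmas)  # e.g. 'children' -> 'child'

# Bag of words: mark which vocabulary entries appear in the sentence
vocabulary = ["child", "eat", "meal", "pizza"]
bag = [1 if w in lemmas else 0 for w in vocabulary]
print(bag)  # [1, 0, 1, 0]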

Enough of the basic theory now, let's jump into the work.

Let's Jump To Programming

Files you need are available in my GitHub repo. To clone or download, go to this link:
AI Chatbot: GitHub Link
First, it's important to confirm the availability of the necessary libraries and modules. To do so, use the commands below to install TensorFlow, nltk, and flask.

$pip install tensorflow
$pip install nltk
$pip install flask
$pip install tensorflow-gpu   # optional; recent TensorFlow releases bundle GPU support in the main package

We will have an intents.json file containing the tags, patterns, responses, and context. It's a simple file where all the questions and answers used to train the model live. You can see the file here: intents.json.
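
For reference, each entry in intents.json follows a structure like this (the tags, patterns, and responses below are made-up placeholders; see the linked file for the real content):

{
  "intents": [
    {
      "tag": "greeting",
      "patterns": ["Hi", "Hello", "Is anyone there?"],
      "responses": ["Hello! How can I help you?", "Hi there!"],
      "context": [""]
    },
    {
      "tag": "goodbye",
      "patterns": ["Bye", "See you later"],
      "responses": ["Goodbye!", "Talk to you soon."],
      "context": [""]
    }
  ]
}
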
First, we will train our model by creating a new .py file named train.py.

# Import the necessary libraries
import random
import json
import pickle
import numpy as np
import nltk
from nltk.stem import WordNetLemmatizer
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import SGD
Let's initialize a lemmatizer using the WordNetLemmatizer class. We then download some linguistic resources with NLTK: the Open Multilingual Wordnet (omw-1.4), plus the punkt and wordnet datasets used for tokenization and lemmatization.
lemmatizer = WordNetLemmatizer()
nltk.download('omw-1.4')
nltk.download("punkt")
nltk.download("wordnet")

We now need to initialize some files and load our training data. Note that we are going to ignore "?" and "!". If you have other symbols or letters that you want the model to ignore, you can add them to the ignore_words list.

# initialize the files
words = []
classes = []
documents = []
ignore_words = ["?", "!"]
data_file = open("intents.json").read()
intents = json.loads(data_file)

NOTE! If you get an error while running due to the path in data_file when loading the intents.json file, try an absolute path like the one below. Remember, X, Y, Z are placeholder letters; use the exact path where your intents.json file is stored.

F:\\X\\Y\\Z\\intents.json

Then we tokenize our words. It's like dissecting a sentence into its fundamental components. For instance, the sentence "I love pizza" would be tokenized into three tokens: "I," "love," and "pizza."

# words
for intent in intents["intents"]:
    for pattern in intent["patterns"]:

        # take each word and tokenize it
        w = nltk.word_tokenize(pattern)
        words.extend(w)
        # adding documents
        documents.append((w, intent["tag"]))

        # adding classes to our class list
        if intent["tag"] not in classes:
            classes.append(intent["tag"])

Next, we shall lemmatize the words and dump them into a pickle file.
After tokenization, the words are further processed through lemmatization, which reduces words to their base or root form: for example, "running" becomes "run" and "better" becomes "good". The lemmatized vocabulary and the class list are then saved into pickle files.
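
One caveat worth knowing about WordNetLemmatizer (an observation about the library, not something train.py does): without a part-of-speech hint it treats every word as a noun, so verb and adjective forms like the examples above only reduce when you pass pos explicitly:

from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize("running"))           # 'running' (treated as a noun)
print(lemmatizer.lemmatize("running", pos="v"))  # 'run'
print(lemmatizer.lemmatize("better", pos="a"))   # 'good'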

# lemmatizer
words = [lemmatizer.lemmatize(w.lower()) for w in words if w not in ignore_words]
words = sorted(list(set(words)))

classes = sorted(list(set(classes)))

print(len(documents), "documents")

print(len(classes), "classes", classes)

print(len(words), "unique lemmatized words", words)

pickle.dump(words, open("words.pkl", "wb"))
pickle.dump(classes, open("classes.pkl", "wb"))

So far we have prepared the training data for the chatbot: we tokenized each pattern and collected the vocabulary and classes.

Now we lemmatize each pattern, create a "bag of words" representation for it, and generate a one-hot output label for its class. The data is shuffled and organized into NumPy arrays. This structured data is what the model trains on to associate patterns with their respective intents.
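
To visualize what one training row looks like, suppose (hypothetically) that the vocabulary and class list were tiny:

# hypothetical tiny vocabulary and class list, just to show the shapes
words = ["good", "hello", "how", "you"]
classes = ["goodbye", "greeting"]

# a pattern whose lemmatized tokens are ['hello', 'how', 'are', 'you']
pattern_words = ["hello", "how", "are", "you"]
bag = [1 if w in pattern_words else 0 for w in words]  # -> [0, 1, 1, 1]

# one-hot output row for the 'greeting' tag
output_row = [0] * len(classes)
output_row[classes.index("greeting")] = 1              # -> [0, 1]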

training = []
output_empty = [0] * len(classes)
for doc in documents:
    # initializing bag of words
    bag = []
    # list of tokenized words for the pattern
    pattern_words = doc[0]
    # lemmatize each word - create base word, in attempt to represent related words
    pattern_words = [lemmatizer.lemmatize(word.lower()) for word in pattern_words]
    # create our bag of words array with 1, if word match found in current pattern
    for w in words:
        bag.append(1 if w in pattern_words else 0)

    # output is a '0' for each tag and '1' for current tag (for each pattern)
    output_row = list(output_empty)
    output_row[classes.index(doc[1])] = 1

    training.append([bag, output_row])

# shuffle our features and turn into np.array
random.shuffle(training)

# Separate bag-of-words representations and output labels
train_x = [item[0] for item in training]
train_y = [item[1] for item in training]

# Convert to NumPy arrays
train_x = np.array(train_x)
train_y = np.array(train_y)
print("Training data created")

We construct a three-layer model. The first layer has 128 neurons, the second has 64, and the output layer has as many neurons as there are intents, predicting the output intent via softmax activation. We choose the Rectified Linear Unit (ReLU) activation for the hidden layers because it keeps training simple and performs well.

# Create model - 3 layers. First layer 128 neurons, second layer 64 neurons and 3rd output layer contains number of neurons
# equal to number of intents to predict output intent with softmax
model = Sequential()
model.add(Dense(128, input_shape=(len(train_x[0]),), activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(64, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(len(train_y[0]), activation="softmax"))
model.summary()

Then we compile the model. Stochastic gradient descent with Nesterov accelerated gradient gives good results for this model. I won't go into details about stochastic gradient descent, as it's a vast topic on its own.

sgd = SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(loss="categorical_crossentropy", optimizer=sgd, metrics=["accuracy"])

Now the model is trained and saved. Training uses the input data (train_x) and output labels (train_y) for 200 epochs with a batch size of 5. The resulting model is saved as "chatbot_model.h5", so the Flask REST API can load it for fast access without retraining. This completes the model creation process. The code for this:

# fitting and saving the model
hist = model.fit(np.array(train_x), np.array(train_y), epochs=200, batch_size=5, verbose=1)
model.save("chatbot_model.h5")
print("model created")

One new file will be added to your folder once train.py runs successfully: chatbot_model.h5.
Remember: before running this program, if you have cloned my repo from GitHub (or downloaded it into your IDE), delete the existing chatbot_model.h5, as it will be regenerated by running train.py.
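
If you want to confirm the saved file is usable before wiring up Flask, a quick check like this works (a minimal sketch, assuming train.py has already been run in the same folder):

from tensorflow.keras.models import load_model

# reload the trained model and confirm the 128/64/softmax architecture prints
model = load_model("chatbot_model.h5")
model.summary()
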
So final train.py should look like this:

# libraries
import random
import json
import pickle
import numpy as np
import nltk
from nltk.stem import WordNetLemmatizer
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import SGD
lemmatizer = WordNetLemmatizer()
nltk.download('omw-1.4')
nltk.download("punkt")
nltk.download("wordnet")

# init file
words = []
classes = []
documents = []
ignore_words = ["?", "!"]
data_file = open("F:\\Data Science Course - IIITB\\NLP\\Chatbot\\AI Chatbot\\An-AI-Chatbot-in-Python-and-Flask-main\\intents.json").read()
intents = json.loads(data_file)

# words
for intent in intents["intents"]:
    for pattern in intent["patterns"]:

        # take each word and tokenize it
        w = nltk.word_tokenize(pattern)
        words.extend(w)
        # adding documents
        documents.append((w, intent["tag"]))

        # adding classes to our class list
        if intent["tag"] not in classes:
            classes.append(intent["tag"])

# lemmatizer
words = [lemmatizer.lemmatize(w.lower()) for w in words if w not in ignore_words]
words = sorted(list(set(words)))

classes = sorted(list(set(classes)))

print(len(documents), "documents")

print(len(classes), "classes", classes)

print(len(words), "unique lemmatized words", words)

pickle.dump(words, open("words.pkl", "wb"))
pickle.dump(classes, open("classes.pkl", "wb"))

# initializing training data
training = []
output_empty = [0] * len(classes)
for doc in documents:
    # initializing bag of words
    bag = []
    # list of tokenized words for the pattern
    pattern_words = doc[0]
    # lemmatize each word - create base word, in attempt to represent related words
    pattern_words = [lemmatizer.lemmatize(word.lower()) for word in pattern_words]
    # create our bag of words array with 1, if word match found in current pattern
    for w in words:
        bag.append(1 if w in pattern_words else 0)

    # output is a '0' for each tag and '1' for current tag (for each pattern)
    output_row = list(output_empty)
    output_row[classes.index(doc[1])] = 1

    training.append([bag, output_row])

# shuffle our features and turn into np.array
random.shuffle(training)

# Separate bag-of-words representations and output labels
train_x = [item[0] for item in training]
train_y = [item[1] for item in training]

# Convert to NumPy arrays
train_x = np.array(train_x)
train_y = np.array(train_y)
print("Training data created")

# Create model - 3 layers. First layer 128 neurons, second layer 64 neurons and 3rd output layer contains number of neurons
# equal to number of intents to predict output intent with softmax
model = Sequential()
model.add(Dense(128, input_shape=(len(train_x[0]),), activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(64, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(len(train_y[0]), activation="softmax"))
model.summary()

sgd = SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(loss="categorical_crossentropy", optimizer=sgd, metrics=["accuracy"])

# fitting and saving the model
hist = model.fit(np.array(train_x), np.array(train_y), epochs=200, batch_size=5, verbose=1)
model.save("chatbot_model.h5")
print("model created")

Run train.py to create the model.
Now that we are done with training, let's create the Flask interface that provides the chat functionality. Create a .py file named app.py.
We load the required libraries and initialize the Flask app.

# import the necessary libraries
import random
import numpy as np
import pickle
import json
from flask import Flask, render_template, request
from flask_ngrok import run_with_ngrok  # optional; requires 'pip install flask-ngrok'
import nltk
from tensorflow.keras.models import load_model
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
# chat initialization
model = load_model("chatbot_model.h5")
# intents = json.loads(open("intents.json").read())
data_file = open("intents.json").read()
words = pickle.load(open("words.pkl", "rb"))
classes = pickle.load(open("classes.pkl", "rb"))

app = Flask(__name__)

We're initializing the chat process. The saved model is loaded, and the intent data is read from the intents.json file. Word and class information is loaded from the pickle files. This setup powers the Flask-based web application, facilitating communication with the chatbot through user interactions.

NOTE! If you get an error while running due to the path in data_file when loading the intents.json file, try an absolute path like the one below. Remember, X, Y, Z are placeholder letters; use the exact path where your intents.json file is stored.

F:\\X\\Y\\Z\\intents.json

@app.route("/")
def home():
    return render_template("index.html")

@app.route("/get", methods=["POST"])
def chatbot_response():
    msg = request.form["msg"]

    # Load and process the intents JSON file
    data_file = open("intents.json").read()
    intents = json.loads(data_file)

    # Rest of your existing code
    if msg.startswith('my name is'):
        name = msg[11:]
        ints = predict_class(msg, model)
        res1 = getResponse(ints, intents)
        res = res1.replace("{n}", name)
    elif msg.startswith('hi my name is'):
        name = msg[14:]
        ints = predict_class(msg, model)
        res1 = getResponse(ints, intents)
        res = res1.replace("{n}", name)
    else:
        ints = predict_class(msg, model)
        res = getResponse(ints, intents)
    return res

The code defines a Flask app with routes for rendering an HTML interface and handling chatbot responses. User messages are processed, and the intents.json is loaded. Predicted classes trigger relevant responses, which can include personalized greetings using extracted names. The app facilitates user interactions and chatbot engagement on a web interface.
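
Once the app is running, you can also sanity-check the /get route from outside the browser. A minimal sketch, assuming the app is serving on Flask's default local address and that the requests library is installed:

import requests

# POST a chat message to the running Flask app and print the bot's reply
reply = requests.post("http://127.0.0.1:5000/get", data={"msg": "hi"})
print(reply.text)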

# chat functionalities
def clean_up_sentence(sentence):
    sentence_words = nltk.word_tokenize(sentence)
    sentence_words = [lemmatizer.lemmatize(word.lower()) for word in sentence_words]
    return sentence_words

# return bag of words array: 0 or 1 for each word in the bag that exists in the sentence
def bow(sentence, words, show_details=True):
    # tokenize the pattern
    sentence_words = clean_up_sentence(sentence)
    # bag of words - matrix of N words, vocabulary matrix
    bag = [0] * len(words)
    for s in sentence_words:
        for i, w in enumerate(words):
            if w == s:
                # assign 1 if current word is in the vocabulary position
                bag[i] = 1
                if show_details:
                    print("found in bag: %s" % w)
    return np.array(bag)

The functions above clean up a sentence and turn it into a bag-of-words array based on the user input; the predict_class function below calls them to classify each message.

def predict_class(sentence, model):
    # filter out predictions below a threshold
    p = bow(sentence, words, show_details=False)
    res = model.predict(np.array([p]))[0]
    ERROR_THRESHOLD = 0.25
    results = [[i, r] for i, r in enumerate(res) if r > ERROR_THRESHOLD]
    # sort by strength of probability
    results.sort(key=lambda x: x[1], reverse=True)
    return_list = []
    for r in results:
        return_list.append({"intent": classes[r[0]], "probability": str(r[1])})
    return return_list

def getResponse(ints, intents_json):
    # guard: if no intent cleared the threshold, return a fallback reply
    if not ints:
        return "Sorry, I didn't understand that."
    tag = ints[0]["intent"]
    list_of_intents = intents_json["intents"]
    for i in list_of_intents:
        if i["tag"] == tag:
            result = random.choice(i["responses"])
            break
    return result

The "predict_class" function analyzes input text using a trained model, filtering predictions based on a threshold. It sorts and returns intents with probabilities exceeding the threshold. "getResponse" retrieves a response based on the highest probability intent, selecting from predefined responses in the intents JSON. These functions enable accurate classification and appropriate responses in the chatbot application.
The final app.py file should look like this:

# libraries
import random
import numpy as np
import pickle
import json
from flask import Flask, render_template, request
from flask_ngrok import run_with_ngrok  # optional; requires 'pip install flask-ngrok'
import nltk
from tensorflow.keras.models import load_model
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()

# chat initialization
model = load_model("chatbot_model.h5")
# intents = json.loads(open("intents.json").read())
data_file = open("F:\\Data Science Course - IIITB\\NLP\\Chatbot\\AI Chatbot\\An-AI-Chatbot-in-Python-and-Flask-main\\intents.json").read()
words = pickle.load(open("words.pkl", "rb"))
classes = pickle.load(open("classes.pkl", "rb"))

app = Flask(__name__)
# run_with_ngrok(app) 

@app.route("/")
def home():
    return render_template("index.html")

@app.route("/get", methods=["POST"])
def chatbot_response():
    msg = request.form["msg"]

    # Load and process the intents JSON file
    data_file = open("F:\\Data Science Course - IIITB\\NLP\\Chatbot\\AI Chatbot\\An-AI-Chatbot-in-Python-and-Flask-main\\intents.json").read()
    intents = json.loads(data_file)

    # Rest of your existing code
    if msg.startswith('my name is'):
        name = msg[11:]
        ints = predict_class(msg, model)
        res1 = getResponse(ints, intents)
        res = res1.replace("{n}", name)
    elif msg.startswith('hi my name is'):
        name = msg[14:]
        ints = predict_class(msg, model)
        res1 = getResponse(ints, intents)
        res = res1.replace("{n}", name)
    else:
        ints = predict_class(msg, model)
        res = getResponse(ints, intents)
    return res

# chat functionalities
def clean_up_sentence(sentence):
    sentence_words = nltk.word_tokenize(sentence)
    sentence_words = [lemmatizer.lemmatize(word.lower()) for word in sentence_words]
    return sentence_words

# return bag of words array: 0 or 1 for each word in the bag that exists in the sentence
def bow(sentence, words, show_details=True):
    # tokenize the pattern
    sentence_words = clean_up_sentence(sentence)
    # bag of words - matrix of N words, vocabulary matrix
    bag = [0] * len(words)
    for s in sentence_words:
        for i, w in enumerate(words):
            if w == s:
                # assign 1 if current word is in the vocabulary position
                bag[i] = 1
                if show_details:
                    print("found in bag: %s" % w)
    return np.array(bag)

def predict_class(sentence, model):
    # filter out predictions below a threshold
    p = bow(sentence, words, show_details=False)
    res = model.predict(np.array([p]))[0]
    ERROR_THRESHOLD = 0.25
    results = [[i, r] for i, r in enumerate(res) if r > ERROR_THRESHOLD]
    # sort by strength of probability
    results.sort(key=lambda x: x[1], reverse=True)
    return_list = []
    for r in results:
        return_list.append({"intent": classes[r[0]], "probability": str(r[1])})
    return return_list

def getResponse(ints, intents_json):
    # guard: if no intent cleared the threshold, return a fallback reply
    if not ints:
        return "Sorry, I didn't understand that."
    tag = ints[0]["intent"]
    list_of_intents = intents_json["intents"]
    for i in list_of_intents:
        if i["tag"] == tag:
            result = random.choice(i["responses"])
            break
    return result

if __name__ == "__main__":
    app.run()

Lastly, we have the index.html and style.css files. I don't think we need to go into detail about these, as they are basic HTML and CSS.

index.html
<!DOCTYPE html>
<html>

<head>
    <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css')}}" />
    <!-- <link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='css.css')}}" /> -->
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
</head>

<body>
    <div class="row">
        <div class="col-md-10 mr-auto ml-auto">
            <h1>Ramakrushna Mohapatra-ChatBot</h1>
            <form>
                <div id="chatbox">
                    <div class="col-md-8 ml-auto mr-auto text-center">
                        <p class="botText"><span>Hi! I'm Your bot.</span></p>
                    </div>
                </div>
                <div id="userInput" class="row">
                    <div class="col-md-10">
                        <input id="text" type="text" name="msg" placeholder="Message" class="form-control">
                        <button type="submit" id="send" class="btn btn-warning">Send</button>
                    </div>
                </div>
            </form>
        </div>
    </div>

    <script>
        $(document).ready(function() {
            $("form").on("submit", function(event) {
                var rawText = $("#text").val();
                var userHtml = '<p class="userText"><span>' + rawText + "</span></p>";
                $("#text").val("");
                $("#chatbox").append(userHtml);
                document.getElementById("userInput").scrollIntoView({
                    block: "start",
                    behavior: "smooth",
                });
                $.ajax({
                    data: {
                        msg: rawText,
                    },
                    type: "POST",
                    url: "/get",
                }).done(function(data) {
                    var botHtml = '<p class="botText"><span>' + data + "</span></p>";
                    $("#chatbox").append($.parseHTML(botHtml));
                    document.getElementById("userInput").scrollIntoView({
                        block: "start",
                        behavior: "smooth",
                    });
                });
                event.preventDefault();
            });
        });
    </script>
</body>

</html>

style.css

body {
    font-family: Garamond;
}

h1 {
    color: black;
    margin-bottom: 0;
    margin-top: 0;
    text-align: center;
    font-size: 40px;
}

h3 {
    color: black;
    font-size: 20px;
    margin-top: 3px;
    text-align: center;
}
.row {
    display: flex;
    flex-wrap: wrap;
    margin-right: -15px;
    margin-left: -15px;
}

.ml-auto{
    margin-left:auto !important;
}
.mr-auto{
    margin-right:auto !important;
}

.col-md-10,.col-md-8,.col-md-4{
    position: relative;
    width: 100%;
    min-height: 1px;
    padding-right: 15px;
    padding-left: 15px;
}
.col-md-8{flex:0 0 66.666667%;max-width:66.666667%}
.col-md-4{flex:0 0 33.333333%;max-width:33.333333%}
.col-md-10{flex:0 0 83.333333%;max-width:83.333333%}

.form-control {
    background: no-repeat bottom,50% calc(100% - 1px);
    background-image: none, none;
    background-size: auto, auto;
    background-size: 0 100%,100% 100%;
    border: 0;
    height: 36px;
    transition: background 0s ease-out;
    padding-left: 0;
    padding-right: 0;
    border-radius: 0;
    font-size: 14px;
}
.form-control {
    display: block;
    width: 100%;
    padding: .4375rem 0;
    padding-right: 0px;
    padding-left: 0px;
    font-size: 1rem;
    line-height: 1.5;
    color: #495057;
    border:none;
    background-color: transparent;
    background-clip: padding-box;
    border-bottom: 1px solid #d2d2d2;
    box-shadow: none;
    transition: border-color .15s ease-in-out,box-shadow .15s ease-in-out;
}
.btn {
    float: left;
    text-align: center;
    white-space: nowrap;
    vertical-align: middle;
    user-select: none;
    border: 1px solid transparent;
    padding: .46875rem 1rem;
    font-size: 1rem;
    line-height: 1.5;
    border-radius: .25rem;
    transition: color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;
}
.btn-warning {
    color: #fff;
    background-color: #f08f00;
    border-color: #c27400;
}
.btn.btn-warning:active, .btn.btn-warning:focus, .btn.btn-warning:hover {
    box-shadow: 0 14px 26px -12px rgba(255,152,0,.42),0 4px 23px 0 rgba(0,0,0,.12),0 8px 10px -5px rgba(255,152,0,.2);
}

button, input, optgroup, select, textarea {
    margin: 0;
    font-family: inherit;
    font-size: inherit;
    line-height: inherit;
    overflow:visible;
}
#chatbox {
    background-color: cyan;
    margin-left: auto;
    margin-right: auto;
    width: 80%;
    min-height: 70px;
    margin-top: 60px;
}

#userInput {
    margin-left: auto;
    margin-right: auto;
    width: 40%;
    margin-top: 60px;
}

#textInput {
    width: 87%;
    border: none;
    border-bottom: 3px solid #009688;
    font-family: monospace;
    font-size: 17px;
}

#buttonInput {
    padding: 3px;
    font-family: monospace;
    font-size: 17px;
}

.userText {
    color: white;
    font-family: monospace;
    font-size: 17px;
    text-align: right !important;
    line-height: 30px;
    margin: 5px;
}

.userText span {
    background-color: #009688;
    padding: 10px;
    border-radius: 2px;
}

.botText {
    color: white;
    font-family: monospace;
    font-size: 17px;
    text-align: left;
    line-height: 30px;
    margin: 5px;
}

.botText span {
    background-color: #ef5350;
    padding: 10px;
    border-radius: 2px;
}

#tidbit {
    position: absolute;
    bottom: 0;
    right: 0;
    width: 300px;
}

Executions Process

  • First, delete the existing chatbot_model.h5 file from the folder.
  • Then run the train.py file to train the model. This will generate a new chatbot_model.h5. You will face an error here if you haven't changed the intents.json file path in the data_file variable.
  • This is the model the Flask REST API uses to give feedback without needing to retrain.
  • After running train.py, run app.py to initialize and start the bot.
  • To add more terms and vocabulary to the bot, modify the intents.json file, add your personalized words, and retrain the model.
  • Accessing the chatbot: after running the Flask app (app.py), you can access the chatbot by navigating to http://127.0.0.1:5000/ in your web browser. If you're using ngrok, you'll access the chatbot through the ngrok-generated URL.
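
In short, the whole flow from a terminal looks like this (assuming you are inside the project folder):

$pip install tensorflow nltk flask
$python train.py
$python app.py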

There you are! Your chatbot is ready, and your webpage should be up and running.
For the full code with all the files, visit my GitHub repo. Don't forget to star it if you find it helpful.

If this helped you in any part of your professional life, give it a like and a comment.
