Victor Isaac Oshimua

Deploy a Machine Learning Spam SMS Filter App In Minutes Using BentoML

Building a machine learning (ML) solution is an iterative process that involves several stages, and training an ML model is only one of them. However, it doesn't end there. As a machine learning engineer, it is your job to deploy the trained model. In other words, you are responsible for making the model available to end users.

Let's say you build a machine learning model that can classify an SMS as spam or not, and it performs incredibly well on new messages. Wouldn't it be nice to take this model out of your Jupyter notebook and make it available for anyone to use?
Deployment is yet another difficult task, but don't worry, BentoML has got you covered.
In this article, we will discuss how to deploy a machine learning model using BentoML.

Table of contents

  1. Prerequisites
  2. What is BentoML?
  3. Building a spam detection model
  4. Building a Web service with BentoML
  5. Building Bentos
  6. Deploying to a Docker container
  7. Final thoughts

Prerequisites

  • Familiarity with Docker. Make sure Docker is installed on your computer.
  • BentoML installed. You can install it with this command: pip install bentoml.
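
You can confirm the installation worked by checking the version (any recent 1.x release should be fine for this walkthrough):

bentoml --version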

What is BentoML?

BentoML is an open-source Python framework for building machine learning applications. With BentoML, you can build and serve an ML model, customise an ML service to fit your use case, and deploy the service to production.
Speaking of deploying to production, BentoML provides the following:

  • Scalability
  • Operational efficiency
  • Repeatability (continuous integration and continuous deployment, CI/CD)
  • Flexibility
  • Resilience
  • Ease of use

To see how to deploy a machine learning model with BentoML, let us build a spam detection model.

Building a spam detection model

To build a spam detection model, you need a dataset that contains SMS messages labelled as spam or not.
Follow this link to download the dataset.
Next, copy the code below, save it as a Python script, and run it.

# import libraries
import bentoml
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer

print("libraries imported successfully")

# read the data
data = pd.read_csv("SMSSpamCollection", sep="\t", header=None, names=["Label", "SMS"])
print("data read successfully")

# randomise the dataset
randomised_data = data.sample(frac=1, random_state=1)

# convert the target (label) to a numerical feature
randomised_data.Label = (randomised_data.Label == "spam").astype(int)

# train/test split
data_train, data_test = train_test_split(randomised_data, test_size=0.2, random_state=1)
y_train = data_train["Label"]
y_test = data_test["Label"]
del data_train["Label"]
del data_test["Label"]

# remove punctuation from the SMS text
data_train["SMS"] = data_train["SMS"].replace(r"\W", " ", regex=True)
data_test["SMS"] = data_test["SMS"].replace(r"\W", " ", regex=True)

# transform letters to lowercase
data_train["SMS"] = data_train["SMS"].str.lower()
data_test["SMS"] = data_test["SMS"].str.lower()

# data transformation: bag-of-words encoding
vectorizer = CountVectorizer()
X_train_encoded = vectorizer.fit_transform(data_train["SMS"])
X_test_encoded = vectorizer.transform(data_test["SMS"])
print("data transformed successfully")

# training the model
nb_model = MultinomialNB()
nb_model.fit(X_train_encoded, y_train)
accuracy = nb_model.score(X_test_encoded, y_test)
print("Accuracy:", accuracy)
print("model is trained")

# saving the trained model (and the fitted vectorizer) with BentoML
saved_model = bentoml.sklearn.save_model(
    "naive_bayes_model",
    nb_model,
    custom_objects={"countvectorizer": vectorizer},
)

print("model is saved")

This script loads the downloaded dataset, preprocesses it, trains a Naive Bayes model, and saves the model with BentoML.

Note: BentoML provides a way to save models from just about any ML framework, from TensorFlow to scikit-learn and PyTorch, and the list goes on. To learn more about saving models from various frameworks with BentoML, check out this guide.
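
For example, saving a PyTorch model follows the same pattern (a minimal sketch; the model below is hypothetical and not part of this project):

import bentoml
import torch

# hypothetical model, for illustration only
torch_model = torch.nn.Linear(10, 2)

# the save API mirrors bentoml.sklearn.save_model
bentoml.pytorch.save_model("my_torch_model", torch_model)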

In our case, we saved the model with this code:

saved_model = bentoml.sklearn.save_model("naive_bayes_model", nb_model, custom_objects={"countvectorizer": vectorizer})

Code Explanation:

saved_model = bentoml.sklearn.save_model(...) This line invokes the save_model function from the bentoml.sklearn module.

"naive_bayes_model" This is the name you have given to the saved model. It is a string that will be used to identify and reference the model later.

nb_model This is the Naive Bayes model object that you want to save. It is passed as the second argument to the save_model function.

custom_objects={"countvectorizer":vectorizer} BentoML provides a way to attach any custom objects that are required for the model to be loaded and used correctly in the future. In this case, the countvectorizer object must be available when loading the model. By passing it as a custom object, BentoML ensures that it is included and properly restored when the model is loaded.
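
To verify that the custom object round-trips, you can pull everything back out of the local model store (a quick sketch using the tag saved above):

import bentoml

# retrieve the saved model reference by name and tag
model_ref = bentoml.sklearn.get("naive_bayes_model:latest")

# the fitted CountVectorizer is restored alongside the model
vectorizer = model_ref.custom_objects["countvectorizer"]

# load the scikit-learn estimator itself
nb_model = bentoml.sklearn.load_model("naive_bayes_model:latest")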

Building a Web service with BentoML

After building the spam detection model, let us create a web service for the model using BentoML.
You can go ahead and copy this code and save it as a Python script named service.py (the serve command later in this section assumes this name).

import re

import bentoml
from bentoml.io import Text

# load the saved model reference and the fitted vectorizer
model_ref = bentoml.sklearn.get("naive_bayes_model:latest")
vectorizer = model_ref.custom_objects["countvectorizer"]

model_runner = model_ref.to_runner()

svc = bentoml.Service("spam_sms_detector", runners=[model_runner])


@svc.api(input=Text(), output=Text())
def classify_sms(message: str) -> str:
    # apply the same preprocessing used at training time:
    # strip punctuation and lowercase the text
    message = re.sub(r"\W", " ", message).lower()
    encoded_data = vectorizer.transform([message])
    prediction = model_runner.predict.run(encoded_data)
    # MultinomialNB.predict returns the class label (1 = spam, 0 = non-spam)
    if prediction[0] >= 0.5:
        return "The SMS is classified as spam."
    else:
        return "The SMS is classified as non-spam."

Code Explanation:

Import libraries: The code begins with importing the necessary dependencies, including BentoML and bentoml.io.Text.

Loading the Model: The model_ref variable is used to load the latest version of the Naive Bayes model from BentoML.
The vectorizer variable is assigned with the countvectorizer object from the model's custom objects. It is used to transform text data into numerical feature vectors.

Creating a Model Runner: The model_runner is created by converting the model_ref to a runner using to_runner(). This allows the model to be executed in a scalable and optimised manner.
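
If you want to sanity-check the runner outside of a served app, you can initialise it in-process (a debugging sketch with a made-up message; init_local is intended for testing only, not production):

import bentoml

model_ref = bentoml.sklearn.get("naive_bayes_model:latest")
vectorizer = model_ref.custom_objects["countvectorizer"]
model_runner = model_ref.to_runner()

# run the runner in-process; a served app manages this for you
model_runner.init_local()

encoded = vectorizer.transform(["free entry  text win to claim your prize"])
print(model_runner.predict.run(encoded))  # e.g. [1] for spam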

Defining the BentoML Service: The svc variable represents the BentoML service and is created with the name "spam_sms_detector".

Defining the API Endpoint: The classify_sms function is decorated with @svc.api to define the API endpoint for the SMS classification.
It takes a single input parameter, message, which represents the SMS text to be classified.
The function performs the following steps:

  1. Strips punctuation and converts the message to lowercase, matching the preprocessing applied at training time.
  2. Wraps the message in a list for compatibility with the vectorizer.
  3. Uses the vectorizer to transform the list into encoded data (numerical feature vectors).
  4. Calls model_runner.predict.run to make a prediction on the encoded data.
  5. Checks the predicted label and returns the appropriate classification message.

After creating the web service, it's time to try it out.
Open your command line and navigate to the project directory. This directory should contain the training script and the service script (service.py).

Run this command:

bentoml serve service.py:svc --reload


You will get an output like this:
[Image: BentoML Service]

You can interact with the service locally at port 3000 by going to this address: http://0.0.0.0:3000
BentoML provides a Swagger UI for interacting with your model. In the UI, click Try it out, enter any message (spam or not), and the model will tell you whether the message is classified as spam or non-spam.
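
Besides the Swagger UI, you can call the endpoint directly over HTTP. Since the API uses text input and output, a plain-text POST works (a sketch assuming the service is running locally on port 3000):

curl -X POST http://localhost:3000/classify_sms \
     -H "Content-Type: text/plain" \
     -d "Congratulations! You have won a free prize. Reply WIN to claim."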

Building Bentos

Building a bento simply means putting all your files (models, API services, dependencies, Docker configuration, etc.) into a single unit, a bento, to make the ML service deployable.

To build a bento, start by creating a file named bentofile.yaml in your project directory. Copy and paste the code below into the file.

service: "service:svc"  
labels:
    owner: you
include:
- "*.py"  
python:
    packages:  # Additional pip packages required by the service
    - scikit-learn

This file specifies the files, packages, and dependencies needed to build your ML service.
To learn more about what you can specify in a bento file, check out this guide: https://docs.bentoml.org/en/latest/concepts/bento.html
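
For instance, a slightly fuller bento file might also set a description and Docker options (a sketch; the field names follow the bento file specification linked above):

service: "service:svc"
description: "Spam SMS classifier built with scikit-learn and BentoML"
labels:
    owner: you
include:
- "*.py"
python:
    packages:
    - scikit-learn
docker:
    distro: debian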
Once you have your bento file ready, you can now create a bento.
Run this command in your project directory.

bentoml build

Output:

[Image: Building a Bento]

The bento will be saved in this directory: ~/bentoml/bentos/<bento name>/<bento tag>
The files in the directory should look like this:

[Image: files inside the bento directory]

Amazing, right? You have just created a single unit that contains your model, service, preprocessing objects, and Python requirements, with just a few lines of code.
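
You can also confirm the bento was created by listing the local bento store:

bentoml list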

Deploying to a Docker container

As a machine learning engineer, it is important to containerize your ML service to ensure reproducibility.

To containerize your bento, run this command to build a Docker image:

bentoml containerize <bento name>:<tag>

Replace <bento name> with the name of your ML service and <tag> with the bento tag generated at build time.
Building the image will take a few minutes the first time; repeated runs are faster because Docker caches the image layers.
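
For example, with the service name used in this article, the command looks like this (the exact tag is generated at build time, and :latest resolves to the most recent build):

bentoml containerize spam_sms_detector:latest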

Output:

[Image: Docker container]
Now that you have your Docker image, run this command to start the container:

docker run -it --rm -p 3000:3000 <bento name>:<tag> serve

You can also copy this command from the message printed when the image is built successfully.
Once the container is running, enter this URL to access the ML service: http://0.0.0.0:3000
You will get the same UI as when you were serving the model directly. From there, you can test the model by entering any message.

Output:

[Image: Deployed service]

Pretty cool, right? You no longer need to worry about dependencies or environment management. Your ML service can be accessed from any computer.
From here, you can decide to deploy to a cloud service provider of your choice; I personally prefer AWS.

Final thoughts

BentoML provides a way to deploy your machine learning model to an API endpoint without worrying about end-user environment management.
In this article, we've discussed:

  • How to save your ML model with BentoML
  • How to build an ML service
  • How to make your service production ready

The source code for this project can be found here.
Happy coding!!
I write machine learning and data science articles regularly. You can connect with me on LinkedIn and Twitter, or send me an email.
