Bozhao

BentoML: From model in Jupyter Notebook to deployment in 5 mins

Hi everyone, I want to share an open-source project we made for data scientists. It is a machine learning toolkit for packaging and deploying models.

BentoML is a Python library for packaging and deploying machine learning models. It does two things without changing your model training workflow:

  • Standardize how to package your ML model for production, including its preprocessing/feature-fetching code, dependencies, and configurations.

  • Easily distribute your ML model as a PyPI package, an API server (in a Docker image), a command-line tool, or a Spark/Flink UDF.

We built BentoML because we think there should be a much simpler way for machine learning teams to ship models to production. They should not have to wait for engineering teams to re-implement their models for the production environment, or build complex feature pipelines for experimental models.

Our vision is to empower machine learning scientists to build and ship their own models end-to-end as production services, just like software engineers do. BentoML is essentially the missing 'build tool' for machine learning projects.

With that in mind, here are the top design goals for BentoML:

  • Multiple framework support - BentoML supports a wide range of ML frameworks out of the box, including TensorFlow, PyTorch, Scikit-Learn, and XGBoost, and can be easily extended to work with new or custom frameworks.

  • Best practices built-in - BentoML users can easily customize telemetry and logging for their models, making it easy to integrate with production systems.

  • Streamlined deployment workflows - BentoML supports deploying models as REST API endpoints with Docker, Kubernetes, AWS EC2, ECS, Google Cloud Platform, AWS SageMaker, and Azure ML.

  • Custom model runtime - Easily integrate your Python code with high-performance model runtime backends (e.g. TF Serving, TensorRT Inference Server) for real-time model serving.

Here is a quick example:

We have a very simple model from Scikit-learn:

from sklearn import svm
from sklearn import datasets

clf = svm.SVC(gamma='scale')
iris = datasets.load_iris()
X, y = iris.data, iris.target
clf.fit(X, y)

To package this model with BentoML, create a new BentoService subclass and provide the artifact and environment definitions for it:

%%writefile iris_classifier.py
from bentoml import BentoService, api, env, artifacts
from bentoml.artifact import PickleArtifact
from bentoml.handlers import DataframeHandler

@artifacts([PickleArtifact('model')])
@env(conda_dependencies=["scikit-learn"])
class IrisClassifier(BentoService):

    @api(DataframeHandler)
    def predict(self, df):
        return self.artifacts.model.predict(df)
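To make concrete what the DataframeHandler is doing at serving time, here is a rough, BentoML-free sketch: a JSON request body is parsed into a pandas DataFrame and handed to the model's predict. This mirrors the handler's behavior as I understand it, not BentoML's internals:

```python
from io import StringIO

import pandas as pd
from sklearn import svm, datasets

# Train the same toy model as above
iris = datasets.load_iris()
clf = svm.SVC(gamma='scale').fit(iris.data, iris.target)

# Roughly what a DataframeHandler-style endpoint does:
# JSON request body -> pandas DataFrame -> model.predict
request_body = '[[5.1, 3.5, 1.4, 0.2]]'  # one setosa-like sample
df = pd.read_json(StringIO(request_body))
print(clf.predict(df.values))  # -> [0]
```

This is why the `predict` API above takes a DataFrame argument rather than a raw array: the handler does the request parsing for you.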

Now, to save your trained model for production use, simply import your BentoService class and pack it with the required artifacts:

from iris_classifier import IrisClassifier

svc = IrisClassifier.pack(model=clf)

svc.save('./saved_bento', version='v0.0.1') # Saving archive to ./saved_bento/IrisClassifier/v0.0.1/

That's it. You have now created your first BentoArchive: a directory containing all the source code, data, and configuration files required to run this model in production.

How to use the packaged archive

Loading a BentoArchive in Python:

import bentoml

bento_svc = bentoml.load('./saved_bento/IrisClassifier/v0.0.1/')
bento_svc.predict(X[0:1])

Installing a BentoArchive as a PyPI package:

pip install ./saved_bento/IrisClassifier/v0.0.1/

Import it and use it as a Python module:

from IrisClassifier import IrisClassifier

installed_svc = IrisClassifier()
installed_svc.predict(X[0:1])
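The archive can also back the REST API server mentioned above. A rough sketch of the CLI workflow (the exact commands and default port may differ between BentoML versions, so treat these as illustrative rather than authoritative):

```shell
# Illustrative commands; check `bentoml --help` for your version.
# Start a local REST API server from the saved archive:
bentoml serve ./saved_bento/IrisClassifier/v0.0.1/

# In another terminal, send a JSON payload to the predict endpoint:
curl -X POST http://localhost:5000/predict \
     -H "Content-Type: application/json" \
     -d '[[5.1, 3.5, 1.4, 0.2]]'
```

The same archive can then be containerized with Docker for the deployment targets listed in the design goals.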

Would love to hear your feedback and thoughts!

Cheers
