Part 13: Building Your Own AI - Deployment of Machine Learning Models

Author: Trix Cyrus

[Try My] Waymap Pentesting Tool: Click Here
[Follow] TrixSec GitHub: Click Here
[Join] TrixSec Telegram: Click Here


After training and tuning a machine learning model, the final step is deployment, which allows real-world applications to use your model for predictions or insights. This article explains how to save and load models, create APIs for serving predictions, and deploy them on cloud platforms like AWS, Google Cloud, and Heroku.


1. Why Deploy Machine Learning Models?

Model deployment transforms a trained model into a production-ready system. Key benefits include:

  • Accessibility: Serve predictions to applications or users through an API.
  • Scalability: Handle multiple requests and adapt to changing workloads.
  • Real-time Integration: Enable integration into existing systems or pipelines.

2. Saving and Loading ML Models

Saving Models

Saving models ensures they can be reused without retraining.

  • Using Pickle (for Scikit-learn models)
  import pickle
  from sklearn.ensemble import RandomForestClassifier

  # Train and save the model
  # (X_train and y_train are assumed to come from your training pipeline)
  model = RandomForestClassifier()
  model.fit(X_train, y_train)
  with open('model.pkl', 'wb') as file:
      pickle.dump(model, file)
  • Using TensorFlow/Keras
  import tensorflow as tf

  # Save a Keras model in HDF5 format
  # (newer Keras versions also support the native .keras format)
  model.save('model.h5')

Loading Models

Reloading saved models allows them to be used in production.

  • Using Pickle
  with open('model.pkl', 'rb') as file:
      model = pickle.load(file)
  • Using TensorFlow/Keras
  model = tf.keras.models.load_model('model.h5')
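As a quick sanity check, you can run a prediction with the reloaded model before putting it behind an API. A minimal sketch, assuming a pickled Scikit-learn classifier trained on two features (sample_input is a placeholder):

import pickle

# Reload the pickled model
with open('model.pkl', 'rb') as file:
    loaded_model = pickle.load(file)

# Predict on a placeholder input shaped like the training data
sample_input = [[0, 1]]
print(loaded_model.predict(sample_input))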

3. Creating RESTful APIs

APIs provide a way for applications to interact with your model.

3.1 Using Flask

Flask is a lightweight Python framework ideal for serving ML models.

from flask import Flask, request, jsonify
import pickle

# Load model
with open('model.pkl', 'rb') as file:
    model = pickle.load(file)

# Create Flask app
app = Flask(__name__)

@app.route('/predict', methods=['POST'])
def predict():
    data = request.json
    prediction = model.predict([data['features']])
    return jsonify({'prediction': prediction.tolist()})

if __name__ == '__main__':
    app.run(debug=True)
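Run the app with python app.py; Flask's development server listens on http://127.0.0.1:5000 by default. You can then test the endpoint with curl (the feature values below are placeholders matching a two-feature model):

curl -X POST http://127.0.0.1:5000/predict \
     -H "Content-Type: application/json" \
     -d '{"features": [0, 1]}'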

3.2 Using FastAPI

FastAPI is a modern, high-performance alternative with built-in request validation via Pydantic.

from fastapi import FastAPI
from pydantic import BaseModel
import pickle

# Load model
with open('model.pkl', 'rb') as file:
    model = pickle.load(file)

# Define input schema
class InputData(BaseModel):
    features: list

# Create FastAPI app
app = FastAPI()

@app.post('/predict')
def predict(data: InputData):
    prediction = model.predict([data.features])
    return {'prediction': prediction.tolist()}
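FastAPI apps run on an ASGI server such as Uvicorn. Assuming the code above is saved as app.py, you can start the server and test the endpoint like this (feature values are placeholders):

uvicorn app:app --reload

curl -X POST http://127.0.0.1:8000/predict \
     -H "Content-Type: application/json" \
     -d '{"features": [0, 1]}'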

4. Deploying Models on Cloud Platforms

4.1 AWS (Amazon Web Services)

  1. Use Elastic Beanstalk:

    • Package your Flask or FastAPI app into a zip file or Docker container.
    • Deploy using AWS Elastic Beanstalk for easy scaling.
  2. Use SageMaker:

    • Ideal for deploying ML models with minimal infrastructure management.
    • Upload your model and create an endpoint for predictions.

4.2 Google Cloud Platform (GCP)

  1. Use AI Platform:

    • Export your TensorFlow or Scikit-learn model.
    • Deploy on AI Platform (now succeeded by Vertex AI) to serve predictions via REST APIs.
  2. Use Cloud Run:

    • Containerize your application using Docker.
    • Deploy it on Cloud Run for a fully managed serverless solution (a sample Dockerfile is sketched below).
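As a rough starting point, a Dockerfile for a Flask app served with Gunicorn might look like the sketch below. The Python version and file names are assumptions; adjust them to your project. Cloud Run injects the PORT environment variable at runtime:

# Minimal example Dockerfile (names and versions are assumptions)
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Bind to the port Cloud Run provides via $PORT
CMD exec gunicorn --bind :$PORT app:app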

4.3 Heroku

  1. Prepare Your App:

    • Include a requirements.txt file listing your dependencies (a sample is shown after this list).
    • Create a Procfile to specify the entry point for your application:
     web: gunicorn app:app
    
  2. Deploy to Heroku:

    • Push your code to a GitHub repository.
    • Link the repository to Heroku and deploy directly.
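For the Flask example in this article, a minimal requirements.txt might look like this (pin the versions that match your environment):

flask
gunicorn
scikit-learn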

4.4 Other Options

  • Azure ML: Deploy models as REST endpoints.
  • Docker & Kubernetes: Use containerization for scalable deployments.

5. Practical Example: Deploying a Model with Flask

Model Code (model.py)

import pickle
from sklearn.ensemble import RandomForestClassifier

# Dummy training
model = RandomForestClassifier()
X_train, y_train = [[0, 1], [1, 0]], [0, 1]
model.fit(X_train, y_train)

# Save model
with open('model.pkl', 'wb') as file:
    pickle.dump(model, file)

Flask App Code (app.py)

from flask import Flask, request, jsonify
import pickle

# Load model
with open('model.pkl', 'rb') as file:
    model = pickle.load(file)

app = Flask(__name__)

@app.route('/predict', methods=['POST'])
def predict():
    data = request.json
    prediction = model.predict([data['features']])
    return jsonify({'prediction': prediction.tolist()})

if __name__ == '__main__':
    app.run(debug=True)

Deployment on Heroku

  1. Install the Heroku CLI and log in.
  2. Initialize Git in your project folder:
   git init
   heroku create
  3. Add, commit, and push your code to Heroku:
   git add .
   git commit -m "Initial commit"
   git push heroku main
  4. Access your API at the provided Heroku URL.
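Once deployed, the endpoint behaves just like the local version. Replace your-app-name below with the name Heroku assigned to your app (the feature values are placeholders):

curl -X POST https://your-app-name.herokuapp.com/predict \
     -H "Content-Type: application/json" \
     -d '{"features": [0, 1]}'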

6. Conclusion

Deploying machine learning models involves saving the trained model, creating an API, and hosting it on a platform that can handle production workloads. Whether you use Flask, FastAPI, or a cloud service, deploying your model opens doors to real-world applications.


~Trixsec
