Deploying an ML model to Paperspace and creating an API

Background

I thought deploying a model and making it accessible via an API would require a ton of complex steps and an intimidating level of ML and DevOps knowledge. But when I went through the process of turning my little toy image detector into an API, I was surprised at how few components were involved.

So I thought I'd do a quick write-up and share, in case it helps you get started. At the very least, it'll give you a sense of the different pieces needed for accessing a model via API.

Goal

You have a model that's already trained and fine-tuned and now you want to use it to create an inference API (meaning it returns some sort of prediction).

Prerequisites

  • Model: Your model is trained, exported (and preferably tested locally), and ready to be used.
  • Basic knowledge of Python.

Overview

  1. Write a Python script to interface with your model (this is the API part)
  2. Set up a Paperspace machine
  3. Copy your Python script and your exported model to your Paperspace machine
  4. Run your Python script on your Paperspace machine (this exposes the API endpoint)
  5. Test your API

Steps

Python script:

You’ll need to create a script describing how to use your model. It's a good idea to test this script locally and make sure it works before deploying. That makes troubleshooting easier.
This script needs a few components:

1. Setup: After importing a bunch of methods and libraries we'll need (see the gist to view my scripts with all my imports; there's also a rough sketch of the imports after the snippet below), we need a few lines of code to:

  • Set up Starlette, the lightweight framework we'll use to handle requests asynchronously
  • Load your model – I'm using load_learner from the FastAI library to load my model, since that's the loading function FastAI gives me. Your function might be different depending on how you built your model.
# Create the Starlette app and allow cross-origin requests to reach the API
app = Starlette()
app.add_middleware(CORSMiddleware, allow_origins=['*'], allow_headers=['X-Requested-With', 'Content-Type'])

# Our model is our 'export.pkl' file which will be located in the same directory
learn = load_learner('export.pkl')
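
Here's roughly what the import section of a script like this looks like. This is just a sketch based on the code shown in this post (assuming FastAI, Starlette, and uvicorn); your exact imports will depend on your model and libraries.

import sys

import requests
import uvicorn
from fastai.vision.all import load_learner, PILImage
from PIL import Image
from starlette.applications import Starlette
from starlette.middleware.cors import CORSMiddleware
from starlette.responses import JSONResponse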

2. Create routes: We need routes for our API. To simplify things, we’re going to define one route where we can receive a POST request with some information.

@app.route('/analyze', methods=['POST'])

My example is an image detector, so the information it’ll be receiving is a link to an image. More on that next.

3. Determine how to respond to your POST request: Next, we need to tell it what to do with this request.

We'll create an analyze method that will manipulate the data we give it and call predict on our model to get us a prediction.

Note: What happens in this analyze method depends on what your model does and what predictions it returns. Since my example is an image detector, I want to send it a link to an image and get back a prediction of what category the image falls under.

For my image detector use case, I need to do a few things to the image first, namely, turn it into an object that I can use with predict. I'm using predict because that's the method that FastAI gives me to get a prediction. Depending on your model, your methods might be different.

Once I call predict, I want to take that result and turn it into some JSON to send back as a response. My response gives me the image's category. That's what the last line in my analyze function does.

# The route from step 2 goes directly above the handler
@app.route('/analyze', methods=['POST'])
async def analyze(request):
    # Pull the image link out of the form data and download the image
    img_data = await request.form()
    img = Image.open(requests.get(img_data['file'], stream=True, timeout=10).raw)
    # Convert it to a fastai image and get a prediction from the model
    pil_img = PILImage.create(img)
    prediction = learn.predict(pil_img)[0]
    # Send the predicted category back as JSON
    return JSONResponse({'result': str(prediction)})

4. Serve the app: Finally, I need to run the app with uvicorn to actually serve it and make the API accessible online.

# Running `python3 <script name> serve` starts the server on port 5500
if __name__ == '__main__':
    if 'serve' in sys.argv:
        uvicorn.run(app=app, host='0.0.0.0', port=5500, log_level="info")
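
As mentioned earlier, it's a good idea to exercise the script locally before copying anything to a server. Here's a minimal sketch of a local smoke test using the requests library; the filename and image URL are hypothetical, and it assumes the script is running on port 5500 as above.

# local_test.py (hypothetical) - smoke test for the /analyze endpoint
import requests

# 'file' matches the form field that the analyze() handler reads
resp = requests.post(
    'http://localhost:5500/analyze',
    data={'file': 'https://example.com/some-image.jpg'},  # hypothetical image link
)
print(resp.json())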

Set up Paperspace:

Next, we’re going to set everything up on a Paperspace server that will host your API.
That means that on Paperspace we will:

  • Create a machine
  • Install any dependencies that your API needs on that machine
  • Host your Python script on that machine
  • Host your model on that machine

Let's start with setting up your machine.

  1. Click *"Create a machine." *
  2. It'll give you 3 options (as of May, 2023): ML-in-a-Box, Ubuntu, and Windows. Let's keep things simple and pick Ubuntu.
  3. Pick your "Machine Type." You'll have the option of GPU, Multi-GPU and CPU. For our little toy model, you probably won't need to power of a GPU. If you tested and ran the model locally on your machine, you probably won't need specs more powerful than what you already have. So I went with C5 which comes with 4 CPUs and 8 GiB RAM.
  4. Click "Create."
  5. Wait. Paperspace should show that it's "provisioning" your machine and getting it ready for you.
  6. Once it's ready, click "Connect" and it'll give you a command to ssh into your machine.
  7. Now set up your login so you can access the machine from your local terminal. You can use either an SSH key or a password. Once you've set up how you want to log in, you're in!

Hosting your files on your Paperspace machine

1. Install dependencies: Before installing anything, run sudo apt update, then sudo apt install python3-pip to get pip. After that, you're ready to install your dependencies.

Install whatever dependencies you need by running pip install [DEPENDENCIES] in your terminal. I needed the following for mine: pip install aiohttp asyncio uvicorn fastai starlette timm
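
If you want to confirm the install worked before going further, one quick sanity check is to import the key packages. Here's a tiny sketch you could run with python3 (adjust the list to whatever you actually installed):

# check_deps.py (hypothetical) - fails loudly if a dependency didn't install
import aiohttp
import fastai
import starlette
import timm
import uvicorn

print("All imports OK")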

2. Python script: I found the simplest thing to do is just to open up vim on your Paperspace machine and copy and paste the contents of your script. You can also use the scp command, which I use below for my model.

3. Model: A different way to copy a file over is using the scp command. Here's what it looks like: scp [PATH OF FILE YOU'RE COPYING] paperspace@[PAPERSPACE MACHINE IP ADDRESS]:/home/paperspace/. Then follow the prompts in your terminal.
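
For example, copying the export.pkl model from earlier looks something like this (the IP address here is just a stand-in for your machine's actual address): scp export.pkl paperspace@203.0.113.10:/home/paperspace/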

Run your API

Now you're ready to run your API.

  • In your terminal, enter python3 [FILENAME OF YOUR PYTHON SCRIPT] serve to run your script and expose your API endpoint.

Test your API

  • Use curl to test your API. Since I'm doing image detection, I'm going to send a link to an image as the file form field (the field the script reads) in my POST request. Here's what that looks like in the terminal: curl -X POST -d "file=[IMAGE LINK]" http://[PAPERSPACE IP ADDRESS]:[PORT FROM PYTHON SCRIPT]/[API ROUTE FROM PYTHON SCRIPT]

If everything works well, you should get a JSON response back with your prediction as specified by your Python script.
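
For my image detector, the response looks something like this (the label here is just an illustrative example; yours will come from your model's categories): {"result": "teddy bear"}

Good luck!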
