Retrospective learnings from building a serverless API that uses a regression model built with ML.NET
Update: You can now download the full sample of this project from the Azure Serverless Community website! (along with some other fantastic samples if you want to explore Azure Functions further!)
With the release of ML.NET, an API that C# developers can use to infuse their applications with machine learning capability, I’ve been keen to combine my knowledge of Azure Functions with it to build some wacky serverless machine learning applications that would allow me to enhance my GitHub profile and cater to all the buzzword enthusiasts out there!
(And of course, to increase my own skills and learn something new 😂)
This post won’t be a tutorial. I’m writing it more as a retrospective on the design decisions I made while building the application and the things I learnt about how the different components work. Should you read this and decide to build upon it for your real-world applications, hopefully you can apply what I’ve learnt in your own projects or, better yet, expand on the ideas and scenarios I was working with.
I’ll be focusing more on what I learnt about the ML.NET API itself rather than spending too much time on how Azure Functions work. If you want to check out the code, here’s the repo on GitHub.
Since I was doing this project just for a bit of fun, I did take some shortcuts that wouldn’t work in real-life production scenarios, so please be sympathetic about that (As I’m sure you will be 😊).
Overview of the application
This sample builds on the Taxi Trip Regression tutorial that the ML.NET team provided on their docs. But for this sample, I’ve expanded that tutorial a little bit for the following scenario.
Let’s say we run a taxi company, and every evening we receive a file of all the day’s taxi trips. We want to build a regression model that our customers can use to predict how much their cab ride will cost.
For this, I’ve built two Azure Functions:
- ServerlessPricePredictor.ModelTrainer takes a local CSV file and trains a regression model on top of it. Provided the model is a good fit (measured by its R-squared value), the model is then uploaded to a container in Azure Blob Storage. This function uses a Timer Trigger to simulate a timed batch processing job (see the sketch after this list).
- ServerlessPricePredictor.API then uses the trained model from Azure Blob Storage in an HTTP Trigger to create a prediction based on input data (a JSON payload) and inserts this prediction into Azure Cosmos DB.
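To give a feel for the shape of the trainer, here’s a minimal sketch of a timer-triggered entry point. The CRON schedule (midnight every day) and the names here are my illustrative assumptions, not the repo’s exact code:

[FunctionName("TrainModel")]
public void Run([TimerTrigger("0 0 0 * * *")] TimerInfo timer, ILogger log)
{
    log.LogInformation($"Model training triggered at: {DateTime.Now}");
    // 1. Load the day's CSV into an IDataView
    // 2. Train the regression pipeline and evaluate its R-squared value
    // 3. If it's a good fit, upload the saved model to Blob Storage
}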
Model Trainer Function
In ML.NET, all operations needed to create our machine learning pipeline are provided through the MLContext class. This allows us to perform data preparation, feature engineering, training, prediction and model evaluation.
In order for us to load the data to train our regression model on, ML.NET provides the IDataView interface, which describes numerical or text tabular data. MLContext gives us methods to load our data files (which I just stuck in a folder within my project) into an IDataView.
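The snippets that follow assume an MLContext instance already exists; creating one is a one-liner (the seed is optional and just makes runs reproducible):

// Create the ML.NET context that all pipeline operations hang off
var mlContext = new MLContext(seed: 0);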
We can then use the MLContext class to perform our machine learning tasks. First up, I’ve used the LoadFromTextFile method to load our local file into an IDataView object. To create the pipeline, I’ve copied the FareAmount column into the Label column, since that’s the value we want our regression model to predict. I’ve then applied one-hot encoding to the VendorId, RateCode and PaymentType columns (regression trainers need numerical features, so categorical values have to be encoded first), and concatenated all the features I want to use into a single Features column. After all that, I’ve appended a regression trainer to the pipeline.
I’ve played around a bit with the Spark MLlib library before, so the process here in ML.NET is very similar, which I thought was quite cool! 😎
Finally, I’ve created an ITransformer model object by fitting our pipeline to our IDataView object.
// Read the flat file from a local folder
_logger.LogInformation("Loading the file into the pipeline");
IDataView dataView = mlContext.Data.LoadFromTextFile<TaxiTrip>(trainFilePath, hasHeader: true, separatorChar: ',');

// Create the pipeline
_logger.LogInformation("Training pipeline");
var pipeline = mlContext.Transforms.CopyColumns(outputColumnName: "Label", inputColumnName: "FareAmount")
    .Append(mlContext.Transforms.Categorical.OneHotEncoding(outputColumnName: "VendorIdEncoded", inputColumnName: "VendorId"))
    .Append(mlContext.Transforms.Categorical.OneHotEncoding(outputColumnName: "RateCodeEncoded", inputColumnName: "RateCode"))
    .Append(mlContext.Transforms.Categorical.OneHotEncoding(outputColumnName: "PaymentTypeEncoded", inputColumnName: "PaymentType"))
    .Append(mlContext.Transforms.Concatenate("Features", "VendorIdEncoded", "RateCodeEncoded", "PassengerCount", "TripDistance", "PaymentTypeEncoded"))
    .Append(mlContext.Regression.Trainers.FastTree());

// Fit the model to the training data
_logger.LogInformation("Fitting model");
var model = pipeline.Fit(dataView);
I’ve decorated the fields of my TaxiTrip class with the LoadColumnAttribute, which specifies the index of each column in our data set:
public class TaxiTrip
{
    [LoadColumn(0)]
    public string VendorId;

    [LoadColumn(1)]
    public string RateCode;

    [LoadColumn(2)]
    public float PassengerCount;

    [LoadColumn(3)]
    public float TripTime;

    [LoadColumn(4)]
    public float TripDistance;

    [LoadColumn(5)]
    public string PaymentType;

    [LoadColumn(6)]
    public float FareAmount;
}
For our prediction, I’ve created another class that uses the ColumnNameAttribute decoration. This maps the FareAmount property (the value we’re trying to predict) to the Score column that the regression trainer outputs:
public class TaxiTripFarePrediction
{
    [ColumnName("Score")]
    public float FareAmount;
}
In real-world machine learning scenarios, we wouldn’t deploy models without knowing whether they performed well or were a good fit for our data.
The ML.NET API provides us with metrics that we can use to evaluate the effectiveness of our models. I’ve created the following Evaluate() method that takes in an MLContext, our model and the path to our test data file:
private double Evaluate(MLContext mlContext, ITransformer model, string testFilePath)
{
    IDataView dataView = mlContext.Data.LoadFromTextFile<TaxiTrip>(testFilePath, hasHeader: true, separatorChar: ',');
    var predictions = model.Transform(dataView);
    var metrics = mlContext.Regression.Evaluate(predictions, "Label", "Score");
    double rSquaredValue = metrics.RSquared;
    return rSquaredValue;
}
Here, I’ve loaded my test data into an IDataView object and then applied my model to that data with Transform(). The Transform() method doesn’t eagerly compute anything at this point; it validates the schema of my test file against the model, and the actual predictions are produced when Evaluate() enumerates the data.
For this example, I’m just using the r-squared value to test the effectiveness of my model, but the RegressionMetrics class allows us to retrieve the LossFunction, MeanAbsoluteError, MeanSquaredError, RootMeanSquaredError and RSquared values for our regression model.
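If you wanted to log the other metrics alongside R-squared, a quick sketch (reusing the metrics variable from the Evaluate() method above) might look like this:

// RegressionMetrics exposes several measures of fit beyond R-squared
_logger.LogInformation($"R-Squared: {metrics.RSquared}");
_logger.LogInformation($"Mean Absolute Error: {metrics.MeanAbsoluteError}");
_logger.LogInformation($"Mean Squared Error: {metrics.MeanSquaredError}");
_logger.LogInformation($"Root Mean Squared Error: {metrics.RootMeanSquaredError}");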
Once we’ve calculated the R-squared metric for our model, and provided the score shows it’s a good fit, we upload the model’s zip file to Azure Storage:
if (modelRSquaredValue >= 0.7)
{
    _logger.LogInformation("Good fit! Saving model");
    mlContext.Model.Save(model, dataView.Schema, modelPath);

    // Upload the model to Blob Storage
    _logger.LogInformation("Uploading model to Blob Storage");
    await _azureStorageHelpers.UploadBlobToStorage(cloudBlobContainer, modelPath);
}
The main thing to note here is that I save my model first, then upload it to Azure Storage as a file. We save our models with MLContext.Model.Save, which takes in our model, the schema of our IDataView object and the path we want to save the model to. I’ve created my own helper class to upload the saved model to a specified blob container in Azure Storage.
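I won’t reproduce the whole helper here, but the core of an UploadBlobToStorage method might look something like this sketch (using CloudBlobContainer and CloudBlockBlob from the Microsoft.Azure.Storage.Blob namespace; treat the details as my assumptions rather than the repo’s exact code):

public async Task UploadBlobToStorage(CloudBlobContainer cloudBlobContainer, string filePath)
{
    // Name the blob after the local file (e.g. "Model.zip") and upload its contents
    CloudBlockBlob blockBlob = cloudBlobContainer.GetBlockBlobReference(Path.GetFileName(filePath));
    await blockBlob.UploadFromFileAsync(filePath);
}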
(N.B: I’ve tried uploading the model as a Stream rather than a file, but when I attempt to inject the model into an HTTP Trigger Function or an ASP.NET Web API app, I’ve always run into problems. One day I’ll figure it out or, more likely, someone smarter than me will tell me what I’m doing wrong 😂)
Using our model in an HTTP API Function
If we want to make predictions in our APIs, we’ll need to create a PredictionEngine. Essentially this allows us to use a trained model to make single predictions. However, it isn’t thread safe, so I’ve used a PredictionEnginePool instead and registered it in my Startup class, giving us a singleton instance that’s available throughout the application. Without this, if we were making predictions across several functions within our API Function App, we’d have to manage a PredictionEngine instance for each prediction, which would be a nightmare. I’ve done this like so:
builder.Services.AddPredictionEnginePool<TaxiTrip, TaxiTripFarePrediction>()
    .FromUri(
        modelName: "TaxiTripModel",
        uri: "https://velidastorage.blob.core.windows.net/mlmodels/Model.zip",
        period: TimeSpan.FromMinutes(1));
Here we’re pointing at our model stored remotely in Azure Blob Storage. Models shouldn’t be static, so the period parameter defines how often the application polls the uri for a new model (every minute in this case).
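For context, that registration lives inside a Functions Startup class (using the Microsoft.Azure.Functions.Extensions.DependencyInjection and Microsoft.Extensions.ML packages). Here’s a minimal sketch of what that might look like; the namespace and class layout are my assumptions, not necessarily the repo’s exact code:

[assembly: FunctionsStartup(typeof(ServerlessPricePredictor.API.Startup))]
namespace ServerlessPricePredictor.API
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Register a singleton PredictionEnginePool that polls Blob Storage for new models
            builder.Services.AddPredictionEnginePool<TaxiTrip, TaxiTripFarePrediction>()
                .FromUri(
                    modelName: "TaxiTripModel",
                    uri: "https://velidastorage.blob.core.windows.net/mlmodels/Model.zip",
                    period: TimeSpan.FromMinutes(1));
        }
    }
}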
Once we’ve injected that into our app, we can then use our PredictionEnginePool to make predictions on our requests:
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
var input = JsonConvert.DeserializeObject<TaxiTrip>(requestBody);

// Make a prediction
TaxiTripFarePrediction prediction = _predictionEnginePool.Predict(
    modelName: "TaxiTripModel",
    example: input);
The Predict() method here allows us to make a single prediction on our JSON input. We specify the model we want to use by the same name we gave it when we registered it in our app.
In this example, I’m creating new predictions based on input data and then inserting the new prediction values into a Cosmos DB container:
[FunctionName(nameof(PredictTaxiFare))]
public async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PredictTaxiFare")] HttpRequest req)
{
    IActionResult returnValue = null;
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    var input = JsonConvert.DeserializeObject<TaxiTrip>(requestBody);

    // Make a prediction
    TaxiTripFarePrediction prediction = _predictionEnginePool.Predict(
        modelName: "TaxiTripModel",
        example: input);

    var insertedPrediction = new TaxiTripInsertObject
    {
        Id = Guid.NewGuid().ToString(),
        VendorId = input.VendorId,
        RateCode = input.RateCode,
        PassengerCount = input.PassengerCount,
        TripTime = input.TripTime,
        TripDistance = input.TripDistance,
        PaymentType = input.PaymentType,
        FareAmount = input.FareAmount,
        PredictedFareAmount = prediction.FareAmount
    };

    try
    {
        ItemResponse<TaxiTripInsertObject> predictionResponse = await _container.CreateItemAsync(
            insertedPrediction,
            new PartitionKey(insertedPrediction.VendorId));
        returnValue = new OkObjectResult(predictionResponse);
    }
    catch (Exception ex)
    {
        _logger.LogError($"Inserting prediction failed: Exception thrown: {ex.Message}");
        returnValue = new StatusCodeResult(StatusCodes.Status500InternalServerError);
    }

    return returnValue;
}
Sounds great! What do I need to get this sample working myself?
If you fancy building this app yourself (or expanding on it), you’ll need to create an Azure Storage account with a container that you can upload your models to. Check out this article to see how you can do that. You’ll also need to create a Cosmos DB account that uses the SQL API. Check out this article to get that going.
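Since the API function above has a Cosmos DB Container (_container) injected into it, you’ll also want to register one in Startup. A minimal sketch, where the connection string setting name and the database/container names ("PredictionsDB"/"Predictions") are placeholders of my own:

// Register a singleton Cosmos DB Container for the function to insert predictions into
builder.Services.AddSingleton((s) =>
{
    CosmosClient cosmosClient = new CosmosClient(
        Environment.GetEnvironmentVariable("CosmosDBConnectionString"));
    return cosmosClient.GetContainer("PredictionsDB", "Predictions");
});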
You can run Azure Functions locally using Visual Studio and you can use Postman to test the API on your machine.
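For example, a POST to the local endpoint with a body shaped like the TaxiTrip class should return a prediction (the values here are purely illustrative):

POST http://localhost:7071/api/PredictTaxiFare
{
  "VendorId": "CMT",
  "RateCode": "1",
  "PassengerCount": 1,
  "TripTime": 1271,
  "TripDistance": 3.8,
  "PaymentType": "CRD",
  "FareAmount": 17.5
}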
Conclusion
I hope you’ve learnt a little bit about the ML.NET library and how you can use it to build basic, but pretty awesome Serverless Machine Learning solutions. Bear in mind I did say basic. I did take a few shortcuts just to get something working for the sake of this blog post, and doing this in a production scenario would be far more complex. But it is doable and ML.NET is a great library to use if you’re already using the .NET stack within your work.
If you want to see the full code, please check it out on GitHub. If you have any questions for me, please comment below. 😊