What is NestJS?
NestJS is a modern Node.js framework designed to help you build efficient and scalable server-side applications. It is built with TypeScript and follows principles from object-oriented and functional programming, making it easy to structure and maintain your code. With its Angular-inspired architecture, it comes packed with helpful features like dependency injection, middleware, and decorators, making it a solid pick for building reliable and powerful APIs.
Why TensorFlow.js for ML?
TensorFlow.js is a JavaScript library that allows developers to train and run machine learning models directly in the browser or Node.js. It gives you the ability to:
- Load pre-trained models or train new ones
- Make real-time predictions
- Run ML computations using WebGL for acceleration

By combining TensorFlow.js with NestJS, you can build smart APIs that pair ML functionality with a lightweight and scalable backend framework.
Use Cases and Goals of the Series
This series will take you from a beginner to a proficient developer capable of integrating ML with NestJS. The main goals include:
- Setting up a NestJS application with TensorFlow.js
- Loading pre-trained models and training custom models
- Building scalable APIs to expose ML capabilities
- Exploring real-world ML applications such as recommendation systems or image recognition
Setting Up the Environment
1. Installing NestJS
Install the NestJS CLI:
npm i -g @nestjs/cli
Create a new NestJS project:
nest new ml-backend
Navigate to the project directory:
cd ml-backend
2. Adding TensorFlow.js to Your Project
Install TensorFlow.js:
npm install @tensorflow/tfjs
3. Overview of the Project Structure
After setting up, your project structure should look like this:
ml-backend/
├── src/
│   ├── app.controller.ts
│   ├── app.service.ts
│   ├── ml/
│   │   ├── ml.service.ts
│   │   └── ml.controller.ts
│   └── main.ts
├── package.json
└── tsconfig.json
In this structure:
- app.controller.ts: Entry point for handling basic routes.
- ml/: Folder dedicated to your machine learning logic and APIs.
Building Your First ML Model
1. How to Load and Use a Pre-Trained Model
Create an ML service:
nest generate service ml
Add the logic to load a pre-trained model in ml.service.ts:
import * as tf from '@tensorflow/tfjs';
import { Injectable } from '@nestjs/common';

@Injectable()
export class MlService {
  private model: tf.LayersModel;

  async loadModel(): Promise<void> {
    this.model = await tf.loadLayersModel('https://path-to-your-model/model.json');
  }

  async predict(inputData: number[]): Promise<number[]> {
    const tensor = tf.tensor2d([inputData], [1, inputData.length]);
    const prediction = this.model.predict(tensor) as tf.Tensor;
    // data() returns a TypedArray asynchronously; convert it to a plain number[]
    const result = Array.from(await prediction.data());
    // Free tensor memory explicitly
    tensor.dispose();
    prediction.dispose();
    return result;
  }
}
Call loadModel() during the application bootstrap process to ensure the model is ready.
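One convenient place to trigger that load is NestJS's OnModuleInit lifecycle hook, which Nest calls once the module's dependencies are resolved. The sketch below simulates the pattern without the framework so it stays self-contained; in a real app you would `import { OnModuleInit } from '@nestjs/common'` and Nest would invoke `onModuleInit()` for you during bootstrap:

```typescript
// Simulated OnModuleInit interface; the real one comes from '@nestjs/common'.
interface OnModuleInit {
  onModuleInit(): void | Promise<void>;
}

class MlService implements OnModuleInit {
  private modelLoaded = false;

  // Stand-in for the real tf.loadLayersModel(...) call shown above.
  async loadModel(): Promise<void> {
    this.modelLoaded = true;
  }

  // Nest calls this automatically during bootstrap, so every later
  // predict() call can assume the model is already in memory.
  async onModuleInit(): Promise<void> {
    await this.loadModel();
  }

  isReady(): boolean {
    return this.modelLoaded;
  }
}
```

Loading once at startup avoids paying the model-download cost on the first request.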
2. Making Predictions Using TensorFlow.js
TensorFlow.js allows you to work with tensors, which are multi-dimensional arrays optimized for ML computations.
Example prediction flow:
const input = [5.1, 3.5, 1.4, 0.2];
const result = await this.mlService.predict(input);
console.log('Prediction:', result);
3. Hands-On Example with a Simple Dataset
To keep it simple, let’s use a pre-trained Iris flower classification model. The input would be an array of four features (sepal length, sepal width, petal length, petal width), and the output will predict the class of the flower.
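The raw output of such a classifier is typically an array of three probabilities, one per class. A small helper can map that to a readable label; note the class order below is an assumption and must match the order the model was trained with:

```typescript
// Hypothetical Iris class names, in the order assumed at training time.
const IRIS_CLASSES = ["setosa", "versicolor", "virginica"];

// Return the class whose probability is highest (argmax).
function toClassLabel(probabilities: number[]): string {
  let best = 0;
  for (let i = 1; i < probabilities.length; i++) {
    if (probabilities[i] > probabilities[best]) best = i;
  }
  return IRIS_CLASSES[best];
}

// e.g. toClassLabel(await this.mlService.predict(input))
```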
Integrating ML into a NestJS API
1. Creating a NestJS Service for ML Logic
The MlService already handles loading the model and making predictions. We’ll now expose this logic through a controller.
2. Building REST APIs to Expose Predictions
Create a controller:
nest generate controller ml
Add an endpoint to ml.controller.ts:
import { Controller, Post, Body } from '@nestjs/common';
import { MlService } from './ml.service';

@Controller('ml')
export class MlController {
  constructor(private readonly mlService: MlService) {}

  @Post('predict')
  async predict(@Body('input') input: number[]): Promise<number[]> {
    return this.mlService.predict(input);
  }
}
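Before handing the body to the service, it is worth validating it: a malformed request would otherwise surface as a cryptic tensor-shape error. Below is a minimal dependency-free sketch of such a check (`validateInput` and `EXPECTED_FEATURES` are hypothetical names; in a real NestJS app you might instead use a DTO with class-validator and a ValidationPipe):

```typescript
// The Iris model above expects exactly four numeric features.
const EXPECTED_FEATURES = 4;

// Throw early with a clear message if the body is not a valid feature vector.
function validateInput(input: unknown): number[] {
  if (
    !Array.isArray(input) ||
    input.length !== EXPECTED_FEATURES ||
    !input.every((v) => typeof v === "number" && Number.isFinite(v))
  ) {
    throw new Error(`input must be an array of ${EXPECTED_FEATURES} finite numbers`);
  }
  return input as number[];
}
```

In the controller you would call `validateInput(input)` before `this.mlService.predict(...)`, and translate the error into an HTTP 400 response.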
3. Testing APIs with Postman
Start the application:
npm run start
Open Postman and send a POST request to:
URL: http://localhost:3000/ml/predict
Body:
{
  "input": [5.1, 3.5, 1.4, 0.2]
}
The response will return predictions for the given input.
Next Steps
In the upcoming articles, we'll explore:
- Training and deploying custom ML models.
- Optimizing API performance for real-world applications.
- Building and deploying a full ML-powered application.