In this post, I am going to give a general overview of AWS SageMaker. I have found SageMaker to be very useful in my day-to-day machine learning work, which is why I decided to write about it.
AWS SageMaker is a fully managed, end-to-end service that helps machine learning engineers and data scientists build, train, and deploy machine learning models on the cloud with less stress.
SageMaker is part of AWS's platform services, a form of platform as a service (PaaS). It provides the tools to carry out an end-to-end machine learning life cycle, but it still requires you to know enough about machine learning to get good results through model fine-tuning.
As a machine learning practitioner, you know that the machine learning life cycle consists of the following:
1) Data Preparation: extracting, transforming, and loading your input data
2) Model Building: using cross-validation methods to select the right model for your use case
3) Model Training and Fine-tuning: training, debugging, and fine-tuning your models for better results
4) Model Deployment and Monitoring: deploying your trained models to production and monitoring how they perform there
So how does AWS SageMaker fit into the machine learning life cycle above?
Data Preparation: you can use S3 buckets to collect and store your data, and apply your ETL process to the data stored in the bucket. The Ground Truth feature of SageMaker lets you annotate your input dataset using labeling workflows, and it can also provide a workforce for labeling the data. It supports labeling for a variety of input data types, which can speed up the data preparation stage, often considered the most tedious part of the machine learning process.
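As a quick sketch of what staging data for SageMaker can look like with the SageMaker Python SDK, assuming you already have a local CSV file and the usual SageMaker permissions in your account (the file name and key prefix below are placeholders I made up for illustration):

```python
import sagemaker

# Create a SageMaker session tied to your current AWS credentials/region.
session = sagemaker.Session()

# Upload a local file to the session's default S3 bucket under a chosen prefix.
# "train.csv" and the "churn-demo" prefix are placeholder names for illustration.
s3_input_path = session.upload_data(
    path="train.csv",
    key_prefix="churn-demo/input",
)

print(s3_input_path)  # e.g. s3://<default-bucket>/churn-demo/input/train.csv
```

From there, the S3 URI is what you hand to labeling jobs, processing jobs, or training jobs.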
Model Building and Training: SageMaker notebooks and the training-job functionality are both used for model building and training. The choice between them depends mainly on the size of your input data (I will write a more detailed, practical article on using either notebooks or training jobs to build and train your ML models).
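To give a feel for the training-job side, here is a minimal sketch using the built-in XGBoost container via the SageMaker Python SDK; the IAM role ARN, bucket paths, and hyperparameters are placeholder values for illustration, not a prescription:

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical role ARN

# Look up the managed XGBoost container image for the current region.
image_uri = sagemaker.image_uris.retrieve(
    framework="xgboost",
    region=session.boto_region_name,
    version="1.7-1",
)

# Configure a managed training job: instance type, count, and output location.
estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/churn-demo/output",  # placeholder bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=100)

# Launch the training job against the data staged in S3 earlier.
estimator.fit(
    {"train": TrainingInput("s3://my-bucket/churn-demo/input", content_type="text/csv")}
)
```

SageMaker spins up the training instances, runs the job, writes the model artifact back to S3, and tears the instances down, so you only pay for the training time itself.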
Model Deployment and Monitoring: the Inference section of SageMaker is responsible for deploying your trained models. First you create an endpoint from the trained model, which can then be invoked over HTTPS to get predictions.
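As a rough sketch of that flow, continuing from a trained estimator like the one above (the endpoint name, instance type, and request payload are made-up placeholders):

```python
import boto3
from sagemaker.serializers import CSVSerializer

# Deploy the trained model behind a real-time endpoint.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="churn-demo-endpoint",  # hypothetical endpoint name
    serializer=CSVSerializer(),
)

# The endpoint is now callable over HTTPS; from Python, the runtime client
# wraps that call. The feature string below is dummy data.
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="churn-demo-endpoint",
    ContentType="text/csv",
    Body="42,0,1,3.5",
)
print(response["Body"].read().decode())
```

Once the endpoint is live, you can attach monitoring (for example, data-quality checks on the requests it receives) to keep an eye on how the model behaves in production.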