Sainath Patil

I Deployed a Machine Learning Model to AWS SageMaker Using GitHub Actions (With Just One Command!)

I built an ML model that learns from flower data and makes predictions, deployed fully in the cloud using AWS SageMaker.


😩 The Problem: Training ML Models Is Easy. Deploying? Not So Much.

As a student developer, I've trained plenty of models locally.
But deploying them to production gets messy:

  • Uploading files
  • Writing inference.py scripts
  • Managing AWS configs
  • Manually setting up endpoints

I wanted a setup where I could:

✅ Train a model
✅ Zip it
✅ Push to GitHub
✅ Let GitHub Actions deploy it to AWS SageMaker
✅ And then test it in real-time


🧰 The Stack

Here’s what I used:

  • Scikit-learn for training a basic ML model on the Iris dataset
  • AWS S3 for model storage
  • AWS SageMaker for deployment
  • GitHub Actions for CI/CD automation
  • Python + Boto3 to trigger and test predictions

🛠️ How It Works

The model is trained with this simple scikit-learn script:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris
import pickle

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200)
model.fit(X, y)

with open("model.pkl", "wb") as f:
    pickle.dump(model, f)
```

Then I packaged the model into the `model.tar.gz` archive SageMaker expects:

```bash
tar -czf model.tar.gz model.pkl
```
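The same packaging step can also be done portably in Python with the standard library's tarfile module. To keep the snippet self-contained, it pickles a placeholder object in place of the real trained model:

```python
import pickle
import tarfile

# Stand-in for the trained model, so this snippet runs on its own
with open("model.pkl", "wb") as f:
    pickle.dump({"placeholder": True}, f)

# SageMaker expects the model artifact as a gzipped tarball
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("model.pkl")

print("Created model.tar.gz")
```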

To tell SageMaker how to load the model and handle requests, I wrote this inference.py:

```python
import os
import pickle

def model_fn(model_dir):
    # Load the pickled model from the model directory
    with open(os.path.join(model_dir, "model.pkl"), "rb") as f:
        return pickle.load(f)

def input_fn(request_body, content_type):
    # Handle incoming CSV: parse one row of comma-separated features
    return [[float(v) for v in request_body.split(",")]]

def predict_fn(data, model):
    # Make predictions with the loaded model
    return model.predict(data)

def output_fn(prediction, accept):
    # Return the result as plain text
    return str(prediction[0])
```


💻 GitHub Actions Does the Magic

I created a GitHub Actions workflow that deploys the model on every push to main:

```yaml
name: Deploy to SageMaker

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
    - uses: actions/setup-python@v4
    - name: Install dependencies
      run: pip install boto3 sagemaker
    - name: Deploy Model
      env:
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_DEFAULT_REGION: ap-south-1  # same region the endpoint is queried in
      run: python deploy.py
```
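The workflow's final step runs deploy.py, which isn't shown above. As a rough sketch (not the exact script), deploying the tarball with the SageMaker Python SDK's `SKLearnModel` could look like this; the S3 path, IAM role ARN, instance type, and framework version are placeholder assumptions you'd swap for your own:

```python
from sagemaker.sklearn import SKLearnModel

# Placeholder values -- replace with your own S3 bucket and IAM role ARN
model = SKLearnModel(
    model_data="s3://my-bucket/model.tar.gz",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    entry_point="inference.py",
    framework_version="1.2-1",
)

# Spin up the real-time endpoint that the test script calls later
model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="my-sagemaker-endpoint",
)
```

Note that creating an endpoint whose name already exists fails, so a production script would also handle updating or deleting the previous endpoint.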

Now every time I push to GitHub, a new version of my model gets deployed. 🚀


🧪 Real-Time Prediction Test

After deployment, I tested my SageMaker endpoint with Python:

```python
import boto3

endpoint_name = "my-sagemaker-endpoint"
csv_input = "5.1,3.5,1.4,0.2"  # one Iris sample: sepal/petal measurements

runtime = boto3.client("sagemaker-runtime", region_name="ap-south-1")
response = runtime.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="text/csv",
    Body=csv_input
)

print("🧠 Prediction:", response["Body"].read().decode())
```

It worked: the endpoint returned the Iris class predicted from those four measurements! 🌸
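The endpoint returns a numeric class index. To turn that into a species name, you can look it up in scikit-learn's own dataset metadata (the `prediction` value below is hard-coded for illustration):

```python
from sklearn.datasets import load_iris

# Map a predicted class index back to the Iris species name
target_names = load_iris().target_names
prediction = "0"  # stand-in for the decoded endpoint response
print("🌸 Species:", target_names[int(prediction)])  # → setosa
```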


📽️ Demo Video

🎥 Watch how the model trains, deploys, and predicts, all in under 20 minutes:
