Citadel Cloud Management

Posted on • Originally published at citadelcloudmanagement.com

AI/ML Infrastructure on AWS: A Production-Ready Blueprint

Deploying ML models to production requires more than just a SageMaker endpoint. Here's the 5-layer architecture I use for every ML deployment.

Layer 1: Data Layer (FSx for Lustre + S3)

Training data needs high-throughput storage:

# Create FSx for Lustre linked to S3 training data
# (the subnet ID below is a required flag; value is a placeholder)
aws fsx create-file-system \
  --file-system-type LUSTRE \
  --storage-capacity 1200 \
  --subnet-ids subnet-0123456789abcdef0 \
  --lustre-configuration ImportPath=s3://training-data-bucket

FSx for Lustre scales to 100+ GB/s of aggregate throughput, far beyond what a single training job typically pulls from S3 directly. In my experience, a training run that takes 8 hours reading from S3 finishes in about 45 minutes.

Layer 2: Compute (EKS with GPU Karpenter)

# Karpenter provisioner for GPU nodes
# (legacy v1alpha5 API; newer Karpenter releases use the NodePool CRD)
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: gpu-training
spec:
  requirements:
    - key: node.kubernetes.io/instance-type
      operator: In
      values: ["p4d.24xlarge", "p3.8xlarge", "g5.12xlarge"]
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot", "on-demand"]
  limits:
    resources:
      nvidia.com/gpu: 32

Spot GPU instances typically run 60-70% cheaper than on-demand, and Karpenter automatically provisions whichever GPU instance type from the list is available.
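For reference, a training Job whose GPU request would be matched by this provisioner can be sketched as a plain manifest, built here as a Python dict (the image name, Job name, and GPU count are illustrative):

```python
def training_job_manifest(name: str, image: str, gpus: int) -> dict:
    """Build a Kubernetes Job manifest requesting `gpus` NVIDIA GPUs."""
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": name},
        "spec": {
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "trainer",
                        "image": image,
                        # The GPU limit is what makes Karpenter pick a GPU instance type
                        "resources": {"limits": {"nvidia.com/gpu": str(gpus)}},
                    }],
                    # Tolerate the taint typically applied to GPU nodes
                    "tolerations": [{
                        "key": "nvidia.com/gpu",
                        "operator": "Exists",
                        "effect": "NoSchedule",
                    }],
                }
            }
        },
    }

manifest = training_job_manifest("resnet-train", "my-registry/trainer:latest", gpus=8)
```

Apply it with `kubectl apply` (or the Kubernetes Python client); the pending pod's GPU request is what triggers Karpenter to launch a node.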

Layer 3: Model Registry (SageMaker)

import sagemaker
from sagemaker.model import ModelPackage

session = sagemaker.Session()
sagemaker_role = sagemaker.get_execution_role()  # or an explicit IAM role ARN

model_package = ModelPackage(
    model_package_arn="arn:aws:sagemaker:us-east-1:123456:model-package/my-model/1",
    role=sagemaker_role,
    sagemaker_session=session
)

# Deploy behind a real-time endpoint
predictor = model_package.deploy(
    initial_instance_count=2,
    instance_type="ml.g5.xlarge",
    endpoint_name="production-inference"
)
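The deploy call creates the endpoint, but auto-scaling is registered separately through Application Auto Scaling. A minimal sketch with boto3 (the policy name, capacity bounds, and the 70-invocations target are illustrative defaults, not prescriptions):

```python
def invocation_scaling_policy(target_invocations: float = 70.0) -> dict:
    """Target-tracking config: scale on invocations per instance per minute."""
    return {
        "TargetValue": target_invocations,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance",
        },
        "ScaleInCooldown": 300,   # seconds to wait before scaling in
        "ScaleOutCooldown": 60,   # seconds to wait before scaling out again
    }

def register_endpoint_scaling(endpoint_name: str, variant: str = "AllTraffic",
                              min_capacity: int = 2, max_capacity: int = 10) -> None:
    import boto3  # imported here so the helper above stays AWS-free
    client = boto3.client("application-autoscaling")
    resource_id = f"endpoint/{endpoint_name}/variant/{variant}"
    client.register_scalable_target(
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        MinCapacity=min_capacity,
        MaxCapacity=max_capacity,
    )
    client.put_scaling_policy(
        PolicyName=f"{endpoint_name}-invocations",
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration=invocation_scaling_policy(),
    )

# register_endpoint_scaling("production-inference")
```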

Layer 4: Inference (Multi-Model Endpoints)

Host 10+ models on a single endpoint to cut costs:

from sagemaker.multidatamodel import MultiDataModel

# `model` is a sagemaker.model.Model whose container loads artifacts
# on demand from model_data_prefix
mme = MultiDataModel(
    name="multi-model-endpoint",
    model_data_prefix=f"s3://{bucket}/models/",
    model=model,
    sagemaker_session=session
)
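With a multi-model endpoint, each request names which artifact under `model_data_prefix` to run. A sketch of building the `sagemaker-runtime` invocation arguments (the endpoint name, artifact key, and payload shape are illustrative):

```python
import json

def mme_invoke_args(endpoint_name: str, target_model: str, payload: dict) -> dict:
    """Build arguments for sagemaker-runtime invoke_endpoint against an MME."""
    return {
        "EndpointName": endpoint_name,
        "TargetModel": target_model,  # S3 key relative to model_data_prefix
        "ContentType": "application/json",
        "Body": json.dumps(payload),
    }

args = mme_invoke_args("multi-model-endpoint", "churn-v3.tar.gz", {"features": [1.0, 2.0]})
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint(**args)
```

The first request for a given model loads it from S3 into the container, so expect a cold-start latency hit on rarely used models.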

Layer 5: Monitoring (Drift Detection)

from sagemaker.model_monitor import DataCaptureConfig

# Pass as data_capture_config= to model.deploy() to log requests/responses
data_capture = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=20,
    destination_s3_uri=f"s3://{bucket}/capture"
)

Monitor for data drift, model drift, and feature importance changes.
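SageMaker Model Monitor computes its own baseline statistics, but the core idea behind data-drift alarms can be sketched with the Population Stability Index, a common drift metric, comparing the training-time feature distribution to the captured live one (the bin values and the 0.2 alarm threshold below are illustrative):

```python
import math

def psi(expected: list, actual: list, eps: float = 1e-6) -> float:
    """Population Stability Index between two binned distributions
    (each a list of bin fractions summing to ~1). Higher = more drift."""
    score = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # avoid log(0)
        score += (a - e) * math.log(a / e)
    return score

baseline = [0.25, 0.25, 0.25, 0.25]  # feature distribution at training time
live     = [0.10, 0.20, 0.30, 0.40]  # distribution from captured requests
drifted = psi(baseline, live) > 0.2  # PSI > 0.2 is a common alarm threshold
```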


Complete ML Infrastructure Resources

40+ AI/ML toolkits with Terraform modules, pipeline templates, and deployment blueprints: AI/ML Toolkits

Architecture blueprints for production ML: Architecture Blueprints

Free AI/ML course: Free Courses

What's your ML infrastructure stack?
