TildAlice

Originally published at tildalice.io

Contrastive Learning for Few-Shot Bearing Fault Classification: SimCLR on CWRU Dataset

⚡ Key Takeaways

- SimCLR achieves 89.3% accuracy with only 5 labeled samples per fault class, outperforming supervised baselines by 34.1% on the CWRU dataset.
- Contrastive pretraining learns robust vibration-signal embeddings by maximizing similarity between augmented views of the same sample while pushing apart different samples (see the loss sketch below).
- Data augmentation strategies for 1D vibration signals (additive noise, time shifting, scaling) are critical for SimCLR performance but require careful tuning to avoid corrupting fault signatures (an example pipeline follows below).
- Projection head dimensionality and the temperature parameter strongly affect convergence; empirical testing shows 128-dim projections and τ=0.07 work best for bearing signals (see the model sketch below).
- SimCLR's computational cost (2-3 hours of pretraining on a single GPU) pays off when labeled data is scarce, but a supervised CNN wins when you have 50+ samples per class.

Why Contrastive Learning Matters for Bearing Diagnostics

You’ve collected vibration data from your rotating machinery. Maybe you’ve got 10,000 hours of normal operation, but only a handful of labeled fault examples: inner race crack, outer race defect, ball fault. The classic supervised learning playbook falls apart here.
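That scarcity is exactly what contrastive pretraining sidesteps: the objective needs no labels at all. As a minimal PyTorch sketch (the function name and batch layout are illustrative, not code from the full article), SimCLR's NT-Xent loss pulls the two augmented views of each sample together and pushes every other sample in the batch away:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.07):
    """NT-Xent loss. z1, z2: (batch, dim) projections of two
    augmented views of the same batch of vibration segments."""
    batch = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, dim), unit length
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))                   # a view can't match itself
    # The positive for view i is the other view of the same sample, at i +/- B.
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets.to(z.device))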

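The augmented views come from transforms that change the waveform while preserving the fault signature. Here is a sketch of the three strategies from the takeaways, in NumPy with illustrative magnitudes (the tuned values from the article may differ):

```python
import numpy as np

def augment_vibration(x, rng, noise_std=0.01, max_shift=0.1, scale_range=(0.9, 1.1)):
    """Return one augmented view of a 1D vibration segment x.
    Magnitudes here are placeholders; tune them so fault impacts survive."""
    n = x.shape[0]
    rms = np.sqrt(np.mean(x ** 2))
    out = x + rng.normal(0.0, noise_std * rms, size=n)   # additive noise vs. signal RMS
    shift = rng.integers(-int(max_shift * n), int(max_shift * n) + 1)
    out = np.roll(out, shift)                            # circular time shift
    return out * rng.uniform(*scale_range)               # amplitude scaling

rng = np.random.default_rng(0)
segment = rng.standard_normal(2048)  # stand-in for a CWRU vibration segment
view1, view2 = augment_vibration(segment, rng), augment_vibration(segment, rng)
```

Two independently augmented views of the same segment form a positive pair for the loss above. Overly aggressive noise or scaling can wash out the periodic impacts that distinguish fault types, which is why the tuning warning in the takeaways matters.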

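Finally, the projection head, where the 128-dim output and τ=0.07 from the takeaways plug in. The convolutional encoder below is a placeholder architecture, not the one from the article:

```python
import torch
import torch.nn as nn

class SimCLR1D(nn.Module):
    """Illustrative 1D-CNN encoder plus SimCLR projection head."""
    def __init__(self, proj_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=64, stride=8), nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=16, stride=4), nn.BatchNorm1d(64), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),       # -> (batch, 64)
        )
        # MLP projection head, used only during contrastive pretraining.
        self.projector = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, proj_dim),
        )

    def forward(self, x):                # x: (batch, 1, n_samples)
        h = self.encoder(x)              # representation kept for downstream tasks
        z = self.projector(h)            # projection fed to the contrastive loss
        return h, z

model = SimCLR1D(proj_dim=128)
x1 = torch.randn(32, 1, 2048)            # view 1 of a batch
x2 = torch.randn(32, 1, 2048)            # view 2 of the same batch
_, z1 = model(x1)
_, z2 = model(x2)
loss = nt_xent_loss(z1, z2, temperature=0.07)  # the loss sketched earlier
```

After pretraining, the usual SimCLR recipe discards the projector and fits a light classifier on the encoder output h using the few labeled samples per class.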
Continue reading the full article on TildAlice
