Amirali Soltani rad
How I Reached 84.35% on CIFAR-100 Using ResNet-50 (PyTorch Guide)

Introduction

Reaching high accuracy on standard benchmarks like CIFAR-100 is challenging. In this article, I show how I trained a ResNet-50 with PyTorch to achieve 84.35% test accuracy and provide a production-ready repository with a live Streamlit demo.

GitHub & Demo: cifar100-classification


Why CIFAR-100 & ResNet-50?

  • CIFAR-100: 100 fine-grained categories with only 500 training images per class → hard to classify.
  • ResNet-50: classic, widely adopted, and great for transfer learning.
  • Public ResNet-50 implementations rarely exceed ~81% on CIFAR-100, so reaching 84.35% is a meaningful result.

Challenges

  • High class similarity → tough separation
  • Limited images → risk of overfitting
  • Mid-range GPU constraints → careful optimization needed

Model & Training Strategy

Architecture: Pretrained ResNet-50 (ImageNet), modified final layer for 100 classes
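
A minimal sketch of that setup with torchvision; the weight variant and head shown here are standard defaults, and the repo's exact details may differ:

```python
import torch.nn as nn
from torchvision import models

# Start from ImageNet-pretrained ResNet-50 and replace the
# 1000-way classifier head with a 100-way head for CIFAR-100.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 100)
```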

Training:

  • OneCycleLR scheduling for fast convergence
  • Progressive fine-tuning: FC only → Layers 3&4 + FC → Full model
  • Mixed-precision training & checkpointing (see the training sketch below)
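
A condensed sketch of one fine-tuning stage, continuing from the model defined above and assuming a `train_loader` built from the augmentation pipeline in the next section. The optimizer choice, learning rates, epoch count, and smoothing value are illustrative, not the repo's exact settings:

```python
import torch
import torch.nn as nn
from torch.cuda.amp import GradScaler, autocast
from torch.optim.lr_scheduler import OneCycleLR

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)  # illustrative smoothing value
model.to(device)

def set_trainable(model, submodule_names):
    """Freeze all parameters, then unfreeze only the named submodules."""
    for p in model.parameters():
        p.requires_grad = False
    for name in submodule_names:
        for p in getattr(model, name).parameters():
            p.requires_grad = True

# Stage 1: head only. Later stages repeat this block with
# ["layer3", "layer4", "fc"], then with every submodule unfrozen.
set_trainable(model, ["fc"])

epochs = 20  # per-stage epoch count is illustrative
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, weight_decay=5e-4
)
scheduler = OneCycleLR(optimizer, max_lr=1e-3,
                       steps_per_epoch=len(train_loader), epochs=epochs)
scaler = GradScaler()

for epoch in range(epochs):
    model.train()
    for images, targets in train_loader:
        images, targets = images.to(device), targets.to(device)
        optimizer.zero_grad(set_to_none=True)
        with autocast():                      # mixed-precision forward pass and loss
            loss = criterion(model(images), targets)
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
        scheduler.step()                      # OneCycleLR steps once per batch
    # Checkpoint whenever validation improves (validation loop omitted):
    # torch.save(model.state_dict(), "model_checkpoint.pth")
```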

Data Augmentation:

  • Resize → ColorJitter → RandomHorizontalFlip
  • RandomRotation (±15°) → RandomAffine → GaussianBlur → RandomErasing
  • Mixup α=0.8 & CutMix α=0.8 (p=0.5)
  • Label smoothing (see the combined sketch below)
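
Putting the list above into code: a torchvision pipeline plus a minimal Mixup helper. Apart from the rotation range (±15°) and α=0.8, the parameter values here are assumptions rather than the repo's settings, and CutMix works analogously by pasting patches instead of blending whole images:

```python
import numpy as np
import torch
from torchvision import transforms

train_tfms = transforms.Compose([
    transforms.Resize(224),                        # upscale 32x32 CIFAR images for ResNet-50
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),
    transforms.GaussianBlur(kernel_size=3),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5071, 0.4867, 0.4408],   # common CIFAR-100 channel stats
                         std=[0.2675, 0.2565, 0.2761]),
    transforms.RandomErasing(p=0.25),              # operates on tensors, so it goes last
])

def mixup(images, targets, alpha=0.8):
    """Blend random pairs of images; return both target sets and the mix ratio."""
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(images.size(0), device=images.device)
    mixed = lam * images + (1 - lam) * images[perm]
    return mixed, targets, targets[perm], lam

# In the training loop, applied with probability 0.5:
# mixed, y_a, y_b, lam = mixup(images, targets)
# out = model(mixed)
# loss = lam * criterion(out, y_a) + (1 - lam) * criterion(out, y_b)
```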

Results

| Metric        | Value  |
| ------------- | ------ |
| Test Accuracy | 84.35% |
| Test Loss     | 0.7622 |
| Best Val Loss | 0.9729 |
| Epochs        | 66     |

Even on a mid-range GPU like the NVIDIA GTX 1650 (~15.5 hours of training), this setup achieved state-of-the-art ResNet-50 results on CIFAR-100.


Interactive Demo & Repo

The Streamlit app lets you upload an image and get predictions with confidence scores.
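
A hypothetical minimal version of such an app; the repo's app.py is the reference, and this sketch assumes the checkpoint stores a plain state_dict:

```python
import streamlit as st
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

@st.cache_resource  # load the network once, not on every rerun
def load_model():
    model = models.resnet50()
    model.fc = nn.Linear(model.fc.in_features, 100)
    model.load_state_dict(torch.load("model_checkpoint.pth", map_location="cpu"))
    model.eval()
    return model

tfms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5071, 0.4867, 0.4408],
                         std=[0.2675, 0.2565, 0.2761]),
])

uploaded = st.file_uploader("Upload an image", type=["png", "jpg", "jpeg"])
if uploaded:
    image = Image.open(uploaded).convert("RGB")
    st.image(image, caption="Input")
    with torch.no_grad():
        probs = torch.softmax(load_model()(tfms(image).unsqueeze(0)), dim=1)
    confidence, pred = probs.max(dim=1)
    st.write(f"Predicted class index: {pred.item()} "
             f"(confidence: {confidence.item():.1%})")
```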

Repo structure:

```
cifar100-classification/
├── app.py
├── train.ipynb
├── requirements.txt
├── README.md
├── model_checkpoint.pth
└── ...
```

Why It Matters

  • Classic architectures + modern tricks (augmentation, LR scheduling) can outperform expectations
  • Sets a practical benchmark for ResNet-50 on CIFAR-100
  • Next steps: ResNet-101, EfficientNet, ViT, larger datasets, API or edge deployment

Call to Action

  • Share with fellow ML & CV enthusiasts
  • Modify augmentations or architecture and test
  • Comment your results — can we push beyond 85%?
