<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: 1DS23AI063 VISHUNU RAJAGOPALAN</title>
    <description>The latest articles on DEV Community by 1DS23AI063 VISHUNU RAJAGOPALAN (@1ds23ai063_vishunurajago).</description>
    <link>https://dev.to/1ds23ai063_vishunurajago</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3890125%2Fb34a3597-a64b-409b-a54a-e6116d8a04a1.png</url>
      <title>DEV Community: 1DS23AI063 VISHUNU RAJAGOPALAN</title>
      <link>https://dev.to/1ds23ai063_vishunurajago</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/1ds23ai063_vishunurajago"/>
    <language>en</language>
    <item>
<title>Fine-Tuning ResNet-101: Tackling Overfitting #deeplearning #machinelearning #computervision #python #tensorflow</title>
      <dc:creator>1DS23AI063 VISHUNU RAJAGOPALAN</dc:creator>
      <pubDate>Tue, 21 Apr 2026 15:56:58 +0000</pubDate>
      <link>https://dev.to/1ds23ai063_vishunurajago/fine-tuning-resnet-101-tackling-overfitting-deeplearning-machinelearning-computervision-python-3e9f</link>
      <guid>https://dev.to/1ds23ai063_vishunurajago/fine-tuning-resnet-101-tackling-overfitting-deeplearning-machinelearning-computervision-python-3e9f</guid>
<description>&lt;p&gt;Transfer learning with pretrained CNNs sounds simple: take a model like ResNet-101, replace the final layer, and train. In practice, however, two major challenges arise when working with small datasets: the domain gap and overfitting.&lt;br&gt;
Context&lt;br&gt;
This project applies ResNet-101 to a specialized image classification task using a small dataset. The techniques used are applicable to any domain where pretrained models are adapted to limited data scenarios.&lt;br&gt;
Why ResNet-101?&lt;br&gt;
ResNet-101 is a deep convolutional neural network with 101 layers, built on residual (skip) connections. These connections allow inputs to bypass layers, helping solve the vanishing gradient problem and enabling stable training of deep networks.&lt;br&gt;
Model Architecture&lt;br&gt;
The original classification layer is removed and replaced with a custom head consisting of GlobalAveragePooling, Dropout, and a Dense Softmax layer. This allows the model to adapt to new classification tasks while retaining learned features.&lt;br&gt;
Challenges&lt;br&gt;
Domain Gap: Pretrained models learn generic features that may not directly transfer to specialized tasks.&lt;br&gt;
Overfitting: Large models with small datasets tend to memorize rather than generalize.&lt;br&gt;
Solutions&lt;/p&gt;
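As a minimal sketch of the residual idea described above, here is a simplified identity block in Keras. It is illustrative only: the real ResNet-101 uses bottleneck blocks with batch normalization, and the filter count here is a made-up example.

```python
import tensorflow as tf

def residual_block(x, filters):
    """Simplified identity block: the input skips past two conv layers."""
    shortcut = x
    y = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = tf.keras.layers.Conv2D(filters, 3, padding="same")(y)
    # Skip connection: output = x + F(x), so gradients can flow past the convs.
    y = tf.keras.layers.Add()([shortcut, y])
    return tf.keras.layers.Activation("relu")(y)

inputs = tf.keras.Input(shape=(32, 32, 16))
outputs = residual_block(inputs, 16)
block = tf.keras.Model(inputs, outputs)
```

Because the Add node passes gradients straight through the shortcut, stacking many such blocks stays trainable where a plain deep stack of convolutions would suffer from vanishing gradients.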

&lt;ol&gt;
&lt;li&gt;Data Augmentation
Mild augmentation techniques such as flipping, rotation, zoom, and brightness adjustment are used to increase dataset diversity while preserving meaningful features.&lt;/li&gt;
&lt;li&gt;Two-Stage Fine-Tuning
Stage 1: Freeze the base model and train only the custom head.
Stage 2: Unfreeze the top layers and train with a very low learning rate to adapt to domain-specific features.&lt;/li&gt;
&lt;li&gt;Regularization Techniques
Dropout discourages memorization. EarlyStopping halts training once validation loss stops improving, and ReduceLROnPlateau lowers the learning rate when validation loss plateaus.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Key Takeaways&lt;br&gt;
Transfer learning is effective for small datasets, handling the domain gap and preventing overfitting are crucial, and the fine-tuning strategy has a significant impact on performance.&lt;br&gt;
Implementation&lt;br&gt;
&lt;a href="https://colab.research.google.com/drive/1uHGUZGnOM7KLVf0FLFEhgIIRUGFdlWVh?usp=sharing" rel="noopener noreferrer"&gt;https://colab.research.google.com/drive/1uHGUZGnOM7KLVf0FLFEhgIIRUGFdlWVh?usp=sharing&lt;/a&gt;&lt;br&gt;
Conclusion&lt;br&gt;
Pretrained models require careful adaptation, but with the right techniques they can achieve strong performance even with limited data.&lt;/p&gt;
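The mild augmentation from step 1 might look like the following Keras preprocessing pipeline. The specific factors are illustrative guesses, not values from the post, and RandomBrightness requires TF 2.9 or newer.

```python
import tensorflow as tf

# Mild augmentation: flips, small rotations, modest zoom, brightness jitter.
# Factors are kept small so class-defining features are preserved.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.05),    # roughly plus/minus 18 degrees
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.RandomBrightness(0.1),
])
```

Applied with `training=True` inside the model or the input pipeline, these layers produce a different random variant of each image every epoch, which effectively enlarges a small dataset.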
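The custom head and the two-stage recipe from steps 2 and 3 could be sketched as below. The class count, the number of unfrozen layers, the learning rates, and the `train_ds`/`val_ds` datasets are all placeholders, and `weights=None` stands in for `weights="imagenet"` only to keep the sketch download-free.

```python
import tensorflow as tf

# weights=None avoids the ImageNet download in this sketch; in practice use
# weights="imagenet" so pretrained features are actually transferred.
base = tf.keras.applications.ResNet101(
    include_top=False, weights=None, input_shape=(224, 224, 3))

# Custom head: GlobalAveragePooling, Dropout, Dense softmax.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 classes is a placeholder
])

# Regularization callbacks: stop when validation loss stops improving,
# and shrink the learning rate when progress stalls.
callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True),
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.2,
                                         patience=3),
]

# Stage 1: freeze the entire base and train only the custom head.
base.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10, callbacks=callbacks)

# Stage 2: unfreeze only the top of the base and drop the learning rate.
base.trainable = True
for layer in base.layers[:-30]:  # keep all but the last ~30 layers frozen
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20, callbacks=callbacks)
```

Recompiling between stages is deliberate: changing `trainable` only takes effect after `compile`, and the much lower stage-2 learning rate keeps the pretrained features from being destroyed while the top layers adapt to the new domain.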

</description>
      <category>ai</category>
      <category>deeplearning</category>
      <category>machinelearning</category>
      <category>python</category>
    </item>
  </channel>
</rss>
