Sandeep Balachandran

Machine Learning - Image Augmentation

Hey there,
It's day 12 of Quarantine.

Hoping everyone's doing good. We talked about validation in the last post. Let's see how image augmentation works in this one.

When training a CNN to recognize particular objects in images, you want your CNN to be able to detect these objects regardless of their size or position in the image.
For example, suppose we want to train our CNN to recognize dogs in images.

In this case, we want our CNN to recognize whether a dog is in an image regardless of how big the dog is, whether it is in the middle of the picture or in the left-hand corner, whether it is at an angle, or whether we can only see part of it.

Therefore, in the ideal case, you want your CNN to see all these examples during training. If you're lucky enough to have a big training set with many different examples, your CNN will perform very well and will be less likely to overfit.

However, it is not uncommon to find yourself working with a training set that doesn't have a lot of different examples, in which case your CNN is likely to suffer from overfitting and won't generalize well to data it hasn't seen before. This problem can be mitigated by using a technique called image augmentation.

Image augmentation creates new training images by applying a number of random transformations to the images in the original training set.

For example, we can take an image from our original training set and create a new one by applying a random rotation, a horizontal flip, or a random zoom.
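
Here's a minimal sketch of this idea using Keras' `ImageDataGenerator`. The directory name `train_dir`, the image size, and the specific parameter values are just placeholder assumptions for illustration:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Generator that applies random transformations on the fly, so the CNN
# sees a slightly different version of each image every epoch.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,      # normalize pixel values to [0, 1]
    rotation_range=40,      # random rotation of up to 40 degrees
    horizontal_flip=True,   # random left-right flip
    zoom_range=0.2,         # random zoom of up to 20%
)

# Hypothetical layout: train_dir has one sub-folder per class,
# e.g. train_dir/dogs and train_dir/not_dogs.
train_generator = train_datagen.flow_from_directory(
    "train_dir",
    target_size=(150, 150),  # resize every image to 150x150
    batch_size=32,
    class_mode="binary",
)
```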

By adding these new transformed images to our training set, we are ensuring that our CNN sees a lot of different examples during training (a quick training sketch follows below). Consequently, our CNN will generalize better to unseen data and avoid overfitting. In the next lesson, we will learn about dropout, another technique that can be used to prevent overfitting.
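
As a rough sketch of how the augmented generator plugs into training, here is a tiny example that reuses `train_generator` from above. The model architecture and epoch count are made-up assumptions, not something prescribed by this post:

```python
import tensorflow as tf

# A small illustrative CNN for a binary dog / not-dog classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu",
                           input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # dog vs. not-dog
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Each epoch streams freshly transformed images from train_generator,
# so the network rarely sees the exact same pixels twice.
history = model.fit(train_generator, epochs=15)
```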
