- filter == kernel == weight
- the input to a Conv2D layer can be either an image or a feature map from a previous layer
- applying a filter == dot product between the filter and a patch of the input
- no. of channels in the filter/input == color scheme of the image:
  - 3 channels => RGB
  - 1 channel => grayscale
In the context of the Conv2D layer, a filter (also known as a kernel or a weight) is a small matrix of numerical values that is used to extract features from an input image or feature map. During the convolution operation, the filter is applied to each position of the input image or feature map, and a dot product is computed between the filter and the corresponding patch of the input. The result is a single scalar value, which is used to generate an output feature map.
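The sliding dot product described above can be sketched with plain NumPy. This is a minimal toy example, not any library's actual Conv2D implementation: a hypothetical 5x5 grayscale image is convolved with a 3x3 vertical-edge filter, and each position of the output feature map is one scalar dot product.

```python
import numpy as np

# A toy 5x5 single-channel (grayscale) "image".
image = np.arange(25, dtype=float).reshape(5, 5)

# A 3x3 filter (kernel) -- a Sobel-like vertical edge detector.
kernel = np.array([[1., 0., -1.],
                   [2., 0., -2.],
                   [1., 0., -1.]])

def conv2d(img, k):
    """Valid convolution: slide the kernel over the image and take
    the dot product between the kernel and each patch of the image."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = img[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * k)  # one scalar per position
    return out

feature_map = conv2d(image, kernel)
print(feature_map.shape)  # (3, 3) -- the output feature map
```

Note how a 5x5 input shrinks to a 3x3 feature map: the 3x3 filter only fits in 3 positions along each axis (no padding, stride 1).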
Filters are typically learned during the training process of a neural network. The weights of the filters are adjusted to minimize the difference between the predicted output of the network and the true output, using a loss function and an optimization algorithm such as stochastic gradient descent.
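The learning step can be sketched the same way. Below, a filter's 9 weights are adjusted by hand-rolled stochastic gradient descent to minimize a squared-error loss against a hypothetical "true" filter; the target filter and learning rate are invented for illustration, not taken from any real network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target filter the network "should" learn.
true_k = np.array([[1., 0., -1.],
                   [2., 0., -2.],
                   [1., 0., -1.]])

# Randomly initialised 3x3 filter -- its 9 weights are the parameters.
k = rng.normal(size=(3, 3))

lr = 0.01
for step in range(500):
    patch = rng.normal(size=(3, 3))   # random 3x3 input patch
    y_true = np.sum(patch * true_k)   # "true" scalar output
    y_pred = np.sum(patch * k)        # current filter's scalar output
    # Squared-error loss L = (y_pred - y_true)^2; its gradient with
    # respect to the filter weights is 2 * (y_pred - y_true) * patch.
    grad = 2.0 * (y_pred - y_true) * patch
    k -= lr * grad                    # SGD weight update

print(np.max(np.abs(k - true_k)))    # small after training
```

Each update nudges the weights in the direction that reduces the prediction error, which is exactly what a framework's optimizer does to every filter in a Conv2D layer.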
In conclusion,
filters contain the weights of the edges/connections between neurons of adjacent layers (L1 => L2)