<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Miguel Perez</title>
    <description>The latest articles on DEV Community by Miguel Perez (@miguelito929).</description>
    <link>https://dev.to/miguelito929</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1217571%2Fc157bff3-c715-4d5b-a2d2-2828e1edd47f.jpeg</url>
      <title>DEV Community: Miguel Perez</title>
      <link>https://dev.to/miguelito929</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/miguelito929"/>
    <language>en</language>
    <item>
      <title>All in 1 Image Classification using CNN's</title>
      <dc:creator>Miguel Perez</dc:creator>
      <pubDate>Thu, 30 Nov 2023 06:13:38 +0000</pubDate>
      <link>https://dev.to/miguelito929/all-in-1-image-classification-using-convolutional-neural-networks-1895</link>
      <guid>https://dev.to/miguelito929/all-in-1-image-classification-using-convolutional-neural-networks-1895</guid>
      <description>&lt;p&gt;In the realm of image classification, &lt;strong&gt;Convolutional Neural Networks&lt;/strong&gt; (CNNs) have established themselves as a pinnacle of success. Unlike traditional neural networks such as Multi-Layer Perceptrons (MLPs), CNNs are uniquely tailored for image data, using a specialized architecture that finds local patterns within images. In this blog we are gonig to understand and use CNNs, as well as seeing how they stack up against MLPs that have been fed extracted features from preprocessed data. We are only going in depth on CNNs, for specifics on the MLPs used for this comparison go to &lt;a href="https://dev.to/miguelito929/classical-ml-vs-neural-networks-image-classification-25df"&gt;Classical ML vs Neural Networks&lt;/a&gt; [image 1]&lt;/p&gt;

&lt;h2&gt;
  
  
  Differentiation from Classical Methods and MLPs
&lt;/h2&gt;

&lt;p&gt;CNNs surpass classical image classification methods and MLPs primarily due to their inherent capacity to learn hierarchical spatial representations directly from raw pixel data. While classical methods rely heavily on manually engineered feature extraction, CNNs autonomously learn relevant features through their convolutional layers, eliminating the need for hand-crafted feature extraction during preprocessing.&lt;/p&gt;

&lt;p&gt;The local connectivity and weight sharing in CNNs allow them to capture intricate spatial patterns across the entire image; in contrast, MLPs treat input features as independent and lack the ability to capture these patterns.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture and Functionality
&lt;/h2&gt;

&lt;p&gt;CNNs operate by employing specialized layers designed to extract intricate features from input images: convolutional, pooling, and fully connected layers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Convolutional Layers:&lt;/strong&gt; These layers consist of learnable filters or kernels applied across the input image. Each filter identifies specific patterns, generating feature maps that highlight relevant features such as edges, textures, and shapes. The network learns these filters iteratively during training, enhancing its ability to detect hierarchical features.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pooling Layers:&lt;/strong&gt; Following convolutional layers, pooling layers reduce spatial dimensions while retaining essential information. Max pooling, for instance, selects the maximum value from each pool window, downsampling the feature maps and enhancing computational efficiency. This process helps in capturing the most relevant information while reducing computational load.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fully Connected Layers:&lt;/strong&gt; These layers, typically at the end of the network, interpret the high-level features extracted by the previous layers for classification. Each neuron in the fully connected layers is connected to all neurons in the preceding layer, amalgamating the learned features to make predictions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0v235ixw522u12ogaxof.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0v235ixw522u12ogaxof.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Metrics
&lt;/h2&gt;

&lt;p&gt;Given that we are dealing with classification, and taking into account that all 3 of our datasets are balanced, meaning that every class is evenly represented in terms of observations, &lt;strong&gt;accuracy&lt;/strong&gt; will be our main metric for comparing model performance.&lt;/p&gt;

&lt;p&gt;Accuracy measures the overall correctness of the model's predictions across all classes. It is calculated as the number of correct predictions divided by the total number of observations, i.e. (True Positives + True Negatives) / Total Observations in the binary case.&lt;/p&gt;

&lt;h2&gt;
  
  
  Datasets
&lt;/h2&gt;

&lt;p&gt;First off we have the &lt;strong&gt;Fashion-MNIST Dataset&lt;/strong&gt;, which comprises 70,000 grayscale images across 10 classes, meticulously scaled and normalized. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qfzg8lfbk78dvpohs5m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qfzg8lfbk78dvpohs5m.png" alt="First images in Fashion-MNIST dataset"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The second dataset has approximately 2,000 high-definition images of different landscapes across Mexico obtained from &lt;strong&gt;satellite captures&lt;/strong&gt; and categorized into six classes. Given that these are HD colored images, I performed feature extraction for our MLP, while for the CNN I only resized the images to a lower resolution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxtyyd1qafcs7n971vypn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxtyyd1qafcs7n971vypn.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The third and final dataset is comprised of blood images used to classify &lt;strong&gt;white blood cells&lt;/strong&gt;. Given that these images have HD resolution and color, I also performed feature extraction for the MLP in the same way as for the satellite dataset, while for the CNN I only resized the images to a lower resolution. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fl8c92s6lvr1rf524qy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fl8c92s6lvr1rf524qy.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;More detailed information on the datasets and feature extraction here:&lt;br&gt;
&lt;a href="https://dev.to/miguelito929/classical-ml-vs-neural-networks-image-classification-25df"&gt;Classical ML vs Neural Networks&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Methodology and Architecture
&lt;/h2&gt;

&lt;p&gt;The architectures used for all 3 datasets followed the same basic structure: a sequence of convolutional and pooling layers followed by fully connected layers. After exploring different configurations within the hardware limitations of my personal laptop (sorry about that), this was the structure of the largest network (used with the satellite dataset):&lt;/p&gt;

&lt;p&gt;The network started with a &lt;strong&gt;convolutional layer&lt;/strong&gt; with 32 filters of size (3, 3), employing the Leaky ReLU activation function and a max pooling layer to reduce spatial dimensions. Then there are 2 convolutional layers with 64 filters each, also followed by pooling layers.&lt;/p&gt;

&lt;p&gt;After the convolutional and pooling layers, the network utilized a &lt;strong&gt;flattening layer&lt;/strong&gt; to transform the multidimensional feature maps into a single vector. &lt;/p&gt;

&lt;p&gt;Following this there were 4 &lt;strong&gt;dense layers&lt;/strong&gt; (fully connected) containing 256 neurons each, with the Leaky ReLU activation function and incorporating dropout layers after each dense layer (with a dropout rate of 0.5) to prevent overfitting. The final output layer comprised 6 neurons (representing the 6 classes) activated by the softmax function for classification.&lt;/p&gt;
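&lt;p&gt;To make the layer sizes concrete, here is a quick shape walk-through of that structure in plain Python. The 128x128 RGB input is an assumption for illustration only; the exact resize used isn't stated above:&lt;/p&gt;

```python
# Shape bookkeeping for the architecture described above, assuming a
# 128x128 RGB input (illustrative; the actual resize is not specified).
def conv_out(size, kernel=3, stride=1):
    # Output side length of a "valid" convolution.
    return (size - kernel) // stride + 1

def pool_out(size, window=2):
    # Output side length of non-overlapping max pooling.
    return size // window

size, channels = 128, 3
for filters in (32, 64, 64):          # Conv(3x3) then MaxPool(2x2), three times
    size = pool_out(conv_out(size))
    channels = filters
flat = size * size * channels         # flattening layer: one long vector
print(size, channels, flat)           # 14 64 12544
# That 12,544-dimensional vector then feeds the four 256-neuron dense
# layers (each followed by dropout 0.5) and the 6-neuron softmax output.
```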

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;Across the Blood, Satellite, and Fashion datasets, the CNN achieved accuracies of 96.5%, 90%, and 86%, respectively, versus very similar scores by the MLP: 96%, 89.3%, and 86.1%. These results show that CNNs regularly outperform or match traditional NN models, which require extensive manual feature extraction to perform well. This demonstrates the CNNs' advantage over traditional methods, given that they learn these features on their own, reducing manual workload. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5u92m3myb4obchz88w3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5u92m3myb4obchz88w3.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, the evolution of Convolutional Neural Networks has significantly transformed the landscape of image classification. Their architecture, tailored for image data, empowers them to discern complex patterns autonomously, surpassing traditional methods reliant on manual feature engineering and establishing CNNs as the leading model in image analysis.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://medium.com/@draj0718/convolutional-neural-networks-cnn-architectures-explained-716fb197b243" rel="noopener noreferrer"&gt;https://medium.com/@draj0718/convolutional-neural-networks-cnn-architectures-explained-716fb197b243&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Images:&lt;br&gt;
&lt;a href="https://saturncloud.io/blog/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way/" rel="noopener noreferrer"&gt;https://saturncloud.io/blog/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cnn</category>
      <category>neuralnetworks</category>
    </item>
    <item>
      <title>Self Organizing Maps for Image Classification</title>
      <dc:creator>Miguel Perez</dc:creator>
      <pubDate>Thu, 30 Nov 2023 04:41:16 +0000</pubDate>
      <link>https://dev.to/miguelito929/self-organizing-maps-for-image-classification-2ja1</link>
      <guid>https://dev.to/miguelito929/self-organizing-maps-for-image-classification-2ja1</guid>
      <description>&lt;h2&gt;
  
  
  Understanding Self-Organizing Maps
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Self-Organizing Maps&lt;/strong&gt; are a powerful unsupervised learning tool, particularly in the realm of image classification. Unlike traditional neural networks, SOMs are distinctive in their ability to preserve the topological properties of input data in a &lt;strong&gt;lower-dimensional space&lt;/strong&gt;. This trait allows them to capture intricate relationships and patterns within datasets without the need for labeled data during training, making them especially valuable in exploratory data analysis and pattern recognition.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture and Functionality
&lt;/h2&gt;

&lt;p&gt;SOMs can be represented as a grid of neurons in a lower-dimensional space. These neurons organize themselves in a way that neighboring neurons exhibit similarity in response to input. &lt;/p&gt;

&lt;p&gt;During the training process, when an input vector is introduced to the SOM, each neuron calculates its similarity or distance to the input vector. The neuron with the smallest distance or &lt;strong&gt;highest similarity&lt;/strong&gt; to the input vector is identified as the &lt;strong&gt;winner neuron&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FooU7DBy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/26mv0gt80pn1fl2ft781.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FooU7DBy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/26mv0gt80pn1fl2ft781.png" alt="Image description" width="651" height="469"&gt;&lt;/a&gt; [image 1]&lt;/p&gt;

&lt;p&gt;Once the winner neuron is determined, the weights of this neuron and its neighboring neurons within a certain radius (according to the SOM's topology) are adjusted to align more closely with the input vector. This process facilitates the self-organization of the map, enabling similar input vectors to be mapped close to each other on the SOM grid.&lt;/p&gt;
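&lt;p&gt;A single training step can be sketched in NumPy as follows. This is an illustrative toy (the grid size, learning rate, and radius are arbitrary values of mine), not the code used for the experiments below:&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 5, 5, 3            # 5x5 map of 3-dimensional weight vectors
weights = rng.random((grid_h, grid_w, dim))

def som_step(weights, x, lr=0.5, radius=1.0):
    # 1. Winner selection: the neuron whose weight vector is nearest to x.
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # 2. Neighborhood update: pull the winner and nearby neurons toward x,
    #    weighted by a Gaussian over distance on the grid.
    rows, cols = np.indices((grid_h, grid_w))
    grid_dist2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    influence = np.exp(-grid_dist2 / (2 * radius ** 2))
    weights += lr * influence[:, :, None] * (x - weights)
    return bmu

x = np.array([0.9, 0.1, 0.1])
old = weights.copy()
bmu = som_step(weights, x)
print(bmu)                               # grid coordinates of the winner neuron
```

After the update, the winner's weight vector sits strictly closer to the input than before, which is exactly the self-organization mechanism described above.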

&lt;h2&gt;
  
  
  Advantages and Challenges
&lt;/h2&gt;

&lt;p&gt;The versatility of SOMs lies in their ability to handle complex and high-dimensional data while providing a visual representation of the relationships between inputs. However, the effectiveness of SOMs can be influenced by parameters such as grid size, learning rate, and neighborhood function. Selecting the number of clusters is another challenge that requires a solid understanding of the data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Datasets
&lt;/h2&gt;

&lt;p&gt;Before jumping into our results, I want to go over the datasets we're using. &lt;/p&gt;

&lt;p&gt;First off we have the &lt;strong&gt;Fashion-MNIST Dataset&lt;/strong&gt;, which comprises 70,000 grayscale images, meticulously scaled and normalized. It includes 60,000 images for training and 10,000 for testing, each depicting various fashion items categorized into 10 classes. These images, sized at 28x28 pixels, offer a diverse collection of wearable items, making it a valuable resource for machine learning tasks like image classification and pattern recognition. The dataset's size and organization make cross validation unnecessary, simplifying training.&lt;/p&gt;

&lt;p&gt;The classes present are the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;T-shirt&lt;/li&gt;
&lt;li&gt;Trouser&lt;/li&gt;
&lt;li&gt;Pullover&lt;/li&gt;
&lt;li&gt;Dress&lt;/li&gt;
&lt;li&gt;Coat&lt;/li&gt;
&lt;li&gt;Sandal&lt;/li&gt;
&lt;li&gt;Shirt&lt;/li&gt;
&lt;li&gt;Sneaker&lt;/li&gt;
&lt;li&gt;Bag&lt;/li&gt;
&lt;li&gt;Ankle Boot&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Dtw8aAnP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7qfzg8lfbk78dvpohs5m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Dtw8aAnP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7qfzg8lfbk78dvpohs5m.png" alt="First images in Fashion-MNIST dataset" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The second dataset has approximately 2,000 high-definition images of different landscapes across Mexico obtained from &lt;strong&gt;satellite captures&lt;/strong&gt;. Each image showcases distinct environmental settings categorized into six classes: Water, Forest, City, Agriculture, Desert, and Mountain. Given that these are HD colored images, we need to perform feature extraction in order to train our models. First we resized each image to 128x128 pixels, then we computed the color histograms (RGB) and concatenated them to represent the &lt;strong&gt;color distribution&lt;/strong&gt; in the image. Then we captured &lt;strong&gt;texture features&lt;/strong&gt; in the image by converting to grayscale and computing a Gray Level Co-occurrence Matrix (GLCM). These extracted features are concatenated into a single feature vector for each image along with its class (target variable).&lt;/p&gt;

&lt;p&gt;The classes present are the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Water&lt;/li&gt;
&lt;li&gt;Forest&lt;/li&gt;
&lt;li&gt;City&lt;/li&gt;
&lt;li&gt;Crops&lt;/li&gt;
&lt;li&gt;Desert&lt;/li&gt;
&lt;li&gt;Mountain&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PmM0a9U6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xtyyd1qafcs7n971vypn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PmM0a9U6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xtyyd1qafcs7n971vypn.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The third and final dataset is comprised of microscope images of blood, taken by our team with the purpose of classifying the different types of &lt;strong&gt;white blood cells&lt;/strong&gt; present. Given that these images share characteristics with the satellite dataset, such as HD resolution and color, we also need to perform feature extraction. In the same way as for the satellite dataset, we computed the color histograms and the GLCM to create our feature vectors for each image. &lt;/p&gt;

&lt;p&gt;The classes present are the following: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Neutrophils&lt;/li&gt;
&lt;li&gt;Monocytes&lt;/li&gt;
&lt;li&gt;Eosinophils&lt;/li&gt;
&lt;li&gt;Basophils&lt;/li&gt;
&lt;li&gt;Lymphocytes&lt;/li&gt;
&lt;li&gt;Erythroblasts&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g90MzIiQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4fl8c92s6lvr1rf524qy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g90MzIiQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4fl8c92s6lvr1rf524qy.png" alt="Image description" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Blood dataset&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K32FQI5a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e4znt2l88es2nu6djebc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K32FQI5a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e4znt2l88es2nu6djebc.png" alt="Image description" width="794" height="665"&gt;&lt;/a&gt;&lt;br&gt;
We can observe that each group is mostly restricted to its own defined area, which means that the SOM was able to separate each class and differentiate them accurately using their attributes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--H2wnsHGn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jwkp8e80lr6hz2z2cu3m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--H2wnsHGn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jwkp8e80lr6hz2z2cu3m.png" alt="Image description" width="800" height="509"&gt;&lt;/a&gt;&lt;br&gt;
Here we can further observe some of the relationships found amongst the classes. We can see that Erythroblasts (red) occupy some of the space dominated by Lymphocytes (purple) and Monocytes (brown), which indicates that they could be related in terms of their attributes. We also see some overlap between Basophils (blue) and Lymphocytes, which means the SOM found their observations to be similar.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fashion dataset&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--t4mJ2gDU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iuj53sj6d5n6jpkt81o9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--t4mJ2gDU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iuj53sj6d5n6jpkt81o9.png" alt="Image description" width="790" height="659"&gt;&lt;/a&gt;&lt;br&gt;
We can again observe that almost all groups are mostly restricted to their own defined areas, but now we can see that the SOM mixed classes 2, 4, and 6 (Fashion-MNIST's zero-indexed labels for pullover, coat, and shirt) in the same areas, which might indicate that the observations in those 3 classes are extremely similar. This makes sense since pullovers, coats, and shirts are all garments worn over the torso and have a very similar shape.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qtK4pq1R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wegbhcfx8uenasvb83nk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qtK4pq1R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wegbhcfx8uenasvb83nk.png" alt="Image description" width="800" height="503"&gt;&lt;/a&gt;&lt;br&gt;
Here we can see even more how the SOM relates these classes: sneakers (pink) and sandals (purple) are practically in the same area, which makes sense given that they are both footwear. We can also observe that, apart from the aforementioned relationship between pullovers, coats, and shirts, shirts (brown) also share area with t-shirts, which also makes perfect sense.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Satellite dataset&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hZ_W7ONN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hm6ospmb1yog86oyf9d9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hZ_W7ONN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hm6ospmb1yog86oyf9d9.png" alt="Image description" width="763" height="665"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we can see that the SOM struggled a bit more in differentiating between biomes; this was expected, as this dataset is by far the most complex in terms of classifying its images.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9hJZgj-J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fqoawmcbswz66oyej5rk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9hJZgj-J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fqoawmcbswz66oyej5rk.png" alt="Image description" width="800" height="509"&gt;&lt;/a&gt;&lt;br&gt;
We can see that, given the area covered, it found some relationship between Crops (red) and Mountain (brown), which makes sense given their brownish color palettes and similar features. It also found a close relationship between Water and Forest, which might be attributed to their blue/green color palettes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;Throughout the exploration of diverse datasets, including Fashion-MNIST, satellite landscape captures, and blood cell images, SOMs showcased their adaptability in discerning intricate patterns and relationships. The visual representations of SOM grids provided insights into class separations, overlaps, and associations within the datasets, illustrating the SOMs' capability to organize and differentiate data clusters.&lt;/p&gt;

&lt;p&gt;By leveraging their ability to find underlying structures within complex datasets, SOMs have proven to be a valuable asset in uncovering hidden insights and patterns, providing enhanced understanding and decision-making in many applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Achraf KHAZRI. (2019, August 7). Self Organizing Maps - Towards Data Science. Medium; Towards Data Science. &lt;a href="https://towardsdatascience.com/self-organizing-maps-1b7d2a84e065"&gt;https://towardsdatascience.com/self-organizing-maps-1b7d2a84e065&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Images:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://www.analyticsvidhya.com/blog/2021/09/beginners-guide-to-anomaly-detection-using-self-organizing-maps/"&gt;https://www.analyticsvidhya.com/blog/2021/09/beginners-guide-to-anomaly-detection-using-self-organizing-maps/&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>Classical ML vs Neural Networks: Image Classification</title>
      <dc:creator>Miguel Perez</dc:creator>
      <pubDate>Tue, 28 Nov 2023 03:24:02 +0000</pubDate>
      <link>https://dev.to/miguelito929/classical-ml-vs-neural-networks-image-classification-25df</link>
      <guid>https://dev.to/miguelito929/classical-ml-vs-neural-networks-image-classification-25df</guid>
      <description>&lt;p&gt;In this blog I am going to compare two approaches to image classification: &lt;strong&gt;support vector machine&lt;/strong&gt; (a classical machine learning method) vs &lt;strong&gt;multilayer perceptron&lt;/strong&gt; (neural network). &lt;/p&gt;

&lt;p&gt;Now, I am not going to use a convolutional neural network as that wouldn't be a fair fight given that CNNs automatically learn hierarchical features from raw pixel data, eliminating the need for explicit feature engineering. Rather, I am going to manually extract important features when needed and run the resulting dataset through both methods.&lt;/p&gt;

&lt;h2&gt;
  
  
  Metrics
&lt;/h2&gt;

&lt;p&gt;Given that we are dealing with classification, the metrics predominantly used to compare model performance are recall, precision and accuracy. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recall&lt;/strong&gt; measures the model's ability to correctly identify all relevant instances, specifically the ratio of correctly predicted positive observations to the total actual positives. It is calculated as True Positives / (True Positives + False Negatives). &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Precision&lt;/strong&gt; measures the accuracy of positive predictions made by the model, specifically the ratio of correctly predicted positive observations to the total predicted positives. It is calculated as True Positives / (True Positives + False Positives).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accuracy&lt;/strong&gt; measures the overall correctness of the model's predictions across all classes. It is calculated as (True Positives + True Negatives) / Total Observations. &lt;/p&gt;

&lt;p&gt;Taking into account that all 3 of our datasets are balanced, meaning that every class is evenly represented in terms of observations, accuracy will be our main metric for comparing model performance.&lt;/p&gt;
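&lt;p&gt;These three formulas translate directly into code. Here is a small self-contained sketch with toy labels (one-vs-rest counting for a single positive class):&lt;/p&gt;

```python
import numpy as np

def recall_precision_accuracy(y_true, y_pred, positive):
    # One-vs-rest counts for a single class of interest.
    tp = np.sum(np.logical_and(y_pred == positive, y_true == positive))
    fp = np.sum(np.logical_and(y_pred == positive, y_true != positive))
    fn = np.sum(np.logical_and(y_pred != positive, y_true == positive))
    recall = tp / (tp + fn)               # TP / (TP + FN)
    precision = tp / (tp + fp)            # TP / (TP + FP)
    accuracy = np.mean(y_true == y_pred)  # correct predictions / total, all classes
    return recall, precision, accuracy

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
recall, precision, accuracy = recall_precision_accuracy(y_true, y_pred, positive=1)
print(recall, precision, accuracy)
```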

&lt;h2&gt;
  
  
  Models
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Support Vector Machines&lt;/strong&gt; (SVM) constitute a powerful class of supervised learning models proficient at classification and regression tasks. The fundamental principle of SVM involves identifying an optimal hyperplane that separates classes in the input space. In the case of &lt;strong&gt;linear SVM&lt;/strong&gt;, this entails finding a hyperplane that maximizes the distance between the hyperplane and the nearest data points from different classes. Linear SVM is highly effective for linearly separable datasets, providing robustness against overfitting and demonstrating efficiency in high-dimensional spaces without requiring complex computations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_LF_fUYk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8xgezgzoqqftee2ibzjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_LF_fUYk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8xgezgzoqqftee2ibzjc.png" alt="Image description" width="800" height="605"&gt;&lt;/a&gt; [image 1]&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Radial Basis Function SVM&lt;/strong&gt; (RBF) extends the capability of SVM to handle non-linearly separable data. By employing a kernel trick, it transforms the data into a higher-dimensional space, enabling the creation of non-linear decision boundaries. Consider the figure below: at first you can’t draw a line that accurately separates the 2 classes, but if you transform those points into a higher dimension you can find a plane that achieves 100% separation. This flexibility in modeling intricate relationships makes RBF SVM suitable for diverse datasets with non-linear patterns.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bEn6lOgj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pr0rzzd1ehdtvulbgubt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bEn6lOgj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pr0rzzd1ehdtvulbgubt.png" alt="Image description" width="616" height="375"&gt;&lt;/a&gt; [image 2]&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multilayer Perceptron&lt;/strong&gt; (MLP) represents a class of artificial neural networks structured with multiple layers, including an input layer, one or more hidden layers, and an output layer. MLPs leverage non-linear activation functions (such as ReLU) within each neuron, allowing them to learn complex (often non-linear) relationships between inputs and outputs. Through forward propagation, data is processed through the network, and by utilizing backpropagation (a technique involving gradient descent) the weights are iteratively adjusted to minimize prediction errors. MLPs excel in feature learning, automatically extracting and representing essential features from raw data and reducing the need for explicit feature engineering. However, feature extraction is still needed in cases such as this one, where HD images yield a large enough number of features that the computational effort becomes too costly for a practical approach.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6ykoW2k6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ataeylnnihaejcdyr7rr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6ykoW2k6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ataeylnnihaejcdyr7rr.png" alt="Image description" width="800" height="492"&gt;&lt;/a&gt; [image 3]&lt;/p&gt;

&lt;h2&gt;
  
  
  Datasets
&lt;/h2&gt;

&lt;p&gt;Before jumping into our results, I want to go over the datasets we're using. &lt;/p&gt;

&lt;p&gt;First off we have the &lt;strong&gt;Fashion-MNIST dataset&lt;/strong&gt;, which comprises 70,000 grayscale images, scaled and normalized. It includes 60,000 images for training and 10,000 for testing, each depicting a fashion item from one of 10 classes. These images, sized at 28x28 pixels, offer a diverse collection of wearable items, making the dataset a valuable resource for machine learning tasks like image classification and pattern recognition. Its size and organization make cross-validation unnecessary, simplifying training.&lt;/p&gt;
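&lt;p&gt;As a quick sketch of the preprocessing this implies, 28x28 grayscale images are typically rescaled to [0, 1] and flattened into 784-dimensional vectors before being fed to a classifier (synthetic stand-in data below; the real dataset is loaded separately, e.g. via keras or fetch_openml):&lt;/p&gt;

```python
import numpy as np

# Fake batch of five 28x28 uint8 images standing in for Fashion-MNIST samples
rng = np.random.default_rng(42)
images = rng.integers(0, 256, size=(5, 28, 28), dtype=np.uint8)

# Normalize pixel intensities to [0, 1] and flatten each image into a
# 784-dimensional feature vector, the usual input shape for an MLP.
X = images.astype(np.float32) / 255.0
X = X.reshape(len(images), -1)
```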

&lt;p&gt;The classes present are the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;T-shirt&lt;/li&gt;
&lt;li&gt;Trouser&lt;/li&gt;
&lt;li&gt;Pullover&lt;/li&gt;
&lt;li&gt;Dress&lt;/li&gt;
&lt;li&gt;Coat&lt;/li&gt;
&lt;li&gt;Sandal&lt;/li&gt;
&lt;li&gt;Shirt&lt;/li&gt;
&lt;li&gt;Sneaker&lt;/li&gt;
&lt;li&gt;Bag&lt;/li&gt;
&lt;li&gt;Ankle Boot&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Dtw8aAnP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7qfzg8lfbk78dvpohs5m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Dtw8aAnP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7qfzg8lfbk78dvpohs5m.png" alt="First images in Fashion-MNIST dataset" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The second dataset has approximately 2,000 high-definition images of different landscapes across Mexico obtained from &lt;strong&gt;satellite captures&lt;/strong&gt;. Each image showcases a distinct environmental setting categorized into six classes: Water, Forest, City, Crops, Desert, and Mountain. Given that these are HD color images, we need to perform feature extraction in order to train our models. First we resized each image to 128x128 pixels, then we computed the color histograms (RGB) and concatenated them to represent the &lt;strong&gt;color distribution&lt;/strong&gt; in the image. Then we captured &lt;strong&gt;texture features&lt;/strong&gt; by converting each image to grayscale and computing a Gray Level Co-occurrence Matrix (GLCM). These extracted features are concatenated into a single feature vector for each image along with its class (target variable).&lt;/p&gt;
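&lt;p&gt;A simplified NumPy-only sketch of these steps follows. The histogram bin count, gray-level quantization, the single horizontal GLCM offset, and the two Haralick-style statistics are illustrative choices, not necessarily the exact settings used in our pipeline (which would more likely rely on OpenCV or scikit-image):&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)  # stand-in RGB image

# 1. Color distribution: per-channel histograms, concatenated and normalized.
bins = 32
color_hist = np.concatenate(
    [np.histogram(img[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
).astype(np.float64)
color_hist /= color_hist.sum()

# 2. Texture: grayscale conversion, then a simple GLCM counting co-occurrences
#    of quantized gray levels between each pixel and its right-hand neighbor.
gray = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
levels = 8
q = (gray / 256 * levels).astype(int)  # quantize to `levels` gray levels
glcm = np.zeros((levels, levels))
for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
    glcm[i, j] += 1
glcm /= glcm.sum()

# Example Haralick-style statistics derived from the GLCM
idx = np.arange(levels)
contrast = ((idx[:, None] - idx[None, :]) ** 2 * glcm).sum()
energy = (glcm ** 2).sum()

# 3. Final feature vector for this image (class label is stored alongside it)
features = np.concatenate([color_hist, [contrast, energy]])
```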

&lt;p&gt;The classes present are the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Water&lt;/li&gt;
&lt;li&gt;Forest&lt;/li&gt;
&lt;li&gt;City&lt;/li&gt;
&lt;li&gt;Crops&lt;/li&gt;
&lt;li&gt;Desert&lt;/li&gt;
&lt;li&gt;Mountain&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PmM0a9U6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xtyyd1qafcs7n971vypn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PmM0a9U6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xtyyd1qafcs7n971vypn.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The third and final dataset comprises microscope images of blood, taken by our team with the purpose of classifying the different types of &lt;strong&gt;white blood cells&lt;/strong&gt; present. Given that these images share characteristics with the satellite dataset, such as HD resolution and color, we also need to perform feature extraction. In the same way as with the satellite dataset, we computed the color histograms and the GLCM to create our feature vectors for each image. &lt;/p&gt;

&lt;p&gt;The classes present are the following: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Neutrophils&lt;/li&gt;
&lt;li&gt;Monocytes&lt;/li&gt;
&lt;li&gt;Eosinophils&lt;/li&gt;
&lt;li&gt;Basophils&lt;/li&gt;
&lt;li&gt;Lymphocytes&lt;/li&gt;
&lt;li&gt;Erythroblasts&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g90MzIiQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4fl8c92s6lvr1rf524qy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g90MzIiQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4fl8c92s6lvr1rf524qy.png" alt="Image description" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Methodology and Results
&lt;/h2&gt;

&lt;p&gt;Now that we know how our chosen models work, we can start with the training phase. We are using &lt;a href="https://scikit-learn.org/stable/index.html"&gt;scikit-learn&lt;/a&gt; to import both &lt;a href="https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html"&gt;SVC&lt;/a&gt; (support vector classification) and &lt;a href="https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html"&gt;MLPClassifier&lt;/a&gt; (multilayer perceptron classifier). &lt;/p&gt;

&lt;p&gt;Using each dataset, I first trained both linear and radial basis function (RBF) SVM models using cross-validation (except in the case of the fashion dataset, due to its high number of observations) and evaluated them using accuracy. &lt;/p&gt;
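&lt;p&gt;A minimal sketch of this SVM comparison, using synthetic data in place of our extracted features (hyperparameters are scikit-learn defaults here, not tuned values):&lt;/p&gt;

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Synthetic stand-in for the extracted feature vectors and labels
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

# Train and evaluate a linear-kernel and an RBF-kernel SVM with 5-fold CV
for kernel in ("linear", "rbf"):
    scores = cross_val_score(SVC(kernel=kernel), X, y, cv=5, scoring="accuracy")
    print(f"{kernel} SVM mean accuracy: {scores.mean():.3f}")
```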

&lt;p&gt;We can then proceed to train our MLP neural networks, using a parameter grid to find the best hyperparameters for each dataset; in the case of the Satellite and Blood datasets we used 5-fold cross-validation via scikit-learn's &lt;a href="https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html"&gt;StratifiedKFold&lt;/a&gt;.&lt;/p&gt;
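&lt;p&gt;A sketch of that hyperparameter search, with a small illustrative grid and synthetic data (not the exact grid or datasets used here):&lt;/p&gt;

```python
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.datasets import make_classification

# Synthetic stand-in for one of the extracted-feature datasets
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Small illustrative grid over architecture and L2 regularization strength
param_grid = {
    "hidden_layer_sizes": [(50,), (100,)],
    "alpha": [1e-4, 1e-3],
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
search = GridSearchCV(
    MLPClassifier(max_iter=500, random_state=0),
    param_grid, cv=cv, scoring="accuracy",
)
search.fit(X, y)
print(search.best_params_, f"best CV accuracy: {search.best_score_:.3f}")
```

&lt;p&gt;StratifiedKFold keeps each fold's class proportions close to the full dataset's, which matters when classes are imbalanced.&lt;/p&gt;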

&lt;p&gt;These were the results in terms of accuracy:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B_z19_dA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v3128uuv6h87denbgrdf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B_z19_dA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v3128uuv6h87denbgrdf.png" alt="Image description" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fashion dataset&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Linear SVM: 85%&lt;br&gt;
RBF SVM: 88%&lt;br&gt;
MLP Neural Network: 90%&lt;/p&gt;

&lt;p&gt;We can see that the RBF SVM outperformed the linear SVM, which might indicate that the dataset benefited from projection into a higher-dimensional space in order to become separable. Still, the neural network outperformed both models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Satellite dataset&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Linear SVM: 82%&lt;br&gt;
RBF SVM: 79%&lt;br&gt;
MLP Neural Network: 86%&lt;/p&gt;

&lt;p&gt;Both SVM models achieved a similar accuracy score, but the Neural Network substantially outperformed the other models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Blood dataset&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Linear SVM: 95%&lt;br&gt;
RBF SVM: 96%&lt;br&gt;
MLP Neural Network: 96%&lt;/p&gt;

&lt;p&gt;Here all models performed similarly, with high accuracy scores of 95-96%.&lt;/p&gt;

&lt;p&gt;Overall, the neural network was the top performer. On Fashion-MNIST, the RBF SVM achieved 88% accuracy while the MLP pulled ahead with 90%. Similarly, on the satellite dataset, the SVMs reached 79% to 82% accuracy while the MLP secured 86%. Only on the blood cell images were the SVMs and the MLP evenly matched, all scoring around 95% to 96%.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;When comparing Support Vector Machines (SVM) and Multilayer Perceptron (MLP) for image classification across diverse datasets – Fashion-MNIST, satellite landscapes, and blood cell images – the MLP consistently outperformed SVMs in accuracy. This might be attributed to the MLP's inherent ability to automatically learn complex relationships within the data. &lt;/p&gt;

&lt;p&gt;In contrast, SVMs, while powerful, often require extensive feature manipulation to achieve optimal results, particularly in datasets with intricate features and complex patterns such as the satellite image dataset.&lt;/p&gt;

&lt;p&gt;The MLP's consistent superiority in accuracy signifies its adaptability and efficiency in image classification, allowing it to outperform other models and positioning it as a potent choice in this domain.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Gandhi, R. (2018, June 7). Support Vector Machine — Introduction to Machine Learning Algorithms. Towards Data Science. &lt;a href="https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47"&gt;https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47&lt;/a&gt; &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Support Vector Machine (SVM) Algorithm. (2021, January 20). GeeksforGeeks. &lt;a href="https://www.geeksforgeeks.org/support-vector-machine-algorithm/"&gt;https://www.geeksforgeeks.org/support-vector-machine-algorithm/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Images:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://towardsdatascience.com/support-vector-machines-svm-clearly-explained-a-python-tutorial-for-classification-problems-29c539f3ad8"&gt;https://towardsdatascience.com/support-vector-machines-svm-clearly-explained-a-python-tutorial-for-classification-problems-29c539f3ad8&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://stats.stackexchange.com/questions/63881/use-gaussian-rbf-kernel-for-mapping-of-2d-data-to-3d"&gt;https://stats.stackexchange.com/questions/63881/use-gaussian-rbf-kernel-for-mapping-of-2d-data-to-3d&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.analyticsvidhya.com/blog/2020/12/mlp-multilayer-perceptron-simple-overview/"&gt;https://www.analyticsvidhya.com/blog/2020/12/mlp-multilayer-perceptron-simple-overview/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>neuralnetworks</category>
      <category>svm</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
