<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Olaoyo Michael</title>
    <description>The latest articles on DEV Community by Olaoyo Michael (@sirmike).</description>
    <link>https://dev.to/sirmike</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1070101%2F2fbe230d-a759-4f54-b870-0c034a53b493.png</url>
      <title>DEV Community: Olaoyo Michael</title>
      <link>https://dev.to/sirmike</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sirmike"/>
    <language>en</language>
    <item>
      <title>Naturals Detection Using Python.</title>
      <dc:creator>Olaoyo Michael</dc:creator>
      <pubDate>Sun, 23 Apr 2023 13:09:47 +0000</pubDate>
      <link>https://dev.to/sirmike/naturals-detection-using-python-39hf</link>
      <guid>https://dev.to/sirmike/naturals-detection-using-python-39hf</guid>
<description>&lt;p&gt;The earth is blessed with lots of natural phenomena such as forests, glaciers, mountains and seas. In this article, I'll take you through how to use computer vision to detect five of these scene classes: Forests, Glaciers, Mountains, Seas and Buildings. We will build on a pre-trained model called VGG16, a convolutional neural network that is commonly used for transfer learning.&lt;/p&gt;

&lt;p&gt;Now let’s get started with the task of detecting these natural scenes within an image. The most challenging part of this project is finding a dataset with enough images to train our neural network. &lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Datasets for the Neural Network
&lt;/h2&gt;

&lt;p&gt;To get the dataset, we will use a Chrome extension that zips and downloads all the images returned by a Google image search. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First, we head to Google, search for Forest, then use the Chrome zip-downloader extension to download all the forest images.&lt;/li&gt;
&lt;li&gt;Next, we go to Google, search for Glaciers, then use the same extension to download all the glacier images.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We continue in that manner until we have the five data folders needed to train the neural network for classification. &lt;br&gt;
After getting all five folders (Forest, Glacier, Sea, Buildings and Mountains), we save them into a single parent folder which holds all five dataset folders.&lt;/p&gt;
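&lt;p&gt;As a quick sanity check after downloading, we can count the images in each class folder before training. This is just a sketch: &lt;code&gt;count_images&lt;/code&gt; is a helper name of my own, and the demo builds a throwaway directory rather than pointing at your real dataset path.&lt;/p&gt;

```python
import os
import tempfile

def count_images(data_dir, exts=('.jpg', '.jpeg', '.png', '.bmp')):
    """Return {class_folder: number_of_image_files} for a dataset directory."""
    counts = {}
    for folder in sorted(os.listdir(data_dir)):
        path = os.path.join(data_dir, folder)
        if os.path.isdir(path):
            counts[folder] = sum(
                1 for f in os.listdir(path) if f.lower().endswith(exts)
            )
    return counts

# Demo on a throwaway directory that mimics the five-class layout
root = tempfile.mkdtemp()
for cls in ['Buildings', 'Forest', 'Glacier', 'Mountains', 'Sea']:
    os.makedirs(os.path.join(root, cls))
    open(os.path.join(root, cls, 'img1.jpg'), 'w').close()

print(count_images(root))  # each class folder should report 1 image here
```

If one class has far fewer images than the others, the model will be biased toward the larger classes, so this is worth checking before training.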
&lt;h2&gt;
  
  
  Removing Dodgy Images
&lt;/h2&gt;

&lt;p&gt;When we download images from Google, the extension grabs some images that we don't want to feed into our network. These are called outliers. They tend to hurt the learning of our model and reduce its accuracy, so we need to get rid of them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Import the libraries and Modules
from PIL import Image
import os
import numpy as np
from sklearn.model_selection import train_test_split
import cv2
import tensorflow as tf
import matplotlib.pyplot as plt
import imghdr

# Loading into a path
data_dir = 'The path to which the folder that contains all five folders to the dataset is' 
image_exts = ['jpg', 'bmp', 'png', 'jpeg']
for image_class in os.listdir(data_dir):
    for image in os.listdir(os.path.join(data_dir, image_class)):
        image_path = os.path.join(data_dir, image_class, image)
        try:
            img = cv2.imread(image_path)
            tip = imghdr.what(image_path)
            if img is None or tip not in image_exts:
                print('Image not in exts list {}'.format(image_path))
                os.remove(image_path)
        except Exception as e:
            print('Issues with image {}'.format(image_path))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This block of code removes the outliers and dodgy images.&lt;/p&gt;

&lt;h2&gt;
  
  
  Load and preprocess the Data
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;img_size = 128
def load_data():
    training_data = []
    for category in CATEGORIES:
        path = os.path.join(DATADIR, category)
        class_num = CATEGORIES.index(category)
        for img in os.listdir(path):
            try:
                img_array = cv2.imread(os.path.join(path, img), cv2.IMREAD_COLOR)
                new_array = cv2.resize(img_array, (img_size, img_size))
                training_data.append([new_array, class_num])
            except Exception as e:
                pass
    x = np.array([i[0] for i in training_data]) / 255.0
    y = np.array([i[1] for i in training_data])
    y = tf.keras.utils.to_categorical(y, len(CATEGORIES))
    x_train, x_val, y_train, y_val = train_test_split(x, y, test_size=0.2, random_state=42)
    return x_train, x_val, y_train, y_val
x_train, x_val, y_train, y_val = load_data()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Creating the Model
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load the pre-trained VGG16 model
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(img_size, img_size, 3))

# Freeze the pre-trained layers
for layer in base_model.layers:
    layer.trainable = False

# Add new trainable layers on top of the pre-trained model
x = base_model.output
x = Flatten()(x)
x = Dense(1024, activation='relu')(x)
output = Dense(len(CATEGORIES), activation='softmax')(x)

# Define the new model
model = tf.keras.models.Model(inputs=base_model.input, outputs=output)

# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Train the model
model.fit(x_train, y_train, epochs=10, validation_data=(x_val, y_val))

# Save the model so we can reload it later for predictions
model.save('my_model.h5')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Evaluating the model
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;test_loss, test_acc = model.evaluate(x_val, y_val, verbose=0)

# Print the test loss and accuracy
print('Test loss:', test_loss)
print('Test accuracy:', test_acc)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Using the model to predict new images
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import numpy as np
from PIL import Image
from keras.models import load_model

# Load your trained CNN model
model = load_model('my_model.h5')

# Load the image you want to classify
image = Image.open(r'C:\Users\mine\Desktop\natural\mountains\images344.jpg') # Now you can use any image the model hasn't seen before to see the prediction

# Preprocess the image
image = image.resize((128, 128))
image = np.array(image) / 255.0
image = image.reshape((1, 128, 128, 3))

# Make a prediction
preds = model.predict(image)

# Find the index of the highest probability value
pred_class = np.argmax(preds)

# Map the predicted index to the corresponding class label
# (this order must match the order the classes were encoded in during training)
classes = ['buildings', 'forest', 'sea', 'glaciers', 'mountains']
pred_label = classes[pred_class]

# Display the result
print('The predicted class of the image is:', pred_label)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
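&lt;p&gt;To make that last mapping step concrete, here is a toy run with made-up softmax probabilities. The numbers are invented for illustration; only the argmax-to-label lookup mirrors the code above.&lt;/p&gt;

```python
import numpy as np

# Toy softmax output for one image over the five classes
classes = ['buildings', 'forest', 'sea', 'glaciers', 'mountains']
preds = np.array([[0.05, 0.10, 0.02, 0.03, 0.80]])

pred_class = np.argmax(preds)     # index of the highest probability
pred_label = classes[pred_class]  # map the index back to a label
print(pred_label)  # mountains
```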



&lt;p&gt;I hope you followed along and that the model predicted well. If you have any questions, feel free to ask in the comment section and I'll be very happy to help. &lt;/p&gt;

&lt;p&gt;Check out the full code and results on my &lt;a href="https://github.com/Mykel4uu"&gt;GitHub account&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I deployed the model on Streamlit. You can try it out &lt;a href="https://avikumart-image-classification-web-app-rms-app-o1tcw1.streamlit.app/"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>computervision</category>
    </item>
    <item>
      <title>Flower Recognition with Python</title>
      <dc:creator>Olaoyo Michael</dc:creator>
      <pubDate>Sun, 23 Apr 2023 11:54:17 +0000</pubDate>
      <link>https://dev.to/sirmike/flower-recognition-with-python-3k2j</link>
      <guid>https://dev.to/sirmike/flower-recognition-with-python-3k2j</guid>
<description>&lt;p&gt;Flower recognition uses the edge and color properties of flower photos to categorize flowers. In this article, I'll introduce you to a Python machine learning project focused on flower recognition.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is Flower Recognition?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The planet is home to numerous flower species. Some species, like roses, come in a variety of hues. The names and details of every flower are tough to recall, and people can mistakenly identify similar-looking floral species.&lt;br&gt;
For instance, despite having similar names and flower forms, white champaka and champak have different colors and petal lengths.&lt;/p&gt;

&lt;p&gt;Currently, the only way to identify a specific flower or flower species is to look it up based on one's own knowledge and professional experience, and that expertise is not always available.&lt;br&gt;
Online, the main option is a keyword search, but even then the searcher would still need to come up with suitably relevant keywords, which is hard to do without already knowing the flower.&lt;/p&gt;

&lt;p&gt;This post will demonstrate how to use Python to recognize flowers using machine learning.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Machine Learning Project on Flower Recognition with Python&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The dataset I'm using for this flower recognition task consists of 4,242 flower photos, gathered from Yandex, Google and Flickr images. This collection can be used to identify the flowers in an image.&lt;/p&gt;

&lt;p&gt;The photographs are grouped into five categories: chamomile, tulip, rose, sunflower and dandelion, with around 800 images per class. The images have a resolution of only about 320 x 240 pixels, which is not very high, and they come in varying proportions rather than being scaled to one size.&lt;br&gt;
Now let's import the required Python libraries to begin the flower recognition task: &lt;a href="https://www.kaggle.com/datasets/alxmamaev/flowers-recognition" rel="noopener noreferrer"&gt;Download Dataset&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import cv2
import numpy as np

#Encoding and Split data into Train/Test Sets
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.utils import to_categorical
from sklearn.model_selection import train_test_split

#Tensorflow Keras CNN Model
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, Activation, Conv2D, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam,SGD,Adagrad,Adadelta,RMSprop

#Plot Images
import matplotlib.pyplot as plt


folder_dir = 'dataset path' # Choose your path
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the next step is to read each image in the data and create a label for each with the name of the folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data = []
label = []

SIZE = 128 #Resize each image to 128x128

for folder in os.listdir(folder_dir):
    for file in os.listdir(os.path.join(folder_dir, folder)):
        if file.endswith("jpg"):
            label.append(folder)
            img = cv2.imread(os.path.join(folder_dir, folder, file))
            img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
            im = cv2.resize(img_rgb, (SIZE,SIZE))
            data.append(im)
        else:
            continue
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let’s convert the data into numerical values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data_arr = np.array(data)
label_arr = np.array(label)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let’s use the Label encoder and normalize the data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;encoder = LabelEncoder()
y = encoder.fit_transform(label_arr)
y = to_categorical(y,5)
X = data_arr/255
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
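&lt;p&gt;To see what the encoding step produces, here is the same transformation sketched with plain NumPy on a handful of toy labels: &lt;code&gt;np.unique&lt;/code&gt; with &lt;code&gt;return_inverse&lt;/code&gt; mimics &lt;code&gt;LabelEncoder&lt;/code&gt;, and an identity-matrix lookup mimics &lt;code&gt;to_categorical&lt;/code&gt;.&lt;/p&gt;

```python
import numpy as np

labels = np.array(['rose', 'tulip', 'rose', 'daisy'])

# LabelEncoder assigns integer codes in sorted order of the unique labels
classes, y = np.unique(labels, return_inverse=True)
print(classes)  # ['daisy' 'rose' 'tulip']
print(y)        # [1 2 1 0]

# One-hot encode, like to_categorical: pick row y[i] of the identity matrix
one_hot = np.eye(len(classes))[y]
print(one_hot.shape)  # (4, 3)
```

Each row of `one_hot` has a single 1 in the column of its class, which is the format `categorical_crossentropy` expects.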



&lt;p&gt;The next step is to split the dataset into 80% training and 20% test sets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=10)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
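&lt;p&gt;With roughly 4,242 images, an 80/20 split leaves about 3,400 images for training and 850 for testing. Here is a tiny check of the split proportions on dummy stand-in arrays (not the flower data itself):&lt;/p&gt;

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy stand-ins for X and y, just to check the split proportions
X = np.arange(10).reshape(10, 1)
y = np.arange(10)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=10)
print(len(X_train), len(X_test))  # 8 2
```

Fixing `random_state` makes the split reproducible, so repeated runs train and evaluate on the same images.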



&lt;p&gt;Now let’s build a neural network model for the task of Flower Recognition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model = Sequential()
model.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'Same',activation ='relu', input_shape = (SIZE,SIZE,3)))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(filters = 128, kernel_size = (3,3),padding = 'Same',activation ='relu'))
model.add(Conv2D(filters = 128, kernel_size = (3,3),padding = 'Same',activation ='relu'))
model.add(Conv2D(filters = 128, kernel_size = (3,3),padding = 'Same',activation ='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dropout(rate=0.5))
model.add(Dense(5, activation = "softmax"))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before compiling the model we need to create more training images to prevent overfitting:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;datagen = ImageDataGenerator(
        rotation_range=20,
        zoom_range = 0.20,
        width_shift_range=0.3,
        height_shift_range=0.3,
        horizontal_flip=True,
        vertical_flip=True)

datagen.fit(X_train)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let’s compile the neural network model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model.compile(optimizer=Adam(lr=0.0001),loss='categorical_crossentropy',metrics=['accuracy'])
batch_size=32
epochs=64
history = model.fit_generator(datagen.flow(X_train,y_train, batch_size=batch_size),
                              epochs = epochs,
                              validation_data = (X_test,y_test),
                              verbose = 1)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let’s test the model to see if it recognizes flowers properly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;categories = np.sort(os.listdir(folder_dir))
fig, ax = plt.subplots(6,6, figsize=(25, 40))

for i in range(6):
    for j in range(6):
        k = int(np.random.random_sample() * len(X_test))
        if(categories[np.argmax(y_test[k])] == categories[np.argmax(model.predict(X_test)[k])]):
            ax[i,j].set_title("TRUE: " + categories[np.argmax(y_test[k])], color='green')
            ax[i,j].set_xlabel("PREDICTED: " + categories[np.argmax(model.predict(X_test)[k])], color='green')
            ax[i,j].imshow(np.array(X_test)[k].reshape(SIZE, SIZE, 3), cmap='gray')
        else:
            ax[i,j].set_title("TRUE: " + categories[np.argmax(y_test[k])], color='red')
            ax[i,j].set_xlabel("PREDICTED: " + categories[np.argmax(model.predict(X_test)[k])], color='red')
            ax[i,j].imshow(np.array(X_test)[k].reshape(SIZE, SIZE, 3), cmap='gray')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0p8nyfnfjowmr2hb195l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0p8nyfnfjowmr2hb195l.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope you liked this article on a machine learning project for flower recognition with Python. Check out the full code and results on my &lt;a href="https://github.com/Mykel4uu" rel="noopener noreferrer"&gt;GitHub account&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>computervision</category>
    </item>
  </channel>
</rss>
