
Timothy Cummins


Using a Neural Network Pt.2

Introduction

In last week's blog we downloaded our dataset, developed an understanding of it so that we could decide what metrics we wanted to use, and set up our images so that they could be fed into our Neural Net. Today I will be continuing from there: I will go over the setup of the Neural Net itself and try my hardest to build a model that is both accurate and efficient enough that, if you are trying this yourself, it won't take forever to run.

Batches and Epochs

Back in the last blog, in the section "Prepping the Image", I talked about all of the conversions we were doing to the images, but I left out another important value I had assigned: batch_size. Batch size controls how many images (in our case) are fed to the model at one time. Every time a batch passes through, the neural net adjusts the weights on each node and then runs the next batch. The completion of all of the batches is called an epoch, and when the model completes an epoch it reshuffles the images into new batches and begins the next one. I bring this up now because I previously had my batch size set to 8; a small batch like that can reach good results in fewer epochs, but each epoch takes much longer to run. So we are going to adjust batch_size to 64.
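
To show where batch_size actually lives, here is a minimal sketch of the kind of generator we set up last week; the directory path and image_size value are placeholders, so use whatever you had from Pt. 1.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

batch_size = 64
image_size = 150  # placeholder; use the size you chose in Pt. 1

train_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    'chest_xray/train',                    # placeholder path to the training images
    target_size=(image_size, image_size),
    batch_size=batch_size,                 # 64 images per weight update
    class_mode='binary',                   # two classes, one binary label
    shuffle=True)                          # reshuffled at the start of every epoch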

Building the Network

For our model we will be using a few different layer types: dense, convolutional, batch normalization, and dropout layers.

Dense layers are the most common layer in neural networks; as the Keras description says, they are "Just your regular densely-connected NN layer". A dense layer finds associations between features by taking the dot product of the input tensor and a weight kernel that the layer learns during training (output = activation(dot(input, kernel) + bias)).

Convolutional layers slide a small filter (the kernel) across the image, comparing each pixel to its surrounding pixels to find local patterns in the image, as sketched below.
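
To make that concrete, here is a quick sketch (using a hypothetical 150x150 image) of how one convolution-plus-pooling pair shrinks the input:

import tensorflow as tf

x = tf.random.normal((1, 150, 150, 3))             # one fake RGB image
conv = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')
pool = tf.keras.layers.MaxPooling2D((2, 2))

y = conv(x)
print(y.shape)        # (1, 148, 148, 32) -- a 3x3 kernel trims a 1-pixel border
print(pool(y).shape)  # (1, 74, 74, 32)   -- pooling halves the height and width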

Batch normalization is the layer I understand the least, but I do know that it speeds up training by re-centering and re-scaling the inputs it passes to the next layer.

Then finally we have dropout layers, which do exactly what they sound like: during training they randomly drop out a percentage of the layer's outputs, which helps prevent overfitting.
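
If you want to see dropout in action, here is a tiny sketch: feed a tensor of ones through a Dropout layer in training mode, and roughly that fraction of the entries get zeroed (the survivors are scaled up to compensate):

import tensorflow as tf

drop = tf.keras.layers.Dropout(0.4)
x = tf.ones((1, 8))
print(drop(x, training=True))  # ~40% of entries become 0, the rest become 1/0.6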

So let's define our model and add our layers.

from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, BatchNormalization, Dropout, Flatten, Dense

model = Sequential()
# Five convolution blocks: each Conv2D looks for patterns, MaxPooling2D
# halves the height and width, and BatchNormalization keeps the activations
# stable. image_size carries over from the image prep in Pt. 1.
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(image_size, image_size, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())

model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())

model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())

model.add(Conv2D(96, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())
model.add(Dropout(0.3))

model.add(Conv2D(32, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization())
model.add(Dropout(0.4))

# Flatten the feature maps into a vector so the dense layers can take over.
model.add(Flatten())
model.add(Dense(128, activation='relu'))
# model.add(Dropout(0.3))
model.add(Dense(1, activation='sigmoid'))  # single output for our diagnosis

So as you can see, first we use our convolutional layers of different sizes to go through the image and pick out the patterns it contains, then we flatten the result down into a single vector, which allows us to throw in some dense layers, and finally our last layer has a single sigmoid output so we can get our diagnosis.
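
Before training, it's worth a quick call to model.summary() to check that the shapes flow through the network the way we expect:

# Prints each layer with its output shape and parameter count,
# which makes it easy to spot shape mistakes before training.
model.summary()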

Then finally we can compile our model. Since our last layer is a single sigmoid output, binary cross-entropy is the matching loss, and I am tracking recall alongside accuracy so we can see how well the model catches the positive cases.

model.compile(optimizer = 'adam',
              loss='binary_crossentropy',
              metrics=['accuracy',keras.metrics.Recall(name='recall')])

Select our number of epochs and run our model.

epochs = 25
steps_per_epoch = train_generator.n // batch_size    # batches per training epoch
validation_steps = test_generator.n // batch_size    # batches per validation pass

history = model.fit(train_generator,
                    steps_per_epoch=steps_per_epoch,
                    epochs=epochs,
                    validation_data=test_generator,
                    validation_steps=validation_steps,
                    class_weight=class_weight)  # class weights from our earlier setup


And like that we have a fitted neural network!
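
If you want to eyeball how training went, the history object we captured holds the per-epoch metrics. Here is a minimal sketch using matplotlib (an extra dependency, not imported anywhere above):

import matplotlib.pyplot as plt

# history.history maps each metric name to a list of per-epoch values.
plt.plot(history.history['recall'], label='train recall')
plt.plot(history.history['val_recall'], label='val recall')
plt.xlabel('epoch')
plt.legend()
plt.show()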

Conclusion

So now that we have our Neural Network fitted, we can save it by using model.save('pneu_model.h5'), and then we can continue making changes to our model without losing the one we have just fitted. Next week I will be going over tuning some hyperparameters to help get us the accuracy and recall we are looking for, as well as finishing up this series.
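
And whenever we need that fitted model back, it reloads in one line:

from tensorflow.keras.models import load_model

# Restores the architecture, weights, and optimizer state
# from the HDF5 file saved above.
model = load_model('pneu_model.h5')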
