
Aravind B N

DenseNet-inspired CNN architecture for cancer image classification

"Hello, I'm Aravind, and I work at Luxoft India as a junior software engineer. I'm excited to share this post, in which I've described the past project of a CNN architecture inspired by DenseNet and specifically designed for the essential task of cancer image classification.

The first section covers the basic ideas of CNNs, the second covers the DenseNet-inspired custom CNN architecture, and the final section, presented here, walks through the methodology and results.

Introduction

Histopathological image analysis is an important technique for the early detection of breast cancer. To make breast cancer detection from histopathological images more efficient, and to reduce the burden on doctors, we design a deep learning method to diagnose cancer from medical images. In this project we employ a Convolutional Neural Network (CNN), a deep learning method, to perform the recognition; features are extracted using the CNN variant known as DenseNet. The classification task has two classes: malignant (cancer) and benign (non-cancer).

Convolutional Neural Network

DenseNet-inspired custom CNN architecture

Methodology

[Figure: methodology overview diagram]

Exploring pre-existing datasets

  • After exploring a couple of datasets, we picked the Wisconsin breast cancer dataset for this part of the project.
  • This dataset is used to predict whether a tumour is benign or malignant.
  • Its features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass, and they describe characteristics of the cell nuclei present in the image.
  • The diagnosis, i.e., whether the nuclei are benign or malignant, is given in the 'diagnosis' column of the data frame.
  • Breast cancer is the most common type of cancer affecting women, and Invasive Ductal Carcinoma (IDC) is its most common form, representing 80% of all breast cancer diagnoses.
  • Detecting malignant/benign cell nuclei makes classification easier and faster, enabling early diagnosis and helping prevent fatalities.
  • The dataset has 569 records, each with 32 attributes; a minimal loading sketch follows below.
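For readers who want to follow along, here is a minimal sketch of loading this dataset. It assumes scikit-learn, whose built-in load_breast_cancer is a copy of the Wisconsin diagnostic dataset; the original post does not show its loading code.

```python
# Minimal sketch: load the Wisconsin diagnostic dataset via scikit-learn's
# built-in copy and inspect its size and class balance.
from sklearn.datasets import load_breast_cancer
import pandas as pd

data = load_breast_cancer()
df = pd.DataFrame(data.data, columns=data.feature_names)
df["diagnosis"] = data.target  # in sklearn's encoding: 0 = malignant, 1 = benign

print(df.shape)                        # (569, 31): 569 records
print(df["diagnosis"].value_counts())  # 357 benign vs. 212 malignant
```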

Comparing the accuracy of different machine learning models on this dataset

  1. Logistic Regression
  2. Support Vector Machine Kernel
  3. Gaussian Naive Bayes
  4. Random Forest classifier

1 Logistic Regression
Logistic regression is, in its basic form, a statistical model that uses the logistic function to model a binary dependent variable, although many more complex extensions exist. In regression analysis, logistic regression (or logit regression) estimates the parameters of a logistic model, a form of binary regression.

The logistic function is an S-shaped curve that can take any real-valued number and map it to a value between 0 and 1, but never exactly at those limits:

1 / (1 + e^-value)

Here e is the base of the natural logarithms (Euler's number, or the EXP() function in your spreadsheet) and value is the actual numerical value you want to transform.
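As an illustration, here is a minimal sketch of the logistic function together with a scikit-learn LogisticRegression fit on the Wisconsin data; the train/test split and solver settings are my assumptions, not taken from the original post.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def logistic(value):
    # The S-shaped curve: maps any real number into (0, 1).
    return 1.0 / (1.0 + np.exp(-value))

print(logistic(0.0))  # 0.5, the midpoint of the curve

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# max_iter raised so the solver converges on the unscaled features.
clf = LogisticRegression(max_iter=5000)
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```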

[Figure: the S-shaped logistic (sigmoid) curve]

2 Support Vector Machine Kernel

Support Vector Machine (SVM) is a supervised machine learning algorithm capable of performing classification, regression, and even outlier detection. The linear SVM classifier works by drawing a straight line between two classes.
Support vectors are the data points that lie closest to the decision surface (or hyperplane). They are the data points that are hardest to classify, and they directly determine the optimal location of the decision surface.
The goal of a support vector machine is not merely to draw a hyperplane that divides the data points, but to draw the hyperplane that separates them with the largest margin, i.e., with the most space between the dividing line and any given data point.
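To make this concrete, here is a minimal kernel-SVM sketch on the same dataset; the RBF kernel and scaling pipeline are my choices, since the original post does not show its SVM code.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Feature scaling matters for SVMs, since the margin lives in feature space.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
svm.fit(X_train, y_train)
print("Test accuracy:", svm.score(X_test, y_test))

# The support vectors are the training points closest to the decision surface.
print("Support vectors per class:", svm.named_steps["svc"].n_support_)
```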

[Figure: SVM decision boundary with maximum margin and support vectors]

3 Gaussian Naive Bayes

Gaussian Naive Bayes is a member of the Naive Bayes family of algorithms. It is used when the features take continuous values, and it assumes that every feature follows a Gaussian, i.e., normal, distribution.

Naive Bayes classifiers are based on Bayes' theorem, which describes the probability of an event based on prior knowledge of conditions related to that event. It is a very simple and fast classifier that sometimes works very well; even without much effort you can get decent accuracy.
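A minimal Gaussian Naive Bayes sketch on the same dataset (again, the split is my assumption):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Fitting learns a per-class Gaussian (mean and variance) for every feature.
gnb = GaussianNB()
gnb.fit(X_train, y_train)
print("Test accuracy:", gnb.score(X_test, y_test))

# The learned class-conditional means reflect the Gaussian assumption.
print(gnb.theta_.shape)  # (2 classes, 30 features)
```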

[Figure: Gaussian Naive Bayes illustration]

4 Random Forest classifier

Random forests (or random decision forests) are an ensemble learning method for classification, regression, and other tasks. They operate by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (for classification) or the mean prediction (for regression) of the individual trees.
The random forest is a classification algorithm consisting of many decision trees. It uses bagging and feature randomness when building each individual tree to create an uncorrelated forest of trees whose committee prediction is more accurate than that of any individual tree.
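And a minimal random forest sketch; the hyperparameters here are my assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Bagging plus per-split feature randomness decorrelates the trees;
# the forest then predicts by majority vote over all of them.
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
print("Test accuracy:", rf.score(X_test, y_test))
```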

[Figure: a random forest aggregating many decision trees]

Tabulating the results to observe the differences:

Model | Accuracy
----- | --------
Logistic Regression | 0.958042
Support Vector Machine | 0.972028
Gaussian Naive Bayes | 0.916084
Random Forest | 0.986014

Trial

The following graphs were produced by replacing the AlexNet architecture with the DenseNet201 architecture, paired with the Adam optimizer.
As can be seen from the graphs below, DenseNet201 reaches roughly 90% accuracy with significantly lower losses. This model, on the other hand, is computationally intensive and requires at least 10 times the training time of AlexNet.
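For reference, here is a minimal sketch of how a DenseNet201 classifier with the Adam optimizer could be assembled in Keras; the input size, classification head, and learning rate are my assumptions, since the original post does not show the training code.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet201

# ImageNet-pretrained DenseNet201 backbone, without its original top layers.
base = DenseNet201(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),  # benign vs. malignant
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```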

[Figures: DenseNet201 training accuracy and loss curves]

Accuracy and Loss

Accuracy | Loss
-------- | ----
0.9043 | 0.2584

Result
Accuracy is the most popular statistic for evaluating model performance. Misclassification scores, however, stop making sense when only a small fraction of your dataset belongs to one class (malignant) and the vast majority, say 90 percent, belongs to the other (benign). You can be 90 percent accurate and still miss every malignant instance, which makes the classifier useless.
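To see why, here is a small sketch with made-up, purely illustrative labels showing how a classifier can score 90 percent accuracy while missing every malignant case:

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, recall_score

# Toy labels: 90 benign (0) and 10 malignant (1) samples.
y_true = np.array([0] * 90 + [1] * 10)
# A useless classifier that always predicts "benign".
y_pred = np.zeros_like(y_true)

print("Accuracy:", accuracy_score(y_true, y_pred))          # 0.9, looks fine
print("Recall (malignant):", recall_score(y_true, y_pred))  # 0.0, every cancer missed
print(confusion_matrix(y_true, y_pred))
```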

Test model 1

# `model` is assumed to be the trained classifier from earlier sections; the
# label order follows the training generator's alphabetical class order
# (benign = 0, malignant = 1). Imports added for completeness.
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing import image

label = ['benign', 'malignant']

img_ = image.load_img("../input/breakhis-400x/BreaKHis 400X/train/benign/SOB_B_A-14-22549AB-400-002.png", target_size=(244, 244))
imag = image.img_to_array(img_)
imag = np.expand_dims(imag, axis=0)  # add a batch dimension
pred = model.predict(imag)           # class probabilities
pred = np.argmax(pred, axis=1)       # index of the most likely class
print(pred)
print(label[pred[0]])
plt.imshow(img_)

The code above tests a single histopathological image to determine whether it is benign (non-cancer) or malignant (cancer).

[Output: the benign test image displayed with its predicted label]

Hence this image is classified as non-cancer (benign).

Test model 2

# Same test as above, this time on a malignant sample from the test split.
img_ = image.load_img("../input/breakhis-400x/BreaKHis 400X/test/malignant/SOB_M_DC-14-11031-400-003.png", target_size=(244, 244))
imag = image.img_to_array(img_)
imag = np.expand_dims(imag, axis=0)  # add a batch dimension
pred = model.predict(imag)
pred = np.argmax(pred, axis=1)       # index of the most likely class
print(pred)
print(label[pred[0]])
plt.imshow(img_)

[Output: the malignant test image displayed with its predicted label]

Hence this image is classified as cancer (malignant).

Conclusion

We presented a set of experiments that use a deep learning strategy to avoid hand-crafted features on the BreaKHis dataset. We demonstrated that an existing CNN architecture, in this case AlexNet, originally built for classifying color photos of everyday objects, can be adapted to classify breast cancer histological images. We described different ways of training the CNN, based on extracting random patches or using a sliding-window mechanism, which let us handle the high resolution of these textured images without modifying a CNN designed for low-resolution inputs. Compared with traditional machine learning models trained on the same dataset using state-of-the-art texture descriptors, our experiments on the BreaKHis dataset showed that the CNN achieved better accuracy.

Do let me know in the comments below if you have any queries.

Thanks for reading.
