I Introduction
Epilepsy is a non-communicable disease and one of the most common neurological disorders in humans, usually associated with sudden attacks [1]. Seizures are sudden, transient abnormalities in the electrical activity of the brain that can disrupt part or all of the body [2]. Epileptic seizures of various kinds affect around 60 million people worldwide [3]. These attacks occasionally provoke cognitive disturbances and can cause severe physical injury to the patient. Moreover, people with epileptic seizures sometimes suffer emotional distress due to embarrassment and lack of appropriate social status. Hence, early detection of epileptic seizures can help patients and improve their quality of life.
Various screening techniques have been developed to diagnose epileptic seizures, including magnetic resonance imaging (MRI) [4], electroencephalography (EEG) [5], magnetoencephalography (MEG) [6], and positron emission tomography (PET) [7]. EEG signals are widely preferred, as they are economical, portable, and show clear rhythms in the frequency domain [8, ACHARYA2018103]. The EEG records the voltage variations produced by the ionic currents of neurons in the brain, which indicate the brain's bioelectric activity [9]. Diagnosing epilepsy from EEG signals is time-consuming and strenuous, as the epileptologist or neurologist must screen the EEG signals minutely. There is also the possibility of human error, so developing a computer-based diagnosis system may alleviate these problems. Many machine learning algorithms have been developed using statistical, frequency-domain, and nonlinear parameters to detect epileptic seizures [10, 11, 12, 13, 14, 15]. In conventional machine learning techniques, the selection of features and classifiers is done by trial and error. One also needs sound knowledge of signal processing and data mining techniques to develop an accurate model. Such models perform well on limited data; nowadays, with the increasing availability of data, conventional machine learning techniques may not perform as well. Hence, deep learning techniques, which are the state-of-the-art methods, have been employed.
In traditional machine learning studies, most simulations were executed in the MATLAB software environment, whereas deep learning models are usually developed in the Python programming language with numerous open-source toolboxes. The Python language, with its many freely available deep learning libraries, has helped researchers develop novel automated systems; cloud computing has also made computational resources accessible to everyone. Figure 1 shows that TensorFlow and one of its high-level APIs, Keras, are the most widely used frameworks for deep-learning-based epileptic seizure detection in the reviewed works, owing to their versatility and applicability.
Since 2016, substantial research has been done to detect epilepsy using deep learning models such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Deep Belief Networks (DBN), Autoencoders (AE), CNN-RNN, and CNN-AE [17, 18, 19, 20]. The number of studies in this area is growing as new, more efficient models are proposed. Figure 2 provides an overview of the number of studies conducted using various deep learning models from 2014 to 2020 for detecting epileptic seizures. The main aims of this study are as follows:

Providing information on available EEG datasets.

Reviewing works done using various deep learning models for automated detection of epileptic seizures with signals of various modalities.

Outlining future challenges in the detection of epileptic seizures.

Analyzing the best performing model for various modalities of data.
Epileptic seizure detection using deep learning is discussed in Section II. Section III describes non-EEG-based epileptic seizure detection. Hardware used for epileptic seizure detection is described in Section IV. A discussion of the paper is presented in Section V. The challenges of employing deep learning methods for epileptic seizure detection are summarized in Section VI. Finally, the conclusion and future work are delineated in Section VII.
II Epileptic Seizure Detection Based on Deep Learning Techniques
Figure 3 illustrates the working of a Computer-Aided Diagnosis System (CADS) for epileptic seizures using deep learning methods. The input to the deep learning model can be EEG, MEG, Electrocorticography (ECoG), functional Near-Infrared Spectroscopy (fNIRS), PET, Single-Photon Emission Computed Tomography (SPECT), or MRI. The signals are then preprocessed to remove noise, and these noise-free signals are used to develop the deep learning models. The performance of each model is evaluated using accuracy, sensitivity, and specificity. Additionally, a table summarizing all the works on epileptic seizure detection using deep learning is presented in Appendix A of the paper.
II-A Epileptic Datasets
Datasets play an important role in developing accurate and robust CADS. Multiple EEG datasets, namely Freiburg [21], CHB-MIT [22], Kaggle [23], Bonn [24], Flint Hills [13], Bern-Barcelona [25], Hauz Khas [13], and Zenodo [26], are available for developing automated epileptic seizure detection systems. The signals in these datasets are recorded either intracranially or from the scalp of humans or animals. Supplementary information on each dataset is listed in Table I.
Dataset  Number of Patients  Number of Seizures  Recording  Total Duration  Sampling Frequency (Hz) 
Flint Hills [13]  10  59  Continuous intracranial long-term ECoG  1419 h  249 
Hauz Khas [13]  10  NA  Scalp EEG (sEEG)  NA  200 
Freiburg [21]  21  87  Intracranial Electroencephalography (IEEG)  708 h  256 
CHB-MIT [22]  22  163  sEEG  844 h  256 
Kaggle [23]  5 dogs, 2 patients  48  IEEG  627 h  400 (dogs), 5 kHz (patients) 
Bonn [24]  10  NA  Surface and IEEG  39 min  173.61 
Bern-Barcelona [25]  5  3750  IEEG  83 h  512 
Zenodo [26]  79 neonates  460  sEEG  74 min  256 
Figure 4 shows the number of times each dataset has been employed for epileptic seizure detection using deep learning techniques. It can be observed that the Bonn dataset is the most widely used for automated seizure detection using deep learning.
II-B Preprocessing
In developing a CADS using deep learning with EEG signals, preprocessing involves three steps: noise removal, normalization, and signal preparation for the deep learning network [ACHARYA2018270, Craik_2019]. In the noise removal step, finite impulse response (FIR) or infinite impulse response (IIR) filters are usually used to eliminate signal noise. Normalization is then performed using schemes such as the z-score technique. Finally, various time-domain, frequency-domain, and time-frequency methods are employed to prepare the signals for the deployment of deep networks.
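As an illustration, the three steps can be sketched as follows; the passband, filter length, and sampling rate below are arbitrary choices for the sketch, not values prescribed by the reviewed works:

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def preprocess_eeg(x, fs=256.0, band=(1.0, 40.0), numtaps=101):
    """Band-pass filter an EEG segment with a linear-phase FIR filter
    (applied forward and backward for zero phase), then z-score it."""
    taps = firwin(numtaps, band, pass_zero=False, fs=fs)  # FIR band-pass design
    filtered = filtfilt(taps, 1.0, x)                     # zero-phase filtering
    return (filtered - filtered.mean()) / filtered.std()  # z-score normalization

# Toy EEG-like segment: a 10 Hz rhythm plus 60 Hz interference and slow drift
fs = 256.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t) + 0.3 * t
clean = preprocess_eeg(x, fs=fs)
```

The resulting segment has zero mean and unit variance, ready to be windowed into examples for a deep network.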
II-C Review of Deep Learning Techniques
In contrast to conventional neural networks, or so-called shallow networks, deep neural networks are structures with more than two hidden layers; some recent deep nets have hundreds of layers [17]. This increase in network size results in a massive rise in the number of parameters, requiring appropriate learning methods as well as measures to avoid overfitting. Convolutional networks convolve filters with the input patterns instead of multiplying by a weight vector (matrix), which reduces the number of trainable parameters dramatically.
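A quick parameter count illustrates this reduction; the window length, kernel size, and number of feature maps below are hypothetical values chosen for the sketch:

```python
# One layer mapping a 4096-sample EEG window to 64 units/feature maps
# (bias terms ignored for simplicity).
window = 4096          # input samples
units = 64             # output units / feature maps
kernel = 7             # 1D convolution kernel length

dense_params = window * units     # fully connected: every input to every unit
conv_params = kernel * 1 * units  # convolutional: one shared kernel per map

print(dense_params)
print(conv_params)
```

The shared-kernel layer needs 448 weights versus 262,144 for the fully connected one, which is why convolutional layers scale so much better to long signals.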
Other methods have also been suggested to help the network learn [goodfellow]. Pooling layers reduce the size of the input pattern passed to the next convolutional layer. Batch normalization, dropout, early stopping, unsupervised or semi-supervised learning, and regularization techniques prevent the learned network from overfitting and increase its learning ability and speed. AEs and DBNs are trained in an unsupervised manner and then fine-tuned to avoid overfitting on limited labeled data. Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks are recurrent neural networks capable of revealing long-term temporal dependencies in data samples.
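The pooling operation mentioned above can be sketched minimally in NumPy (non-overlapping 1D max pooling; the feature-map values are arbitrary):

```python
import numpy as np

def max_pool_1d(x, size=2):
    """Non-overlapping 1D max pooling: keep the largest value in each
    window, halving (for size=2) the length passed to the next layer."""
    trimmed = x[: len(x) // size * size]      # drop any ragged tail
    return trimmed.reshape(-1, size).max(axis=1)

feature_map = np.array([0.1, 0.9, 0.3, 0.2, 0.7, 0.4])
pooled = max_pool_1d(feature_map)             # -> [0.9, 0.3, 0.7]
```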
II-C1 Convolutional Neural Networks (CNNs)
CNNs are among the most popular classes of deep learning networks, and much of the research in machine learning has been devoted to them [17]. They were initially presented for image processing applications but have recently been adapted to one- and two-dimensional architectures for the diagnosis and prediction of diseases using biological signals [addedOne]. This class of deep learning networks is widely used for the detection of epileptic seizures from EEG signals. In two-dimensional convolutional neural networks (2D-CNNs), the one-dimensional (1D) EEG signals are first transformed into two-dimensional representations using visualization methods such as the spectrogram [YILDIRIM2019103387], higher-order bispectrum [Martis13500147, ijerph17030971], and wavelet transforms, which are then applied to the input of the convolutional network. In 1D architectures, the EEG signals are applied in one-dimensional form to the input of the convolutional network; these networks modify the core 2D-CNN architecture so that it can process 1D EEG signals. Since both 2D and one-dimensional convolutional neural networks (1D-CNNs) are used for epileptic seizure detection, they are investigated separately.
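The 1D-to-2D transformation can be sketched with SciPy's spectrogram on a synthetic EEG-like signal; the sampling rate mirrors the Bonn dataset, but the signal content and STFT parameters are arbitrary choices for illustration:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 173.61                      # e.g. the Bonn dataset sampling rate
t = np.arange(0, 23.6, 1 / fs)   # one ~23.6 s Bonn-style segment
eeg = (np.sin(2 * np.pi * 8 * t)
       + 0.5 * np.random.default_rng(0).normal(size=t.size))

# Short-time Fourier magnitudes: a 2D time-frequency "image" for a 2D-CNN
f, times, Sxx = spectrogram(eeg, fs=fs, nperseg=256, noverlap=128)
image = np.log1p(Sxx)            # log scaling compresses the dynamic range
```

The resulting `image` (frequency bins x time frames) can be fed to a 2D-CNN exactly like a grayscale picture.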
2D Convolutional Neural Networks
Nowadays, deep 2D networks are applied to a wide range of computer vision problems such as image segmentation [27], medical image classification [28], and face recognition [face]. In 2012, Krizhevsky et al. [alexnet] first proposed this type of network to solve image classification problems; similar networks were then quickly adopted for other tasks, such as medical image classification, to overcome the shortcomings of previous networks and solve more intricate problems with better performance. Figure 5 shows the general form of a 2D-CNN used for epileptic seizure detection. The 2D-CNN is arguably the most important architecture among deep neural networks. More information about visualization and preprocessing methods can be found in Appendix A.
In one study [30], the 16-layer SeizNet convolutional network is introduced, with dropout and batch normalization (BN) layers after each convolutional layer and a structure similar to VGGNet. The researchers in [32] presented a new 2D-CNN model that can extract the spectral and temporal characteristics of EEG signals and used it to learn the general structure of seizures. Zuo et al. [33] detected epileptic High-Frequency Oscillations (HFOs) from EEG signals using a 16-layer 2D-CNN. A deep learning framework called SeizureNet is proposed in [34] using convolutional layers with dense connections. A novel deep learning model called the temporal graph convolutional network (TGCN) was introduced by Covert et al. [37], comprising five architectures with 14, 18, 22, 23, and 26 layers. Bouaziz et al. [40] split the 23-channel EEG signals of CHB-MIT into 2-second time windows, converted them into density images (spatial representations), and fed these as inputs to the CNN.
AlexNet
Fei-Fei Li, a professor at Stanford University, created a dataset of labeled images of real-world objects and named the project ImageNet [imagenet]. ImageNet organized an annual computer vision competition called the ILSVRC to solve image classification problems. Alex Krizhevsky revolutionized the image classification world with his algorithm, AlexNet, which won the 2012 ImageNet challenge and started the whole deep learning era [alexnet]. AlexNet won the competition by achieving a top-5 test accuracy of 84.6%. Taqi et al. [42] used the AlexNet network to diagnose focal epileptic seizures. Their network used a feature extraction approach with a final softmax layer for classification and achieved 100% accuracy. In another study, the AlexNet network was employed in [43]. The authors transformed the 1D signals into 2D images by passing them through a Signal2Image (S2I) module; the methods used were signal-as-image, spectrogram, one-layer 1D-CNN, and two-layer 1D-CNN.
VGG
A research team at Oxford proposed the Visual Geometry Group (VGG) CNN model in 2014 [vgg]. They configured various models, one of which was submitted to the ILSVRC 2014 competition; this model was known as VGG16 because it comprised 16 layers. It delivered excellent performance in image detection and classification problems. Ahmedt-Aristizabal et al. [44] employed the VGG16 architecture to diagnose epilepsy from facial images. Their approach attempted to automatically extract and classify semiological patterns of facial states. After recording the images, the proposed VGG architecture is first trained on well-known datasets, followed by various networks such as 1D-CNN and LSTM in the last few layers. In [43], the VGG network was applied to one-dimensional and two-dimensional signals. The models were trained with the Adam optimizer and a cross-entropy loss function, using a batch size of 20 and 100 epochs. The idea of detecting epileptic seizures from sEEG signal plots was examined by Emami et al. [45]. In the preprocessing step, the signals were segmented into different time windows, and VGG16 was used for classification with small (3x3) convolution filters to efficiently detect small EEG signal changes. This architecture was pretrained on the ImageNet dataset to differentiate 1000 classes, with the last two layers being 4096- and 1000-dimensional; the authors modified these layers to have 32 and 2 dimensions, respectively, to detect the seizure and non-seizure classes.
GoogLeNet
GoogLeNet won the 2014 ImageNet competition with a 93.3% top-5 test accuracy [googlenet]. This 22-layer network was called GoogLeNet in homage to Yann LeCun, who designed LeNet. Before the introduction of GoogLeNet, it was believed that better accuracy and results could be achieved simply by going deeper. Instead, the Google team proposed an architecture called Inception, which achieved better performance not by going deeper but by better design, using filters of different sizes on the same image. In the field of EEG signal processing for diagnosing epileptic seizures, this architecture has recently received the attention of researchers. Taqi et al. [42] used this network in their preliminary research to diagnose epileptic seizures; their model was used to extract features from the Bern-Barcelona dataset and achieved excellent results.
ResNet
Microsoft's ResNet won the ImageNet challenge with 96.4% accuracy by applying a 152-layer network built on residual modules [resnet]. In this network, residual blocks capable of training deep architectures were introduced using skip connections, which copy the inputs of each layer to the next layer; the idea is for the next layer to learn something different and new. So far, not much research has been done on applying ResNet to diagnose epilepsy, but this may grow significantly in the coming years. Bizopoulos et al. [43] introduced ResNet and DenseNet architectures to diagnose epileptic seizures and attained good results; they showed that an S2I-DenseNet-based model trained for an average of 70 epochs was sufficient to reach the best accuracy of 85.3%. A summary of related works using 2D-CNNs is shown in Table II, and a sketch of the accuracies (%) obtained by various authors is shown in Figure 6.
Work  Networks  Number of Layers  Classifier  Accuracy (%) 
[29]  2D-CNN  3, 4  LR  87.51 
[61]  2D-CNN  9  softmax  NA 
[62]  Combination of 1D-CNN and 2D-CNN  11  sigmoid  90.58 
[64]  2D-CNN  18  softmax  NA 
[65]  2D-CNN/MLP hybrid  11  sigmoid  NA 
[46]  2D-CNN  9  softmax  86.31 
[30]  SeizNet  16  NA  NA 
[31]  2D-CNN with 1D-CNN  12  softmax  NA 
[32]  2D-CNN  9  softmax  98.05 
[33]  2D-CNN  16  softmax  NA 
[34]  SeizureNet  133  softmax  NA 
[44]  2D-CNN  VGG16, 8  SVM  95.19 
[35]  2D-CNN  6  softmax  74 
[68]  2D-CNN  12  softmax and sigmoid  99.50 
[36]  2D-CNN  16  softmax  91.80 
[37]  TGCN  14, 18, 22, 23, 26  sigmoid  NA 
[38]  2D-CNN  23  softmax  100 
[69]  2D-CNN  5  softmax  100 
[39]  2D-CNN  14  softmax  98.30 (2 classes), 90.10 (3 classes) 
[70]  2D-CNN, 3D-CNN  7, 5, 8  MVTSKFS  98.33 
[40]  2D-CNN  8  softmax  99.48 
[41]  2D-CNN  23, 18  sigmoid, RF  NA 
[72]  2D-CNN  7  KELM  99.33 
[42]  GoogLeNet, AlexNet, LeNet  Standard Networks  softmax  100 
[45]  2D-CNN  VGG16  softmax  NA 
[43]  Standard Networks  NA  softmax  85.30 
1D Convolutional Neural Networks
1D-CNNs are intrinsically suitable for processing biological signals such as the EEG for detecting epileptic seizures. These architectures have a more straightforward structure, and a single pass through them is faster than through a 2D-CNN due to the smaller number of parameters. The most important advantage of 1D over 2D architectures is the possibility of employing larger pooling and convolutional layers. Moreover, the signals are 1D in nature, and using preprocessing methods to transform them into 2D may lead to information loss. Figure 7 shows the general form of a 1D-CNN used for epileptic seizure detection.
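The core operation of such networks can be sketched directly on a 1D signal; the kernel values below are arbitrary placeholders, not learned weights:

```python
import numpy as np

def conv1d_relu(signal, kernel):
    """The essence of a 1D convolutional layer: slide a small kernel over
    the raw signal (cross-correlation) and apply a ReLU nonlinearity."""
    out = np.convolve(signal, kernel[::-1], mode="valid")  # flip -> cross-corr.
    return np.maximum(out, 0.0)                            # ReLU activation

eeg = np.array([0.0, 1.0, 2.0, 1.0, 0.0, -1.0, -2.0, -1.0])
edge_kernel = np.array([1.0, 0.0, -1.0])   # crude slope detector
features = conv1d_relu(eeg, edge_kernel)
```

A trained 1D-CNN stacks many such filtered maps with pooling in between; here a single hand-picked kernel simply highlights the rising slope of the toy waveform.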
The first study in this section is [43], in which the authors conducted experiments with one-dimensional LeNet, AlexNet, VGGNet, ResNet, and DenseNet architectures, applying well-known 2D architectures in 1D space. In [49], a 1D-CNN was used for feature extraction. The researchers in [50] used a 1D-CNN on the CHB-MIT dataset; the signals from each channel were segmented into 4-second intervals, and overlapping segments were also used to increase the amount of data and the accuracy. Combining CNNs with traditional feature extraction methods was explored in [53]: the Empirical Mode Decomposition (EMD) method was used for feature extraction, and a CNN was used to achieve high accuracy in multiclass classification tasks. In [55], an integrated framework for the diagnosis of epileptic seizures is presented that combines the interpretability of probabilistic graphical models (PGMs) with advances in deep learning. The authors in [58] proposed a 1D-CNN architecture called CNN-BP (standing for CNN bipolar), using data from patients monitored with combined foramen ovale (FO) electrodes and surface EEG electrodes. A new scheme for classifying EEG signals based on temporal convolutional neural networks (TCNN) was introduced by Zhang et al. [52]. Table III summarizes the related works using 1D-CNNs, and Figure 8 shows a sketch of the accuracies (%) obtained by various authors using 1D-CNN models for seizure detection.
Work  Networks  Number of Layers  Classifier  Accuracy (%) 
[46]  1D-CNN  7  softmax  82.04 
[43]  1D-CNN (VGGNet, VGGNet, DenseNet)  13, 19, 161  NA  83.30 
[73]  P-1D-CNN  14  softmax  99.10 
[47]  1D-CNN  13  softmax  88.67 
[74]  MPCNN  11  softmax  NA 
[48]  1D-FCNN  11  softmax  NA 
[75]  1D-CNN  5  binary LR  NA 
[76]  1D-CNN  23  softmax  79.34 
[49]  1D-CNN  5  softmax, SVM  83.86 
[50]  1D-CNN  33  NA  99.07 
[51]  1D-CNN  4  sigmoid  97.27 
[52]  1D-TCNN  NA  NA  100 
[53]  1D-CNN  12  softmax  98.60 
[54]  1D-CNN  13  NA  82.90 
[78]  1D-CNN with residual connections  17  softmax  99.00, 91.80 
[55]  PGM-CNN  10  softmax  NA 
[79]  1D-CNN  15  softmax  84 
[56]  1D-CNN  10  sigmoid  86.29 
[57]  1D-CNN  13  softmax  NA 
[58]  1D-CNN-BP  14  sigmoid  NA 
[59]  1D-CNN  9  sigmoid  NA 
II-C2 Recurrent Neural Networks (RNNs)
Sequential data such as text, signals, and videos have characteristics such as variable and great length that make them unsuitable for simple deep learning methods [goodfellow]. However, these data form a significant part of the information in the world, compelling the need for deep-learning-based schemes that can process them. RNNs were proposed to overcome these challenges and are widely used for physiological signals. Figure 9 shows the general form of an RNN used for epileptic seizure detection. In the following, an overview of popular RNN models is presented alongside the reviewed papers.
Long Short-Term Memory (LSTM)
The main problem of a simple recurrent neural network is its short-term memory: in long sequences, an RNN has a hard time carrying information from earlier time steps to later ones and may leave out key information. Another drawback of RNNs is the vanishing gradient problem [17, 18, 19, 20], which arises because the gradients shrink as they are backpropagated through time. To solve the short-term memory problem, LSTM gates were created [17]. The flow of information is regulated through gates, which can preserve long sequences of necessary data and discard undesired ones. The building blocks of the LSTM are the cell state and its gates.
In this section, Golmohammadi et al. [65] evaluated two LSTM architectures with 3 and 4 layers together with a softmax classifier and obtained satisfactory results. In [51], a 3-layer LSTM deep network is used for feature extraction and classification; the last layer of this network is a sigmoid classifier, and the authors achieved 96.82% accuracy. In the experiments of [59], two LSTM and GRU architectures were employed, comprising a Reshape layer, four LSTM/GRU layers with activations, and one Fully Connected (FC) layer with a sigmoid activation. In another study, Yao et al. [80] evaluated ten different improved Independently Recurrent Neural Network (IndRNN) architectures and achieved the best accuracy using a 31-layer Dense IndRNN with attention (DIndRNN).
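A single LSTM step with these gates can be sketched in NumPy; the weights and sizes are random placeholders, not a trained model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b, n):
    """One LSTM time step: input (i), forget (f), and output (o) gates plus
    the candidate update (g) regulate what the cell state keeps or discards."""
    z = W @ x + U @ h + b                 # all four gate pre-activations
    i = sigmoid(z[0:n])                   # input gate: what to write
    f = sigmoid(z[n:2 * n])               # forget gate: what to keep
    o = sigmoid(z[2 * n:3 * n])           # output gate: what to expose
    g = np.tanh(z[3 * n:4 * n])           # candidate cell content
    c_new = f * c + i * g                 # cell state: keep + write
    h_new = o * np.tanh(c_new)            # hidden state passed onward
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 4, 3                        # sizes are arbitrary for the sketch
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in))
U = rng.normal(scale=0.1, size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):      # a 5-step toy sequence
    h, c = lstm_step(x, h, c, W, U, b, n_hid)
```

Because the cell state `c` is updated additively (`f * c + i * g`), gradients can flow across many time steps, which is exactly what mitigates the short-term-memory problem described above.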
Gated Recurrent Unit (GRU)
The GRU is a variation of the LSTM that merges the input and forget gates into a single update gate, along with some other modifications [17, 18, 19, 20]. The gating signals are thus reduced to two: a reset gate and an update gate, which decide what information should be passed to the output. In one experiment, Chen et al. [51] used a 3-layer GRU network with sigmoid classification and obtained 96.67% accuracy. A GRU-based epileptic seizure detection system was presented by Talathi et al. [81]: during preprocessing, the input signals were split into time windows and spectrograms were obtained from them; these plots were then fed to a 4-layer GRU network with a softmax FC layer in the classification stage, achieving 98% accuracy. In another study, Roy et al. [82] employed a 5-layer GRU network with a softmax classifier and achieved remarkable results. Table IV summarizes the related works using RNNs, and Figure 10 shows a sketch of the accuracies (%) obtained by various authors using RNN models for seizure detection.
Work  Networks  Number of Layers  Classifier  Accuracy (%) 
[65]  LSTM  3, 4  sigmoid  NA 
[51]  GRU  3  sigmoid  96.67 
[51]  LSTM  3  sigmoid  96.82 
[54]  15-IndRNN  48  NA  87.00 
[54]  LSTM  4  NA  84.35 
[59]  LSTM, GRU  6  sigmoid  NA 
[83]  RNN  NA  MLP with 2 layers (logistic sigmoid classifier)  NA 
[84]  LSTM  4  softmax  100 
[85]  LSTM  2, 5  sigmoid  95.54 (validation), 91.25 (test) 
[86]  LSTM  4  softmax  100 
[88]  LSTM  3  softmax  97.75 
[80]  ADIndRNN(3,3)  31  NA  88.70 
[81]  GRU  4  LR  98.00 
[82]  GRU  5  softmax  NA 
[129]  LSTM  4  softmax  100 
II-C3 Autoencoders
Standard Autoencoders
The AE is an unsupervised neural network model whose target output is the same as its input [17, 18, 19, 20]. The input is compressed into a latent-space representation, and the output is then reconstructed from this representation; the compression and decompression functions are thus learned by the neural network. An AE consists of three parts: the encoder, the code, and the decoder. Autoencoder networks are most commonly used for feature extraction or dimensionality reduction in brain signal processing. Figure 11 shows the general form of an AE used for epileptic seizure detection. As the first investigation in this section, Rajaguru et al. [89] separately examined Multilayer Autoencoders (MAE) and Expectation-Maximization with Principal Component Analysis (EM-PCA) to reduce the feature dimensions, and then employed a Genetic Algorithm (GA) for classification. They obtained an average classification accuracy of 93.78% when MAEs were applied for dimensionality reduction combined with the GA for classification. In another study, an automated AE-based system for diagnosing epilepsy from EEG signals was proposed [90]: first, the Harmonic Wavelet Packet Transform (HWPT) was used to decompose the signal into frequency subbands, and then fractal features, including Box-Counting (BC), Multi-Resolution BC (MRBC), and the Katz Fractal Dimension (KFD), were extracted from each subband.
Other Types of Autoencoders
To create a more robust representation, a number of schemes have been applied, such as the Denoising AE (DAE), which tries to recreate the input from a corrupted version of it [goodfellow]; the Stacked AE (SAE), which stacks a few autoencoders on top of each other to go deeper [goodfellow]; and the Sparse Autoencoder (SpAE), which attempts to benefit from sparse representations [goodfellow]. These methods may pursue other objectives as well; for example, the DAE can be used to recover a corrupted input.
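A minimal sketch of the denoising idea, using a linear AE trained by plain gradient descent on synthetic low-rank data; all sizes, rates, and iteration counts are arbitrary assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic low-rank "EEG feature" data: 3 latent sources mixed into 8 channels
Z = rng.normal(size=(200, 3))
M = rng.normal(size=(3, 8))
X = Z @ M

# Linear denoising AE: encode a *corrupted* input to 3 units, decode back,
# and train to reconstruct the *clean* input (gradient descent on MSE).
W_enc = rng.normal(scale=0.1, size=(8, 3))
W_dec = rng.normal(scale=0.1, size=(3, 8))
loss0 = np.mean((X @ W_enc @ W_dec - X) ** 2)   # loss before training
lr = 0.01
for _ in range(300):
    X_noisy = X + 0.1 * rng.normal(size=X.shape)  # corruption step
    H = X_noisy @ W_enc                           # code (latent features)
    X_hat = H @ W_dec                             # reconstruction
    G = 2.0 * (X_hat - X) / X.shape[0]            # gradient of MSE w.r.t. X_hat
    grad_dec = H.T @ G
    grad_enc = X_noisy.T @ (G @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
loss = np.mean((X @ W_enc @ W_dec - X) ** 2)      # loss after training
```

Because the reconstruction target is the clean signal, the 3-unit code is pushed toward the underlying sources rather than the noise; real DAEs add nonlinearities and depth, but the training loop has this same shape.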
Works in this section begin with Golmohammadi et al. [65], who presented various deep networks, one of which is a Stacked Denoising AE (SDAE); their three-layer architecture demonstrated good performance. Qiu et al. [91] applied windowing and z-score normalization when preprocessing the EEG signals and fed the preprocessed data into a Denoising Sparse AE (DSpAE) network, achieving an outstanding accuracy of 100%. In [92], a high-performance automated EEG analysis system based on principles of machine learning and big data is presented, consisting of several parts. First, signal features are extracted as Linear Predictive Cepstral Coefficients (LPCC); then three passes are applied for precise detection: the first pass is sequential decoding using Hidden Markov Models (HMM), the second pass performs temporal and spatial context analysis based on deep learning, and the third pass employs a probabilistic grammar.
In another study, Yan et al. [93] proposed a feature extraction and classification method based on the SpAE and the Support Vector Machine (SVM): features of the input EEG signals are first extracted by the SpAE and then classified by the SVM. Another SAE architecture, named Wave2Vec, was proposed by Yuan et al. [94]: in the preprocessing stage the signals were framed, and in the deep network stage an SAE with softmax was applied, achieving 93.92% accuracy. Following the experiments of Yuan et al., different Stacked Sparse Denoising AE (SSpDAE) architectures were tested and compared in [95]: feature extraction is accomplished by the SSpDAE network, followed by softmax classification, obtaining an accuracy of 93.64%. Table V summarizes the related works using AEs, and Figure 12 compares the accuracies obtained by different researchers.
Work  Networks  Number of Layers  Classifier  Accuracy (%) 

[65]  SDAE  3  NA  NA 
[89]  MAE  NA  GA  93.92 
[90]  AE  3  softmax  98.67 
[96]  AE  1  sigmoid  NA 
[97]  SSpDAE  2 hidden layers (intra-channel) and 3 hidden layers (cross-channel) + 2 hidden layers (FC) + classifier  softmax  93.82 
[91]  DSpAE  3  LR  100 
[92]  SPSW-SDA, 6W-SDA, EYEM-SDA  3 hidden layers each  LR  NA 
[93]  SpAE  single-layer SpAE  SVM  100 
[98]  SSpAE  3-hidden-layer SSpAE  softmax  100 
[94]  Wave2Vec  NA  softmax  93.92 
[94]  SSpDAE  2  softmax  93.64 
[99]  SAE  3  softmax  86.50 
[100]  SSpAE  3  softmax  100 
[101]  Deep SpAE  3  softmax  100 
[102]  SAE  3 (2 AE + classifier)  softmax  96.00 
[95]  SAE  3  softmax  96.61 
[103]  SSpAE  3 (two sparse encoders as hidden layers + classifier)  softmax  94.00 
[104]  SAE  3  softmax  88.80 
II-C4 Deep Belief and Boltzmann Networks
The Restricted Boltzmann Machine (RBM) is a variant of the Deep Boltzmann Machine (DBM) and an undirected graphical model [17]; unrestricted Boltzmann machines may also have connections between the hidden units. Stacking RBMs forms a DBN, so the RBM is the building block of the DBN. DBNs are unsupervised, probabilistic, hybrid generative deep learning models comprising latent stochastic variables in multiple layers [17, 18]. A variation of the DBN, called the Convolutional DBN (CDBN), can successfully scale to high-dimensional models and uses the spatial information of nearby pixels [17, 18]. Deep Boltzmann machines are probabilistic, generative, unsupervised deep learning models containing visible units and multiple layers of hidden units [17, 18].
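The "restricted" (bipartite) structure is what makes the RBM's conditionals easy to sample; a single Gibbs step can be sketched in NumPy (sizes and weights below are arbitrary placeholders):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_vis, n_hid = 6, 4                     # sizes arbitrary for the sketch
W = rng.normal(scale=0.1, size=(n_vis, n_hid))
b_hid = np.zeros(n_hid)
b_vis = np.zeros(n_vis)

v = rng.integers(0, 2, size=n_vis).astype(float)   # binary visible units

# One Gibbs step: visible -> hidden -> visible. With no hidden-hidden or
# visible-visible edges, each conditional factorizes and is sampled in one shot.
p_h = sigmoid(v @ W + b_hid)                  # P(h_j = 1 | v)
h = (rng.random(n_hid) < p_h).astype(float)   # sample hidden states
p_v = sigmoid(h @ W.T + b_vis)                # P(v_i = 1 | h)
v_new = (rng.random(n_vis) < p_v).astype(float)
```

Contrastive-divergence training of an RBM (and hence of each DBN layer) is built on repeating exactly this alternating sampling step.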
Xuyen et al. [105] used a DBN to identify epileptic spikes in EEG data; the proposed architecture consisted of three hidden layers and achieved an accuracy of 96.87%. In another study, Turner et al. [106] applied the DBN to diagnose epilepsy and reported promising results. More information about DBN architectures for epileptic seizure detection is shown in Table VI.
Work  Networks  Number of Layers  Classifier  Accuracy (%) 
[105]  DBN  3 hidden layers  NA  96.87 
[106]  DBN  3  LR  NA 
II-C5 CNN-RNN
The CNN-RNN architecture is a highly efficient combination of deep learning networks for predicting and diagnosing epileptic seizures from EEG signals. Adding convolutional layers to an RNN helps find spatially nearby patterns effectively, while the RNN component is well suited to time-series data. In [65], numerous preprocessing schemes were applied, and a modified 13-layer CNN-LSTM architecture with a sigmoid last layer was proposed; the approach demonstrated better performance.
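The division of labor can be sketched in NumPy: a convolutional front end summarizes each EEG window, and a recurrent update carries information across windows (all weights here are random placeholders, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(window, kernel):
    """Convolutional front end: valid cross-correlation + ReLU + global max
    pooling, reducing each EEG window to one feature per kernel."""
    out = np.convolve(window, kernel[::-1], mode="valid")
    return np.maximum(out, 0.0).max()

def rnn_step(x, h, Wx, Wh):
    """Vanilla recurrent update over the sequence of window features."""
    return np.tanh(Wx @ x + Wh @ h)

kernels = rng.normal(size=(4, 5))         # 4 random length-5 kernels
Wx = rng.normal(scale=0.5, size=(3, 4))   # input-to-hidden weights
Wh = rng.normal(scale=0.5, size=(3, 3))   # hidden-to-hidden weights

eeg = rng.normal(size=(10, 64))           # 10 consecutive 64-sample windows
h = np.zeros(3)
for window in eeg:                        # CNN features feed the RNN in order
    x = np.array([conv_features(window, k) for k in kernels])
    h = rnn_step(x, h, Wx, Wh)
```

The reviewed models replace this toy recurrence with LSTM or GRU cells and learn the kernels end to end, but the CNN-then-RNN data flow is the same.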
Roy et al. [46] used different CNN-RNN hybrid architectures to improve the experimental results. Their first network comprised a one-dimensional 7-layer CNN-GRU architecture, and the second was a three-dimensional (3D) CNN-GRU network. In another work, Roy et al. [82] concentrated on normal and abnormal brain activities and suggested four different deep learning architectures. Their proposed ChronoNet model, developed from the previous models, achieved training and test accuracies of 90.60% and 86.57%, respectively.
Fang et al. [107] used the Inception-V3 network. At the outset, preliminary training was performed on this network; then, to fine-tune the architecture, an RNN-based network called Spatial-Temporal GRU (ST-GRU) CNN was applied, achieving 77.30% accuracy. Choi et al. [108] proposed a multi-scale 3D-CNN with an RNN model for the detection of epileptic seizures. The output of the CNN module is fed as input to the RNN module, which consists of a unidirectional GRU layer that extracts the temporal features of epileptic seizures; classification is finally performed by an FC layer. Generalized information from the CNN-RNN research is presented in Table VII and Figure 13.
Work  Networks  Number of Layers  Classifier  Accuracy% 
[65]  2D-CNN bi-LSTM  13  sigmoid  NA 
[46]  1D CNN-GRU  7  softmax  99.16 
[46]  TCNN-RNN  10  softmax  95.22 
[44]  2D CNN-LSTM  VGG16  sigmoid  95.19 
[82]  C-RNN  8  softmax  83.58 
[82]  IC-RNN  14  softmax  86.93 
[82]  C-DRNN  8  softmax  87.20 
[82]  ChronoNet  14  softmax  90.60 
[109]  2D CNN-LSTM  8  NA  NA 
[107]  ST-GRU ConvNets  pretrained Inception-V3 + GRU + FC  NA  77.30 
[108]  3D-CNN bi-GRU  NA  NA  99.40 
[112]  2D CNN-LSTM  18  softmax  99.00 
[113]  1D CNN-LSTM  7  sigmoid  89.73 
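The CNN-RNN hybrids above pair convolutional feature extraction with recurrent temporal modeling. To make that division of labor concrete, the following NumPy sketch runs a single forward pass of a 1D convolutional layer feeding a GRU cell; the weights are random and all shapes (window length, filter counts, hidden size) are illustrative assumptions, so this is a structural sketch rather than any of the cited architectures.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def conv1d(x, kernels, stride=1):
    """x: (time, channels); kernels: (n_filters, k, channels) -> (time', n_filters)."""
    k = kernels.shape[1]
    steps = (x.shape[0] - k) // stride + 1
    out = np.stack([np.tensordot(x[i*stride:i*stride+k], kernels, axes=([0, 1], [1, 2]))
                    for i in range(steps)])
    return np.maximum(out, 0.0)  # ReLU

class GRUCell:
    def __init__(self, n_in, n_hidden):
        s = 1.0 / np.sqrt(n_in + n_hidden)
        self.Wz, self.Wr, self.Wh = (rng.uniform(-s, s, (n_in + n_hidden, n_hidden))
                                     for _ in range(3))

    def run(self, seq):
        h = np.zeros(self.Wz.shape[1])
        for x in seq:
            xh = np.concatenate([x, h])
            z = sigmoid(xh @ self.Wz)                              # update gate
            r = sigmoid(xh @ self.Wr)                              # reset gate
            h_tilde = np.tanh(np.concatenate([x, r * h]) @ self.Wh)
            h = (1 - z) * h + z * h_tilde
        return h

# One 2 s single-channel window at an assumed 128 Hz sampling rate.
x = rng.normal(size=(256, 1))
feats = conv1d(x, kernels=rng.normal(0, 0.1, size=(8, 5, 1)), stride=2)  # (126, 8)
h = GRUCell(n_in=8, n_hidden=16).run(feats)
logit = sigmoid(h @ rng.normal(0, 0.1, size=16))  # seizure vs. normal score
print(feats.shape, h.shape, float(logit))
```

The convolution shortens and widens the sequence into local feature vectors; the GRU then summarizes them into a fixed-length state for classification.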
II-C6 CNN-AE
In addition to finding nearby patterns, convolutional layers can reduce the number of parameters in structures such as autoencoders. These two properties make the combination suitable for many tasks, including unsupervised feature extraction for epileptic seizure detection. A novel approach based on CNN-AE was presented by Yuan et al. [114]. At the feature extraction stage, a deep AE and a 2D-CNN were used to extract the unsupervised and supervised features, respectively: the unsupervised features were obtained directly from the input signals, and the supervised features were acquired from the spectrogram of the signals. Finally, a softmax classifier was utilized for classification and achieved 94.37% accuracy. In another investigation, Yuan et al. [116] proposed an approach called Deep Fusional Attention Network (DFAN), which extracts channel-aware representations from multi-channel EEG signals. They developed a fusional attention layer that uses a fusional gate to integrate multi-view information and dynamically quantify the contribution of each biomedical channel. A multi-view convolutional encoding layer, in combination with the CNN, was used to train the integrated deep learning model. Table VIII summarizes the related works using CNN-AEs, and Figure 14 shows the accuracies (%) obtained by different researchers.
Work  Networks  Number of Layers  Classifier  Accuracy% 
[114]  CNN-AE  10  softmax  94.37 
[115]  CNN-AE  15  Different Classifiers  92.00 
[117]  1D CNN-AE (feature extraction) + MLP/LSTM/Bi-LSTM (classification)  16 + 3/1/1  sigmoid  100 (2 classes) 
softmax  99.33 (3 classes)  
[118]  CNN-ASAE  8  LR  66.00 
CNN-AAE  7  68.00  
[116]  CNN-AE  NA  softmax  96.22 
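As a minimal sketch of the unsupervised-feature idea behind these AE hybrids, the snippet below trains a tiny fully connected autoencoder by gradient descent to compress toy signal windows into a low-dimensional code. The sizes, data, and learning rate are illustrative assumptions, not details of the cited CNN-AE models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy signals: 200 windows of 32 samples lying near a low-dimensional manifold.
t = np.linspace(0, 1, 32)
data = np.stack([np.sin(2 * np.pi * (3 + rng.random()) * t) for _ in range(200)])

n_in, n_code, lr = 32, 4, 0.05
W1 = rng.normal(0, 0.1, (n_in, n_code)); b1 = np.zeros(n_code)
W2 = rng.normal(0, 0.1, (n_code, n_in)); b2 = np.zeros(n_in)

def forward(x):
    code = np.tanh(x @ W1 + b1)   # encoder -> compressed features
    recon = code @ W2 + b2        # linear decoder
    return code, recon

losses = []
for _ in range(500):
    code, recon = forward(data)
    err = recon - data
    losses.append(np.mean(err ** 2))
    # backpropagate the reconstruction error through decoder and encoder
    dW2 = code.T @ err / len(data); db2 = err.mean(axis=0)
    dcode = err @ W2.T * (1 - code ** 2)
    dW1 = data.T @ dcode / len(data); db1 = dcode.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The learned 4-dimensional code plays the role of the unsupervised features that, in the cited works, are handed to a classifier such as softmax.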
III Non-EEG-Based Epileptic Seizure Detection
III-A Medical Imaging Methods
Various deep learning models have been developed to detect epileptic seizures using MRI, structural MRI (sMRI), functional MRI (fMRI), resting-state fMRI (rs-fMRI), and PET scans, with or without EEG signals [119, 120, 121, 122, 123, 124, 125, 126]. These models outperformed the conventional models in automatic detection and monitoring of the disease. However, owing to the nature of and difficulties in using imaging methods, these models are mostly employed for seizure localization and detection.
The authors of [119] proposed automatic localization and detection of Focal Cortical Dysplasia (FCD) from MRI scans using a CNN; despite progress in the analysis of MRI scans, the FCD detection rate is only 50%. Gill et al. [120] proposed a CNN-based algorithm with feature learning capability to detect FCD automatically. The researchers in [121] designed DeepIED, based on deep learning and EEG-fMRI scans of epilepsy patients, combining the general linear model with EEG-fMRI techniques to estimate the epileptogenic zone. Hosseini et al. [122] proposed an edge-computing autonomic framework for evaluation, regulation, and monitoring of the epileptic brain, in which the epileptogenic network is estimated using rs-fMRI and EEG. Shiri et al. [126] presented a technique for direct attenuation correction of PET images from emission data via a CNN-AE; nineteen radiomic features from 83 brain regions were evaluated for image quantification via the Hammersmith atlas. Finally, the summary of related works using medical imaging methods and deep learning is shown in Table IX.
Work  Networks  Number of Layers  Classifier  Accuracy% 
[119]  2D-CNN  30  sigmoid  82.50 
[120]  2D-CNN  11  softmax  NA 
[121]  ResNet  31  softmax  NA 
Triplet  
[122]  2D-CNN  NA  SVM  NA 
[123]  2D-CNN  11  softmax  89.80 
3D-CNN  82.50  
[124]  2D-CNN  NA  NA  NA 
[125]  ResNet  14  sigmoid  98.22 
VGGNet  
Inception-V3  
SVGG-C3D  
[126]  Deep Direct Attenuation Correction (Deep-DAC)  44  tanh  NA 
III-B Other Detection Methods
RaviPrakash et al. [113] introduced a deep learning-based algorithm for ECoG-based functional mapping (ECoG-FM) to identify the eloquent language cortex; however, the success rate of ECoG-FM is low compared with electrocortical stimulation mapping (ESM). In another work, Rosas-Romero et al. [127] used fNIRS to detect epileptic seizures and obtained better performance than with conventional EEG signals.
IV Hardware and Software Used for Epileptic Seizure Detection
Their high performance and robustness to noise have made deep learning algorithms suitable for commercial products, and various such products have been developed, including applications and hardware for diagnosing epileptic seizures. In the first study investigated, a brain-computer interface (BCI) system using an AE for epileptic seizure detection was developed by Hosseini et al. [103]. In another study, Singh et al. [104] presented a utilitarian product for the diagnosis of epileptic seizures comprising a user segment and a cloud segment. The block diagram of the system proposed by Singh et al. is shown in Figure 15.
Kiral-Kornek et al. [128] demonstrated that deep learning combined with neuromorphic hardware could yield a wearable, real-time, always-on, patient-specific seizure warning system with low power consumption and reliable long-term performance.
V Discussion
Anticipation and timely recognition of epileptic seizures are essential, as they directly influence the quality of life of patients with this disease and can enhance their confidence at all stages of life. Numerous studies have been carried out on automated diagnosis of epileptic seizures, but many did not use graphics processing units (GPUs) and hence may not be usable in real-time applications. So far, no efficient software programs or functional hardware have been implemented to recognize the disease. Until recently, many machine learning methods for automatic seizure detection were proposed that cannot be used in real time. Recent years of research into the diagnosis of epileptic seizures have led to the emergence of deep learning algorithms, and experts in artificial intelligence and signal processing are confident that these methods can lead to concrete and functional tools. Table X in the Appendix gives an overview of works in this area, including the type of dataset used, implementation tool, preprocessing, deep learning network, and evaluation methods utilized.
As shown in this study, various deep learning structures have been applied for epileptic seizure detection, yet none is clearly superior to the others. The best structure should be chosen carefully based on the dataset and the problem characteristics, such as the need for real-time detection, the minimum acceptable accuracy, or the use of pretrained models. Because the published models have been developed on different datasets, it is difficult to compare them directly. Overall, one of the most important advantages of deep learning algorithms is their high performance, which is why such models have been widely used in many applications. Another advantage is their robustness to noise, so noise removal can be omitted in many applications. However, they need more data to train, and training takes time; developing a robust model is therefore time-consuming and requires a huge amount of data.
VI Challenges
The challenges in the automated detection of epileptic seizures using deep learning are as follows. First, many datasets contain only selected segments of EEG signals, which is not suitable for real-world applications where detection must be performed on real-time signals, and clinical datasets are usually not publicly available. Second, only a few datasets, with different sampling frequencies, are available in this area, and the amount of data available to train the models is not sufficient to obtain robust and accurate models; hence, huge public datasets need to be made available. Lastly, deep learning models require massive computational resources, which are expensive and not accessible to everyone. Researchers also need to focus on the early detection of epilepsy (interictal periods) and on seizure prediction, which would significantly improve the quality of life of patients and their family members.
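Detection on continuous real-time recordings, as called for above, is commonly framed as classifying fixed-length sliding windows of the signal. The sketch below shows such a segmentation step; the window length, overlap, and 256 Hz sampling rate are illustrative assumptions.

```python
import numpy as np

def sliding_windows(signal, fs, win_sec=2.0, overlap=0.5):
    """Split a continuous 1-D recording into fixed-length, overlapping windows.

    Each window would be classified independently; consecutive positive
    windows can then be merged into a single seizure event.
    """
    win = int(win_sec * fs)
    hop = int(win * (1 - overlap))
    starts = range(0, len(signal) - win + 1, hop)
    return np.stack([signal[s:s + win] for s in starts])

# 10 s of synthetic single-channel EEG at an assumed 256 Hz sampling rate.
fs = 256
x = np.random.default_rng(0).normal(size=10 * fs)
windows = sliding_windows(x, fs)   # shape: (n_windows, 512)
print(windows.shape)
```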
VII Conclusion and Future Works
In this paper, a comprehensive review of work in the field of epileptic seizure detection using various deep learning techniques, such as CNNs, RNNs, and AEs, is presented. Various screening methods based on EEG and MRI have been developed, and we have investigated the practical deep learning applications and hardware used for diagnosing epileptic seizures. It is very encouraging that much future research will concentrate on hardware and practical applications to aid in the accurate detection of such diseases. Functional hardware has also been utilized to boost the performance of detection strategies. Furthermore, models can be placed in the cloud by hospitals, so handheld, mobile, or wearable devices may be equipped with such models, with the computations performed by cloud servers. Patients may also benefit from predictive models for epileptic seizures and take timely measures to avert them. On detection of an epileptic seizure, alert messages can be sent to the family, relatives, and the concerned hospital and doctor through handheld or wearable devices, so that the patient can receive proper treatment in time. Moreover, a cap fitted with EEG electrodes can acquire the EEG signals and send them to a model kept in the cloud to achieve real-time detection. Additionally, if the early stage of a seizure can be detected from interictal periods of EEG signals, the patient can take medication immediately and prevent the seizure. This field requires further research combining different screening methods for more precise and faster detection of epileptic seizures, as well as semi-supervised and unsupervised methods to overcome dataset size limits. Finally, publicly available comprehensive datasets can help to develop accurate and robust models that detect seizures at an early stage.
Appendix A
Table X shows a detailed summary of the deep learning methods employed for automated detection of epileptic seizures.
Work  Dataset  Tools  Preprocessing  Network  K-fold  Classifier  Accuracy% 
[29]  Clinical  NA  Spectrogram  2D-CNN  NA  LR  87.51 
[61]  Clinical  MATLAB  Normalization  2D-CNN  NA  softmax  NA 
[62]  Clinical  NA  Filtering  1D-CNN with 2D-CNN  NA  sigmoid  90.50 
CHB-MIT  85.60  
[64]  Clinical  Octave  Filtering, Re-referencing, Down-sampling  2D-CNN  NA  softmax  NA 
Keras  
Theano  
[65]  TUH EEG  NA  Filtering  CNN-RNN  NA  Different activation functions  NA 
Clinical  
[46]  TUH EEG  NA  Different methods  1D-CNN-GRU  NA  softmax  99.16 
[30]  Clinical  Keras  Down-sampling, Z-normalization, augmentation  SeizNet  NA  NA  NA 
[31]  Clinical  Python 3.6  Z-score normalization, STFT  1D-CNN  NA  softmax  NA 
PyTorch  2D-CNN  
[32]  CHB-MIT  PyTorch  Visualization  2D-CNN  NA  softmax  98.05 
[33]  Clinical  NA  Filtering, Visualization, Normalization  2D-CNN  10  softmax  NA 
[34]  TUH EEG  PyTorch  DivSpec  SeizureNet  5  softmax  NA 
[44]  Clinical  Caffe  Different Methods  FRCNN with 2D-CNN  5  SVM  95.19 
OpenCV  
Keras  FRCNN with 2D-CNN-LSTM  sigmoid  
Theano  
[35]  TUH EEG  TensorFlow  Feature Extraction  2D-CNN  10  softmax  74.00 
[68]  Bern-Barcelona  Octave  Filtering, EMD, DWT, Fourier  2D-CNN  5  sigmoid  99.50 
Clinical  Keras  softmax  
[36]  Bern-Barcelona  TensorFlow  STFT, Z-score Normalization  2D-CNN  10  softmax  91.80 
[37]  Clinical  NA  STFT  TGCN  NA  sigmoid  NA 
[38]  Bonn  NA  DWT  2D-CNN  10  softmax  100 
[69]  Bonn  Keras  CWT  2D-CNN  10  softmax  100 
[39]  Bonn  MATLAB  Filtering  2D-CNN  NA  softmax  99.60 
90.10  
[70]  CHB-MIT  MATLAB  Over-sampling, FFT, WPD  2D-CNN  5  MV-TSK-FS  98.35 
TensorFlow  3D-CNN  
[40]  CHB-MIT  NA  Spatial Representation  2D-CNN  NA  softmax  99.48 
[41]  Clinical  MATLAB  Different Methods  2D-CNN  10  sigmoid  NA 
RF  
[72]  CHB-MIT  NA  MAS  2D-CNN  5  KELM  99.33 
Clinical  
[49]  Clinical  TensorFlow  Filtering, Down-sampling  1D-CNN  4  softmax, SVM  83.86 
[42]  Bern-Barcelona  Caffe  NA  AlexNet  NA  softmax  100 
GoogleNet  
LeNet  
[43]  UCI  PyTorch  Signal2Image  2D one-layer CNN  NA  DenseNet  85.30 
[45]  Clinical  Chainer  Filtering, Visualization  2D-CNN  NA  softmax  NA 
[73]  Bonn  TensorFlow  Data Augmentation  P-1D-CNN  10  Majority Voting  99.10 
[47]  Bonn  MATLAB  Z-score Normalization  1D-CNN  10  softmax  86.67 
[74]  CHB-MIT  NA  Filtering, Augmentation  MPCNN  NA  softmax  NA 
[48]  Clinical  Keras  Down-sampling, Filtering  1D-FCNN  5  softmax  NA 
[76]  TUH EEG  Keras  Normalization and Standardization  1D-CNN  NA  softmax  79.34 
[75]  Clinical  Theano  Filtering  1D-CNN  NA  Binary LR  NA 
Lasagne Library  
[50]  CHB-MIT  NA  DWT, Feature Extraction, Normalization  1D-CNN  10  NA  99.07 
[51]  Bonn  NA  DWT, Normalization  1D-CNN  5  sigmoid  97.27 
[52]  Bonn  NA  Normalization  1D-TCNN  NA  NA  100 
[53]  Bonn  NA  EMD, MPF  1D-CNN  10  softmax  98.60 
[54]  CHB-MIT  NA  Windowing  IndRNN  10  NA  87.00 
[78]  Bonn  TensorFlow  Filtering, Z-score Normalization  1D-CNN  NA  softmax  99.00 
Bern-Barcelona  91.80  
[55]  CHB-MIT  PyTorch  Filtering  1D-PCMCNN  5  softmax  NA 
Clinical  
[79]  CHB-MIT  NA  MIDS, WGANs  1D-CNN  NA  softmax  84.00 
[56]  Clinical  NA  Down-sampling, PSD, FFT  1D-CNN  4  sigmoid  86.29 
[57]  CHB-MIT  TensorFlow  Filtering  1D-CNN  4  softmax  NA 
[58]  NA  Keras  Down-sampling, Filtering, Data Augmentation  CNN-BP  5  sigmoid  NA 
TensorFlow  
MATLAB  
[59]  Clinical  NA  Filtering, DWT  1D-CNN  NA  sigmoid  NA 
LSTM  RF  
GRU  SVM  
[83]  CHB-MIT  MATLAB  Filtering, Montage Mapping  DRNN  NA  MLP  NA 
[129]  Bonn  NA  Filtering  LSTM  NA  softmax  100 
[84]  Bonn  MATLAB  Filtering  LSTM  3  softmax  100 
Keras  5  
TensorFlow  10  
[85]  Bonn  Keras  Windowing  LSTM  10  sigmoid  91.25 
[86]  Bonn  MATLAB  Filtering  LSTM  3  softmax  100 
Keras  5  
TensorFlow  10  
[88]  Freiburg  Anaconda Navigator  Normalization, Filtering  LSTM  5  softmax  97.75 
[80]  CHB-MIT  NA  Windowing  ADIndRNN  10  NA  88.70 
Bonn  
[81]  Bonn  Keras  Auto-correlation  GRU  NA  LR  98 
[82]  TUH EEG  NA  TCP  ChronoNet  NA  softmax  90.60 
[89]  Clinical  NA  Windowing  AE with EM-PCA  NA  GA  93.92 
[90]  Bonn  MATLAB  Filtering, HWPT, FD  AE  NA  softmax  98.67 
[96]  Clinical  TensorFlow  Down-sampling, Filtering, Normalization  AE  NA  sigmoid  NA 
[97]  CHB-MIT  NA  STFT  SSDA  NA  softmax  93.82 
[91]  Bonn  MATLAB  Z-score Normalization, Standardization  DSAE  NA  LR  100 
[92]  TUH EEG  Open-source Toolkits  Different Methods  SDA  NA  LR  NA 
Theano  
[93]  Bonn  NA  Filtering  SAE  NA  SVM  100 
[98]  Bonn  NA  Normalization  SSAE  NA  softmax  100 
[94]  CHB-MIT  Theano  Scalogram  Wave2Vec  NA  softmax  93.92 
[114]  CHB-MIT  PyTorch  Data Augmentation, STFT  CNN-AE  5  softmax  94.37 
[99]  Clinical  NA  Filtering, CWT, Feature Extraction  SAE  NA  softmax  86.50 
[100]  Bonn  NA  Taguchi Method  SSAE  NA  softmax  100 
[101]  Clinical  NA  Dimension reduction, ESD  DeSAE  NA  softmax  100 
[102]  Bonn  NA  DWT  SAE  NA  softmax  96.00 
[95]  CHB-MIT  NA  Different Methods  mSSDA  NA  softmax  96.61 
[103]  Clinical  MATLAB  PCA, IICA  SSAE  NA  softmax  94 
[104]  Bonn  MATLAB  Windowing  SAE  NA  softmax  88.80 
[105]  Clinical  MATLAB  DWT  DBN  NA  NA  96.87 
[106]  Clinical  Theano  Normalization, Feature Extraction, Standardization  DBN  NA  LR  NA 
SVM  
KNN  
[109]  CHB-MIT  NA  Image-based Representation  2D CNN-LSTM  NA  NA  NA 
[107]  Clinical  TensorFlow  Filtering  ST-GRU ConvNets  NA  NA  77.30 
[108]  CHB-MIT  NA  STFT, 2D-mapping  3D-CNN with Bi-GRU  NA  NA  99.40 
Clinical  
[112]  CHB-MIT  NA  Visualization  2D-CNN-LSTM  NA  softmax  99.00 
[113]  Clinical ECoG  NA  Filtering  1D-CNN-LSTM  5  sigmoid  89.73 
[115]  CHB-MIT  Scikit-Learn  Channel Selection  CNN-AE  5  Different Methods  92.00 
Bonn  10  
[117]  Bonn  NA  Windowing  1D-CNN with Bi-LSTM  NA  softmax  99.33 
sigmoid  100  
[118]  Clinical  Theano  Mapping  ASAE-CNN  NA  LR  66.00 
AAE-CNN  68.00  
[116]  CHB-MIT  PyTorch  STFT  CNN-AE  5  softmax  96.22 
[119]  SCTIMST  FSL  Noise reduction with BM3D algorithm, Skull-stripping, Segmentation, Post-processing  2D-CNN  5  sigmoid  NA 
Keras  
TensorFlow  
[120]  Clinical MRI  NA  Different Methods  2D-CNN  5  softmax  NA 
[121]  Clinical MRI  Brain Vision Analyzer  Filtering, ICA, BCG, GLM, MCS  ResNet  NA  softmax  NA 
Triplet  
[122]  ECoG Dataset  GIFT  Different Methods  2D-CNN  NA  SVM  NA 
rs-fMRI Dataset  FSL  
FreeSurfer  
[123]  Clinical MRI  NA  Scaling Down  3D-CNN  5  softmax  89.80 
[124]  Clinical MRI  FSL  Connectivity Feature Extraction  2D-CNN  NA  NA  NA 
[125]  ImageNet  DPABI  ROI, Normalization, AAL, CNNI, Down-sampling, NNI (3D images)  2D-ResNet  NA  sigmoid  98.22 
Pulmonary nodules (Kaggle)  Python  2D-VGGNet  
Keras  2D-Inception-V3  
Clinical PET  TensorFlow  3D-SVGG-C3D  
[126]  Clinical PET  TensorFlow  OSEM, Data Augmentation, Radiomics Features  DAC  NA  tanh  NA 