Article

Stochastic Selection of Activation Layers for Convolutional Neural Networks

Loris Nanni 1, Alessandra Lumini 2, Stefano Ghidoni 1 and Gianluca Maguolo 1
1 Department of Information Engineering, University of Padua, viale Gradenigo 6, 35131 Padua, Italy
2 DISI, Università di Bologna, Via dell’università 50, 47521 Cesena, Italy
* Author to whom correspondence should be addressed.
Sensors 2020, 20(6), 1626; https://doi.org/10.3390/s20061626
Submission received: 14 February 2020 / Revised: 11 March 2020 / Accepted: 12 March 2020 / Published: 14 March 2020
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing)

Abstract

In recent years, the field of deep learning has achieved considerable success in pattern recognition, image segmentation, and many other classification fields. There are many studies and practical applications of deep learning on image, video, or text classification. Activation functions play a crucial role in the discriminative capabilities of deep neural networks, and the design of new “static” or “dynamic” activation functions is an active area of research. The main difference between “static” and “dynamic” functions is that the first class of activations treats all neurons and layers identically, while the second class learns the parameters of the activation function independently for each layer or even each neuron. Although “dynamic” activation functions perform better in some applications, the increased number of trainable parameters requires more computational time and can lead to overfitting. In this work, we propose a mixture of “static” and “dynamic” activation functions, which are stochastically selected at each layer. Our approach to model design is to replace some layers of the best performing CNN models, with the aim of obtaining new models to be used as stand-alone networks or as components of an ensemble. Specifically, we propose to replace each activation layer of a CNN (usually a ReLU layer) by an activation function stochastically drawn from a given set: in this way, the resulting CNN has a different set of activation function layers.

1. Introduction

Deep neural networks have become extremely popular as they achieve state-of-the-art performance on a variety of important applications including image classification, image segmentation, language processing, and computer vision [1]. Deep neural networks typically have a set of linear components, whose parameters are learned to fit the data, and a set of nonlinearities, which are pre-specified, typically in the form of a sigmoid, a tanh function, a rectified linear unit, or a max-pooling function. The presence of nonlinear activation functions at each neuron is essential to give the network the ability to approximate arbitrarily complex functions [2], and their choice affects network accuracy and sometimes the speed of training.
In this paper, we perform a large-scale empirical comparison of different activation functions across a variety of image classification problems and an image segmentation problem. Starting from two of the best performing models, i.e., ResNet50 [3] for the classification task and DeepLabv3+ [4] for the segmentation task, we compare different approaches for replacing activation layers and different methods for building ensembles of CNNs obtained by varying the activation layers.
After presenting and comparing several activation functions, we propose a new model based on the use of different activation functions at different layers of the network: to this aim, we propose a method for the stochastic selection of the activation function that replaces each activation layer of the starting network. The activation functions are randomly selected from a set of nine approaches, including the most effective ones. After training the new models on the target problem, they are fused together to build an ensemble of CNNs. It is well known in the literature [5] that networks trained using backpropagation are unstable; this behavior can be exploited to build an ensemble of classifiers. These networks are partially independent, and their fusion permits boosting the performance of a stand-alone network.
The proposed framework for ensemble creation is evaluated on two different applications: image classification and image segmentation. In the image classification field, we deal with several medical problems by including 13 image classification datasets in our benchmark. Biomedical image retrieval is a challenging problem due to the varying contrast and size of structures in the images [6]. CNNs have already been used on several medical datasets, reaching very high performance, including keratinocyte carcinoma and malignant melanoma detection [7], sub-cellular and stem cell image classification [8], thyroid nodule classification from ultrasound images [9], and breast cancer recognition [10]. Our testing protocol includes fine-tuning of each model on each dataset followed by evaluation and comparison on the test sets: our experiments show that the proposed ensembles work well in all the tested problems, achieving state-of-the-art classification performance [11].
In the image segmentation field, we deal with the skin segmentation problem: the discrimination of skin and non-skin regions in a digital image has a wide range of applications, including face detection [12], body tracking [13], gesture recognition [14], and objectionable content filtering [15]. Skin detection also has great relevance in the medical field, where it is employed as a component of face detection or body tracking: for example, in the remote photoplethysmography (rPPG) problem [16], it is a component of a system that estimates the heart rate of a subject given a video stream of his/her face. In our experiments, we compare several approaches trained once on a small dataset of only 2000 labeled images, while testing is performed on 11 different datasets including images from very different applications. The reported results show that the proposed ensembles reach state-of-the-art performance [17] in most of the benchmark datasets even without ad hoc tuning.
The code developed for this work will be available at https://github.com/LorisNanni.

2. Literature Review

In recent years, deep learning has gained increasing attention in several computer vision applications, such as image classification and retrieval, object detection, image segmentation, and many others [18]. CNNs are deep neural networks designed to work similarly to the human brain in visual perception: CNNs are able to extract meaningful features from an image in order to classify the image as a whole. They are composed of several types of layers: convolutional layers, activation layers, subsampling layers, and fully connected layers [19].
Most recent architectures have a substantially higher number of layers and parameters, which gives those models much more representation learning capability. However, a large number of parameters can produce overfitting. This problem can be mitigated by introducing regularization techniques, data augmentation, and better performing activation functions.
In particular, the purpose of activation layers is to decide whether a neuron should fire or not, according to a nonlinear transformation of the input signal. The design of new activation functions to improve training speed and network accuracy is an active area of research [20,21]. Recently, the sigmoid and hyperbolic tangent, which were the most widely used activation functions, have been replaced by Rectified Linear Units (ReLU) [22]: ReLU is a piecewise linear function equivalent to the identity for positive inputs and zero for negative ones. Thanks to the good performance of ReLU and the fact that it is fast, effective, and simple to evaluate, several alternatives to the standard ReLU function have been proposed in the literature. The best-known “static” activation functions are: Leaky ReLU [23], an activation function equal to ReLU for positive inputs but having a very small slope α > 0 for negative ones; ELU [21], which exponentially decreases to a limit point α in the negative space; and SELU [24], a scaled version of ELU (by a constant λ). Moreover, in [25], a randomized leaky rectified linear unit (RLReLU) is proposed, which uses a nonlinear random coefficient instead of a linear one. The choice of optimal activation functions in a CNN is an important issue because it is directly related to the resulting success rates. Unfortunately, an analytical approach able to select optimal activation functions for a given problem is not available; therefore, several approaches try to determine them by trial and error. “Dynamic” activation functions are a class of functions whose parameters, differently from “static” ones, are learned during training. Parametric ReLU (PReLU) [26] is a Leaky ReLU where the slope α is learned; Adaptive Piecewise Linear Unit (APLU) [20] is a piecewise linear activation function with learnable parameters: it learns an independent piecewise linear function for each neuron during the training process. Another “dynamic” function is proposed in [27], whose shape is learned by a linear regression model. In [28], two different variants are proposed: a “linear sigmoidal activation”, which is a fixed-structure function whose coefficients are static, and its “dynamic” variant, named “adaptive linear sigmoidal activation”, which can adapt itself according to the complexity of the given data. Two of the best performing functions are Swish [29], which is the combination of a sigmoid function and a trainable parameter, and the recent Mexican ReLU (MeLU) [30], a piecewise linear activation function that is the sum of PReLU and multiple Mexican hat functions.
The main difference between “static” and “dynamic” functions is that the first class of activations considers all the neurons and layers as identical, while the second class learns parameters independently for each layer or even each neuron. Although “dynamic” activation functions perform better than “static” ones in some applications, their parametric nature increases the number of trainable parameters and thus the possibility of overfitting. In this work, we propose a mixture of “static” and “dynamic” activation functions.

3. Activation Functions

This study considers 10 different activation functions (more details, and specific references for each function, are given in [30]), namely the widely used ReLU and several variants. The functions used are summarized in Table 1, while their analytical expressions and derivatives are given below. Several dynamic activation functions depend on a hyperparameter, named maxInput, which is a normalization factor to better deal with input images whose values vary in [0,1] or [0,255].
The well-known ReLU activation function, for the generic couple of points (x_i, y_i), is defined as:
y_i = f(x_i) = \begin{cases} 0, & x_i < 0 \\ x_i, & x_i \geq 0 \end{cases}
and its derivative is easily evaluated as:
\frac{dy_i}{dx_i} = f'(x_i) = \begin{cases} 0, & x_i < 0 \\ 1, & x_i \geq 0 \end{cases}
This work also considers several variants of the original ReLU function. The first variant is the Leaky ReLU function, defined as:
y_i = f(x_i) = \begin{cases} a x_i, & x_i < 0 \\ x_i, & x_i \geq 0 \end{cases}
where the parameter a is a small real number (0.01 in this study). The main advantage of Leaky ReLU is that the gradient is always positive (no point has a zero gradient):
\frac{dy_i}{dx_i} = f'(x_i) = \begin{cases} a, & x_i < 0 \\ 1, & x_i \geq 0 \end{cases}
The second variant of the ReLU function considered in this work is the Exponential Linear Unit (ELU) [21], which is defined as:
y_i = f(x_i) = \begin{cases} a(\exp(x_i) - 1), & x_i < 0 \\ x_i, & x_i \geq 0 \end{cases}
where a is a real number (1 in this study). ELU has a gradient that is always positive:
\frac{dy_i}{dx_i} = f'(x_i) = \begin{cases} a \exp(x_i), & x_i < 0 \\ 1, & x_i \geq 0 \end{cases}
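For reference, a minimal NumPy sketch of these three “static” functions and their derivatives is given below (an illustration of the formulas above, not the layers used in our networks):

```python
import numpy as np

def relu(x):
    return np.where(x < 0, 0.0, x)

def relu_grad(x):
    return np.where(x < 0, 0.0, 1.0)

def leaky_relu(x, a=0.01):
    # a is the small slope used for negative inputs (0.01 in this study)
    return np.where(x < 0, a * x, x)

def leaky_relu_grad(x, a=0.01):
    return np.where(x < 0, a, 1.0)

def elu(x, a=1.0):
    # a = 1 in this study
    return np.where(x < 0, a * (np.exp(x) - 1.0), x)

def elu_grad(x, a=1.0):
    return np.where(x < 0, a * np.exp(x), 1.0)
```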
The Parametric ReLU (PReLU) is the third variant that is considered here. It is defined by:
y_i = f(x_i) = \begin{cases} a_c x_i, & x_i < 0 \\ x_i, & x_i \geq 0 \end{cases}
where a_c is a set of real numbers, one for each input channel. PReLU is similar to Leaky ReLU, the only difference being that the a_c parameters are learned. The gradient of PReLU is:
\frac{dy_i}{dx_i} = f'(x_i) = \begin{cases} a_c, & x_i < 0 \\ 1, & x_i \geq 0 \end{cases} \quad \text{and} \quad \frac{dy_i}{da_c} = \begin{cases} x_i, & x_i < 0 \\ 0, & x_i \geq 0 \end{cases}
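Since the slopes a_c are learned by backpropagation, in a modern framework PReLU is naturally expressed as a module with one trainable parameter per input channel; a minimal PyTorch sketch (an illustration only, PyTorch also provides a built-in nn.PReLU) is the following:

```python
import torch
import torch.nn as nn

class ChannelPReLU(nn.Module):
    # f(x) = a_c * x for x < 0 and x otherwise, with one learnable slope
    # per channel, initialized to 0 as in Table 1; autograd supplies the
    # gradients dy/dx and dy/da_c given above.
    def __init__(self, num_channels):
        super().__init__()
        self.a = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x):               # x has shape (batch, channels, H, W)
        a = self.a.view(1, -1, 1, 1)    # broadcast one slope per channel
        return torch.where(x < 0, a * x, x)
```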
S-Shaped ReLU (SReLU) is the fourth variant. It is defined as a piecewise linear function:
y_i = f(x_i) = \begin{cases} t_l + a_l (x_i - t_l), & x_i < t_l \\ x_i, & t_l \leq x_i \leq t_r \\ t_r + a_r (x_i - t_r), & x_i > t_r \end{cases}
In this case, four learnable parameters are used, t_l, t_r, a_l, and a_r, expressed as real numbers. They are initialized to a_l = 0, t_l = 0, a_r = 1, and t_r = maxInput. SReLU is highly flexible thanks to the rather large number of tunable parameters. The gradients are given by:
\frac{dy_i}{dx_i} = f'(x_i) = \begin{cases} a_l, & x_i < t_l \\ 1, & t_l \leq x_i \leq t_r \\ a_r, & x_i > t_r \end{cases}
\frac{dy_i}{da_l} = \begin{cases} x_i - t_l, & x_i < t_l \\ 0, & x_i \geq t_l \end{cases} \quad \text{and} \quad \frac{dy_i}{dt_l} = \begin{cases} a_l, & x_i < t_l \\ 0, & x_i \geq t_l \end{cases}
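A corresponding PyTorch sketch of SReLU with the initialization of Table 1 is shown below (keeping one parameter set per channel is an assumption of this sketch, not a detail taken from [31]):

```python
import torch
import torch.nn as nn

class SReLU(nn.Module):
    # Piecewise linear activation with learnable t_l, a_l, t_r, a_r per
    # channel, initialized to a_l = 0, t_l = 0, a_r = 1, t_r = maxInput.
    def __init__(self, num_channels, max_input=1.0):
        super().__init__()
        self.t_l = nn.Parameter(torch.zeros(num_channels))
        self.a_l = nn.Parameter(torch.zeros(num_channels))
        self.t_r = nn.Parameter(torch.full((num_channels,), float(max_input)))
        self.a_r = nn.Parameter(torch.ones(num_channels))

    def forward(self, x):
        view = (1, -1, 1, 1)
        t_l, a_l = self.t_l.view(view), self.a_l.view(view)
        t_r, a_r = self.t_r.view(view), self.a_r.view(view)
        y = torch.where(x < t_l, t_l + a_l * (x - t_l), x)
        return torch.where(x > t_r, t_r + a_r * (x - t_r), y)
```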
The fifth variant is APLU (Adaptive Piecewise Linear Unit). As the name suggests, it is a piecewise linear function. It is defined as:
y_i = \mathrm{ReLU}(x_i) + \sum_{c=1}^{n} a_c \min(0, x_i + b_c)
where n is a hyperparameter, set in advance, defining the number of functions (or hinges), and a_c and b_c are real numbers, one for each input channel. The gradients are evaluated as:
\frac{df(x,a)}{da_c} = \begin{cases} x + b_c, & x < b_c \\ 0, & x \geq b_c \end{cases} \quad \text{and} \quad \frac{df(x,a)}{db_c} = \begin{cases} a_c, & x < b_c \\ 0, & x \geq b_c \end{cases}
In our tests, the parameters a_c are initialized to 0, and the points b_c are randomly chosen. We also added an L_2 penalty of 0.001 on the norm of the parameters a_c.
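The following PyTorch sketch implements APLU as written in the equation above (the per-channel layout of a_c and b_c and the random initialization of b_c follow Table 1; the L_2 penalty on a_c must be added to the training loss separately):

```python
import torch
import torch.nn as nn

class APLU(nn.Module):
    # y = ReLU(x) + sum_c a_c * min(0, x + b_c), with n hinges per channel;
    # a_c is initialized to 0 and b_c to random points in [0, maxInput].
    def __init__(self, num_channels, n=3, max_input=1.0):
        super().__init__()
        self.a = nn.Parameter(torch.zeros(n, num_channels))
        self.b = nn.Parameter(torch.rand(n, num_channels) * max_input)

    def forward(self, x):                 # x has shape (batch, channels, H, W)
        y = torch.relu(x)
        for a_c, b_c in zip(self.a, self.b):
            hinge = torch.clamp(x + b_c.view(1, -1, 1, 1), max=0.0)
            y = y + a_c.view(1, -1, 1, 1) * hinge
        return y
```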
An interesting variant is the Mexican ReLU (MeLU), derived from the Mexican hat functions. These are defined as:
\phi_{a,\lambda}(x) = \max(\lambda \cdot maxInput - |x - a \cdot maxInput|, 0)
where a and λ are real numbers. These functions are used to define the MeLU function, based on the definition of the PReLU detailed above:
y_i = \mathrm{MeLU}(x_i) = \mathrm{PReLU}_{c_0}(x_i) + \sum_{j=1}^{k-1} c_j \, \phi_{\alpha_j, \lambda_j}(x_i)
The parameter k represents the number of learnable parameters for each input channel, c_j are the learnable parameters, c_0 is the parameter vector of PReLU, and α_j and λ_j are fixed parameters chosen recursively. The MeLU activation function inherits interesting properties from the Mexican hat functions: it is continuous and piecewise differentiable. ReLU can be seen as a special case of MeLU, obtained when all the c_j parameters are set to 0. This is important because pre-trained networks based on the ReLU function can be enhanced in a simple way using MeLU. Similar substitutions can be made when the source network is based on Leaky ReLU or PReLU.
As previously observed, MeLU is based on a set of learnable parameters. The number of parameters is considerably higher than in SReLU and APLU, making MeLU more adaptable and giving it higher representation power, but also making it more prone to overfitting. The gradient is given by the Mexican hat functions. The MeLU activation function also has a positive impact on the optimization stage.
In our work, the learnable parameters are initialized to 0, meaning that MeLU starts as a plain ReLU function; the peculiar properties of the MeLU function come into play at a later stage of training. The first Mexican hat function has its maximum at 2·maxInput and is equal to zero at 0 and 4·maxInput. The next two functions are chosen to be zero outside the intervals [0, 2·maxInput] and [2·maxInput, 4·maxInput], with the requirement that they have their maxima at maxInput and 3·maxInput, respectively. The parameters α and λ are chosen to fulfill this requirement.
In this work, we test two values of k: the standard value k = 4 for MeLU and a wider version of the function with k = 8 (wMeLU).
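A PyTorch sketch of MeLU built directly from the two equations above is given below (the per-channel coefficient layout is an assumption of this sketch; the fixed α and λ values are those listed in Table 1):

```python
import torch
import torch.nn as nn

def mexican_hat(x, a, lam, max_input=1.0):
    # phi_{a,lambda}(x) = max(lambda*maxInput - |x - a*maxInput|, 0)
    return torch.relu(lam * max_input - torch.abs(x - a * max_input))

class MeLU(nn.Module):
    # PReLU plus k-1 Mexican hat functions with learnable coefficients c_j,
    # all initialized to 0 so that MeLU starts out as a plain ReLU/PReLU.
    def __init__(self, num_channels, alphas, lambdas, max_input=1.0):
        super().__init__()
        self.prelu = nn.PReLU(num_channels, init=0.0)   # the c_0 (PReLU) term
        self.c = nn.Parameter(torch.zeros(len(alphas), num_channels))
        self.alphas, self.lambdas = alphas, lambdas
        self.max_input = max_input

    def forward(self, x):                               # x: (batch, C, H, W)
        y = self.prelu(x)
        for c_j, a_j, l_j in zip(self.c, self.alphas, self.lambdas):
            y = y + c_j.view(1, -1, 1, 1) * mexican_hat(x, a_j, l_j, self.max_input)
        return y

# MeLU (k = 4) and wMeLU (k = 8) differ only in the alpha/lambda lists of Table 1:
melu = MeLU(64, alphas=[2, 1, 3], lambdas=[2, 1, 1])
```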
The Gaussian ReLU, also called GaLU, is the last activation function considered in our work. Its definition is based on the Gaussian type functions:
\phi^g_{a,\lambda}(x) = \max(\lambda \cdot maxInput - |x - a \cdot maxInput|, 0) + \min(|x - a \cdot maxInput - 2\lambda \cdot maxInput| - \lambda \cdot maxInput, 0)
where a   and λ are real numbers. The GaLU activation function is defined as:
y_i = \mathrm{GaLU}(x_i) = \mathrm{PReLU}_{c_0}(x_i) + \sum_{j=1}^{k-1} c_j \, \phi^g_{a_j, \lambda_j}(x_i)
This formulation is similar to the one provided for MeLU and again depends on the parameters a_j and λ_j. As before, the function is defined in this way to provide a good approximation of nonlinear functions. We use k = 4 for GaLU and k = 2 for its “smaller” version, sGaLU.
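GaLU keeps exactly the structure of MeLU and only swaps the basis function; a sketch of the Gaussian-type function, following the equation above, is:

```python
import torch

def gaussian_type(x, a, lam, max_input=1.0):
    # A positive hat centred at a*maxInput followed by a negative "valley"
    # of the same size centred at a*maxInput + 2*lam*maxInput.
    hat = torch.relu(lam * max_input - torch.abs(x - a * max_input))
    valley = torch.clamp(torch.abs(x - a * max_input - 2 * lam * max_input)
                         - lam * max_input, max=0.0)
    return hat + valley
```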
Please note that, to avoid any overfitting, we use the same parameter setting suggested by the original authors for each activation function, as reported in Table 1.

4. Materials and Methods

In this section, we describe both the starting models and the stochastic method proposed to design new CNN models and create ensembles. In the literature, several CNN architectures have been proposed for image classification (AlexNet [32], GoogleNet [33], InceptionV3 [34], VGGNet [35], ResNet [3], and DenseNet [36]) and segmentation problems (SegNet [37], U-Net [38], and Deeplabv3+ [4]). In our experiments, we selected two of the best performing models: ResNet50 [3] for image classification and Deeplabv3+ [4] for segmentation. ResNet50 is a 50-layer network, which introduces a new “network-in-network” architecture using residual layers. ResNet50, which was the winner of ILSVRC 2015, is one of the best performing and most popular architectures used for image classification. In our experiments, all the models for image classification were fine-tuned on the training set of each classification problem according to the model training parameters reported in Table 2. Data augmentation includes random reflection on both axes and two independent random rescales of both axes by two factors uniformly sampled in [1,2].
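A possible implementation of this augmentation is sketched below (an illustration only: the probability of applying each reflection and the use of OpenCV for resizing are assumptions not specified above):

```python
import numpy as np
import cv2

def augment(img):
    # Random reflection on both axes.
    if np.random.rand() < 0.5:
        img = np.flip(img, axis=1).copy()   # horizontal reflection
    if np.random.rand() < 0.5:
        img = np.flip(img, axis=0).copy()   # vertical reflection
    # Two independent random rescales of the axes, with factors drawn in [1, 2].
    fx, fy = np.random.uniform(1.0, 2.0, size=2)
    return cv2.resize(img, None, fx=fx, fy=fy)
```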
For image segmentation purposes, we selected DeepLabv3+ [4], a recent architecture based on atrous convolution, in which the filter is not applied to all adjacent pixels of an image but rather to a spaced-out lattice of pixels. DeepLabv3+ uses four parallel atrous convolutions (each with a different atrous rate) followed by a “Pyramid Pooling” method. Since DeepLabv3+ is based on an encoder–decoder structure, it can be built on top of a powerful pre-trained CNN architecture: in this work, we again selected ResNet50 for this task, although our internal evaluation showed that ResNet101 and ResNet34 achieved similar performance. All the models for skin segmentation were trained on a small dataset of 2000 images using class weighting and the same training parameters, as reported in Table 2.
Given a base model for each task, i.e., ResNet50 for image classification and DeepLabv3+ for skin segmentation, we designed several variants of the initial architecture by replacing all the activation layers (which were ReLU layers in both starting models used in this work) by a different activation function. The stand-alone methods named leakyReLU, ELU, SReLU, APLU, GaLU, sGaLU, PReLU, MeLU, and wMeLU, together with the original model (ReLU), are the 10 models tested in our experiments. Some of them depend on the training parameter maxInput, which was set to 1 if not specified (255 otherwise).
After comparing several activation functions, we propose to design a new model based on the use of different activation functions in different layers. According to the pseudo-code in Figure 1, a RandAct model is obtained using the function StochasticReplacement, applied to an input CNN and a set of activation functions, by randomly replacing all the activation layers of the input model. In our experiments, we considered ResNet50 as the input model for image classification and DeepLabv3+ for image segmentation. However, this method is general and could be applied to any other model. The output models RandAct and RandAct(255) were obtained from the input models using the set of nine alternative activation functions with the maxInput parameter equal to 1 or 255, respectively. To create an ensemble, the function CreateEnsemble is used: first, StochasticReplacement is used to generate N RandAct models, then the models are fine-tuned on the training set, and finally they are fused together in an ensemble using the sum rule. The fusion of CNNs using the sum rule consists in summing the outputs of the last softmax layer; the final decision is then obtained by applying an argmax function. In the segmentation task, we evaluated the sum of the output masks, which is equal to a vote rule at pixel level. The ensembles created and tested in the experimental section are the following (a code sketch of the two procedures is given after the lists below):
  • FusRan10 and FusRan10(255) are ensembles obtained by the fusion of 10 RandAct or RandAct(255) models (i.e., fixing maxInput = 1 or 255).
  • FusRan20 = FusRan10 + FusRan10(255)
  • FusRan3 and FusRan3(255) are the ensembles obtained by the fusion of 3 stochastic models of type RandAct or RandAct(255), respectively.
Moreover, we also tested the following ensembles obtained by the sum rule of the above stand-alone models:
  • FusAct10 and FusAct10(255) are the ensembles obtained by the fusion of all 10 non-random stand-alone models obtained by varying the activation functions, i.e., ReLU, leakyReLU, ELU, SReLU, APLU, GaLU, sGaLU, PReLU, MeLU, and wMeLU (fixing maxInput to 1 or 255).
  • FusAct3 is a lightweight ensemble obtained by the fusion of the best 3 stand-alone models (evaluated on the training set), FusAct3 = wMeLU + MeLU + PReLU for skin classification
  • FusAct3(255) is a lightweight ensemble obtained by the fusion of the best 3 stand-alone methods for image classification, FusAct3(255) = wMeLU(255) + MeLU(255) + SReLU(255).
Finally, we proposed two ensembles obtained by mixing the different types of selection of the activation functions:
  • FusAR20 = FusAct10 + FusRan10
  • FusAR20(255) = FusAct10(255) + FusRan10(255)
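As a concrete illustration, the two procedures of Figure 1 can be sketched as follows for the classification case (a simplified PyTorch sketch, not the implementation used in our experiments: the pool uses built-in layers as stand-ins for the nine activation functions of Section 3, and train_fn is a placeholder for the fine-tuning step with the parameters of Table 2):

```python
import copy
import random
import torch
import torch.nn as nn

# Stand-ins for the nine alternative activation functions of Section 3.
ACTIVATION_POOL = [nn.LeakyReLU, nn.ELU, nn.PReLU]

def replace_activations(module, pool):
    # Recursively replace every ReLU layer by a randomly drawn activation.
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, random.choice(pool)())
        else:
            replace_activations(child, pool)

def stochastic_replacement(model, pool=ACTIVATION_POOL):
    # StochasticReplacement: return a RandAct model derived from `model`.
    rand_act = copy.deepcopy(model)
    replace_activations(rand_act, pool)
    return rand_act

def create_ensemble(model, pool, n, train_fn):
    # CreateEnsemble: generate, fine-tune, and fuse N RandAct models.
    members = [train_fn(stochastic_replacement(model, pool)) for _ in range(n)]
    def predict(x):
        # Sum rule on the softmax outputs, followed by argmax.
        scores = torch.stack([torch.softmax(m(x), dim=1) for m in members])
        return scores.sum(dim=0).argmax(dim=1)
    return predict
```

For image classification, the input model would be a pre-trained ResNet50 (e.g., from torchvision.models) with n = 10 for FusRan10; for segmentation, the same replacement is applied to DeepLabv3+ and the sum rule acts on the output masks (a vote at pixel level).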

5. Results

To evaluate the stand-alone models based on different activation functions, the stochastic method for model and ensemble creation, and the other ensembles described in Section 4, we performed experiments on 13 well-known medical datasets for image classification and 11 datasets for skin segmentation. Table 3 summarizes the 13 datasets for image classification, including a short abbreviation, the dataset name, the number of samples and classes, the size of the images, and the testing protocol. We used five-fold cross-validation (5CV) in 12 out of 13 datasets, while we maintained a three-fold division for the Laryngeal dataset (the same protocol as in [39]). Table 4 summarizes the 11 datasets used for skin segmentation. All models were trained only on the first 2000 images of the ECU dataset [40]; therefore, the other skin datasets were used only for testing (for ECU, only the last 2000 images, not included in the training set, were used for testing).
The evaluation and comparison of the proposed approaches was performed according to two of the most used performance indicators in image classification and skin segmentation: accuracy and F1-measure, respectively. Accuracy is the ratio between the number of correct predictions and the total number of samples, while the F1-measure is the harmonic mean of precision and recall, calculated as F1 = 2tp/(2tp + fn + fp), where tp, fp, and fn are the numbers of true positives, false positives, and false negatives evaluated at pixel level, respectively. According to other works on skin detection, F1 was calculated at pixel level (and not at image level) to be independent of the image size in the different databases. Finally, to validate the experiments, the Wilcoxon signed rank test [56] was used. For our experiments, all images were resized to the input size of the CNN models (i.e., 224 × 224 for ResNet50 and all our variants) before training and testing, and the output mask for skin segmentation was then resized back to the original size.
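For reference, a minimal sketch of the pixel-level F1 computation for a binary skin mask is:

```python
import numpy as np

def f1_pixel(pred, gt):
    # F1 = 2*tp / (2*tp + fn + fp), with tp, fp, fn counted over all pixels
    # of boolean prediction and ground-truth masks of the same shape.
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return 2.0 * tp / (2.0 * tp + fn + fp)
```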
In the first experiment, we evaluated the proposed methods for image classification on the datasets listed in Table 3. Table 5 reports the accuracy obtained by all the tested stand-alone models and ensembles: the last two columns report the average accuracy (Avg) and the rank (evaluated on Avg).
From the results in Table 5, we can draw the following conclusions:
  • All ensembles are ranked before the stand-alone methods: this demonstrates that changing the activation function is a viable method for creating diversity among models.
  • The method named ReLU, which is our baseline since it is the standard implementation of ResNet50, performs very well, but it is not the best performing activation function: many activation functions (with maxInput = 255) perform better than ReLU on average.
  • It is a very valuable result that methods such as wMeLU(255), MeLU(255), and some other stand-alone approaches strongly outperform ReLU. Starting from a pretrained model and changing its activation layers, we obtained a considerable error reduction. This means that our approaches permit boosting the performance of the original ResNet50 on a large set of problems.
  • It is difficult to select a function that wins in all problems. Therefore, a good method to improve performance is to create an ensemble of different models: both FusAct10 and FusAct10(255) work better than each of their single components.
  • Designing the models by means of stochastic activation functions (i.e., RandAct or RandAct(255)) gives valuable results: RandAct is ranked 12th, only two positions worse than the best stand-alone model (wMeLU(255) ranked 10th) and before the baseline ReLU (15th).
  • Moreover, the selection of stochastic activation functions is very valuable for the creation of ensembles: both FusRan10 and FusRan10(255) perform very well compared to all stand-alone models and other ensembles; their fusion FusRan20 = FusRan10 + FusRan10(255) is the best ranked method in these experiments.
  • The two small ensembles FusAct3(255) and FusRan3(255) perform very well; they strongly outperform stand-alone approaches and reach performance comparable with other heavier ensembles (composed of 10 or 20 models).
In the second experiment, we evaluated the proposed methods for skin segmentation on the 11 datasets listed in Table 4. In Table 6, the performance of all the tested stand-alone models and ensembles are reported in terms of F1-measure; the last two columns report the average F1-measure (Avg) and the rank (calculated on the average F1).
From the results in Table 6 it can be derived that:
  • ReLU is the standard DeepLabv3+ segmentation CNN based on a ResNet50 encoder. This is our baseline, since it has shown state-of-the-art performance for skin segmentation [17]. Many stand-alone models based on different activation functions outperform ReLU: in this problem, the activation functions with maxInput = 1 work better than those initialized at 255; therefore, we set maxInput to 1 for the ensembles with three models (FusAct3 and FusRan3).
  • Similar to the image classification experiment, all ensembles work better than any stand-alone approach: FusAR20 is the best ranked method in our experiments, but two “lighter” ensembles, namely FusAct3 and FusAct10, offer very good performance.
  • Similar to the classification problem, the proposed approaches outperform ReLU, i.e., the standard DeepLabv3+ based on ResNet50, a state-of-the-art approach for image segmentation.
  • The reported results show that all the proposed ensembles reach state-of-the-art performance [17] in most of the benchmark datasets: all of them outperform our baseline ReLU.
To give visual evidence of the performance improvement obtained by our ensemble FusRan20 with respect to the baseline ReLU, Figure 2 presents two plots of performance on both problems. Moreover, Figure 3 shows sample output masks from the Pratheepan dataset obtained by our ensemble FusAR20 and by the baseline ReLU, together with the ground truth. In all three sample images, the improvement of the ensemble with respect to our baseline stand-alone method is clearly visible.
Finally, we report some comparisons based on the Wilcoxon signed rank test. In Table 7 and Table 8, we compare the performance of some approaches for classification and segmentation: we selected the most interesting approach for each ensemble size (of course, the approaches can be different in the two problems). The reported p-values confirm the conclusions drawn from Table 5 and Table 6. Moreover, the Wilcoxon signed rank test between FusRan10 and FusAct10 shows that the stochastic ensemble outperforms the other one with a p-value of 0.0166 on our 13 datasets for image classification. Similarly, FusRan10(255) outperforms FusAct10(255) with a p-value of 0.0713 in the image classification problem. This is an experimental demonstration that introducing stochastic selection improves the diversity among classifiers.
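For reference, the FusRan10 vs. FusAct10 comparison can be reproduced from the per-dataset accuracies in Table 5 with a one-sided Wilcoxon signed rank test, as sketched below (the exact p-value may differ slightly from the value reported above depending on how ties are handled):

```python
from scipy.stats import wilcoxon

# Per-dataset accuracies from Table 5 (13 medical image classification datasets).
fus_ran10 = [95.4, 91.3, 95.8, 95.1, 63.0, 64.2, 78.9, 93.8, 98.7, 91.1, 96.5, 90.3, 90.2]
fus_act10 = [93.5, 90.7, 97.2, 92.7, 56.0, 63.9, 77.6, 90.8, 96.3, 91.4, 96.4, 90.0, 90.0]

# One-sided test: does FusRan10 systematically outperform FusAct10?
stat, p_value = wilcoxon(fus_ran10, fus_act10, alternative='greater')
print(p_value)
```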
Finally, using a Titan Xp, the classification time of a ResNet50 is 0.018 s per image; this means that, using an ensemble of 20 CNNs (about 0.36 s per image), it is still possible to classify more than two images per second using a single Titan Xp.

6. Conclusions

In this study, we proposed a method for CNN model design based on changing all the activation layers of the best performing CNN models by stochastic layer replacement. We proposed to replace each activation layer of a CNN by a different activation function stochastically drawn from a given set. This means that the resulting model has different activation function layers. This generation process introduces diversity among models making them suitable for ensemble creation. Interestingly, this design approach has gained very strong performance for ensemble creation: a set of ResNet50-like models designed using stochastic replacement of all activation layers and combined by sum rule strongly outperforms both standard ResNet50 (i.e., with a static ReLU activation function) and a single stochastic model (i.e., RandAct) in our experiments. A large experimental evaluation was carried out on a wide set of benchmark problems both for image classification and image segmentation. Experimental results demonstrate that the proposed idea is very effective to build a high-performance ensemble of CNNs.
Although these first results are limited to a single, albeit high-performing, model, we plan, as future work, to assess the proposed method on a larger class of models, including lighter architectures suitable for mobile devices. The difficulty of studying ensembles of CNNs lies in the enormous computational and memory resources required to conduct such experiments.

Author Contributions

Conceptualization, A.L. and L.N.; methodology, L.N.; software, A.L., S.G., and L.N.; validation, G.M.; formal analysis, A.L.; writing—original draft preparation, A.L. and L.N.; and writing—review and editing, A.L., S.G., and G.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We gratefully acknowledge the support of NVIDIA Corporation for the “NVIDIA Hardware Donation Grant” of a Titan Xp used in this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  2. Cho, Y.; Saul, L.K. Large-margin classification in infinite neural networks. Neural Comput. 2010, 22, 2678–2697. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  4. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European conference on computer vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar]
  5. Hansen, L.K.; Salamon, P. Neural network ensembles. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 993–1001. [Google Scholar] [CrossRef] [Green Version]
  6. Ghosh, P.; Antani, S.; Long, L.R.; Thoma, G.R. Review of medical image retrieval systems and future directions. In Proceedings of the IEEE Symposium on Computer-Based Medical Systems, Bristol, UK, 27–30 June 2011. [Google Scholar]
  7. Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118. [Google Scholar] [CrossRef] [PubMed]
  8. Paci, M.; Nanni, L.; Lahti, A.; Aalto-Setala, K.; Hyttinen, J.; Severi, S. Non-binary coding for texture descriptors in sub-cellular and stem cell image classification. Curr. Bioinform. 2013, 8, 208–219. [Google Scholar] [CrossRef]
  9. Chi, J.; Walia, E.; Babyn, P.; Wang, J.; Groot, G.; Eramian, M. Thyroid nodule classification in ultrasound images by fine-tuning deep convolutional neural network. J. Digit. Imaging 2017, 30, 477–486. [Google Scholar] [CrossRef] [PubMed]
  10. Byra, M. Discriminant analysis of neural style representations for breast lesion classification in ultrasound. Biocybern. Biomed. Eng. 2018, 38, 684–690. [Google Scholar] [CrossRef]
  11. Nanni, L.; Brahnam, S.; Ghidoni, S.; Lumini, A. Bioimage classification with handcrafted and learned features. IEEE ACM Trans. Comput. Biol. Bioinforma. 2018, 16, 874–885. [Google Scholar] [CrossRef]
  12. Hsu, R.L.; Abdel-Mottaleb, M.; Jain, A.K. Face detection in color images. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 696–706. [Google Scholar]
  13. Argyros, A.A.; Lourakis, M.I.A. Real-time tracking of multiple skin-colored objects with a possibly moving camera. Lect. Notes Comput. 2004, 3023, 368–379. [Google Scholar]
  14. Han, J.; Award, G.M.; Sutherland, A.; Wu, H. Automatic skin segmentation for gesture recognition combining region and support vector machine active learning. In Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition, Southampton, UK, 10–12 April 2006; pp. 237–242. [Google Scholar]
  15. Lee, J.-S.; Kuo, Y.-M.; Chung, P.-C.; Chen, E.-L. Naked image detection based on adaptive and extensible skin color model. Pattern Recognit. 2007, 40, 2261–2270. [Google Scholar] [CrossRef]
  16. Paracchini, M.; Marcon, M.; Villa, F.; Tubaro, S. Deep Skin Detection on Low Resolution Grayscale Images. Pattern Recognit. Lett. 2020, 131, 322–328. [Google Scholar] [CrossRef]
  17. Lumini, A.; Nanni, L. Fair comparison of skin detection approaches on publicly available datasets. arXiv 2018, arXiv:1802.02531. [Google Scholar]
  18. Alom, M.Z.; Taha, T.M.; Yakopcic, C.; Westberg, S.; Sidike, P.; Nasrin, M.S.; Hasan, M.; Van Essen, B.C.; Awwal, A.A.S.; Asari, V.K. A State-of-the-Art Survey on Deep Learning Theory and Architectures. Electronics 2019, 8, 292. [Google Scholar] [CrossRef] [Green Version]
  19. Liu, W.; Wang, Z.; Liu, X.; Zeng, N.; Liu, Y.; Alsaadi, F.E. A survey of deep neural network architectures and their applications. Neurocomputing 2017, 234, 11–26. [Google Scholar] [CrossRef]
  20. Agostinelli, F.; Hoffman, M.; Sadowski, P.; Baldi, P. Learning activation functions to improve deep neural networks. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015—Workshop Track Proceedings, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  21. Clevert, D.A.; Unterthiner, T.; Hochreiter, S. Fast and accurate deep network learning by exponential linear units (ELUs). In Proceedings of the 4th International Conference on Learning Representations, ICLR 2016—Conference Track Proceedings, San Juan, WA, USA, 2–4 May 2016. [Google Scholar]
  22. Glorot, X.; Bordes, A.; Bengio, Y. Deep sparse rectifier neural networks. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, Fort Lauderdale, FL, USA, 11–13 April 2011. [Google Scholar]
  23. Maas, A.L.; Hannun, A.Y.; Ng, A.Y. Rectifier nonlinearities improve neural network acoustic models. In Proceedings of the in ICML Workshop on Deep Learning for Audio, Speech and Language Processing, Atlanta, GA, USA, 16 June 2013. [Google Scholar]
  24. Klambauer, G.; Unterthiner, T.; Mayr, A.; Hochreiter, S. Self-normalizing neural networks. In Proceedings of the NIPS, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  25. Xu, B.; Wang, N.; Chen, T.; Li, M. Empirical evaluation of rectified activations in convolutional network. arXiv 2015, arXiv:1505.00853. [Google Scholar]
  26. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Las Condes, Cile, 11–18 December 2015; pp. 1026–1034. [Google Scholar]
  27. Ertuğrul, Ö.F. A novel type of activation function in artificial neural networks: Trained activation function. Neural Networks 2018, 99, 148–157. [Google Scholar] [CrossRef] [PubMed]
  28. Bawa, V.S.; Kumar, V. Linearized sigmoidal activation: A novel activation function with tractable non-linear characteristics to boost representation capability. Expert Syst. Appl. 2019, 120, 346–356. [Google Scholar] [CrossRef]
  29. Ramachandran, P.; Barret, Z.; Le, Q.V. Searching for activation functions. In Proceedings of the 6th International Conference on Learning Representations, ICLR 2018—Workshop Track Proceedings, Vancouver, BC, Canada, 30 April 2018. [Google Scholar]
  30. Maguolo, G.; Nanni, L.; Ghidoni, S. Ensemble of convolutional neural networks trained with different activation functions. arXiv 2019, arXiv:1905.02473. [Google Scholar]
  31. Jin, X.; Xu, C.; Feng, J.; Wei, Y.; Xiong, J.; Yan, S. Deep learning with S-shaped rectified linear activation units. In Proceedings of the 30th AAAI Conference on Artificial Intelligence, AAAI 2016, Phoenix, AZ, USA, 12–17 February 2016. [Google Scholar]
  32. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 1, 1097–1105. [Google Scholar] [CrossRef]
  33. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  34. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), LasVegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
  35. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the International Conference Learning Representations, San Diego, CA, USA, 7–9 May 2015; pp. 1–14. [Google Scholar]
  36. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  37. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
  38. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Munich, Germany, 5–9 October 2015. [Google Scholar]
  39. Moccia, S.; De Momi, E.; Guarnaschelli, M.; Savazzi, M.; Laborai, A.; Guastini, L.; Peretti, G.; Mattos, L.S. Confident texture-based laryngeal tissue classification for early stage diagnosis support. J. Med. Imaging 2017, 4, 34502. [Google Scholar] [CrossRef] [PubMed]
  40. Phung, S.L.; Bouzerdoum, A.; Chai, D. Skin segmentation using color pixel classification: Analysis and comparison. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 148–154. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  41. Boland, M.V.; Murphy, R.F. A neural network classifier capable of recognizing the patterns of all major subcellular structures in fluorescence microscope images of HeLa cells. Bioinformatics 2001, 17, 1213–1223. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  42. Hamilton, N.A.; Pantelic, R.S.; Hanson, K.; Teasdale, R.D. Fast automated cell phenotype image classification. BMC Bioinformatics 2007, 8, 110. [Google Scholar] [CrossRef] [Green Version]
  43. Shamir, L.; Orlov, N.; Mark Eckley, D.; Macura, T.J.; Goldberg, I.G. IICBU 2008: A proposed benchmark suite for biological image analysis. Med. Biol. Eng. Comput. 2008, 46, 943–947. [Google Scholar] [CrossRef] [Green Version]
  44. Kather, J.N.; Weis, C.-A.; Bianconi, F.; Melchers, S.M.; Schad, L.R.; Gaiser, T.; Marx, A.; Zöllner, F.G. Multi-class texture analysis in colorectal cancer histology. Sci. Rep. 2016, 6, 27988. [Google Scholar] [CrossRef]
  45. Dimitropoulos, K.; Barmpoutis, P.; Zioga, C.; Kamas, A.; Patsiaoura, K.; Grammalidis, N. Grading of invasive breast carcinoma through Grassmannian VLAD encoding. PLoS ONE 2017, 12, e0185110. [Google Scholar] [CrossRef] [Green Version]
  46. Jones, M.J.; Rehg, J.M. Statistical color models with application to skin detection. Int. J. Comput. Vis. 2002, 46, 81–96. [Google Scholar] [CrossRef]
  47. Ruiz-Del-Solar, J.; Verschae, R. Skin detection using neighborhood information. In Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition, Seoul, Korea, 19 May 2004; pp. 463–468. [Google Scholar]
  48. Schmugge, S.J.; Jayaram, S.; Shin, M.C.; Tsap, L.V. Objective evaluation of approaches of skin detection using ROC analysis. Comput. Vis. Image Underst. 2007, 108, 41–51. [Google Scholar] [CrossRef]
  49. Stöttinger, J.; Hanbury, A.; Liensberger, C.; Khan, R. Skin paths for contextual flagging adult videos. In Proceedings of the International symposium on visual computing (subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Las Vegas, NV, USA, 30 November–2 December 2009; Volume 5876 LNCS, pp. 303–314. [Google Scholar]
  50. Huang, L.; Xia, T.; Zhang, Y.; Lin, S. Human skin detection in images by MSER analysis. In Proceedings of the 2011 18th IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011; pp. 1257–1260. [Google Scholar]
  51. Tan, W.R.; Chan, C.S.; Yogarajah, P.; Condell, J. A fusion approach for efficient human skin detection. Ind. Inform. IEEE Trans. 2012, 8, 138–147. [Google Scholar] [CrossRef] [Green Version]
  52. Sanmiguel, J.C.; Suja, S. Skin detection by dual maximization of detectors agreement for video monitoring. Pattern Recognit. Lett. 2013, 34, 2102–2109. [Google Scholar] [CrossRef] [Green Version]
  53. Casati, J.P.B.; Moraes, D.R.; Rodrigues, E.L.L. SFA: A human skin image database based on FERET and AR facial images. In Proceedings of the IX workshop de Visao Computational, Rio de Janeiro, Brazil, 3–5 June 2013. [Google Scholar]
  54. Kawulok, M.; Kawulok, J.; Nalepa, J.; Smolka, B. Self-adaptive algorithm for segmenting skin regions. EURASIP J. Adv. Signal Process. 2014, 2014, 170. [Google Scholar] [CrossRef] [Green Version]
  55. Abdallah, A.S.; El-Nasr, M.A.; Abbott, A.L. A new color image database for benchmarking of automatic face detection and human skin segmentation techniques. Proc. World Acad. Sci. Eng. Technol. 2007, 20, 353–357. [Google Scholar]
  56. Demšar, J. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 2006, 7, 1–30. [Google Scholar]
Figure 1. Pseudo-code of the two procedures for stand-alone random model and ensemble creation.
Figure 2. Comparison between the baseline ReLU and the ensemble FusRan20 on both problems.
Figure 3. Visual comparison of segmentation results on images from Pratheepan dataset (from left to right): original input image, ground truth, ReLU output mask, and FusAR20 output mask.
Table 1. Summary of the activation functions evaluated in this work: name, parameter settings or initialization (if they are learned), learned parameters, and reference.
Name | Parameters Setting/Initialization | Learned Parameters | Ref
ReLU | -- | -- | [22]
Leaky ReLU | a = 0.1 | -- | [23]
ELU | a = 1 | -- | [21]
PReLU | a_c = 0 | a_c | [26]
SReLU | a_l = 0, t_l = 0, a_r = 1, t_r = maxInput | a_l, t_l, a_r, t_r | [31]
APLU | N = 3, a_c = 0, b_c = rand()·maxInput | a_c, b_c | [20]
MeLU | K = 4, α = [2, 1, 3], λ = [2, 1, 1] | c ∈ R^k | [30]
wMeLU | K = 8, α = [2, 1, 3, 0.5, 1.5, 2.5, 3.5], λ = [2, 1, 1, 0.5, 0.5, 0.5, 0.5] | c ∈ R^k | [30]
GaLU | K = 4, α = [1, 0.5, 2.5], λ = [1, 0.5, 0.5] | c ∈ R^k | --
sGaLU | K = 2, α = [1], λ = [1] | c ∈ R^k | --
Table 2. Model training parameters used for image classification and skin segmentation.
Parameter | Image Classification | Skin Segmentation
batch size | 32 | 32
learning rate | 0.0001 | 0.001
max epoch | 30 | 50
data augmentation | yes | yes (30 epochs)
Table 3. Summary of the Medical Datasets for image classification: short name (ShortN), name, number of classes (#C), number of samples (#S), image size, testing protocol, and reference.
ShortN | Name | #C | #S | Image Size | Protocol | Ref
CH | Chinese hamster ovary cells | 5 | 327 | 512 × 382 | 5CV | [41]
HE | 2D HELA | 10 | 862 | 512 × 382 | 5CV | [41]
LO | Locate Endogenous | 10 | 502 | 768 × 512 | 5CV | [42]
TR | Locate Transfected | 11 | 553 | 768 × 512 | 5CV | [42]
RN | Fly Cell | 10 | 200 | 1024 × 1024 | 5CV | [43]
TB | Terminal bulb aging | 7 | 970 | 768 × 512 | 5CV | [43]
LY | Lymphoma | 3 | 375 | 1388 × 1040 | 5CV | [43]
MA | Muscle aging | 4 | 237 | 1600 × 1200 | 5CV | [43]
LG | Liver gender | 2 | 265 | 1388 × 1040 | 5CV | [43]
LA | Liver aging | 4 | 529 | 1388 × 1040 | 5CV | [43]
CO | Human colorectal cancer | 8 | 5000 | 150 × 150 | 5CV | [44]
BGR | Breast grading carcinoma | 3 | 300 | 1280 × 960 | 5CV | [45]
LAR | Laryngeal dataset | 4 | 1320 | 1280 × 960 | Tr-Te | [39]
Table 4. Summary of the Skin detection datasets with ground truth used for image segmentation: short name (ShortN), name, number of images (#S), quality of the ground truth, and reference.
ShortN | Name | #S | Ground Truth | Ref
CMQ | Compaq | 4675 | Semi-supervised | [46]
UC | UChile DB-skin | 103 | Medium Precision | [47]
ECU | ECU Face and Skin Detection | 4000 | Precise | [40]
Sch | Schmugge dataset | 845 | Precise (3 classes) | [48]
FV | Feeval Skin video DB | 8991 | Low quality, imprecise | [49]
MCG | MCG-skin | 1000 | Imprecise | [50]
Prat | Pratheepan | 78 | Precise | [51]
VMD | 5 datasets for human activity recognition | 285 | Precise | [52]
SFA | SFA | 1118 | Medium Precision | [53]
HGR | Hand Gesture Recognition | 1558 | Precise | [54]
VT | VT-AAST | 66 | Precise | [55]
Table 5. Performance of the proposed approaches in the medical image datasets (accuracy).
Method | CH | HE | LO | TR | RN | TB | LY | MA | LG | LA | CO | BG | LAR | Avg | Rank
ReLU | 93.5 | 89.9 | 95.6 | 90.0 | 55.0 | 58.5 | 77.9 | 90.0 | 93.0 | 85.1 | 94.9 | 88.7 | 87.1 | 84.55 | 15
leakyReLU | 89.2 | 87.1 | 92.8 | 84.2 | 34.0 | 57.1 | 70.9 | 79.2 | 93.7 | 82.5 | 95.7 | 90.3 | 87.3 | 80.30 | 22
ELU | 90.2 | 86.7 | 94.0 | 85.8 | 48.0 | 60.8 | 65.3 | 85.0 | 96.0 | 90.1 | 95.1 | 89.3 | 89.9 | 82.80 | 21
SReLU | 91.4 | 85.6 | 92.6 | 83.3 | 30.0 | 55.9 | 69.3 | 75.0 | 88.0 | 82.1 | 95.7 | 89.0 | 89.5 | 79.02 | 24
APLU | 92.3 | 87.1 | 93.2 | 80.9 | 25.0 | 54.1 | 67.2 | 76.7 | 93.0 | 82.7 | 95.5 | 90.3 | 88.9 | 78.99 | 25
GaLU | 92.9 | 88.4 | 92.2 | 90.4 | 41.5 | 57.8 | 73.6 | 89.2 | 92.7 | 88.8 | 94.9 | 90.3 | 90.0 | 83.28 | 20
sGaLU | 92.3 | 87.9 | 93.2 | 91.1 | 52.0 | 60.0 | 72.5 | 90.0 | 95.3 | 87.4 | 95.4 | 87.7 | 88.8 | 84.13 | 17
PReLU | 92.0 | 85.4 | 91.4 | 81.6 | 33.5 | 57.1 | 68.8 | 76.3 | 88.3 | 82.1 | 95.7 | 88.7 | 89.6 | 79.26 | 23
MeLU | 91.1 | 85.4 | 92.8 | 84.9 | 27.5 | 55.4 | 68.5 | 77.1 | 90.0 | 79.4 | 95.3 | 89.3 | 87.2 | 78.76 | 27
wMeLU | 92.9 | 86.4 | 91.8 | 82.9 | 25.5 | 56.3 | 67.5 | 76.3 | 91.0 | 82.5 | 94.8 | 89.7 | 88.8 | 78.95 | 26
SReLU(255) | 92.3 | 89.4 | 93.0 | 90.7 | 56.5 | 59.7 | 73.3 | 91.7 | 98.3 | 89.0 | 95.5 | 89.7 | 87.9 | 85.15 | 13
APLU(255) | 95.1 | 89.2 | 93.6 | 90.7 | 47.5 | 56.9 | 75.2 | 89.2 | 97.3 | 87.1 | 95.7 | 89.7 | 89.5 | 84.35 | 16
GaLU(255) | 92.9 | 87.2 | 92.0 | 91.3 | 47.5 | 60.1 | 74.1 | 87.9 | 96.0 | 86.9 | 95.6 | 89.3 | 87.7 | 83.73 | 19
sGaLU(255) | 93.5 | 87.8 | 95.6 | 89.8 | 55.0 | 63.1 | 76.0 | 90.4 | 95.0 | 85.3 | 95.1 | 89.7 | 89.8 | 85.09 | 14
MeLU(255) | 92.9 | 90.2 | 95.0 | 91.8 | 57.0 | 59.8 | 78.4 | 87.5 | 97.3 | 85.1 | 95.7 | 89.3 | 88.3 | 85.26 | 11
wMeLU(255) | 94.5 | 89.3 | 94.2 | 92.2 | 54.0 | 61.9 | 75.7 | 89.2 | 97.0 | 88.6 | 95.6 | 87.7 | 88.7 | 85.27 | 10
RandAct | 90.2 | 90.0 | 94.2 | 91.6 | 54.5 | 62.0 | 77.3 | 90.8 | 95.7 | 90.5 | 95.1 | 89.0 | 87.1 | 85.23 | 12
RandAct(255) | 93.2 | 88.5 | 94.4 | 91.6 | 51.5 | 59.1 | 73.9 | 88.3 | 94.0 | 89.1 | 95.1 | 86.7 | 88.0 | 84.11 | 18
FusAct10 | 93.5 | 90.7 | 97.2 | 92.7 | 56.0 | 63.9 | 77.6 | 90.8 | 96.3 | 91.4 | 96.4 | 90.0 | 90.0 | 86.67 | 8
FusAct10(255) | 95.1 | 91.3 | 96.2 | 94.2 | 63.0 | 64.9 | 78.7 | 92.5 | 97.7 | 87.6 | 96.5 | 89.7 | 89.8 | 87.46 | 6
FusRan10 | 95.4 | 91.3 | 95.8 | 95.1 | 63.0 | 64.2 | 78.9 | 93.8 | 98.7 | 91.1 | 96.5 | 90.3 | 90.2 | 88.02 | 5
FusRan10(255) | 96.9 | 91.2 | 96.8 | 96.2 | 58.5 | 66.6 | 79.7 | 92.5 | 98.3 | 91.6 | 96.6 | 89.7 | 91.1 | 88.13 | 2
FusRan20 | 97.5 | 91.4 | 96.6 | 95.8 | 60.5 | 65.8 | 79.7 | 94.2 | 99.0 | 90.5 | 96.6 | 89.7 | 90.7 | 88.30 | 1
FusAR20 | 95.7 | 90.8 | 97.0 | 94.4 | 61.5 | 64.1 | 79.5 | 93.8 | 98.3 | 91.4 | 96.6 | 91.0 | 90.5 | 88.04 | 4
FusAR20(255) | 96.3 | 91.2 | 96.6 | 95.3 | 62.0 | 64.9 | 79.5 | 93.8 | 98.3 | 90.1 | 96.6 | 90.3 | 90.8 | 88.12 | 3
FusAct3(255) | 93.9 | 91.5 | 94.8 | 93.1 | 58.5 | 63.5 | 77.6 | 91.3 | 98.3 | 88.0 | 96.3 | 89.0 | 89.4 | 86.55 | 9
FusRan3(255) | 96.3 | 90.9 | 95.6 | 95.1 | 54.0 | 62.9 | 78.7 | 92.5 | 98.7 | 90.9 | 96.2 | 90.0 | 90.5 | 87.10 | 7
Table 6. Performance of the proposed approaches in the skin datasets (F1-measure).
Method | FV | Prat | MCG | UC | CMQ | SFA | HGR | Sch | VMD | ECU | VT | Avg | Rank
ReLU | 0.759 | 0.831 | 0.872 | 0.881 | 0.799 | 0.946 | 0.950 | 0.763 | 0.592 | 0.917 | 0.745 | 0.823 | 18
leakyReLU | 0.753 | 0.853 | 0.876 | 0.875 | 0.804 | 0.944 | 0.955 | 0.762 | 0.606 | 0.921 | 0.716 | 0.824 | 14
ELU | 0.682 | 0.838 | 0.870 | 0.834 | 0.791 | 0.941 | 0.944 | 0.763 | 0.540 | 0.918 | 0.677 | 0.800 | 27
SReLU | 0.722 | 0.839 | 0.867 | 0.860 | 0.807 | 0.950 | 0.958 | 0.743 | 0.610 | 0.919 | 0.709 | 0.817 | 25
APLU | 0.774 | 0.840 | 0.874 | 0.880 | 0.796 | 0.942 | 0.945 | 0.761 | 0.593 | 0.914 | 0.745 | 0.824 | 16
GaLU | 0.759 | 0.827 | 0.867 | 0.872 | 0.795 | 0.941 | 0.933 | 0.755 | 0.562 | 0.913 | 0.731 | 0.814 | 26
sGaLU | 0.779 | 0.834 | 0.872 | 0.867 | 0.798 | 0.946 | 0.951 | 0.766 | 0.597 | 0.915 | 0.739 | 0.824 | 15
PReLU | 0.785 | 0.852 | 0.878 | 0.886 | 0.809 | 0.947 | 0.953 | 0.770 | 0.633 | 0.924 | 0.740 | 0.834 | 10
MeLU | 0.768 | 0.861 | 0.878 | 0.879 | 0.819 | 0.947 | 0.953 | 0.768 | 0.643 | 0.927 | 0.725 | 0.834 | 11
wMeLU | 0.768 | 0.869 | 0.878 | 0.888 | 0.821 | 0.945 | 0.956 | 0.771 | 0.616 | 0.929 | 0.706 | 0.832 | 12
SReLU(255) | 0.758 | 0.831 | 0.872 | 0.879 | 0.797 | 0.946 | 0.949 | 0.764 | 0.592 | 0.916 | 0.744 | 0.823 | 19
APLU(255) | 0.755 | 0.839 | 0.873 | 0.873 | 0.797 | 0.940 | 0.947 | 0.760 | 0.584 | 0.909 | 0.744 | 0.820 | 22
GaLU(255) | 0.776 | 0.832 | 0.870 | 0.869 | 0.790 | 0.938 | 0.940 | 0.758 | 0.566 | 0.911 | 0.756 | 0.819 | 24
sGaLU(255) | 0.769 | 0.845 | 0.876 | 0.886 | 0.797 | 0.944 | 0.951 | 0.764 | 0.617 | 0.919 | 0.741 | 0.828 | 13
MeLU(255) | 0.757 | 0.836 | 0.874 | 0.872 | 0.792 | 0.943 | 0.944 | 0.767 | 0.570 | 0.913 | 0.744 | 0.819 | 23
wMeLU(255) | 0.759 | 0.832 | 0.873 | 0.880 | 0.799 | 0.946 | 0.950 | 0.763 | 0.599 | 0.917 | 0.742 | 0.824 | 17
RandAct | 0.757 | 0.852 | 0.876 | 0.889 | 0.804 | 0.937 | 0.947 | 0.764 | 0.569 | 0.920 | 0.730 | 0.822 | 20
RandAct(255) | 0.732 | 0.844 | 0.873 | 0.878 | 0.797 | 0.944 | 0.937 | 0.758 | 0.595 | 0.914 | 0.751 | 0.820 | 21
FusAct10 | 0.796 | 0.864 | 0.884 | 0.899 | 0.821 | 0.951 | 0.959 | 0.776 | 0.671 | 0.929 | 0.748 | 0.845 | 3
FusAct10(255) | 0.791 | 0.854 | 0.881 | 0.897 | 0.813 | 0.949 | 0.955 | 0.774 | 0.654 | 0.925 | 0.761 | 0.841 | 8
FusRan10 | 0.795 | 0.864 | 0.883 | 0.901 | 0.818 | 0.949 | 0.958 | 0.775 | 0.667 | 0.927 | 0.752 | 0.844 | 7
FusRan10(255) | 0.800 | 0.867 | 0.884 | 0.906 | 0.819 | 0.950 | 0.958 | 0.779 | 0.655 | 0.927 | 0.749 | 0.845 | 5
FusRan20 | 0.800 | 0.867 | 0.884 | 0.905 | 0.819 | 0.950 | 0.958 | 0.778 | 0.663 | 0.927 | 0.752 | 0.846 | 2
FusAR20 | 0.799 | 0.865 | 0.884 | 0.901 | 0.820 | 0.951 | 0.959 | 0.776 | 0.673 | 0.929 | 0.751 | 0.846 | 1
FusAR20(255) | 0.798 | 0.862 | 0.883 | 0.903 | 0.817 | 0.950 | 0.957 | 0.777 | 0.660 | 0.927 | 0.758 | 0.845 | 6
FusAct3 | 0.790 | 0.874 | 0.884 | 0.896 | 0.825 | 0.951 | 0.961 | 0.776 | 0.669 | 0.933 | 0.737 | 0.845 | 4
FusRan3 | 0.783 | 0.870 | 0.883 | 0.902 | 0.818 | 0.951 | 0.959 | 0.778 | 0.635 | 0.930 | 0.717 | 0.839 | 9
Table 7. P-values of the comparison among some tested approaches in the medical image classification experiment (< denotes that the method in the row wins, ^ denotes that the method in the column wins, = denotes that there were no statistically significant differences).
Classification | ReLU | wMeLU(255) | FusRan3(255) | FusRan10(255) | FusRan20
ReLU | --- | ^0.0046 | ^0.0210 | ^0.002 | ^0.002
wMeLU(255) | | --- | ^0.0024 | ^0.004 | ^0.002
FusRan3(255) | | | --- | ^0.004 | ^0.002
FusRan10(255) | | | | --- | =0.7148
FusRan20 | | | | | ---
Table 8. P-value of the comparison among some tested approaches in the skin segmentation experiment.
Skin Segmentation | ReLU | PReLU | FusAct3 | FusAct10 | FusAR20
ReLU | --- | ^0.0059 | ^0.0029 | ^0.001 | ^0.001
PReLU | | --- | ^0.0020 | ^0.001 | ^0.001
FusAct3 | | | --- | =0.9844 | =0.6797
FusAct10 | | | | --- | ^0.0938
FusAR20 | | | | | ---
