Article

Exploring Symmetry of Binary Classification Performance Metrics

1 Ingeniería del Diseño, Escuela Politécnica Superior, Universidad de Sevilla, 41011 Sevilla, Spain
2 Tecnología Electrónica, Escuela Ingeniería Informática, Universidad de Sevilla, 41012 Sevilla, Spain
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(1), 47; https://doi.org/10.3390/sym11010047
Submission received: 14 November 2018 / Revised: 13 December 2018 / Accepted: 24 December 2018 / Published: 3 January 2019
(This article belongs to the Special Issue Symmetry in Computing Theory and Application)

Abstract:
Selecting the proper performance metric constitutes a key issue for most classification problems in the field of machine learning. Although the specialized literature has addressed several topics regarding these metrics, their symmetries have yet to be systematically studied. This research focuses on ten metrics based on a binary confusion matrix and their symmetric behaviour is formally defined under all types of transformations. Through simulated experiments, which cover the full range of datasets and classification results, the symmetric behaviour of these metrics is explored by exposing them to hundreds of simple or combined symmetric transformations. Cross-symmetries among the metrics and statistical symmetries are also explored. The results obtained show that, in all cases, three and only three types of symmetries arise: labelling inversion (between positive and negative classes); scoring inversion (concerning good and bad classifiers); and the combination of these two inversions. Additionally, certain metrics have been shown to be independent of the imbalance in the dataset and two cross-symmetries have been identified. The results regarding their symmetries reveal a deeper insight into the behaviour of various performance metrics and offer an indicator to properly interpret their values and a guide for their selection for certain specific applications.

1. Introduction

Symmetry has played, and continues to play, a highly significant role in how humans perceive the world [1]. In the scientific fields, symmetry plays a key role, as it can be discovered in nature [2,3], society [4] and mathematics [5]. Moreover, symmetry also provides an intuitive way to attain faster and deeper insights into scientific problems.
In recent years, an increasing interest has arisen in detecting and taking advantage of symmetry in various aspects of theoretical and applied computing [6]. Several studies involving symmetry have been published in network technology [7], human interfaces [8], image processing [9], data hiding [10] and many other applications [11].
On the other hand, pattern recognition and machine learning procedures are becoming key aspects of modern science [12] and the hottest topics in the scientific literature on computing [13]. Furthermore, in this field, symmetry is playing an interesting role either as a subject of study, in the form of machine learning algorithms to discover symmetries [14], or as a means to improve the results obtained by automatic recognition systems [15]. Let us emphasize this point: not only can knowing the symmetry of a certain computer algorithm be intrinsically rewarding, since it sheds light on the behaviour of the algorithm, but it can also be very useful for its interpretation, its optimization or as a criterion for the selection among various competing algorithms. As an example, in recent research, we have employed a symmetric criterion to select the best feature-extraction procedures (Discrete Cosine Transform versus Discrete Fourier Transform) [16] in an application of the classification of sounds [17,18] effectively deployed in a Wireless Sensor Network, as shown in Figure 1. Other examples of industrial applications using classification of sounds can be found in Refs. [19,20].
In the broad field of machine learning, the study of how to measure the performance of various classifiers has attracted continued attention [21,22,23]. Classification performance metrics play a key role in the assessment of the overall classification process in the test phase, in the selection from among various competing classifiers in the validation phase and are even sometimes used as the loss function to be optimized in the process of model construction during the classification training phase.
However, to the best of our knowledge, no systematic study into the symmetry of these metrics has yet been undertaken. By discovering their symmetries, we would reach a better understanding of their meaning, we could obtain useful insights into when their use would be more appropriate and we would also gain additional and meaningful indicators for the selection of the best performance metric.
Although several dozen performance metrics can be found in the literature, we will focus on those which are probably the most commonly used: the metrics based on the confusion matrix [24]. Accuracy, precision and recall (sensitivity) are undoubtedly some of the most popular metrics. On the other hand, our research focuses on the cases where there are only two classes (binary classifiers). Although this is certainly a limitation, it does provide a solid foundation for further research. Moreover, multiclass performance metrics are usually obtained by decomposing the multiclass problem into several binary classification sub-problems [25].

2. Materials and Methods

2.1. Definitions

Let us first consider an original (baseline) experiment $E^B$, defined by the duple $E^B = (C^B, D^B)$ composed of a set of $n^B$ classifiers, $C^B = \{c_i^B\}$, and a set of their corresponding $n^B$ datasets, $D^B = \{D_i^B\}$, $i = 1, \dots, n^B$. The elements in every dataset belong to either of two classes, $G_1$ and $G_2$, which are called the Positive ($P$) and Negative ($N$) classes, respectively. The $i$-th classifier $c_i^B$ operates on the corresponding dataset $D_i^B$, thereby obtaining a resulting classification which can be defined by its binary confusion matrix $cm_i^B$, and hence $D_i^B \xrightarrow{c_i^B} cm_i^B$. The set of confusion matrices is denominated $CM^B = \{cm_i^B\}$. The baseline experiment can therefore be defined as the set of classifiers operating on the set of datasets to obtain a set of confusion matrices, $E^B: D^B \xrightarrow{C^B} CM^B$.
This paper will explore the behaviour of binary classification performance metrics when the original experiment is subject to $n_E$ different types of transformations. Let us define the $k$-th transformed experiment $E^k = (C^k, D^k)$, composed of a set of $n^k$ classifiers, $C^k = \{c_i^k\}$, and a set of their corresponding $n^k$ datasets, $D^k = \{D_i^k\}$, whose result is a set of confusion matrices $CM^k = \{cm_i^k\}$. Hence, $E^k: D^k \xrightarrow{C^k} CM^k$, where $k = \{B, 1, 2, \dots, n_E\}$ indicates the type of transformation. In the $k$-th experiment, when the $i$-th classifier $c_i^k$ operates on its corresponding dataset $D_i^k$, the result is summarized in the binary confusion matrix defined as
$$cm_i^k = \begin{bmatrix} a_i^k & f_i^k \\ g_i^k & b_i^k \end{bmatrix},$$
where
  • $a_i^k$ is the number of positive elements in $D_i^k$ correctly classified as positive;
  • $b_i^k$ is the number of negative elements in $D_i^k$ correctly classified as negative;
  • $f_i^k$ is the number of positive elements in $D_i^k$ incorrectly classified as negative; and
  • $g_i^k$ is the number of negative elements in $D_i^k$ incorrectly classified as positive.
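As an illustrative sketch (not part of the original paper), the four counts defined above can be computed from lists of true and predicted labels; the function name and the `'P'`/`'N'` label encoding are our own choices:

```python
# Sketch: computing the confusion-matrix counts a, b, f, g from lists of
# true and predicted labels, encoded here as 'P' (positive) / 'N' (negative).
def confusion_counts(y_true, y_pred):
    a = sum(1 for t, p in zip(y_true, y_pred) if t == 'P' and p == 'P')
    b = sum(1 for t, p in zip(y_true, y_pred) if t == 'N' and p == 'N')
    f = sum(1 for t, p in zip(y_true, y_pred) if t == 'P' and p == 'N')
    g = sum(1 for t, p in zip(y_true, y_pred) if t == 'N' and p == 'P')
    return a, b, f, g
```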
Let us call $P_i^k$, $N_i^k$ and $M_i^k$ the positive, negative and total number of elements in $D_i^k$. Therefore $M_i^k = P_i^k + N_i^k$, $a_i^k + f_i^k = P_i^k$ and $g_i^k + b_i^k = N_i^k$. The confusion matrix can then be described as
$$cm_i^k = \begin{bmatrix} a_i^k & P_i^k - a_i^k \\ N_i^k - b_i^k & b_i^k \end{bmatrix}.$$
Let us now define $\alpha_i^k$ as the ratio of positive elements in $D_i^k$ correctly classified as positive, and $\beta_i^k$ as the ratio of negative elements in $D_i^k$ correctly classified as negative. That is,
$$\alpha_i^k \equiv \frac{a_i^k}{P_i^k}, \qquad \beta_i^k \equiv \frac{b_i^k}{N_i^k}.$$
The confusion matrix can therefore be rewritten as
$$cm_i^k = \begin{bmatrix} \alpha_i^k P_i^k & P_i^k - \alpha_i^k P_i^k \\ N_i^k - \beta_i^k N_i^k & \beta_i^k N_i^k \end{bmatrix} = \begin{bmatrix} \alpha_i^k P_i^k & (1 - \alpha_i^k) P_i^k \\ (1 - \beta_i^k) N_i^k & \beta_i^k N_i^k \end{bmatrix}.$$
On the other hand, a dataset $D_i^k$ is called imbalanced if it has a different number of positive and negative elements, that is, $P_i^k \neq N_i^k$. Classification in the presence of imbalanced datasets is a challenging task requiring specific considerations [26]. To quantify the imbalance, several indicators have been proposed, such as the dominance [27,28], the proportion between positive and negative instances (formalized as $1:X$) [29] and the imbalance ratio ($IR$), defined as $P_i^k / N_i^k$ [30], which is also called skew [31]. This value lies within the $[0, \infty)$ range and has a value $IR = 1$ in the balanced case. We prefer to use an indicator showing a value $0$ in the balanced case, a value $+1$ when all the elements in the dataset are positive and $-1$ if all the elements are negative. We define the imbalance coefficient $\delta_i^k$, an indicator with these characteristics, as
$$\delta_i^k \equiv \frac{2 P_i^k}{M_i^k} - 1.$$
The imbalance coefficient is graphically shown in Figure 2 (solid blue line) as a function of the proportion of positive elements in the dataset. For the sake of comparison, that figure also shows the $IR$ imbalance ratio (dashed green line).
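The two imbalance indicators can be sketched in code (an illustration of ours, with hypothetical function names) to make the contrast in their ranges concrete:

```python
def imbalance_coefficient(P, M):
    """delta = 2*P/M - 1: 0 when balanced, +1 all-positive, -1 all-negative."""
    return 2 * P / M - 1

def imbalance_ratio(P, N):
    """IR = P/N: 1 when balanced, range [0, infinity)."""
    return P / N
```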
Based on the imbalance coefficient, the number of positive and negative elements in the dataset can be rewritten as
$$P_i^k = \frac{1 + \delta_i^k}{2} M_i^k,$$
$$N_i^k = M_i^k - P_i^k = M_i^k \left(1 - \frac{1 + \delta_i^k}{2}\right) = \frac{1 - \delta_i^k}{2} M_i^k.$$
By substituting these expressions into Equation (4), the confusion matrix becomes
$$cm_i^k = \begin{bmatrix} \alpha_i^k \frac{1 + \delta_i^k}{2} M_i^k & (1 - \alpha_i^k) \frac{1 + \delta_i^k}{2} M_i^k \\ (1 - \beta_i^k) \frac{1 - \delta_i^k}{2} M_i^k & \beta_i^k \frac{1 - \delta_i^k}{2} M_i^k \end{bmatrix} = \lambda_i^k M_i^k,$$
where $\lambda_i^k$ is the unitary confusion matrix defined as
$$\lambda_i^k \equiv \begin{bmatrix} \alpha_i^k \frac{1 + \delta_i^k}{2} & (1 - \alpha_i^k) \frac{1 + \delta_i^k}{2} \\ (1 - \beta_i^k) \frac{1 - \delta_i^k}{2} & \beta_i^k \frac{1 - \delta_i^k}{2} \end{bmatrix}.$$
It can be seen that $\lambda_i^k$ is a function of three variables: the ratios of positive ($\alpha_i^k$) and negative ($\beta_i^k$) correctly classified elements and the imbalance coefficient ($\delta_i^k$); that is, $\lambda_i^k = \lambda_i^k(\alpha_i^k, \beta_i^k, \delta_i^k)$.
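A minimal sketch of the unitary confusion matrix as a function of these three variables (our own illustrative code; note that its four entries are fractions of the whole dataset and therefore sum to 1):

```python
# Sketch: the unitary confusion matrix lambda(alpha, beta, delta).
def unitary_cm(alpha, beta, delta):
    p = (1 + delta) / 2          # proportion of positive elements, P/M
    n = (1 - delta) / 2          # proportion of negative elements, N/M
    return [[alpha * p, (1 - alpha) * p],
            [(1 - beta) * n, beta * n]]
```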
In order to measure the performance of the classification process, $m$ metrics are used. In this paper, we focus on metrics that are based on the unitary confusion matrix and, for ease of comparison, all these metrics are converted to the range $[-1, 1]$. Let us define $\gamma_i^{kj}$ as the $j$-th of such metrics for the classifier $c_i^k$ operating on the dataset $D_i^k$, where $j = 1, \dots, m$. Since it is based on the unitary confusion matrix, $\gamma_i^{kj} = \gamma_i^{kj}(\lambda_i^k) = \gamma_i^{kj}(\alpha_i^k, \beta_i^k, \delta_i^k)$.
Let us now define $\mu_j^k$ as the set of the $j$-th metric values corresponding to the $k$-th experiment $E^k = (C^k, D^k)$, that is, $\mu_j^k \equiv \{\gamma_i^{kj}\}$, $i = 1, 2, \dots, n^k$. Additionally, the sets $\alpha^k \equiv \{\alpha_i^k\}$, $\beta^k \equiv \{\beta_i^k\}$ and $\delta^k \equiv \{\delta_i^k\}$ are also defined.

2.2. Representation of Metrics

With these definitions, it is clear that $\mu_j^k = \mu_j^k(\alpha^k, \beta^k, \delta^k)$ and hence it is a 4-dimensional function, since $\mu_j^k$ (one dimension) depends on $\alpha^k$, $\beta^k$ and $\delta^k$ (three independent dimensions). To depict its values, a first approach could involve a 3D representation space where each point $(\alpha_i^k, \beta_i^k, \delta_i^k)$ is colour-coded according to the value $\gamma_i^{kj}(\alpha_i^k, \beta_i^k, \delta_i^k)$.
To show the different types of representations, let us define an arbitrary metric function
$$\mu_j^k(\alpha^k, \beta^k, \delta^k) = \frac{\sin(2\pi \alpha^k \delta^k) + \sin(2\pi \beta^k)}{2}.$$
This function is only used as an example, corresponds to no specific classification metric and has been selected for its aesthetic results. Figure 3 depicts the 3D representation of this example function. The $n_E = 1000$ pairs of classifiers and datasets used in the experiment $E^k = (C^k, D^k)$ are selected in such a way that the space $(\alpha^k, \beta^k, \delta^k)$ is covered with equally spaced points. The above figure may cause confusion, mainly when the number of points ($n_E$) increases. An alternative is to slice the 3D graphic with a plane corresponding to a certain value of the imbalance coefficient. Figure 4a depicts such a slice in the 3D graphic for an arbitrary value $\delta = 0.75$ and Figure 4b shows the slice on a 2D plane.
In the previous figure, the slice contains 100 values of the metric. However, to obtain a clearer understanding of the metric's behaviour, a much larger number of points is recommended. For this purpose, the experiment is designed by selecting a set of virtual pairs of classifiers and datasets $(C^k, D^k)$ in such a way that the plane $(\alpha^k, \beta^k)$ is fully covered. The result, as shown in Figure 5, appears as a heat map for a certain value of the imbalance coefficient ($\delta = 0.75$ in the example).
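The grid evaluation underlying such a heat map can be sketched as follows (our own illustration of the example function above; the plotting step is omitted and the grid resolution is an arbitrary choice):

```python
import math

# Sketch: evaluating the example metric on a regular grid over the
# (alpha, beta) plane for a fixed imbalance coefficient delta = 0.75.
def example_metric(alpha, beta, delta):
    return (math.sin(2 * math.pi * alpha * delta)
            + math.sin(2 * math.pi * beta)) / 2

n = 201
grid = [[example_metric(i / (n - 1), j / (n - 1), 0.75)
         for j in range(n)] for i in range(n)]
```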
In order to analyse the behaviour of the metric for different values of the imbalance coefficient, a panel of heat maps can be used, as depicted in Figure 6.

2.3. Transformations

The original baseline experiment $E^B = (C^B, D^B)$ is subject to various types of transformations. As a result of the $k$-th transformation, the metrics related to the baseline experiment, $\mu_j^B(\alpha^B, \beta^B, \delta^B)$, are transformed into $\mu_j^k(\alpha^k, \beta^k, \delta^k)$, which can be written either as $\mu_j^k(\alpha^k, \beta^k, \delta^k) = T_k[\mu_j^B(\alpha^B, \beta^B, \delta^B)]$ or as
$$\mu_j^B(\alpha^B, \beta^B, \delta^B) \xrightarrow{T_k} \mu_j^k(\alpha^k, \beta^k, \delta^k).$$
It is said that the metric $\mu_j$ is symmetric under the transformation $T_k$ if $\mu_j^k = \mu_j^B$. Conversely, $\mu_j$ is called antisymmetric under $T_k$ (or symmetric under the complementary transformation $\bar{T}_k$) if $\mu_j^k = -\mu_j^B$. Analogously, it is said that the metrics $\mu_u$ and $\mu_v$ are cross-symmetric under the transformation $T_k$ if $\mu_u^k = \mu_v^B$. Conversely, $\mu_u$ and $\mu_v$ are called anti-cross-symmetric under $T_k$ (or cross-symmetric under the complementary transformation $\bar{T}_k$) if $\mu_u^k = -\mu_v^B$.

2.3.1. One-Dimensional Transformations

One-dimensional transformations is the name given to those mirror reflections with respect to a single (one and only one) dimension of the 4-dimensional performance metric. A type $\alpha$ transformation implies that the $i$-th transformed classifier ($c_i^\alpha$) shows a ratio of correctly classified positive elements ($\alpha_i^\alpha$) which has the symmetric value of the ratio ($\alpha_i^B$) obtained by the baseline classifier ($c_i^B$). Since the values of such ratios lie within the range $[0, 1]$, the symmetry exists with respect to the hyperplane $\alpha = 0.5$ and can be stated as $\alpha_i^\alpha = 1 - \alpha_i^B$. An example of this transformation is depicted in Figure 7.
Analogously, a type $\beta$ transformation implies that the $i$-th transformed classifier ($c_i^\beta$) shows a ratio of correctly classified negative elements ($\beta_i^\beta$) which has the symmetric value of the ratio ($\beta_i^B$) obtained by the baseline classifier ($c_i^B$). Since the values of such ratios also lie within the range $[0, 1]$, the symmetry exists with respect to the hyperplane $\beta = 0.5$ and can be stated as $\beta_i^\beta = 1 - \beta_i^B$. An example of this transformation is depicted in Figure 8.
Conversely, a type $\delta$ transformation, which, instead of operating on classifiers, operates on datasets, implies that the $i$-th transformed dataset ($D_i^\delta$) has an imbalance coefficient ($\delta_i^\delta$) which has the symmetric value of the imbalance coefficient ($\delta_i^B$) of the corresponding baseline dataset ($D_i^B$). Since the values of such imbalance coefficients lie within the range $[-1, 1]$, the symmetry exists with respect to the hyperplane $\delta = 0$ and can be stated as $\delta_i^\delta = -\delta_i^B$. An example of this transformation is depicted in Figure 9.
Finally, a type $\mu$ transformation jointly operates on classifiers and datasets in such a way that the $j$-th performance metric $\gamma_i^{\mu j}$ for the classifier $c_i^\mu$ operating on the dataset $D_i^\mu$ has the symmetric value of the performance metric in the baseline experiment ($\gamma_i^{Bj}$). Since the values of such metrics lie within the range $[-1, 1]$, the symmetry exists with respect to the hyperplane $\mu = 0$ and can be stated as $\gamma_i^{\mu j} = -\gamma_i^{Bj}$. An example of this transformation is depicted in Figure 10, where it should be noted that the $\mu$ dimension is shown by the colour code of each point. Therefore, an inversion in $\mu$ is shown as a colour inversion.
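The four one-dimensional transformations can be summarized in code (a sketch of ours, acting on a point $(\alpha, \beta, \delta, \mu)$ of the 4-dimensional representation); each is its own inverse, as expected of a mirror reflection:

```python
# The four one-dimensional (mirror) transformations on a point (a, b, d, m)
# standing for (alpha, beta, delta, mu).
def t_alpha(a, b, d, m): return (1 - a, b, d, m)   # reflect about alpha = 0.5
def t_beta(a, b, d, m):  return (a, 1 - b, d, m)   # reflect about beta = 0.5
def t_delta(a, b, d, m): return (a, b, -d, m)      # reflect about delta = 0
def t_mu(a, b, d, m):    return (a, b, d, -m)      # reflect about mu = 0
```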

2.3.2. Multidimensional Transformations

Let us now consider transformations that exchange two or more dimensions of the 4-dimensional performance metric. Firstly, let us define a type $\sigma$ transformation as that which exchanges the $\alpha$ and $\beta$ dimensions. This implies that the $i$-th transformed classifier/dataset pair ($c_i^\sigma, D_i^\sigma$) shows a ratio of correctly classified positive elements ($\alpha_i^\sigma$) which has the same value as the ratio of correctly classified negative elements ($\beta_i^B$) obtained by the baseline classifier/dataset pair ($c_i^B, D_i^B$). This exchange can be seen as the symmetry with respect to the hyperplane $\alpha = \beta$ (main diagonal of the $\alpha, \beta$ plane) and can be stated as $\alpha_i^\sigma = \beta_i^B$; $\beta_i^\sigma = \alpha_i^B$. An example of this transformation is depicted in Figure 11.
Although the four axes in these plots remain dimensionless, not all of them have the same meaning. Thus, $\alpha$ and $\beta$ are both ratios of correctly classified elements: it would be nonsensical, for instance, to rescale $\alpha$ without also rescaling $\beta$. However, $\delta$ has a completely different meaning and its scale can, and in fact does, differ from those of $\alpha$ and $\beta$. The same reasoning applies to the $\mu$ axis. Therefore, all the exchanges of multidimensional axes are meaningless, except the interchange of $\alpha$ and $\beta$. All the other remaining exchanges are dismissed in our study.
The one- and two-dimensional transformations described above are called basic transformations and are summarized in Table 1.

2.3.3. Combined Transformations

More complex transformations can be obtained by concatenating basic transformations. For instance, applying the basic transformation $\alpha$ ($T_\alpha$) and then the basic transformation $\beta$ ($T_\beta$) produces a new combined transformation $T_{\alpha\beta} = T_\alpha \cdot T_\beta$ featured by $\alpha^{\alpha\beta} = 1 - \alpha^B$; $\beta^{\alpha\beta} = 1 - \beta^B$. As each of the one-dimensional transformations operates on an independent axis, they have the commutative and associative properties; that is, given three one-dimensional transformations $T_U$, $T_V$ and $T_W$, it holds that $T_U \cdot T_V = T_V \cdot T_U$ and that $(T_U \cdot T_V) \cdot T_W = T_U \cdot (T_V \cdot T_W)$.
However, the bi-dimensional type $\sigma$ transformation $T_\sigma$ operates on the same axes as $T_\alpha$ and $T_\beta$. In this case, the order of transformation matters, as they do not have the commutative property. For instance, $T_{\alpha\sigma}[\mu_j^B] = T_\sigma\{T_\alpha[\mu_j^B(\alpha^B, \beta^B, \delta^B)]\} = T_\sigma\{\mu_j^B(1 - \alpha^B, \beta^B, \delta^B)\} = \mu_j^B(\beta^B, 1 - \alpha^B, \delta^B)$. On the other hand, $T_{\sigma\alpha}[\mu_j^B] = T_\alpha\{T_\sigma[\mu_j^B(\alpha^B, \beta^B, \delta^B)]\} = T_\alpha\{\mu_j^B(\beta^B, \alpha^B, \delta^B)\} = \mu_j^B(1 - \beta^B, \alpha^B, \delta^B)$. Therefore, it is clear that $T_{\alpha\sigma} \neq T_{\sigma\alpha}$.
Having five basic transformations and not initially considering their order, any combined transformation can be binary-coded in terms of the presence/absence of each basic component. Therefore $2^5 = 32$ combinations are possible; only 31 if the identity transformation (coded 00000) is dismissed. In order to code a combined transformation, the order $\mu, \sigma, \delta, \beta, \alpha$ is used, where transformation $\mu$ indicates the Most Significant Bit (MSB) and transformation $\alpha$ specifies the Least Significant Bit (LSB). An example of this code is shown in Table 2. With this selection, codes greater than 15 contain a transformation of type $\mu$; that is, they are useful in exploring antisymmetric behaviour. In the cases where the order of transformations matters ($\sigma = 1$ and ($\alpha = 1$ or $\beta = 1$)), the corresponding codes refer to various different combined transformations.
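This 5-bit coding scheme can be sketched as a small decoder (our own illustrative code; the function name is hypothetical):

```python
# Decode a combined-transformation code (1..31) into its basic components,
# using the bit order (mu, sigma, delta, beta, alpha), mu = MSB, alpha = LSB.
def decode(code):
    names = ['mu', 'sigma', 'delta', 'beta', 'alpha']
    return [n for n, bit in zip(names, format(code, '05b')) if bit == '1']
```

For instance, code 12 (01100) decodes to the $\sigma$ and $\delta$ components, and any code of 16 or above contains the $\mu$ component.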
A first example of combined transformations is that of the inverse labelling of classes. As stated above, the elements in every dataset belong to either of two classes, $G_1$ and $G_2$, which are called the Positive ($P$) and Negative ($N$) classes, respectively. The inverse labelling transformation ($T_L$) explores the classification metric behaviour when the labelling of the classes is inverted, that is, when $G_2$ is called the Positive class and $G_1$ the Negative class. Let us consider the $i$-th classifier $c_i^L$ operating on its corresponding dataset $D_i^L$. In the baseline experiment, the ratio of correctly classified positive elements ($\alpha_i^B$) refers to class $G_1$ and, conversely, ($\beta_i^B$) refers to class $G_2$. In the $T_L$-transformed experiment, the ratio of correctly classified positive elements ($\alpha_i^L$) refers to class $G_2$ and, conversely, ($\beta_i^L$) refers to class $G_1$, which means that $\alpha_i^L = \beta_i^B$ and $\beta_i^L = \alpha_i^B$. That is, the first step of this transformation implies interchanging the axes $\alpha$ and $\beta$, which is equivalent to reflection symmetry with respect to the main diagonal, formerly defined as the basic transformation of type $\sigma$ (Figure 12b).
Additionally, in the baseline experiment, the number of positive elements ($P_i^B$) refers to class $G_1$, while in the $T_L$-transformed experiment, the number of positive elements ($P_i^L$) refers to class $G_2$, which means that $P_i^L = N_i^B$ and $N_i^L = P_i^B$, while the total number of elements remains unaltered: $M_i^L = M_i^B$. Therefore, by recalling Equation (5),
$$\delta_i^L \equiv \frac{2 P_i^L}{M_i^L} - 1 = \frac{2 N_i^B}{M_i^B} - 1 = \frac{2 (M_i^B - P_i^B)}{M_i^B} - 1 = -\left(\frac{2 P_i^B}{M_i^B} - 1\right) = -\delta_i^B.$$
Hence, the second step of this transformation also implies reflection symmetry with respect to the hyperplane $\delta = 0$, previously defined as the basic transformation of type $\delta$ (Figure 12c).
Finally, the complementary transformation $\bar{T}_L$ involves a third and final step of inverting the sign of the metric, which is equivalent to reflection symmetry with respect to the hyperplane $\mu = 0$, formerly defined as the basic transformation of type $\mu$ (Figure 12d).
Therefore, the inverse labelling transformation can be defined as $T_L = T_{\sigma\delta} = T_\sigma \cdot T_\delta$ and its complementary as $\bar{T}_L = T_{\sigma\delta\mu} = T_\sigma \cdot T_\delta \cdot T_\mu$, where
$$T_L: \ \mu_j^L(\alpha^L, \beta^L, \delta^L) = \mu_j^B(\beta^B, \alpha^B, -\delta^B).$$
A second example of combined transformations is given by the inverse scoring transformation ($T_S$), which explores classification metric behaviour when the scoring of the classification results is inverted. In the baseline experiment, let us consider the $i$-th classifier $c_i^B$ operating on its corresponding dataset $D_i^B$, thereby obtaining a ratio $\alpha_i^B$ of correctly classified positive elements and a ratio $\beta_i^B$ in the negative case. The $j$-th metric assigns a score of $\gamma_i^{Bj}(\alpha_i^B, \beta_i^B, \delta_i^B)$ to this result. High values of the score $\gamma_i^{Bj}$ usually correspond to high ratios $\alpha_i^B, \beta_i^B$. In the inverse scoring transformation ($T_S$), the $i$-th classifier $c_i^S$ operating on its corresponding dataset $D_i^S$ obtains a ratio $\alpha_i^S$ of correctly classified positive elements which is equal to the ratio of positive elements incorrectly classified in the baseline experiment, that is, $\alpha_i^S = 1 - \alpha_i^B$, which implies a type $\alpha$ transformation. Analogously, for the negative class, $\beta_i^S = 1 - \beta_i^B$, which implies a type $\beta$ transformation. If $\alpha_i^B, \beta_i^B$ have high values, then $\alpha_i^S, \beta_i^S$ will have low values and, to be consistent, the result should be marked with a low score. For that reason, the inverse scoring transformation also implies a transformation of type $\mu$; that is, it uses the symmetric value of the metric, $\gamma_i^{Sj} = -\gamma_i^{Bj}$. Therefore, the inverse scoring transformation can be defined as $T_S = T_{\alpha\beta\mu} = T_\alpha \cdot T_\beta \cdot T_\mu$, where
$$T_S: \ \mu_j^S(\alpha^S, \beta^S, \delta^S) = -\mu_j^B(1 - \alpha^B, 1 - \beta^B, \delta^B).$$
The results are depicted in Figure 13.
A third example is that of the full inversion ($T_F$), which explores the classification metric behaviour when both the labelling ($T_L$) and the scoring ($T_S$) are inverted. This transformation can be featured as the concatenation of its two components, which can be written as
$$T_F = T_L \cdot T_S = T_{\sigma\delta} \cdot T_{\alpha\beta\mu} = T_{\sigma\delta\alpha\beta\mu} = T_{\alpha\beta\delta\sigma\mu}.$$
$$T_F: \ \mu_j^F(\alpha^F, \beta^F, \delta^F) = -\mu_j^B(1 - \beta^B, 1 - \alpha^B, -\delta^B).$$
The results are depicted in Figure 14.
Finally, let us consider the $T_{\alpha\sigma\beta}$ transformation:
$$T_{\alpha\sigma\beta}[\mu_j^B(\alpha^B, \beta^B, \delta^B)] = T_{\sigma\beta}[\mu_j^B(1 - \alpha^B, \beta^B, \delta^B)] = T_\beta[\mu_j^B(\beta^B, 1 - \alpha^B, \delta^B)] = \mu_j^B(\beta^B, \alpha^B, \delta^B);$$
that is, $T_{\alpha\sigma\beta} = T_\sigma$. Analogously, it can be shown that $T_{\beta\sigma\alpha} = T_\sigma$.
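This reduction can be spot-checked numerically. In the following sketch (ours, not from the paper), each basic transformation is modelled as an operator on metric functions $f(\alpha, \beta, \delta)$, and the concatenation $\alpha$, then $\sigma$, then $\beta$ is compared against $\sigma$ alone on an arbitrary test function:

```python
# Transformations as operators on metric functions f(alpha, beta, delta).
T_alpha = lambda f: lambda a, b, d: f(1 - a, b, d)   # flip the alpha slot
T_beta  = lambda f: lambda a, b, d: f(a, 1 - b, d)   # flip the beta slot
T_sigma = lambda f: lambda a, b, d: f(b, a, d)       # swap alpha and beta

f = lambda a, b, d: a + 2 * b + 3 * d                # arbitrary test function
g = T_beta(T_sigma(T_alpha(f)))                      # alpha, then sigma, then beta
h = T_sigma(f)
assert all(abs(g(a, b, d) - h(a, b, d)) < 1e-12
           for a in (0.1, 0.7) for b in (0.2, 0.9) for d in (-0.5, 0.5))
```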

2.4. Performance Metrics

Based on the binary confusion matrix, numerous performance metrics have been proposed [32,33,34,35,36]. For our study, the focus is placed on 10 of these metrics, which are summarized in Table 3. The terms used in that table are taken from the elements of a generic confusion matrix, which can be stated as
$$cm = \begin{bmatrix} a & f \\ g & b \end{bmatrix}.$$
The last three metrics ($MCC$, $BM$ and $MK$) take values within the $[-1, 1]$ range, while the ranges of the first seven lie within the $[0, 1]$ interval. For comparison purposes, these metrics are used herein in their normalized version (the $[-1, 1]$ interval). Naming a metric defined within the $[0, 1]$ interval as $\mu$, it can be normalized to the $[-1, 1]$ range by the expression
$$\mu_n \equiv 2\mu - 1.$$
It can easily be shown that all these metrics can be expressed as a function $\mu = \mu(\alpha, \beta, \delta)$.
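As an illustration of this claim (our own sketch, using accuracy as the example), the normalized accuracy computed from the raw counts coincides with its expression in terms of $(\alpha, \beta, \delta)$, namely $ACC_n = \alpha(1+\delta) + \beta(1-\delta) - 1$:

```python
# Normalized accuracy from raw counts (a, b, f, g)...
def acc_n_counts(a, b, f, g):
    m = a + b + f + g
    return 2 * (a + b) / m - 1

# ...and as a function of (alpha, beta, delta).
def acc_n_abd(alpha, beta, delta):
    return alpha * (1 + delta) + beta * (1 - delta) - 1

a, b, f, g = 40, 30, 10, 20                      # example confusion matrix
P, N, M = a + f, g + b, a + b + f + g
alpha, beta, delta = a / P, b / N, 2 * P / M - 1
assert abs(acc_n_counts(a, b, f, g) - acc_n_abd(alpha, beta, delta)) < 1e-12
```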
Although only performance metrics based on the confusion matrix are considered, a marginal approach to Receiver Operating Characteristic (ROC) analysis [37] can also be carried out. In this analysis, the Area Under the Curve ($AUC$) is commonly used as a performance metric. However, for classifiers offering only a label (and not a set of scores for each label), or when a single threshold is used on scores, the values of $AUC_n$ and $BM$ are the same [38]. Therefore, in the forthcoming sections, whenever $BM$ is mentioned it can also be understood as $AUC_n$.

2.5. Exploring Symmetries

In order to determine the existence of any symmetric or cross-symmetric behaviour in the 10 classification performance metrics described in the previous section, we should explore whether, for each metric (or pair of metrics), any of the 31 combinations of transformations obtains the same result as the baseline of the same metric (symmetry) or of any other metric (cross-symmetry). Moreover, many of these combined transformations must take the order into account. Therefore, several thousand different analyses have to be undertaken. Although performing this task using analytical derivations is not an impossible assignment (preferably using some kind of symbolic computation), it is certainly arduous.
An alternative approach is to compute the distance between two metrics. More formally, for the $U$-th transformation, let us consider the $i$-th combination of classifier $c_i^U$ operating on the dataset $D_i^U$. The classification result is measured using the $r$-th metric, $\gamma_i^{Ur}$. Similarly, for the $V$-th transformation and the $i$-th combination of classifier $c_i^V$ operating on the dataset $D_i^V$, let us measure its performance using the $s$-th metric, $\gamma_i^{Vs}$. The distance between these measures is defined as $dist(\gamma_i^{Ur}, \gamma_i^{Vs}) \equiv |\gamma_i^{Ur} - \gamma_i^{Vs}|$. The distance between the $r$-th metric $\mu_r^U = \{\gamma_i^{Ur}\}$ and the $s$-th metric $\mu_s^V = \{\gamma_i^{Vs}\}$ can then be defined as
$$dist(\mu_r^U, \mu_s^V) \equiv \frac{1}{n} \sum_{i=1}^{n} dist(\gamma_i^{Ur}, \gamma_i^{Vs}) = \frac{1}{n} \sum_{i=1}^{n} |\gamma_i^{Ur} - \gamma_i^{Vs}|.$$
Therefore, symmetric or cross-symmetric behaviour can be identified by a distance equal to zero.
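This mean-absolute-difference distance can be sketched directly (our own illustrative code; the function name is hypothetical):

```python
# Distance between two sets of metric values: the mean absolute difference.
# A distance of zero signals symmetric (or cross-symmetric) behaviour.
def metric_distance(mu_u, mu_v):
    return sum(abs(u - v) for u, v in zip(mu_u, mu_v)) / len(mu_u)
```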
It should be noted that if the $r$-th metric is symmetric under the $U$-th transformation, that is, $\mu_r^U = T_U(\mu_r^B) = \mu_r^B$, and also under the $V$-th transformation, $\mu_r^V = T_V(\mu_r^B) = \mu_r^B$, it will also be symmetric under the concatenation of the two transformations. In effect,
$$\mu_r^{UV} = T_V[T_U(\mu_r^B)] = T_V(\mu_r^B) = \mu_r^B.$$
Conversely, this is not true for cross-symmetries. If the $r$-th and $s$-th metrics are cross-symmetric under the $U$-th transformation, that is, $\mu_r^U = T_U(\mu_r^B) = \mu_s^B$, and also under the $V$-th transformation, $\mu_r^V = T_V(\mu_r^B) = \mu_s^B$, they are not necessarily cross-symmetric under the concatenation of the two transformations. In effect,
$$\mu_r^{UV} = T_V[T_U(\mu_r^B)] = T_V(\mu_s^B) = \mu_r^B \neq \mu_s^B.$$

2.6. Statistical Symmetries

The symmetries of the performance metrics can also be explored from a statistical point of view. Let us recall that $D_i^k$ is the $i$-th dataset in the $k$-th experiment, with an imbalance described by its imbalance coefficient $\delta_i^k$. The elements in $D_i^k$ are processed by the classifier $c_i^k$ in order to obtain a ratio of correctly classified positive ($\alpha_i^k$) and negative ($\beta_i^k$) elements. The $j$-th metric $\gamma_i^{kj}$ is based on these values and hence $\gamma_i^{kj} = \gamma_i^{kj}(\alpha_i^k, \beta_i^k, \delta_i^k)$. Let us also recall that the sets of all these values for $i = 1, \dots, n^k$ are denoted $\mu_j^k = \{\gamma_i^{kj}\}$, $\alpha^k = \{\alpha_i^k\}$, $\beta^k = \{\beta_i^k\}$ and $\delta^k = \{\delta_i^k\}$, and therefore $\mu_j^k = \mu_j^k(\alpha^k, \beta^k, \delta^k)$.
Let us now suppose that the elements $c_i^k, D_i^k$ in the experiments are randomly selected in such a way that $\alpha^k$, $\beta^k$ and $\delta^k$ are uniformly distributed within their respective ranges. Therefore, $\mu_j^k$ becomes a random variable, which can be statistically described.
First of all, the probability density function (pdf) of $\mu_j^k$, $pdf(\mu_j^k)$, is obtained and its symmetry (or lack thereof) is ascertained. A more precise assessment of the statistical symmetry can be obtained by computing the skewness, which is defined as
$$\xi_j^k \equiv skew(\mu_j^k) = E\left[\left(\frac{\mu_j^k - \bar{\mu}_j^k}{\sqrt{var(\mu_j^k)}}\right)^3\right],$$
where $\bar{\mu}_j^k$ is the mean of $\mu_j^k$ and $var(\mu_j^k)$ is its variance.
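The definition above (the third standardized moment) can be sketched for a finite sample of metric values as follows; this is our own illustration, computing population moments over the sample:

```python
# Sample skewness of a set of metric values: the third standardized moment.
# Zero for a symmetric sample; positive for a right-tailed one.
def skewness(values):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return sum(((v - mean) / var ** 0.5) ** 3 for v in values) / n
```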

3. Results

3.1. Identifying Symmetries

The symmetric behaviour of the 10 metrics is first determined by computing the distance between the baseline and each of the 31 possible transformations, in accordance with Equation (20). The results are depicted in Figure 15. Each row shows the symmetries of a metric; the columns correspond to the 31 different transformations. Any given metric-transformation pair (small rectangles in the graphic) is shown in yellow if it has zero distance from the metric baseline. The right-hand side of the plot (codes greater than or equal to 16) corresponds to combined transformations where the $\mu$ axis has been inverted, that is, where the transformation of type $\mu$ is present. This is therefore the area for antisymmetric behaviour.
Let us first analyse the accuracy ($ACC_n$), the Matthews correlation coefficient ($MCC$) and the markedness ($MK$). These three metrics present symmetric behaviour for the combined transformations shown in Table 4. For instance, the first row indicates that the three metrics are symmetric for a combination of the transformations $\delta$ and $\sigma$ taken in any order ($\delta\sigma$ or $\sigma\delta$), which corresponds to the code 12 (01100) for a coding scheme ($\mu, \sigma, \delta, \beta, \alpha$) where $\mu$ represents the Most Significant Bit and $\alpha$ represents the Least Significant Bit.
The first case (code 12) corresponds to the transformation $T_{\sigma\delta}$ or, in other words, to the inverse labelling transformation $T_L = T_{\sigma\delta}$, which can be formulated for accuracy as
$$\mu_{ACC_n}(\alpha, \beta, \delta) = \mu_{ACC_n}(\beta, \alpha, -\delta).$$
The results are depicted in Figure 16.
The second case (code 15) corresponds to four transformations ordered in two different ways. In the first ordering, we have $T_{\alpha\sigma\beta\delta} = T_{\alpha\sigma\beta} \cdot T_\delta$. Recalling Equation (17), $T_{\alpha\sigma\beta} = T_\sigma$. It can therefore be written that $T_{\alpha\sigma\beta\delta} = T_\sigma \cdot T_\delta = T_{\sigma\delta} = T_L$; that is, it is equivalent to the inverse labelling transformation. The same result is obtained for $T_{\beta\sigma\alpha\delta}$. Hence, code 15 is the same case as code 12.
The third case (code 19) corresponds to the transformation T α β μ , or, in other words, to the inverse scoring transformation T S = T α β μ , which can be formulated for accuracy as
μ_ACCn(α, β, δ) = −μ_ACCn(1 − α, 1 − β, δ).
The results are depicted in Figure 17.
Finally, code 31 corresponds to 5 transformations ordered in 4 different ways. In the first ordering we have T α β σ δ μ but, by considering that the order of T δ and T μ is not relevant, it can also be written as T α β σ δ μ = T σ δ · T α β μ = T L · T S = T F , that is, it is equivalent to the full transformation. The same result is obtained for the 3 remaining orderings, which can be formulated for accuracy as
μ_ACCn(α, β, δ) = −μ_ACCn(1 − β, 1 − α, −δ).
The results are depicted in Figure 18.
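Both antisymmetric cases can likewise be checked numerically. The sketch below is illustrative; the definition of the normalized accuracy is an assumption reconstructed from Section 2. It confirms that the transformations of codes 19 and 31 change the sign of the metric.

```python
# Illustrative check of the antisymmetric transformations for normalized accuracy.
def acc_n(alpha, beta, delta):
    p = (1 + delta) / 2                          # proportion of positive elements
    return 2 * (alpha * p + beta * (1 - p)) - 1

a, b, d = 0.8, 0.6, 0.4
# Inverse scoring T_S = T_alpha_beta_mu (code 19): the metric changes sign
assert abs(acc_n(a, b, d) + acc_n(1 - a, 1 - b, d)) < 1e-12
# Full inversion T_F = T_L . T_S (code 31): also a sign change
assert abs(acc_n(a, b, d) + acc_n(1 - b, 1 - a, -d)) < 1e-12
```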
Let us now focus on the precision ( P R C n ) and the negative predictive value ( N P V n ). These two metrics present a symmetric behaviour for the combined transformations shown in Table 5.
These two metrics present symmetric behaviour only for the combined transformation with code 31 ( 11111 ) which, in any of its orderings, is equivalent to the full inversion T F = T L · T S = T α β δ σ μ and can be formulated for precision as
μ_PRCn(α, β, δ) = −μ_PRCn(1 − β, 1 − α, −δ).
In other words, precision is symmetric with respect to the concatenation of inverse labelling and the inverse scoring transformations. The results are depicted in Figure 19.
Let us now analyse the geometric mean ( G M n ), which presents symmetric behaviour for the combined transformations shown in Table 6.
In the first place, code 4 corresponds to T δ . In fact, this metric is not only symmetric with respect to δ but also independent of δ , as can be seen in Table 3. Secondly, the combined transformations coded as 8 and 11 are equivalent to the T σ transformation, that is, G M n is symmetric with respect to the diagonal in the α , β plane. This can be formulated as
μ_GMn(α, β) = μ_GMn(β, α).
Finally, codes 12 and 15 imply concatenating T δ to T σ but as the metric is independent of δ , it is again equivalent to T σ , that is, T σ δ = T σ · T δ = T σ . These results are depicted in Figure 20.
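The diagonal symmetry and the δ-independence of the geometric mean can be illustrated with the following sketch (the definition GM = √(αβ), normalized to [−1, 1], is assumed from Section 2 and is not verbatim code from the study):

```python
from math import sqrt

def gm_n(alpha, beta, delta=0.0):
    # The geometric mean ignores the imbalance coefficient entirely
    return 2 * sqrt(alpha * beta) - 1

a, b = 0.9, 0.5
assert abs(gm_n(a, b) - gm_n(b, a)) < 1e-12   # T_sigma (diagonal) symmetry
assert gm_n(a, b, 0.7) == gm_n(a, b, -0.7)    # trivially independent of delta
# But no scoring antisymmetry: inverting the scores does not mirror the value
assert abs(gm_n(a, b) + gm_n(1 - a, 1 - b)) > 1e-3
```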
In the case of bookmaker informedness ( B M ), the symmetric behaviour is obtained for the combined transformations shown in Table 7.
Again, code 4 corresponds to T δ as a consequence of this metric being independent of δ (see Table 3). Secondly, the combined transformations coded as 8 and 11 are equivalent to the T σ transformation, that is, B M is symmetric with respect to the diagonal in the α , β plane. This can be formulated as
μ_BM(α, β) = μ_BM(β, α).
Additionally, codes 12 and 15 imply concatenating T δ to T σ but since the metric is independent of δ , it is again equivalent to T σ , that is, T σ δ = T σ · T δ = T σ . These results are depicted in Figure 21.
Code 19 (and also code 23, since the metric does not depend on δ ) corresponds to the transformation T α β μ or, in other words, to the inverse scoring transformation T S = T α β μ , which can be formulated for bookmaker informedness as
μ_BM(α, β) = −μ_BM(1 − α, 1 − β) = −μ_BM(1 − β, 1 − α).
The results are depicted in Figure 22.
In other words, the bookmaker informedness is symmetric with respect to both the inverse labelling and the inverse scoring transformations. This implies that it is also symmetric with respect to the concatenation of these two transforms, which occurs in codes 27 and 31 (recall that the metric is independent of δ ), corresponding to the full inversion T F = T L · T S = T α β δ σ μ , which can be formulated as
μ_BM(α, β) = −μ_BM(1 − β, 1 − α).
The results are depicted in Figure 23.
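All three symmetries of the bookmaker informedness can be confirmed with a short sketch, assuming its usual definition BM = α + β − 1 (already valued in [−1, 1], so no further normalization is applied):

```python
# Illustrative check of the three symmetries of bookmaker informedness,
# assuming the definition BM = alpha + beta - 1.
def bm(alpha, beta):
    return alpha + beta - 1

a, b = 0.85, 0.4
assert abs(bm(a, b) - bm(b, a)) < 1e-12           # labelling inversion
assert abs(bm(a, b) + bm(1 - a, 1 - b)) < 1e-12   # scoring inversion (sign change)
assert abs(bm(a, b) + bm(1 - b, 1 - a)) < 1e-12   # full inversion (sign change)
```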
In the case of sensitivity ( S N S n ), the symmetric behaviour is found for the combined transformations shown in Table 8.
Codes 2 and 4 correspond to T β and T δ as a consequence of this metric being independent of β and δ (see Table 3). Code 19 (and also codes 17, 21 and 23, since the metric depends on neither β nor δ ) corresponds to the transformation T α β μ , or, in other words, to the inverse scoring transformation T S = T α β μ , which can be formulated as
μ_SNSn(α) = −μ_SNSn(1 − α).
This result is depicted in Figure 24.
On considering the specificity ( S P C n ), its symmetric behaviour is shown in Table 9.
Codes 1 and 4 correspond to T α and T δ as a consequence of this metric being independent of α and δ (see Table 3). Code 19 (and also codes 18, 22 and 23, as the metric depends on neither α nor δ ) corresponds to the transformation T α β μ , that is, to the inverse scoring transformation T S = T α β μ , which can be formulated as
μ_SPCn(β) = −μ_SPCn(1 − β).
This result is depicted in Figure 25.
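The one-dimensionality and scoring antisymmetry of sensitivity and specificity can be sketched together (the 2x − 1 normalization is assumed from Section 2):

```python
# Illustrative check: sensitivity and specificity are one-dimensional and
# antisymmetric under scoring inversion (normalization 2x - 1 assumed).
def sns_n(alpha):
    return 2 * alpha - 1     # depends only on alpha

def spc_n(beta):
    return 2 * beta - 1      # depends only on beta

assert abs(sns_n(0.7) + sns_n(1 - 0.7)) < 1e-12
assert abs(spc_n(0.25) + spc_n(1 - 0.25)) < 1e-12
```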
Finally, it can be observed that the F 1 n score metric is not symmetric under any transformation. The results for each metric are summarized in Table 10.

3.2. Identifying Cross-Symmetries

In order to explore whether any cross-symmetry can be identified among the 10 metrics, we have computed the distance (using Equation (20)) from the baseline of each metric (and its 31 possible transformations) to the baselines of the remaining metrics. The results are depicted in Figure 26. Each row corresponds to the baseline of a metric and each column to the baseline and the 31 transformations of the other metric. Any given metric-metric pair (small squares in the graphic) is shown in yellow if it has zero distance for at least one possible transformation.
The diagonal presents a summary of the results explored in the previous section, that is, every metric, except for the F 1 n score, presents some kind of symmetry under some transformation. The cases of cross-symmetry appear in the off-diagonal elements. Two cross-symmetries arise: S N S n – S P C n and P R C n – N P V n .
In order to attain a deeper insight into these cross-symmetries, let us consider, for each of the two pairs, the distances between the baseline of the first metric in the pair and the full set of transformations (including the baseline) of the second metric. The results are depicted in Figure 27. Each row shows the cross-symmetries of a pair of metrics; the columns contain the 32 different transformations (including the baseline) of the second metric in the pair. Any given (second-metric transformation) pair (small squares in the graphic) is shown in yellow if it has zero distance to the first-metric baseline. As in Figure 15, the right-hand side of the plot (codes greater than or equal to 16) corresponds to combined transformations in which the μ axis has been inverted, that is, where the transformation type μ is present. This is therefore the area of antisymmetric behaviour.
Let us first analyse the P R C n – N P V n pair, which presents cross-symmetric behaviour for the combined transformations shown in Table 11.
Codes 12 and 15 correspond to the transformation T σ δ or, in other words, to the inverse labelling transformation T L = T σ δ , which can be formulated as
μ_PRCn(α, β, δ) = μ_NPVn(β, α, −δ).
The results are depicted in Figure 28.
Code 19 corresponds to the transformation T α β μ or, in other words, to the inverse scoring transformation T S = T α β μ , which can be formulated as
μ_PRCn(α, β, δ) = −μ_NPVn(1 − α, 1 − β, δ).
The results are depicted in Figure 29.
Although the P R C n – N P V n pair is cross-symmetric with respect to the inverse labelling and to the inverse scoring transformations, this does not imply that it is also cross-symmetric with respect to the concatenation of these two transforms (see Equation (22)). This is the reason why code 31 (corresponding to the full inversion T F = T L · T S = T α β δ σ μ ) is not present in Table 11.
The results for the pair N P V n P R C n are exactly the same. Therefore,
μ_NPVn(α, β, δ) = μ_PRCn(β, α, −δ) = −μ_PRCn(1 − α, 1 − β, δ).
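These two cross-symmetries can be verified numerically. In the sketch below, the rate-based confusion matrix (tp, fp, tn, fn expressed as fractions of the dataset, with p = (1 + δ)/2 the proportion of positive elements) is an assumption reconstructed from the parametrization of Section 2.

```python
# Illustrative check of the PRC-NPV cross-symmetries.
def prc_n(alpha, beta, delta):
    p = (1 + delta) / 2
    tp, fp = alpha * p, (1 - beta) * (1 - p)
    return 2 * tp / (tp + fp) - 1

def npv_n(alpha, beta, delta):
    p = (1 + delta) / 2
    tn, fn = beta * (1 - p), (1 - alpha) * p
    return 2 * tn / (tn + fn) - 1

a, b, d = 0.8, 0.6, 0.2
assert abs(prc_n(a, b, d) - npv_n(b, a, -d)) < 1e-9          # inverse labelling
assert abs(prc_n(a, b, d) + npv_n(1 - a, 1 - b, d)) < 1e-9   # inverse scoring
```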
Let us now consider the pair of metrics S N S n S P C n and its cross-symmetric behaviour, which is found for the combined transformations shown in Table 12.
Since specificity is independent of δ (see Table 3), codes 8, 11, 12 and 15 correspond to T σ δ , that is, to the inverse labelling transformation, which can be formulated as
μ_SNSn(α, β) = μ_SPCn(β, α).
Additionally, since specificity is also independent of α , then codes 9 ( T α σ ) and 13 ( T α σ δ ) are equivalent to T σ δ . Moreover, after a T σ transformation, the resulting metric has no dependence on β (due to the axis inversion) and hence codes 10 ( T σ β ) and 14 ( T σ β δ ) are also equivalent to T σ δ . These results are depicted in Figure 30.
On the other hand, code 31 corresponds to the full inversion transformation T F = T σ δ α β μ , which can be formulated as
μ_SNSn(α, β) = −μ_SPCn(1 − β, 1 − α).
It can be shown that the remaining codes (25, 26, 27, 29 and 30) are also equivalent to T F . These results are depicted in Figure 31.
The results for the pair S P C n S N S n are exactly the same, so
μ_SPCn(α, β) = μ_SNSn(β, α) = −μ_SNSn(1 − β, 1 − α).
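The SNS–SPC cross-symmetries can be sketched in the same way (normalization 2x − 1 assumed; both functions take (α, β), although each uses only one of the arguments):

```python
# Illustrative check of the SNS-SPC cross-symmetries.
def sns_n(alpha, beta):
    return 2 * alpha - 1      # depends only on alpha

def spc_n(alpha, beta):
    return 2 * beta - 1       # depends only on beta

a, b = 0.65, 0.3
assert abs(sns_n(a, b) - spc_n(b, a)) < 1e-12           # inverse labelling
assert abs(sns_n(a, b) + spc_n(1 - b, 1 - a)) < 1e-12   # full inversion
```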
The results for every pair of cross-symmetric metrics are summarized in Table 13.

3.3. Skewness of the Statistical Descriptions of the Metrics

In order to explore the symmetric behaviour of the statistical descriptions of the metrics, let us recall that, for the baseline experiment, μ j B = μ j B ( α B , β B , δ B ) can be considered a statistical variable. First of all, let us select the subset of μ j B corresponding to a certain value δ 0 of the imbalance coefficient, that is, μ j B ( α B , β B , δ 0 ) , and obtain its probability density function (pdf), which will be called the local pdf (since it is obtained for a single value of δ B ). The results p d f ( μ j k , δ 0 ) for every metric with δ B = 0.5 are shown in Figure 32.
This result can be generalized for various values of the imbalance coefficient δ B by obtaining the p d f ( μ j k , δ B ) depicted in Figure 33 as a set of heatmap plots. In every plot, the horizontal axis represents the imbalance coefficient while the value of the metric is drawn in the vertical axis. The value of the p d f is colour-coded.
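The local pdf can be approximated by Monte Carlo sampling, as in the sketch below (an illustration of the procedure, with normalized accuracy standing in for any metric; its definition is an assumption reconstructed from Section 2). For a scoring-symmetric metric, the sampled distribution is centred on zero.

```python
import random

# Sketch of a "local pdf": sample random classifiers (alpha, beta uniform in
# [0, 1]) at a fixed imbalance delta_0 and describe the resulting metric values.
def acc_n(alpha, beta, delta):
    p = (1 + delta) / 2
    return 2 * (alpha * p + beta * (1 - p)) - 1

random.seed(0)
delta_0 = 0.5           # fixed imbalance coefficient for the local description
values = [acc_n(random.random(), random.random(), delta_0)
          for _ in range(100_000)]

# A scoring-symmetric metric yields a distribution centred on zero
mean = sum(values) / len(values)
assert abs(mean) < 0.01
```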
In Figure 32 and Figure 33, the symmetry of the statistical descriptions of the metrics can easily be observed. However, in order to achieve a more precise insight, the local skewness ξ j B of every p d f is obtained in accordance with Equation (23) and its value ξ j B ( δ B ) is shown in Figure 34 for every metric. It can be observed that 6 metrics ( S N S n , S P C n , A C C n , M C C , B M and M K ) have a symmetric p d f ; one metric ( G M n ) has a slightly asymmetric p d f , although its asymmetry does not depend on δ B ; 2 metrics ( P R C n and N P V n ) have a clearly asymmetric p d f , but their skewness is symmetric with respect to the origin; and finally, the F 1 n metric has a p d f and a skewness that are both asymmetric.
Let us now examine the μ j B for all the values of the imbalance coefficient δ B , that is, μ j B ( α B , β B ,   δ B ) and obtain its probability density function (pdf) which will be called global pdf (as it is obtained for every δ B ). The resulting p d f ( μ j k ) is shown in Figure 35 for every metric.
It can be observed that all the metrics show a symmetric p d f except for G M n and F 1 n . The global pdf for G M n maintains the slight asymmetry of the local pdf (global skewness of 0.18), since G M n does not depend on δ . In the cases of P R C n and N P V n , the local skewness values, being symmetric with respect to the origin, compensate for each other and hence these metrics show a symmetric global pdf. Finally, the positive values of the F 1 n local skewness only partially compensate for its negative values (see Figure 34), which results in an almost uniform global pdf except at its extreme values (global skewness of 0.14). These results are summarized in Table 14.
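The global skewness figures can be reproduced by sampling. The sketch below estimates the global skewness of G M n (with α and β sampled uniformly; since G M n ignores δ, its global and local pdfs coincide) and recovers a value close to the 0.18 reported above. The definition of G M n is, again, an assumption reconstructed from Section 2.

```python
import random

def skewness(xs):
    # Sample skewness: third central moment over sigma cubed
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def gm_n(alpha, beta):
    return 2 * (alpha * beta) ** 0.5 - 1   # independent of delta

random.seed(1)
vals = [gm_n(random.random(), random.random()) for _ in range(200_000)]
assert 0.1 < skewness(vals) < 0.3          # close to the reported 0.18
```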

4. Discussion

From the previous results, summarized in Table 10, Table 13 and Table 14, it can be seen that, although several thousand combined transformations have been tested, the performance metrics present only three types of symmetry: under labelling inversion; under scoring inversion; and under full inversion (the sequence of labelling and scoring inversions).
For a certain performance metric, being symmetric under labelling inversion means that it pays attention to, or focuses on, positive and negative classes with the same intensity and therefore the classes can be exchanged without affecting the value of the metric. These metrics should be used in applications where the cost of misclassification is the same for each class. This is the case for 5 out of the 10 metrics tested: A C C n , M C C , B M , M K and G M n .
Other metrics, however, are more focused on the classification results obtained for the positive class. This is the case of 3 metrics: S N S n , which only depends on α ; P R C n , which measures the ratio of success on the elements classified as positive; and the F 1 n score, which is a combination of S N S n and P R C n . These metrics find their main applications when the cost of misclassifying the positive class is higher than the cost of misclassifying the negative class, for instance, in the case of disease detection in medical diagnostics. Finally, other metrics are more focused on the classification results obtained for the negative class. This is the case of 2 metrics: S P C n , which only depends on β ; and N P V n , which measures the ratio of success on the elements classified as negative. These 2 metrics are mainly applied if the most important issue is the misclassification of the negative class, for instance, when identifying unreliable clients in granting loans.
On the other hand, if a metric shows symmetric behaviour under scoring inversion, it means that good classifiers are positively scored to the same extent as bad classifiers are negatively scored. For instance, let us consider a first classifier which correctly classifies 80% of positive elements and also 70% of negative elements, and a second classifier which obtains a ratio of 20% for positive and 30% for negative elements. A scoring-inversion-symmetric performance metric would have a value of, for example, +0.5 for the first classifier and a value of −0.5 for the second classifier. Therefore, the scoring symmetry indicates the relative importance assigned by the metric to good and bad classifiers. This is the case for 6 out of the 10 metrics tested: A C C n , M C C , B M , M K , S N S n and S P C n . Conversely, G M n is more demanding as regards scoring good results than scoring bad results. This feature can be useful if the objective of the classification is focused on obtaining excellent results (and not just good results). Finally, in 3 of the metrics tested ( P R C n , N P V n and F 1 n ), the way good results are awarded differs from the way bad results are scored, in a manner that depends on the relative values of the parameters ( α , β and δ ).
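The two-classifier example above can be made concrete with bookmaker informedness, a scoring-symmetric metric (the definition BM = α + β − 1 is assumed):

```python
# The two classifiers of the example, scored with bookmaker informedness.
def bm(alpha, beta):
    return alpha + beta - 1

good = bm(0.80, 0.70)    # correct on 80% of positives and 70% of negatives
bad = bm(0.20, 0.30)     # the mirror-image classifier
assert abs(good - 0.5) < 1e-12 and abs(bad + 0.5) < 1e-12
```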
Additionally, it can be seen that metrics showing both labelling and scoring symmetries also show symmetry under the full inversion (the concatenation of the two transformations). This is the case for 4 out of the 10 metrics tested: A C C n , M C C , B M and M K . An interesting result is that P R C n and N P V n , although they have neither labelling nor scoring symmetry, do have full-inversion symmetry. This means that swapping the positive and negative class labels also inverts how good and bad classifiers are scored. An example of all these symmetries can be found in Table 15.
A particular degenerate case of symmetry arises when a metric does not depend on one of the variables. For example, from the results obtained in this research, several metrics have shown themselves to be independent of the imbalance coefficient δ . This is the case for 4 out of the 10 metrics tested: S N S n , S P C n , G M n and B M . This is a particularly interesting result, since these metrics have no kind of bias if the classes are imbalanced. Conversely, the interpretation of classification metrics which do depend on δ should be carefully considered, since they can be misleading as to what a good classifier is.
Additionally, some other metrics are independent of one of the classification success ratios: S N S n only depends on α , while S P C n only depends on β . This can be interpreted as a sort of one-dimensionality of these metrics, that is, S N S n is only focused on the positive class, while S P C n is only concerned with the negative class.
On the other hand, the two pairs of cross-symmetries found can be straightforwardly interpreted: when the labelling of the classes is inverted, S N S n becomes S P C n and P R C n becomes N P V n . Moreover, by exchanging the scoring procedure of good and bad classifiers, P R C n becomes N P V n .
Let us now focus on the interpretation of the results of the statistical symmetries. Statistical local symmetry means that, for a certain dataset, that is, for a certain value of the imbalance coefficient, the probability that a random classifier obtains a good score is the same as the probability that it obtains a bad score. This is the case for 6 out of the 10 metrics tested: A C C n , M C C , B M , M K , S N S n and S P C n . These coincide with the metrics that have scoring symmetry, which shows that both concepts are closely related. Conversely, G M n has a greater probability of producing a bad result than a good result, which is consistent with the fact that it is more demanding on obtaining excellent results (and not just good results). Additionally, P R C n obtains good results with a higher probability (lower probability in the case of N P V n ) if the positive class is the majority class, and vice versa if it is the minority class; the way good results are awarded differs from the way bad ones are scored in a manner that depends on the relative values of the parameters ( α , β and δ ). Finally, in the case of balanced classes, the probability of obtaining good F 1 n scores is greater than that of obtaining bad ones, which shows some sort of indulgent judgement. However, the detailed behaviour of F 1 n for different values of δ is more complex.
On the other hand, statistical global symmetry means that the probability that a random classifier operating on a random dataset obtains a good score is the same as the probability that it obtains a bad score. This is the case for 8 out of the 10 metrics tested: A C C n , M C C , B M , M K , S N S n , S P C n , P R C n and N P V n . Conversely, G M n and F 1 n are more likely to produce a bad result than a good result, which can be interpreted as meaning that they are slightly tough judges.
On considering all these results and their meanings, the ten metrics can be organized into 5 clusters that show the features described in Table 16.
In Table 16, the identification of clusters has been carried out by means of informal reasoning. To formalize these analyses, every metric has been described with a set of features corresponding to the columns in Table 16. Most of the columns are binary-valued (yes or no), while others admit several values. For instance, the labelling symmetry value can be yes, no, S N S n – S P C n cross-symmetry or P R C n – N P V n cross-symmetry. In these cases, a one-hot coding mechanism (also called the 1-of-K scheme) is employed [39]. The result is that each metric is defined using a set of 14 features. Although regular or advanced clustering techniques could be used [40,41,42,43], the reduced number of elements in the dataset (10 performance metrics) invites addressing the problem with more intuitive methods. Using Principal Component Analysis (PCA) [44], the problem can be reduced to a bi-dimensional plane; the result is depicted in Figure 36. The 5 clusters mentioned in this section clearly appear therein.
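The PCA step can be sketched as follows. The feature table below is hypothetical: it is reduced to 4 of the 14 features and 5 of the 10 metrics for brevity, with values chosen to match the discussion above; it merely illustrates the binary encoding and the projection onto two principal components.

```python
import numpy as np

# Hypothetical, reduced feature table: [labelling sym., scoring sym.,
# full-inversion sym., delta-independent]; illustrative values only.
features = {
    "ACCn": [1, 1, 1, 0],
    "MCC":  [1, 1, 1, 0],
    "BM":   [1, 1, 1, 1],
    "GMn":  [1, 0, 0, 1],
    "SNSn": [0, 1, 0, 1],
}
X = np.array(list(features.values()), dtype=float)
Xc = X - X.mean(axis=0)                  # centre the data before PCA
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = Xc @ Vt[:2].T                   # 2-D coordinates for the cluster plot

assert coords.shape == (len(features), 2)
```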
Another way to represent how the performance metrics are grouped according to their symmetries is by drawing a dendrogram [45]. To this end, the same 14 features are employed to characterize each performance metric. The distances between the metrics are then computed in the space of these 14 features and employed to gauge how far apart the metrics are, as shown in Figure 37. Once again, this result is consistent with the 5 previously identified clusters.

5. Conclusions

Based on the results obtained in our analysis, it can be stated that the majority of the most commonly used classification performance metrics present some type of symmetry. We have identified 3 and only 3 types of symmetric behaviour: labelling inversion, scoring inversion and the combination of the two inversions. Additionally, several metrics have been revealed as being robust under imbalanced datasets, while others do not show this important feature. Finally, two metrics have been identified as one-dimensional, in that they focus exclusively on the positive class (sensitivity) or on the negative class (specificity). The metrics have been grouped into 5 clusters according to their symmetries.
Selecting one performance metric or another is mainly a matter of the application, depending on issues such as whether the dataset is balanced, whether misclassification has the same cost for both classes and whether good scores should be reserved only for very good classification ratios. None of the studied metrics can be universally applied. However, according to their symmetries, two of these metrics appear especially valuable for general-purpose applications: the bookmaker informedness ( B M ) and the geometric mean ( G M ). Both metrics are robust under imbalanced datasets and treat both classes in the same way (labelling symmetry). The former ( B M ) also has scoring symmetry, while the latter ( G M ) is slightly more demanding in scoring good results than bad results.
In future research, the methodology for the analysis of symmetry developed in this paper can be extended to other classification performance metrics, such as those derived from the multiclass confusion matrix or ranking metrics (e.g., the Receiver Operating Characteristic curve).

Author Contributions

A.L. conceived and designed the experiments; A.L., A.C., A.M. and J.R.L. performed the experiments, analysed the data and wrote the paper.

Funding

This research was funded by the Telefónica Chair “Intelligence in Networks” of the University of Seville.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors played no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Speiser, A. Symmetry in science and art. Daedalus 1960, 89, 191–198.
2. Wigner, E. The unreasonable effectiveness of mathematics in the natural sciences. Commun. Pure Appl. Math. 1960, 13, 1–14.
3. Islami, A. A match not made in heaven: On the applicability of mathematics in physics. Synthese 2017, 194, 4839–4861.
4. Siegrist, J. Symmetry in social exchange and health. Eur. Rev. 2005, 13, 145–155.
5. Varadarajan, V.S. Symmetry in mathematics. Comput. Math. Appl. 1992, 24, 37–44.
6. Garrido, A. Symmetry and Asymmetry Level Measures. Symmetry 2010, 2, 707–721.
7. Xiao, Y.H.; Wu, W.T.; Wang, H.; Xiong, M.; Wang, W. Symmetry-based structure entropy of complex networks. Phys. A Stat. Mech. Appl. 2008, 387, 2611–2619.
8. Magee, J.J.; Betke, M.; Gips, J.; Scott, M.R.; Waber, B.N. A human–computer interface using symmetry between eyes to detect gaze direction. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2008, 38, 1248–1261.
9. Liu, Y.; Hel-Or, H.; Kaplan, C.S.; Van Gool, L. Computational symmetry in computer vision and computer graphics. Found. Trends Comput. Gr. Vis. 2010, 5, 1–195.
10. Tai, W.L.; Chang, Y.F. Separable Reversible Data Hiding in Encrypted Signals with Public Key Cryptography. Symmetry 2018, 10, 23.
11. Graham, J.H.; Whitesell, M.J.; II, M.F.; Hel-Or, H.; Nevo, E.; Raz, S. Fluctuating asymmetry of plant leaves: Batch processing with LAMINA and continuous symmetry measures. Symmetry 2015, 7, 255–268.
12. Bishop, C.M. Pattern Recognition and Machine Learning (Information Science and Statistics); Springer: New York, NY, USA, 2006.
13. Top 10 Technology Trends for 2018: IEEE Computer Society Predicts the Future of Tech. Available online: https://www.computer.org/web/pressroom/top-technology-trends-2018 (accessed on 18 October 2018).
14. Brachmann, A.; Redies, C. Using convolutional neural network filters to measure left-right mirror symmetry in images. Symmetry 2016, 8, 144.
15. Zhang, P.; Shen, H.; Zhai, H. Machine learning topological invariants with neural networks. Phys. Rev. Lett. 2018, 120, 066401.
16. Luque, A.; Gómez-Bellido, J.; Carrasco, A.; Barbancho, J. Optimal Representation of Anuran Call Spectrum in Environmental Monitoring Systems Using Wireless Sensor Networks. Sensors 2018, 18, 1803.
17. Romero, J.; Luque, A.; Carrasco, A. Anuran sound classification using MPEG-7 frame descriptors. In Proceedings of the XVII Conferencia de la Asociación Española para la Inteligencia Artificial (CAEPIA), Granada, Spain, 23–26 October 2016.
18. Luque, A.; Romero-Lemos, J.; Carrasco, A.; Barbancho, J. Non-sequential automatic classification of anuran sounds for the estimation of climate-change indicators. Exp. Syst. Appl. 2018, 95, 248–260.
19. Glowacz, A. Fault diagnosis of single-phase induction motor based on acoustic signals. Mech. Syst. Signal Process. 2019, 117, 65–80.
20. Glowacz, A. Acoustic-Based Fault Diagnosis of Commutator Motor. Electronics 2018, 7, 299.
21. Caruana, R.; Niculescu-Mizil, A. Data mining in metric space: An empirical analysis of supervised learning performance criteria. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Seattle, WA, USA, 22–25 August 2004.
22. Ferri, C.; Hernández-Orallo, J.; Modroiu, R. An experimental comparison of performance measures for classification. Pattern Recognit. Lett. 2009, 30, 27–38.
23. Hossin, M.; Sulaiman, M.N. A review on evaluation metrics for data classification evaluations. Int. J. Data Min. Knowl. Manag. Process 2015, 5, 1.
24. Ting, K.M. Confusion matrix. In Encyclopedia of Machine Learning and Data Mining; Springer: Boston, MA, USA, 2017; p. 260.
25. Aly, M. Survey on multiclass classification methods. Neural Netw. 2005, 19, 1–9.
26. Tsai, M.F.; Yu, S.S. Distance metric based oversampling method for bioinformatics and performance evaluation. J. Med. Syst. 2016, 40, 159.
27. García, V.; Mollineda, R.A.; Sánchez, J.S. Index of balanced accuracy: A performance measure for skewed class distributions. In Iberian Conference on Pattern Recognition and Image Analysis; Springer: Berlin/Heidelberg, Germany, 2009; pp. 441–448.
28. López, V.; Fernández, A.; García, S.; Palade, V.; Herrera, F. An insight into classification with imbalanced data: Empirical results and current trends on using data intrinsic characteristics. Inf. Sci. 2013, 250, 113–141.
29. Daskalaki, S.; Kopanas, I.; Avouris, N. Evaluation of classifiers for an uneven class distribution problem. Appl. Artif. Intell. 2006, 20, 381–417.
30. Amin, A.; Anwar, S.; Adnan, A.; Nawaz, M.; Howard, N.; Qadir, J.; Hussain, A. Comparing oversampling techniques to handle the class imbalance problem: A customer churn prediction case study. IEEE Access 2016, 4, 7940–7957.
31. Jeni, L.A.; Cohn, J.F.; De La Torre, F. Facing imbalanced data: Recommendations for the use of performance metrics. In Proceedings of the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, Geneva, Switzerland, 2–5 September 2013.
32. Powers, D.M. Evaluation: From Precision, Recall and F-measure to ROC, Informedness, Markedness and Correlation; Technical Report SIE-07-001; School of Informatics and Engineering, Flinders University: Adelaide, Australia, 2011.
33. Sokolova, M.; Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 2009, 45, 427–437.
34. Matthews, B.W. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim. Biophys. Acta Protein Struct. 1975, 405, 442–451.
35. Jurman, G.; Riccadonna, S.; Furlanello, C. A comparison of MCC and CEN error measures in multi-class prediction. PLoS ONE 2012, 7, e41882.
36. Gorodkin, J. Comparing two K-category assignments by a K-category correlation coefficient. Comput. Biol. Chem. 2004, 28, 367–374.
37. Flach, P.A. The geometry of ROC space: Understanding machine learning metrics through ROC isometrics. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), Washington, DC, USA, 21–24 August 2003.
38. Sokolova, M.; Japkowicz, N.; Szpakowicz, S. Beyond accuracy, F-score and ROC: A family of discriminant measures for performance evaluation. In Australasian Joint Conference on Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2006; pp. 1015–1021.
39. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006.
40. Chakraborty, S.; Das, S. k-Means clustering with a new divergence-based distance metric: Convergence and performance analysis. Pattern Recognit. Lett. 2017, 100, 67–73.
41. Wang, Y.; Lin, X.; Wu, L.; Zhang, W.; Zhang, Q.; Huang, X. Robust subspace clustering for multi-view data by exploiting correlation consensus. IEEE Trans. Image Process. 2015, 24, 3939–3949.
42. Wang, Y.; Zhang, W.; Wu, L.; Lin, X.; Zhao, X. Unsupervised metric fusion over multiview data by graph random walk-based cross-view diffusion. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 57–70.
43. Wu, L.; Wang, Y.; Shao, L. Cycle-Consistent Deep Generative Hashing for Cross-Modal Retrieval. IEEE Trans. Image Process. 2019, 28, 1602–1612.
44. Jolliffe, I. Principal component analysis. In International Encyclopedia of Statistical Science; Springer: Berlin/Heidelberg, Germany, 2011; pp. 1094–1096.
45. Earle, D.; Hurley, C.B. Advances in dendrogram seriation for application to visualization. J. Comput. Gr. Stat. 2015, 24, 1–25.
Figure 1. Node of the Wireless Sensor Network where the symmetry of classification performance metrics has been primarily applied.
Figure 2. Imbalance coefficient (solid blue line) and imbalance ratio (dashed green line) vs. the proportion of positive elements in the dataset.
Figure 3. 3D representation of a 4-dimension metric value μ j k ( α k , β k , δ k ) . The value of the metric μ j k is colour-coded for every point in the ( α k , β k , δ k ) 3D space.
Figure 4. Representation of a metric value μ j k ( α k , β k ) for δ = 0.75 . (a) Slice of the 3D graphic by a plane corresponding to δ = 0.75 ; (b) 2D representation of the slice.
Figure 5. Heat map of a metric value μ j k ( α k , β k ) for δ = 0.75 .
Figure 6. Panel of heat maps representing the metric μ j k ( α k , β k , δ k ) .
Figure 7. Transformation type α of a metric. (a) Baseline metric. (b) Reflection symmetry with respect to the hyperplane α = 0.5 .
Figure 8. Transformation type β of a metric. (a) Baseline metric; (b) Reflection symmetry with respect to the hyperplane β = 0.5 .
Figure 9. Transformation type δ of a metric. (a) Baseline metric. (b) Reflection symmetry with respect to the hyperplane δ = 0 .
Figure 10. Transformation type μ of a metric. (a) Baseline metric. (b) Reflection symmetry with respect to the hyperplane μ = 0 .
Figure 11. Transformation type σ of a metric. (a) Baseline metric; (b) Reflection symmetry with respect to the hyperplane α = β .
Figure 12. Transformation by inverse labelling of classes ( T L ). (a) Baseline metric; (b) Reflection symmetry with respect to the main diagonal ( T σ ); (c) Reflection symmetry with respect to the plane δ = 0 ( T δ ) ; (d) Reflection symmetry with respect to the plane μ = 0 (colour inversion, T μ ).
Figure 13. Transformation by inverse scoring ( T S ). (a) Baseline metric; (b) Reflection symmetry with respect to the plane α = 0 ( T α ); (c) Reflection symmetry with respect to the plane β = 0 ( T β ) ; (d) Reflection symmetry with respect to the plane μ = 0 (colour inversion, T μ ).
Figure 14. Transformation by full inversion ( T F ). (a) Baseline metric; (b) Reflection symmetry with respect to the plane α = 0 ( T α ); (c) Reflection symmetry with respect to the plane β = 0 ( T β ); (d) Reflection symmetry with respect to the main diagonal ( T σ ); (e) Reflection symmetry with respect to the plane δ = 0 ( T δ ); (f) Reflection symmetry with respect to the plane μ = 0 (colour inversion, T μ ).
Symmetry 11 00047 g014
Figure 15. Symmetric behaviour of performance metrics for any combined transformation.
Figure 16. Symmetry of accuracy with respect to inverse labelling ( T L ). (a) Baseline metric; (b) Reflection symmetry with respect to the main diagonal ( T σ ); (c) Reflection symmetry with respect to the plane δ = 0 ( T δ ) .
Figure 17. Symmetry of accuracy with respect to the inverse scoring ( T S ). (a) Baseline metric; (b) Reflection symmetry with respect to the plane α = 0 ( T α ); (c) Reflection symmetry with respect to the plane β = 0 ( T β ) ; (d) Reflection symmetry with respect to the plane μ = 0 (colour inversion, T μ ).
Figure 18. Symmetry of accuracy with respect to the full inversion ( T F ). (a) Baseline metric; (b) Reflection symmetry with respect to the plane α = 0 ( T α ); (c) Reflection symmetry with respect to the plane β = 0 ( T β ) ; (d) Reflection symmetry with respect to the main diagonal ( T σ ); (e) Reflection symmetry with respect to the plane δ = 0 ( T δ ) ; (f) Reflection symmetry with respect to the plane μ = 0 (colour inversion, T μ ).
Figure 19. Symmetry of precision with respect to the full inversion ( T F ). (a) Baseline metric; (b) Reflection symmetry with respect to the plane α = 0 ( T α ); (c) Reflection symmetry with respect to the plane β = 0 ( T β ) ; (d) Reflection symmetry with respect to the main diagonal ( T σ ); (e) Reflection symmetry with respect to the plane δ = 0 ( T δ ) ; (f) Reflection symmetry with respect to the plane μ = 0 (colour inversion, T μ ).
Figure 20. Symmetry of geometric mean with respect to T σ . (a) Baseline metric; (b) Reflection symmetry with respect to the main diagonal ( T σ ).
Figure 21. Symmetry of bookmaker informedness with respect to T σ . (a) Baseline metric; (b) Reflection symmetry with respect to the main diagonal ( T σ ).
Figure 22. Symmetry of bookmaker informedness with respect to the inverse scoring ( T S ). (a) Baseline metric; (b) Reflection symmetry with respect to the plane α = 0 ( T α ); (c) Reflection symmetry with respect to the plane β = 0 ( T β ) ; (d) Reflection symmetry with respect to the plane μ = 0 (colour inversion, T μ ).
Figure 23. Symmetry of bookmaker informedness with respect to the full inversion ( T F ). (a) Baseline metric; (b) Reflection symmetry with respect to the plane α = 0 ( T α ); (c) Reflection symmetry with respect to the plane β = 0 ( T β ); (d) Reflection symmetry with respect to the main diagonal ( T σ ); (e) Reflection symmetry with respect to the plane δ = 0 ( T δ ); (f) Reflection symmetry with respect to the plane μ = 0 (colour inversion, T μ ).
Symmetry 11 00047 g023
Figure 24. Symmetry of sensitivity with respect to the combined transformation ( T α μ ). (a) Baseline metric; (b) Reflection symmetry with respect to the plane α = 0 ( T α ); (c) Reflection symmetry with respect to the plane μ = 0 (colour inversion, T μ ).
Figure 25. Symmetry of specificity with respect to the combined transformation ( T β μ ). (a) Baseline metric; (b) Reflection symmetry with respect to the plane β = 0 ( T β ); (c) Reflection symmetry with respect to the plane μ = 0 (colour inversion, T μ ).
Symmetry 11 00047 g025
Figure 26. Cross-symmetric behaviour of performance metrics for any combined transformation.
Figure 27. Cross-symmetric behaviour for any combined transformation.
Figure 28. Cross-symmetry of the P R C n N P V n pair with respect to the inverse labelling ( T L ). (a) Baseline P R C n metric; (b) Baseline N P V n metric; (c) Reflection symmetry of N P V n with respect to the main diagonal ( T σ ); (d) Reflection symmetry of N P V n with respect to the plane δ = 0 ( T δ ) .
Figure 29. Cross-symmetry of the P R C n N P V n pair with respect to the inverse scoring ( T S ). (a) Baseline P R C n metric; (b) Baseline N P V n metric. (c) Reflection symmetry of N P V n with respect to the plane α = 0 ( T α ); (d) Reflection symmetry of N P V n with respect to the plane β = 0 ( T β ) ; (e) Reflection symmetry of N P V n with respect to the plane μ = 0 (colour inversion, T μ ).
Figure 30. Cross-symmetry of the S N S n S P C n pair with respect to the inverse labelling ( T L ). (a) Baseline S N S n metric; (b) Baseline S P C n metric; (c) Reflection symmetry of S P C n with respect to the main diagonal ( T σ ); (d) Reflection symmetry of S P C n with respect to the plane δ = 0 ( T δ ) .
Figure 31. Cross-symmetry of the S N S n S P C n pair with respect to the full inversion ( T F ). (a) Baseline S N S n metric. (b) Baseline S P C n metric. (c) Reflection symmetry of S P C n with respect to the main diagonal ( T σ ). (d) Reflection symmetry of S P C n with respect to the plane δ = 0 ( T δ ) . (e) Reflection symmetry with respect to the plane α = 0 ( T α ). (f) Reflection symmetry with respect to the plane β = 0 ( T β ) . (g) Reflection symmetry with respect to the plane μ = 0 (colour inversion, T μ ).
Figure 32. Local probability density function of every metric for δ = 0.
Symmetry 11 00047 g032
Figure 33. Local probability density function of every metric as a function of δ . The value of pdf is colour coded.
Figure 34. Skewness of the statistical description for every metric as a function of δ .
Figure 35. Global probability density function of every metric and δ = 0 .
Figure 36. Bi-dimensional representation of performance metrics according to their symmetries.
Figure 37. Dendrogram of performance metrics according to their symmetries.
Table 1. Summary of basic transformations.
Transformation | α_k | β_k | δ_k | μ_k
α | 1 - α_B | β_B | δ_B | μ_B
β | α_B | 1 - β_B | δ_B | μ_B
δ | α_B | β_B | -δ_B | μ_B
μ | α_B | β_B | δ_B | -μ_B
σ | β_B | α_B | δ_B | μ_B
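The five basic transformations of Table 1 act on a baseline state (α_B, β_B, δ_B, μ_B): α and β reflect their coordinate about 0.5, δ and μ flip the sign of theirs, and σ swaps α and β. A minimal sketch of this action (the function names are illustrative, not from the paper):

```python
# Basic transformations of Table 1, applied to a baseline state
# (alpha_B, beta_B, delta_B, mu_B).
def basic_transform(name, alpha, beta, delta, mu):
    if name == "alpha":
        return (1 - alpha, beta, delta, mu)   # reflect alpha about 0.5
    if name == "beta":
        return (alpha, 1 - beta, delta, mu)   # reflect beta about 0.5
    if name == "delta":
        return (alpha, beta, -delta, mu)      # flip sign of delta
    if name == "mu":
        return (alpha, beta, delta, -mu)      # flip sign of the metric value
    if name == "sigma":
        return (beta, alpha, delta, mu)       # swap alpha and beta
    raise ValueError(f"unknown transformation: {name}")

# Combined transformations compose basic ones, e.g. the inverse
# labelling T_L = sigma followed by delta (these two commute):
def labelling_inversion(alpha, beta, delta, mu):
    return basic_transform("delta", *basic_transform("sigma", alpha, beta, delta, mu))
```

For example, `labelling_inversion(0.8, 0.7, 0.5, 0.2)` yields `(0.7, 0.8, -0.5, 0.2)`.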
Table 2. Example of the coding of combined transformations.
Transformation | μ | σ | δ | β | α | Code
μ σ δ | 1 | 1 | 1 | 0 | 0 | 28
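Reading Table 2, the code of a combined transformation is the 5-bit binary number whose bits, from most to least significant, flag the presence of μ, σ, δ, β and α (weights 16, 8, 4, 2, 1). A sketch, with illustrative names:

```python
# Bit weights of the five basic transformations, most significant first:
# mu = 16, sigma = 8, delta = 4, beta = 2, alpha = 1.
WEIGHTS = {"mu": 16, "sigma": 8, "delta": 4, "beta": 2, "alpha": 1}

def transformation_code(active):
    """Integer code of a combined transformation given its set of
    active basic transformations."""
    return sum(WEIGHTS[t] for t in active)

# The example of Table 2: mu, sigma and delta active -> bits 11100 -> 28.
print(transformation_code({"mu", "sigma", "delta"}))  # 28
```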
Table 3. Definition of classification performance metrics.
Symbol | Metric | Scoring
SNS | Sensitivity | a / (a + f)
SPC | Specificity | b / (b + g)
PRC | Precision | a / (a + g)
NPV | Negative Predictive Value | b / (b + f)
ACC | Accuracy | (a + b) / (a + f + b + g)
F1 | F1 score | 2 · PRC · SNS / (PRC + SNS)
GM | Geometric Mean | √(SNS · SPC)
MCC | Matthews Correlation Coefficient | (a · b - g · f) / √((a + g)(a + f)(b + g)(b + f))
BM | Bookmaker Informedness | SNS + SPC - 1
MK | Markedness | PRC + NPV - 1
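The ten definitions of Table 3 can be computed directly from the confusion-matrix counts, where (as the sensitivity and specificity formulas imply) a denotes true positives, b true negatives, f false negatives and g false positives. A sketch, not the authors' code:

```python
from math import sqrt

def metrics(a, b, f, g):
    """Ten binary classification metrics from confusion-matrix counts:
    a = true positives, b = true negatives, f = false negatives,
    g = false positives (all denominators assumed non-zero)."""
    sns = a / (a + f)                    # sensitivity (recall)
    spc = b / (b + g)                    # specificity
    prc = a / (a + g)                    # precision
    npv = b / (b + f)                    # negative predictive value
    acc = (a + b) / (a + f + b + g)      # accuracy
    f1 = 2 * prc * sns / (prc + sns)     # F1 score
    gm = sqrt(sns * spc)                 # geometric mean
    mcc = (a * b - g * f) / sqrt((a + g) * (a + f) * (b + g) * (b + f))
    bm = sns + spc - 1                   # bookmaker informedness
    mk = prc + npv - 1                   # markedness
    return {"SNS": sns, "SPC": spc, "PRC": prc, "NPV": npv, "ACC": acc,
            "F1": f1, "GM": gm, "MCC": mcc, "BM": bm, "MK": mk}

# A balanced example: 10 positives (8 detected), 10 negatives (7 detected).
m = metrics(a=8, b=7, f=2, g=3)
print(round(m["MCC"], 3))  # 0.503
```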
Table 4. Symmetric transformations of A C C n , M C C and M K .
Code | μ σ δ β α | Specific Order | Any Order
12 | 0 1 1 0 0 | | δ, σ
15 | 0 1 1 1 1 | α σ β (= σ); β σ α (= σ) | δ
19 | 1 0 0 1 1 | | α, β, μ
31 | 1 1 1 1 1 | α β σ; β α σ; σ α β; σ β α | δ, μ
Table 5. Symmetric transformations of P R C n and N P V n .
Code | μ σ δ β α | Specific Order | Any Order
31 | 1 1 1 1 1 | α β σ; β α σ; σ α β; σ β α | δ, μ
Table 6. Symmetric transformations of G M n .
Code | μ σ δ β α | Specific Order | Any Order
4 | 0 0 1 0 0 | | δ
8 | 0 1 0 0 0 | | σ
11 | 0 1 0 1 1 | α σ β (= σ); β σ α (= σ) |
12 | 0 1 1 0 0 | | δ, σ
15 | 0 1 1 1 1 | α σ β (= σ); β σ α (= σ) | δ
Table 7. Symmetric transformations of B M .
Code | μ σ δ β α | Specific Order | Any Order
4 | 0 0 1 0 0 | | δ
8 | 0 1 0 0 0 | | σ
11 | 0 1 0 1 1 | α σ β (= σ); β σ α (= σ) |
12 | 0 1 1 0 0 | | δ, σ
15 | 0 1 1 1 1 | α σ β (= σ); β σ α (= σ) | δ
19 | 1 0 0 1 1 | | α, β, μ
23 | 1 0 1 1 1 | | α, β, δ, μ
27 | 1 1 0 1 1 | α β σ; β α σ; σ α β; σ β α | μ
31 | 1 1 1 1 1 | α β σ; β α σ; σ α β; σ β α | δ, μ
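The simplest entries of Table 7 can be spot-checked numerically under the assumption (as in the simulated experiments) that α plays the role of sensitivity and β that of specificity, so that BM = α + β − 1. Then BM is invariant under T σ (code 8) and flips sign under the combined α, β, μ transformation (code 19). A sketch under those assumptions:

```python
def bm(alpha, beta):
    """Bookmaker informedness, assuming alpha = SNS and beta = SPC."""
    return alpha + beta - 1

alpha, beta = 0.8, 0.7

# Code 8 (sigma): swapping alpha and beta leaves BM unchanged.
assert bm(beta, alpha) == bm(alpha, beta)

# Code 19 (alpha, beta, mu): inverting both alpha and beta flips the sign.
assert abs(bm(1 - alpha, 1 - beta) + bm(alpha, beta)) < 1e-12
```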
Table 8. Symmetric transformations of sensitivity.
Code | μ σ δ β α | Specific Order | Any Order
2 | 0 0 0 1 0 | | β
4 | 0 0 1 0 0 | | δ
6 | 0 0 1 1 0 | | β, δ
17 | 1 0 0 0 1 | | α, μ
19 | 1 0 0 1 1 | | α, β, μ
21 | 1 0 1 0 1 | | α, δ, μ
23 | 1 0 1 1 1 | | α, β, δ, μ
Table 9. Symmetric transformations of specificity.
Code | μ σ δ β α | Specific Order | Any Order
1 | 0 0 0 0 1 | | α
4 | 0 0 1 0 0 | | δ
5 | 0 0 1 0 1 | | α, δ
18 | 1 0 0 1 0 | | β, μ
19 | 1 0 0 1 1 | | α, β, μ
22 | 1 0 1 1 0 | | β, δ, μ
23 | 1 0 1 1 1 | | α, β, δ, μ
Table 10. Summary of symmetries.
Metric | Independent of (α, β, δ) | Symmetry (under Inversion of: Labelling, Scoring, Full)
SNSn | β, δ | Scoring
SPCn | α, δ | Scoring
PRCn | | Full
NPVn | | Full
ACCn | | Labelling, Scoring, Full
F1n | |
GMn | δ | Labelling
MCC | | Labelling, Scoring, Full
BM | δ | Labelling, Scoring, Full
MK | | Labelling, Scoring, Full
Table 11. Cross-symmetric transformations of the P R C n N P V n pair.
Code | μ σ δ β α | Specific Order | Any Order
12 | 0 1 1 0 0 | | δ, σ
15 | 0 1 1 1 1 | α σ β (= σ); β σ α (= σ) | δ
19 | 1 0 0 1 1 | | α, β, μ
Table 12. Cross-symmetric transformations of the S N S n S P C n pair.
Code | μ σ δ β α | Specific Order | Any Order
8 | 0 1 0 0 0 | | σ
9 | 0 1 0 0 1 | α σ |
10 | 0 1 0 1 0 | σ β |
11 | 0 1 0 1 1 | α σ β (= σ); σ β α (= σ) |
12 | 0 1 1 0 0 | | σ, δ
13 | 0 1 1 0 1 | α σ | δ
14 | 0 1 1 1 0 | σ β | δ
15 | 0 1 1 1 1 | α σ β (= σ); σ β α (= σ) | δ
25 | 1 1 0 0 1 | σ α | μ
26 | 1 1 0 1 0 | β σ | μ
27 | 1 1 0 1 1 | α β σ; β α σ; σ α β; σ β α | μ
29 | 1 1 1 0 1 | σ α | δ, μ
30 | 1 1 1 1 0 | β σ | δ, μ
31 | 1 1 1 1 1 | α β σ; β α σ; σ α β; σ β α | δ, μ
Table 13. Summary of cross-symmetries.
Metric | Cross-Symmetry (under Inversion of: Labelling, Scoring, Full)
SNSn | SPCn | (SNSn) | SPCn
SPCn | SNSn | (SPCn) | SNSn
PRCn | NPVn | NPVn | (PRCn)
NPVn | PRCn | PRCn | (NPVn)
Table 14. Summary of statistical symmetry.
Metric | Local | Global (Skewness)
SNSn | ✓ | ✓
SPCn | ✓ | ✓
PRCn | ✓ | ✓
NPVn | ✓ | ✓
ACCn | ✓ | ✓
F1n | ✓ | (0.14)
GMn | ✓ | (0.18)
MCC | ✓ | ✓
BM | ✓ | ✓
MK | ✓ | ✓
Table 15. Examples of symmetric behaviour of metrics under several transformations (for balanced classes). Values marked with an asterisk represent cases of asymmetric behaviour.
Metric | Baseline (α: 0.8; β: 0.7) | Labelling Inversion (α: 0.7; β: 0.8) | Scoring Inversion (α: 0.2; β: 0.3) | Full Inversion (α: 0.3; β: 0.2)
ACCn | 0.500 | 0.500 | -0.500 | -0.500
MCC | 0.503 | 0.503 | -0.503 | -0.503
BM | 0.500 | 0.500 | -0.500 | -0.500
MK | 0.505 | 0.505 | -0.505 | -0.505
GMn | 0.497 | 0.497 | -0.510* | -0.510*
SNSn | 0.600 | 0.400* | -0.600 | -0.400*
SPCn | 0.400 | 0.600* | -0.400 | -0.600*
PRCn | 0.455 | 0.556* | -0.556* | -0.455
NPVn | 0.556 | 0.455* | -0.455* | -0.556
F1n | 0.524 | 0.474* | -0.579* | -0.429*
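The values of Table 15 can be reproduced under the assumptions that α and β act as sensitivity and specificity, that classes are balanced (δ = 0), and that metrics ranged on [0, 1] are linearly rescaled to [−1, 1] via μ_n = 2μ − 1 (MCC, BM and MK already take values in that range). A sketch under those assumptions:

```python
from math import sqrt

def normalized_metrics(alpha, beta):
    """Normalized metrics for a balanced dataset (delta = 0), assuming
    alpha = sensitivity and beta = specificity; [0, 1]-ranged metrics
    are rescaled to [-1, 1] via 2*m - 1."""
    sns, spc = alpha, beta
    prc = sns / (sns + (1 - spc))   # precision, balanced classes
    npv = spc / (spc + (1 - sns))   # negative predictive value
    acc = (sns + spc) / 2           # accuracy, balanced classes
    f1 = 2 * prc * sns / (prc + sns)
    gm = sqrt(sns * spc)
    mcc = (sns + spc - 1) / sqrt((sns + 1 - spc) * (spc + 1 - sns))
    n = lambda m: 2 * m - 1         # rescale [0, 1] -> [-1, 1]
    return {"ACCn": n(acc), "MCC": mcc, "BM": sns + spc - 1,
            "MK": prc + npv - 1, "GMn": n(gm), "SNSn": n(sns),
            "SPCn": n(spc), "PRCn": n(prc), "NPVn": n(npv), "F1n": n(f1)}

baseline = normalized_metrics(0.8, 0.7)   # first column of Table 15
full_inv = normalized_metrics(0.3, 0.2)   # full inversion column
print(round(baseline["F1n"], 3), round(full_inv["F1n"], 3))  # 0.524 -0.429
```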
Table 16. Summary of symmetric behaviour.
Cluster | Metric | Independent of (α, β, δ) | Symmetry (under Inversion of: Labelling, Scoring, Full) | Statistical Symmetry (Local; Global (Skewness))
Ia | ACCn | | Labelling, Scoring, Full | Local; Global
Ia | MCC | | Labelling, Scoring, Full | Local; Global
Ia | MK | | Labelling, Scoring, Full | Local; Global
Ib | BM | δ | Labelling, Scoring, Full | Local; Global
II | SNSn | β, δ | Scoring; cross with SPCn under Labelling and Full | Local; Global
II | SPCn | α, δ | Scoring; cross with SNSn under Labelling and Full | Local; Global
III | PRCn | | Full; cross with NPVn under Labelling and Scoring | Local; Global
III | NPVn | | Full; cross with PRCn under Labelling and Scoring | Local; Global
IV | GMn | δ | Labelling | Local; (0.18)
V | F1n | | | Local; (0.14)

Luque, A.; Carrasco, A.; Martín, A.; Lama, J.R. Exploring Symmetry of Binary Classification Performance Metrics. Symmetry 2019, 11, 47. https://doi.org/10.3390/sym11010047
