Symmetry and Asymmetry Level Measures

Departamento de Matemáticas Fundamentales, Facultad de Ciencias de la UNED, Paseo Senda del Rey, 9. 28040 Madrid, Spain
Symmetry 2010, 2(2), 707-721; https://doi.org/10.3390/sym2020707
Submission received: 23 November 2009 / Revised: 30 March 2010 / Accepted: 6 April 2010 / Published: 8 April 2010
(This article belongs to the Special Issue Feature Papers: Symmetry Concepts and Applications)

Abstract

Usually, Symmetry and Asymmetry are considered as two opposite sides of a coin: an object is either totally symmetric, or totally asymmetric, relative to pattern objects. Intermediate situations of partial symmetry or partial asymmetry are not considered. But such a dichotomous classification lacks the necessary and realistic gradation. For this reason, it is convenient to introduce “shade regions”, modulating the degree of Symmetry (a fuzzy concept). Here, we analyze the Asymmetry problem through successive attempts at description and through the introduction of the Asymmetry Level Function, as a new Normal Fuzzy Measure. Our results (both Theorems and Corollaries) constitute new and original contributions to this very active and interesting field of research. Beforehand, we review the state of the art.

1. Symmetry Measures

Symmetry is a fundamental concept and also a useful tool in almost every scientific and artistic field [1]. For instance, it is a cornerstone not only of Modern Physics, but also of apparently less related areas such as Music. In fact, these two particular fields do intersect in the Physics of Sound.
If we assume that physical systems have a high degree of (at least approximate) symmetry, then it is possible to simplify the equations describing them. Also, in the search for a unified description of elementary particles, the clue is in the equivalence between the valid theory and the most symmetrical among the possible theories.
Usually, Symmetry and, in parallel, Asymmetry are considered as two faces of the same object [2]: the object is either totally symmetric, or totally asymmetric, relative to a pattern. No intermediate situations of partial symmetry or partial asymmetry are considered. But this dichotomous classification is too simple, and lacks the necessary and realistic gradation. Defining symmetry as a continuous feature, we arrive at a more complex definition, but one more useful in many essential fields, such as Computer Vision. Its interest is therefore not only theoretical, but also applied, in Artificial Intelligence.
When we consider an isolated physical system, its symmetry properties are closely related to the conservation laws which characterize such a system.
Emmy Noether gave a clear description of this relation in two theorems, establishing (first theorem) that “each symmetry of a physical system implies that some physical property of that system is conserved”, and conversely (second theorem), that “each conserved quantity (in a system) has a corresponding symmetry”.
Naturally, a definition of symmetry as a continuous feature is much more complex than the discrete one. We may attempt three routes toward a symmetry/asymmetry measure. First, the geometrical characterization of Symmetry through group theory tools [2]. Second, statistical machinery: distribution or density functions, or characteristic functions, for instance, measuring the symmetry degree and the skewness of different probability distributions. And third, Measure Theory, in its more recent fuzzy version [3,4,5,6,7,8]; in this way, we may quantify the distance of a shape from Symmetry as a continuous feature, instead of a discrete one. Hence, we look not for full coincidence or absolute difference, but for the gradual coincidence of a shape with its Symmetrical counterpart.
Concerning the concept of Symmetry, its applications and consequences, the research by Shu-Kun Lin [9] is remarkable. Quoting his work,
-
“symmetry is in principle ugly, because it is related to entropy and information loss”,
-
“the highest level of symmetry is total chaos”,
-
“a gas has more symmetry than a liquid and a liquid more symmetry than a solid”.
Although these are reasonable affirmations, they are not the kind that appear in the usual textbooks.
Shu-Kun Lin also proposes the Similarity Principle, according to which: “If all the other conditions remain constant, the higher the similarity among the components of an ensemble (or a considered system) is, the higher value of entropy of the mixture (for fluid phases) or the assemblage (for a static structure or a system of solid phase) or any other structure (such as an ensemble of quantum states in quantum mechanics) will be, the more stable the mixture or the assemblage will be, and the more spontaneous the process leading to such a mixture or assemblage will be.” This is a proposal [9] very useful for characterizing structural stability and process spontaneity.
Shu-Kun Lin also defines Information as the amount of data after data compression, because the more usual definition of entropy as a measure of information may be confusing.
Lin also proposes three Information Theory Laws based on the mutual relationship between entropy and information measures,
  • First Law: the total amount of data of an isolated system remains unchanged.
  • Second Law: the information of an isolated system decreases to a minimum at equilibrium.
  • Third Law: for a solid structure of perfect symmetry (e.g., a perfect crystal), the information is null, and the entropy is at the maximum.

2. Symmetry and Causality

David Kellogg Lewis (1941–2001) was a prominent mathematical logician and analytical philosopher. He worked in a number of fields: Modal Logic, the plausibility of a multiplicity of possible worlds and, with greater success, the development of Counterfactual Theory [10].
Counterfactual Theory has an early origin in the work of David Hume (1711–1776), who said in 1748: “We may define a cause to be an object followed by another, and where all the objects, similar to the first, are followed by an object similar to the second. Or, in other words, where, if the first object had not been, the second never had existed” [11].
The first sentence reflects the Regularity Criterion and the second the well-known Criterion of Counterfactuals:
“A has caused B”
(counterfactual notation A→B)
Equivalent to
“B would not have occurred, if it were not for A”
Such initial Counterfactual Theory was taken up again by John Stuart Mill: “… we conclude [that] because a resembles b in one or more properties, it does so in a certain other property” (1874).
But criticism appeared against the explanation given by Lewis, as in Horwich [12] and Hausman [13]. Recall now the properties of the Causality Relation, or simply Causation. Suppose A, B and C are three different events in a world, W. Then we have:
  • Transitivity: if A is a cause of B, and B is a cause of C, then A is a cause of C.
  • Asymmetry (Anti-Symmetry): if A is a cause of B, then B cannot be a cause of A.
  • Irreflexivity (Anti-Reflexivity): A cannot possibly be (ever) its own cause.
One of the main arguments of the critics is based on supposing that Lewis’ explanation suffers from a certain psychological implausibility. This can be found in Horwich [12].
D. K. Lewis [10] admits that this asymmetry is possibly a contingent characteristic of the actual world, not present in other worlds. So, in a world populated by only one atom, such asymmetry of over-determination does not hold. For this reason, there is a possible discontinuity problem at the boundary: if we consider a contractive sequence of sub-worlds, each of them asymmetric, converging to the monatomic world, denoted by W, where such asymmetry does not hold, we would have a possible weakness in the theory.

3. Our Geometrical Construction

Mathematically [7], the situation (relative to the symmetric character) departs from a contractive set, or decreasing collection, of sub-worlds, each one inserted into the precedent, where each one but the last shows asymmetries, whereas in the limit, finally, symmetry appears. To resolve this, we can admit that symmetry is a discontinuous function, and so we see the subsequent tendency
ASYM → ASYM → ASYM → … → ASYM → SYMMETRY
Or we may assign a certain value, Ls, as a level of symmetry or asymmetry, with a definition suggested by the degree of belonging of elements to fuzzy sets, or by the level of satisfaction of some condition. So defined, in the limit case it is possible to obtain a state of complete symmetry,
A₁ ⊇ A₂ ⊇ A₃ ⊇ … ⊇ Aₙ ⊇ … ⊇ A = {a}
For instance, with the contractive condition taken from the concept of cardinality, here denoted by c,
c(A₁) ≥ c(A₂) ≥ c(A₃) ≥ … ≥ c(Aₙ) ≥ … ≥ c(A) = 1
Also, we can suppose that each world has a cardinal number one less than that of its precedent world. Once they are classified in decreasing order, reaching some degree of homogeneity among their elements, it is possible to introduce the “symmetry level” (or “asymmetry level”) function, denoted Ls (respectively, La).
This gives an increasing sequence of values, dependent on the cardinality of the selected world at each step, converging to one from the left; the situation gets closer to A at every step. Hence, {Aₙ}ₙ∈ℕ → A.
Frequently, the causal relation is taken to be intrinsically asymmetric, because in the world of our experience it is so. However, the fundamental physical laws are symmetric. Any other temporal asymmetries are accounted for in terms of the Principle of the Common Cause (PCC), due to Hans Reichenbach, which says: “If an improbable coincidence has occurred, there exists a common cause”.
Through such a Principle, it is possible to explain the arrows (of entropy, experience, and so on) by Causal Theory. And at the same time, the PCC results as a corollary of the Probabilistic Theory of Causation.
The Entropic Theory works in two phases: first, reducing any other arrow (causation, radiation, experience…) to the entropic arrow; and second, explaining entropic asymmetry in terms of boundary conditions on the universe.
Leyton [14,15] investigated the psychological relationships between shape and time, arguing that shape is used by the mind to recover the past, and forms a basis of memory. Symmetry is then the means by which shape is transformed into memory.
Symmetry is an intrinsic property of an object, which remains invariant under certain classes of transformations, such as Rotation, Reflection, Inversion, or more abstract mathematical operations; for instance, it can be represented in the form of coefficients of equations.
We start from an object, shape or form F, where generally we refer to its boundary when it is a 3-D construct. We know that symmetry is never perfect in the real world. Therefore, perfect symmetry is imaginary, an ideal reference, only a product of mathematically creative minds [16,17]. So, we consider the actual symmetry, Ga, corresponding to an imperfect form, Fa, as opposed to the ideal symmetry, Gi, associated with its “perfect” form, Fi. In fact, Ga is a subgroup of Gi. When we say that “the form F has symmetry G”, we are expressing that the form F belongs to the set S(G), which contains all the shapes invariant under transformations of the symmetry group, G. This can be denoted by F ∈ S(G).
We may define a space of all the possible objects, or shapes, denoted by X = {Xi}i∈N. Then, we can assign to each symmetry group G the crisp set containing all the objects which fulfill all the conditions of G; this is the mapping G → S(G). For this purpose, we may introduce a membership function,
μG: X → [0,1]
by
X → μG (X) ≡ μ (G,X)
This characterizes the membership degree of the shape X in the set S(G), i.e., its degree of fulfillment of the symmetry requirements contained in G. Hence, we have several different situations,
-
full membership, when μG (X) = 1.
-
null membership (or not membership at all), if μG (X) = 0.
-
partial membership, when 0 < μG (X) < 1.
In the 2-D case, and also in 3-D and higher dimensions, we may consider the forms and their boundaries as closed surfaces in R³. Therefore, it is feasible to describe them by selecting a convenient coordinate system.
Given an object O, we can define
Oε = {Oi : SD (Oi, O) ≤ ε}
In this way, a new collection of nearest shapes appears, Θ = {Oε}ε > 0. This is the set of shapes that are nearest neighbors of the symmetrical O, relative to the Symmetry Distance (SD) from the shape Oi to its reference pattern, O. Note that if 0 < ε ≤ ε´, then Oε ⊆ Oε´, because if Oi ∈ Oε, then SD (Oi, O) ≤ ε ≤ ε´. So, we are now quantifying the departure from Symmetry in shape as a continuous feature, instead of a discrete one. We no longer consider only total coincidence or absolute difference, but the gradual “similarity” of an object to its Symmetrical shape.
This Distance from Symmetry in shape will be defined as the minimum mean squared distance required for the displacement of points from the original shape, in order to obtain a symmetrical shape. So, SD is the minimum effort required to turn a given shape into a symmetrical shape.
Every pair of such shapes (V and W, for instance) will be represented by their respective sequence of points,
V = {Vj}j = 0, 1, …, n−1, and W = {Wj}j = 0, 1, …, n−1
Let
Ψ = {space of all the shapes, in a given dimension}
Then, the aforementioned metric, m, will be defined as
m: Ψ × Ψ → R₊ ∪ {0}
by
m (V, W) = m ({Vj}j = 0,1,…,n−1, {Wj}j = 0,1,…,n−1) = (1/n) ∑j ‖Vj − Wj‖²
Also we will define the Symmetric Transform of V, denoted ST (V), as the closest symmetric shape to V, relative to such metric.
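To make this concrete, the following sketch (in Python; all names are ours, and we assume 2-D shapes given as matched point sequences, with mirror symmetry about the y-axis as the reference group) computes the metric m above, together with a simplified symmetric transform in the folding-and-averaging spirit of Zabrodsky et al. [17]. It is an illustration under these assumptions, not the exact algorithm of the cited works.

    import numpy as np

    def sd(V, W):
        # The metric m of the text: mean squared displacement between
        # two shapes given as matched sequences of points.
        V, W = np.asarray(V, float), np.asarray(W, float)
        return np.mean(np.sum((V - W) ** 2, axis=1))

    def symmetric_transform_reflection(V, pairing):
        # Nearest mirror-symmetric shape (mirror: x -> -x), assuming the
        # correspondence of points under the mirror ('pairing', an
        # involution) is known: fold each point onto the reflected image
        # of its partner and average.
        V = np.asarray(V, float)
        R = np.array([[-1.0, 0.0], [0.0, 1.0]])
        return 0.5 * (V + V[pairing] @ R.T)

    # Usage: a slightly perturbed mirror-symmetric quadrilateral.
    V = np.array([[-1.0, 0.0], [0.1, 1.0], [1.05, 0.0], [0.0, -1.1]])
    pairing = [2, 1, 0, 3]        # point 0 <-> point 2 under the mirror
    Vs = symmetric_transform_reflection(V, pairing)
    print("SD(V, ST(V)) =", sd(V, Vs))

With this pairing, the result Vs is exactly mirror-symmetric, and sd(V, Vs) realizes the “minimum effort” of the text for that correspondence.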
For each vertex or node, representing a random variable in the graph, we have the probability distribution value associated with its position. So, each possible situation of such a node, in the corresponding slice, must possess a numerical image of the random variable, which, jointly with the value of the symmetry distance to the corresponding node in the pattern object, O, provides us with the pair
(pi, SD (Vi, O))
describing probabilistically its position and how far it is from its symmetrical final place. We do not know beforehand the exact position of each node of the graph in each slice, as we advance through the evolving structure; what we do know is the probability distribution, pi, associated with the position, that is, with which non-deterministic value such a node will fill a certain place.

4. Describing A Markov Process

It is possible to define a Markov Decision Process from this model, as a sequential chain of steps [8]. In the randomized Markov process, each node only depends on the corresponding node, belonging to shapes in the same or the nearest slice (according to the Markov property).
We can take as Total Expectancy Reward (TER), for the minimization process, the previously defined Symmetry Distance (SD) between the successive shapes. Alternatively, it is possible to introduce a new reward function, inversely proportional to such SD shifted by one:
TER = 1/(1 + SD (Oi, O))
In such a case, it is natural to apply a maximization procedure instead, avoiding the final problem of discontinuity (perfect symmetry gives SD = 0 and TER = 1).
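A one-line sketch of this reward (hypothetical naming):

    def ter(sd_value):
        # Maps SD in [0, infinity) to (0, 1]; minimizing SD becomes
        # maximizing TER, and perfect symmetry (SD = 0) gives TER = 1.
        return 1.0 / (1.0 + sd_value)

    print(ter(0.0), ter(3.0))   # 1.0, 0.25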
Since the system states are observable, we construct a FOMDP (Fully Observable Markov Decision Process), described without hidden variables. Associated with each step of this process, we have the “transition probabilities”. At the temporal instant t, the system will be in the state Si after taking the action, or decision, ai: do(X = xi), when it was in the state Si−1. The transition probability is expressed as Pt (Si | Si−1, ai). Omitting the typical restriction of Markov Processes, we arrive at Bayesian Nets (BNs). These can be expanded to Dynamic Bayesian Nets (DBNs) by modeling time explicitly; the DBN thus generalizes many other models, such as Hidden Markov Models (HMMs).
The essential idea is the replication of shapes on a sequence of temporal points: starting from the random variables, we may produce successive shapes as the process evolves. Then, we reach a Foliation of Bayesian Nets, F, where each BN belongs to a temporal slice, so the total construct is a Dynamic Bayesian Net,
Foliation of BN = S (T) = ∪t∈T S (t)
It contains its corresponding slices. So, we can consider each shape immersed in its parallel plate (in the 2-D particular case), within the global Foliation defined on BNs. This is a Dynamic Model, composed of a sequence of temporal BNs. Note that we allow the possible existence of arcs between nodes of different slices, as temporal edges; such slices are not necessarily connected only to the nearest one (as is the case in first-order Markov chains). Another type of arc is also possible, namely the classical synchronal arcs, connecting nodes of BNs that belong to the same slice. Note also that such directed edges will never point to the past, because of their dynamical character.
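As a minimal sketch (all names hypothetical), such a foliated structure can be represented by its slices plus synchronal and temporal arcs, with the no-backward-edges condition of the text made explicit:

    from typing import NamedTuple, Tuple

    class Arc(NamedTuple):
        src: Tuple[int, str]     # (slice index, node name)
        dst: Tuple[int, str]

    # Three temporal slices of the same BN skeleton.
    slices = {0: ["X", "Y"], 1: ["X", "Y"], 2: ["X", "Y"]}
    arcs = [
        Arc((0, "X"), (0, "Y")),   # synchronal arc, inside slice 0
        Arc((0, "X"), (1, "X")),   # first-order temporal arc
        Arc((0, "Y"), (2, "Y")),   # temporal arc skipping a slice
    ]

    # Dynamical character: no directed edge ever points to the past.
    assert all(a.src[0] <= a.dst[0] for a in arcs)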

5. Shape Measures

Our purpose is to open the way to introduce some measures of asymmetry and skewness. Our aim is to classify, within a determinate standard distribution, its variations with respect to the model selected as totally symmetrical.
We analyze Symmetry as related to the more general case, i.e., multivariate probability distributions [16,17]; the univariate case then results as a mere simplification.
Let
X = (X₁, X₂, …, Xₙ) ∈ Rⁿ
be a random vector. And let
α = (α₁, α₂, …, αₙ) ∈ Rⁿ
be the usual representation of mean, mode or median, very well-known centralization measures of the distribution. So,
Xα = (X₁ − α₁, X₂ − α₂, …, Xₙ − αₙ) ∈ Rⁿ
Therefore, there are at least three n-dimensional vectors, corresponding to the aforementioned three measures.
There exist many examples of multivariate symmetry, according to the invariance of such “centered” random vector, Xα, under an appropriate family of transformations. For instance, and in increasing order of generality: spherical, elliptical, central and angular symmetry.
A random vector, X, is said to be symmetric of degree m if there exists a vector
α = (α₁, α₂, …, αₘ, 0, 0, …, 0)′ ∈ Rⁿ
and an orthogonal transformation, T, such that
T (X − α) = C₁ ⋅ C₂ ⋅ … ⋅ Cₘ [T (X − α)]
This means that the distribution shows symmetries about m mutually orthogonal (n−1)-dimensional hyperplanes. Therefore, they will show up about their (n−m)-dimensional intersections. So, the distribution shows m orthogonal directions of symmetry.

6. Shape Parameters

The Shape parameters (denoted SP) are a class of numerical parameters that corresponds to a parametric family of probability distributions (PD). So, SP is any parameter of a PD that is neither a location parameter, nor a scale parameter. Such a parameter must affect the shape, rather than simply shifting or stretching the distribution.
Some distributions have shape parameters, for instance the Γ distribution and the β distribution. But many others have no SP, such as the Normal, Exponential, and Uniform distributions. For these continuous distributions with no SP, the shape is fixed; only location and/or scale can change. The Skewness and Kurtosis of such distributions remain constant, because they are independent of the location and scale parameters.
It is interesting to characterize Skewness, or departure from symmetry. One approach is to model the skewness parametrically; various extensions to the multivariate case have been proposed so far. Skewness is a measure of asymmetry of the probability distribution of a random variable with values on the real line. We can classify shapes according to the sign of their measure of Skewness, Sk. How is it possible to measure this feature? The answer is through the third standardized moment, i.e., the third moment about the mean divided by the cube of the standard deviation,
Sk = μ₃/σ³
For n-valued samples, we express this as
Sk = m₃/m₂^(3/2) = [(1/n) ∑(xᵢ − x̄)³] / [(1/n) ∑(xᵢ − x̄)²]^(3/2)
with x̄ the sample mean.
If Y is the sum of n independent random variables, {Xᵢ}i = 1, 2,…, n, i.e., Y = ∑ Xᵢ, all of them with the same distribution as X, then
Sk [Y] = (Sk [X])/(√n)
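A short numerical check of both formulas, as a sketch: the Exp(1) distribution has skewness 2, so a sum of n = 16 such variables should have skewness near 2/√16 = 0.5.

    import numpy as np

    def skewness(x):
        # Third standardized moment, Sk = m3 / m2**1.5 (biased form).
        x = np.asarray(x, float)
        d = x - x.mean()
        return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

    rng = np.random.default_rng(0)
    n, trials = 16, 200_000
    x = rng.exponential(size=trials)                   # Sk[X] = 2
    y = rng.exponential(size=(trials, n)).sum(axis=1)  # sums of n copies
    print(skewness(x), skewness(y), skewness(x) / np.sqrt(n))  # ~2, ~0.5, 0.5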
The more usual asymmetry indices are due to Pearson and Fisher.
The Index of Pearson, denoted here by AP, is based on the relation between the mean (x̄) and the mode (Mo). It is defined by
AP = (x̄ − Mo)/σₓ
When the distribution is symmetric, then AP = 0. It is of positive asymmetry when AP > 0. And it is of negative asymmetry when AP < 0.
Nevertheless, although it is easier to calculate than the Fisher index, it is very unusual in practice, because it is only valid when the distribution shows certain features: a unimodal character, bell shape, and only slight asymmetry.
The Index of Fisher, denoted here by AF, is based on the data differences relative to the mean. It is defined by
AF = [(1/n) ∑(xᵢ − x̄)³]/σₓ³
But it shows some disadvantages, because it is strongly influenced by atypical values.
In the case of Kurtosis (Kurt), it is expressed as the fourth cumulant divided by the fourth power of the square root of the second cumulant, i.e.,
Kurt = k₄/(√k₂)⁴ = k₄/k₂²
But it is more useful to introduce the Coefficient of Kurtosis (cK), by subtracting three from the preceding value,
cK = (m₄/σ⁴) − 3 = [(1/n) ∑ nᵢ(xᵢ − x̄)⁴] / [(1/n) ∑ nᵢ(xᵢ − x̄)²]² − 3
The subtraction of 3 reflects the fact that the Kurtosis value of the Gaussian distribution is three; so we are measuring the deviation with respect to the Normal, i.e., its “anti-Gaussianity degree”. Therefore, for the Gaussian distribution the Kurtosis coefficient is null, i.e.,
cK (N) = 0
According to the sign of this coefficient, we can classify the distributions as
-
Mesokurtic, if cK = 0
-
Leptokurtic, if cK > 0
-
Platykurtic, if cK < 0
The coefficient thus describes the degree of concentration of the distribution around the central values of the variable. When the data distribution is symmetric, Mean, Mode and Median coincide; so, the distribution presents the same shape to the right as to the left of the center.
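A brief numerical illustration of the three classes, as a sketch: Gaussian samples should give cK ≈ 0 (mesokurtic), Laplace samples cK ≈ 3 (leptokurtic), and Uniform samples cK ≈ −1.2 (platykurtic).

    import numpy as np

    def kurtosis_coefficient(x):
        # Excess kurtosis: cK = m4 / sigma**4 - 3.
        x = np.asarray(x, float)
        d = x - x.mean()
        return np.mean(d ** 4) / np.mean(d ** 2) ** 2 - 3.0

    rng = np.random.default_rng(1)
    print(kurtosis_coefficient(rng.normal(size=500_000)))    # ~ 0
    print(kurtosis_coefficient(rng.laplace(size=500_000)))   # ~ +3
    print(kurtosis_coefficient(rng.uniform(size=500_000)))   # ~ -1.2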
Therefore, the features of the shape can be analyzed by these shape statistics,
-
Skewness, describing the amount of asymmetry;
-
Kurtosis, measuring the concentration of data around the “peak” and in the tails of the distribution, versus the concentration in its flanks.

7. Chirality Measure

The first question coming to mind is about its name: what is Chirality? Let us start with a well-known quotation from Lord Kelvin [18]: “I call any geometrical figure, or group of points, chiral, and say that it has chirality, if its image, in a plane mirror, ideally realized, cannot be brought to coincide with itself”.
This opinion is supported by the classic and dichotomous division, totally symmetric versus totally asymmetric, without intermediate terms, in Euclidean sets.
A system is called chiral if it differs from its mirror image, and such mirror image cannot be superposed on the original system. It is the famous case of our hands, our ears, and so on: it is impossible to superpose our left hand on our right hand. For this simple reason, we need two different gloves in order to cover our hands. Therefore, we say that an object is Chiral when it cannot be superposed on its mirror image. Its symmetry group contains only pure translations, pure rotations, and screw rotations.
When a system or object is not chiral, we say that it is achiral (or also amphichiral). For instance, the helix and the Möbius strip are 3-D chiral objects. Many other familiar objects exhibit the same chiral character, such as the human body. For more details on Chirality, and on Symmetry in general, see the books and papers by Petitjean [19,20,21] and Rosen [1].
Both elements of the pair (the original chiral object and its mirror image) are denominated mutually Enantiomorphs, from the old Greek for “opposite forms”. Their mutual relationship is named an Enantiomorphism. When it refers to molecules, we speak of Enantiomers.
The degree of this feature is measured by the Chiral Index (here denoted Chi, or simply by the symbol χ). In the univariate case, it is expressed through the lower bound of the correlation coefficient (ρ),
Rmin = lower bound ρ
between the distribution and itself. Its mathematical expression will be
χ = (1 + Rmin)/2
As a prerequisite, we must assume the existence of two statistical parameters: the mean and the variance.
Obviously, if the object is Achiral (A), then its chiral index will be null, i.e. χ(A) = 0.
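For an equally weighted univariate sample, one way to estimate Rmin is the antitone pairing: sorting one copy of the sample in ascending and another in descending order minimizes the correlation over pairings, and that correlation equals −1 exactly when the sample is symmetric about its center. The following sketch (our naming; see Petitjean [19,20] for the rigorous definition) computes χ in this way:

    import numpy as np

    def chiral_index_1d(x):
        # chi = (1 + R_min) / 2, with R_min estimated by the antitone
        # pairing of the sample with itself (a sketch, not Petitjean's
        # full construction).
        x = np.sort(np.asarray(x, float))
        r_min = np.corrcoef(x, x[::-1])[0, 1]
        return (1.0 + r_min) / 2.0

    print(chiral_index_1d([-2, -1, 0, 1, 2]))   # symmetric sample -> ~0.0
    print(chiral_index_1d([0, 0, 0, 1]))        # skewed sample    -> 1/3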
This property is very important in many fundamental scientific fields, as for instance in studying the geometry of the molecular structure of chemical compounds. It is possible to define such a Chirality Measure for a space of any dimension, for which probability distributions may be very useful. Recall that, in the n-dimensional Euclidean space, a finite number of equally weighted points can be considered as an n-dimensional distribution.
From a geometrical viewpoint, a figure is achiral if and only if its symmetry group contains at least one orientation-reversing isometry.
Recall that any isometry can be written, in Euclidean geometry, as
v → Av + b
with A orthogonal matrix, and b a vector.
If det (A) = 1, then the isometry is orientation-preserving. Otherwise, if det (A) = − 1, then the isometry is orientation-reversing.
In 2-D, every figure which has an axis of symmetry is achiral, and every bounded achiral figure must have an axis of symmetry. In 3-D, every figure (solid) that possesses a center of symmetry, or a plane of symmetry, is achiral.
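A short numerical check of this criterion (a rotation versus a mirror):

    import numpy as np

    rotation = np.array([[0.0, -1.0], [1.0, 0.0]])   # 90-degree rotation
    mirror = np.array([[1.0, 0.0], [0.0, -1.0]])     # reflect about x-axis
    print(np.linalg.det(rotation))   # +1.0 -> orientation-preserving
    print(np.linalg.det(mirror))     # -1.0 -> orientation-reversing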
Two more basic aspects are necessary. First, the Chiral Index must be invariant under isometric transformations applied to the probability distribution. And second, it must be independent of which particular mirror has been selected. The Chiral Index is defined for multivariate distributions, being derived from a probability metric, and it has formal relations with the Monge–Kantorovich transportation problem.
The upper bound of χ over multivariate distributions lies in the interval [0.5, 1]. When n = 2, it lies in the interval [1 − 1/π, 1 − 1/(2π)]. In general, the Chiral Index of a distribution is a real number in the closed unit interval [0, 1], and the value zero characterizes an achiral distribution. The value χ of the distribution of a random vector is indeed a measure of its degree of Skewness.
An achiral object may be superposed on its mirror image; its symmetry group then possesses operations reversing orientation, such as reflections and glide reflections, which cannot be realized by a direct rigid-body motion.
Note: The first to observe the importance of Chirality in Chemistry was Louis Pasteur (1822–1895). Also worth mentioning is J. B. Biot (1774–1862), who found the connection between the chirality of crystals and the deflection of the plane of polarization of light passing through them.

8. Fuzzy Measure Theory

We recall some necessary definitions; see [3,4,5,6,7,8] for more details on definitions, results and proofs from this very important new mathematical theory.
Def. 1:
Let U be the universe of discourse, with ℘ a σ-algebra on U. Given a function
m: ℘ →[0, 1]
it is possible to describe m as a Fuzzy Measure, when it verifies
I)
m (∅) = 0
II)
m (U) = 1
III)
If A, B ∈ ℘, with A ⊆ B ⇒ m (A) ≤ m (B) [monotonicity]
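On a finite universe, Def. 1 can be verified by brute force over the power set; a small sketch (the toy measure and all names are ours):

    from itertools import combinations

    U = frozenset({"a", "b", "c"})

    def powerset(s):
        s = list(s)
        return [frozenset(c) for r in range(len(s) + 1)
                for c in combinations(s, r)]

    def is_fuzzy_measure(m):
        # Def. 1: m(empty) = 0, m(U) = 1, and monotonicity under inclusion.
        P = powerset(U)
        return (m(frozenset()) == 0 and m(U) == 1 and
                all(m(A) <= m(B) for A in P for B in P if A <= B))

    # A toy measure: normalized cardinality (in fact even additive).
    print(is_fuzzy_measure(lambda A: len(A) / len(U)))   # True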
With the Entropy concept, we attempt to measure the fuzziness, that is, the degree of being fuzzy of each element of ℘.
Def. 2:
The Entropy can be defined as the function
H: ℘ → [0, 1]
Verifying
I)
If A is a crisp set ⇒ H (A) = 0.
II)
If μA (x) = 1/2, for all x ∈ U ⇒ H (A) is maximal (total uncertainty).
III)
If A is less fuzzified than B ⇒ H (A) ≤ H (B).
IV)
H (A) = H (U∖A)
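One classical function satisfying axioms I–IV is the normalized De Luca–Termini entropy; the following sketch implements it for a finite fuzzy set given by its membership values (this is only one admissible choice, not the one fixed by the text):

    import math

    def dlt_entropy(mu):
        # Normalized De Luca-Termini entropy of a finite fuzzy set.
        def s(u):                    # Shannon-type term, s(0) = s(1) = 0
            if u in (0.0, 1.0):
                return 0.0
            return -(u * math.log2(u) + (1 - u) * math.log2(1 - u))
        mu = list(mu)
        return sum(s(u) for u in mu) / len(mu)

    print(dlt_entropy([0.0, 1.0, 1.0]))   # crisp set -> 0.0 (axiom I)
    print(dlt_entropy([0.5, 0.5, 0.5]))   # total uncertainty -> 1.0 (axiom II)
    print(math.isclose(dlt_entropy([0.2, 0.9]),
                       dlt_entropy([0.8, 0.1])))   # complement (axiom IV)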
Note: It is also possible to define certain types of Upper and Lower Entropy, following Torra and Narukawa [22].
As an illustrative example of the usefulness of the Fuzzy Entropy (FE) concept, many situations may be mentioned. For instance, we can use FE as a Cost Function in Image Processing; for this purpose, Pasha et al. [23] introduced a threshold value in the image denoising problem.
Also, FE may be applied in Fuzzy Regression Analysis using fuzzy linear models with symmetric triangular fuzzy numbers, as introduced by Tanaka et al. [24].
Def. 3:
The Specificity Measure is introduced as a measure of the confidence with which we take decisions. Such a Specificity Measure is a function
Sp: [0, 1]^U → [0, 1]
where
I)
Sp (∅) = 0.
II)
Sp (ϰ) = 1 ⇔ ϰ is a unitary set (singleton).
III)
If ς and τ are normal fuzzy sets in U, with ς ⊂ τ ⇒ Sp (ς) ≥ Sp (τ).
Remark. 
[0, 1]^U denotes the class of all the fuzzy sets on U.
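A standard concrete choice satisfying Def. 3 is Yager-style specificity, i.e., the integral of 1/|A_α| over the membership levels α; a finite-universe sketch (again, one admissible choice among many):

    def specificity(mu):
        # Yager-style specificity: sum of (mu_(j) - mu_(j+1)) / j over
        # the memberships sorted in decreasing order (with mu_(n+1) = 0).
        mu = sorted(mu, reverse=True) + [0.0]
        return sum((mu[j] - mu[j + 1]) / (j + 1)
                   for j in range(len(mu) - 1))

    print(specificity([]))                # empty set -> 0.0 (axiom I)
    print(specificity([1.0]))             # singleton -> 1.0 (axiom II)
    print(specificity([1.0, 1.0, 0.5]))   # broader normal set -> smaller Sp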

9. Asymmetry and Symmetry Level Functions

Let (E, d) be a fuzzy metric space. Nevertheless, our results [3,4,5,6,7,8] may be generalized to different spaces. We define a new fuzzy measure, of the kind Li, with i ∈ {a, s}, where s denotes symmetry and a asymmetry. From here on, we denote by c (A) the cardinality of a fuzzy set, A.
Theorem 1:
Let (E, d) be a fuzzy metric space, A being a subset of E, and let H and Sp be the above fuzzy measures defined on (E, d). Then, the function Ls, operating on A by
Ls (A) = Sp (A) [|1 − c (A)| / |1 + c (A)|] + (1 + H (A))⁻¹
is a fuzzy measure.
Such measure would be called Symmetry Level Function.
Dually,
Theorem 2:
Let (E, d) be a fuzzy metric space, A being a subset of E, and let H and Sp be the above fuzzy measures defined on (E, d). Then the function La, acting on A by
La (A) = 1 − {Sp (A) [|1 − c (A)| / |1 + c (A)|] + (1 + H (A))⁻¹}
is also a fuzzy measure.
This measure would be called Asymmetry Level Function.
Corollary 1:
On the above conditions, or hypothesis, the Symmetry Level Function will be a Normal Fuzzy Measure.
Corollary 2:
On the above conditions, or hypothesis, the Asymmetry Level Function will be a Normal Fuzzy Measure.
Recall that it is possible to introduce the “integer part” function, denoted by
INT(x) = [x]
The values of the fuzzy measure Sp decrease as the size of the considered set increases.
Also recall that the range of such Specificity Measure, Sp, will be the closed unit interval, [0, 1].
Corollary 3:
Let (E, d) be a fuzzy metric space, and let {Aᵢ}i = 1,2,…,n be a contractive chain of nested subsets, as sub-worlds of the universe U = A₁, all of them containing the fuzzy set A, i.e.,
Aᵢ₊₁ ⊂ Aᵢ, for all i ∈ {1, 2, …, n}
with
lim i→∞ Ai = A
Then, it holds
[Ls (Ai)] = 1, in the monatomic world
[Ls (Ai)] = 0, in other worlds
Therefore,
[La (Ai)] = 0, in the monatomic world
[La (Ai)] = 1, in other worlds.
Corollary 4:
Under the same hypotheses as above, we may take the composition of the initial asymmetry level with the integer part function (INT). So,
la (Aᵢ) = INT{La (Aᵢ)} = [La (Aᵢ)] = [1 − |(1 − c)/(1 + c)|]
ls (Aᵢ) = INT{Ls (Aᵢ)} = [Ls (Aᵢ)] = [|(1 − c)/(1 + c)|]
la (Ai) = [La (Ai)] = 1, if Ai ≠ A
or
la (Ai) = 0, if Ai = A
Also,
ls (Ai) = [Ls (Ai)] = 0, if Ai ≠ A
or
ls (Ai) = 1, if Ai = A
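To make the theorems and corollaries concrete, the following sketch combines the dlt_entropy and specificity functions from the sketches above, and takes the cardinality c(A) of a finite fuzzy set as its sigma-count (the sum of its memberships); H, Sp and c are our assumed instantiations, since the statements leave them abstract.

    def symmetry_levels(mu):
        # Ls and La of Theorems 1 and 2, with c(A) the sigma-count and
        # reusing dlt_entropy and specificity from the sketches above.
        c = sum(mu)
        ls = (specificity(mu) * abs(1 - c) / abs(1 + c)
              + 1.0 / (1.0 + dlt_entropy(mu)))
        return ls, 1.0 - ls

    # A contracting chain of fuzzy sub-worlds converging to the
    # monatomic world {a}: there c = 1, Sp = 1 and H = 0, so the pair
    # becomes exactly (Ls, La) = (1, 0), as in Corollary 3.
    for mu in ([1.0, 0.9, 0.8], [1.0, 0.9], [1.0]):
        print(mu, symmetry_levels(mu))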

10. Conclusions

We now have at our disposal a new measure quantifying the asymmetry level of shapes, valid for fuzzy sets. For this, we need a combination of fuzzy measures derived from related functions, such as the Entropy and Specificity measures. Hence, the fundamental direction of work on Symmetry and its properties may be geometrical, on problems from different fields.
Let us mention, as an example, the analysis of crystalline structures by the Crystallographic Planar or Spatial Groups. Direct applications of classical Group Theory to physical problems are also possible: Quantum Mechanics, Penrose tiles, Fractals, Chaos Theory, and so on. Closer to Computer Science, it is related to Artificial Vision, Pattern Recognition, and the analysis of symmetrical structures in Computational Linguistics or similar tasks in AI.
Basically, the precedent work related to these aspects was on Symmetry Groups, with the papers of Hermann Weyl and his very famous book, Symmetry [2]. On its application to Pattern Recognition, Artificial Vision and so on, the papers and presentations of Y. Liu on Computational Symmetry [25] are recommended. In his paper, Liu said that “symmetry is an essential mathematical concept, as well as a ubiquitous, observable phenomenon in nature, science and art. Either by evolution or by design, symmetry implies a potential structural efficiency gain that makes it universally appealing to computational science. Recognition and categorization of both, symmetry and regularity, may be the first step towards capturing the essential skeleton of a real world problem, while at the same time minimizing computational redundancy” [25].
We have also considered the question of Symmetrical Patterns. Future research needs to focus on questions derived from the versatility of the real world, surpassing the relatively coarse and rigid old geometry (group theory included), which permits only a first approximation to the more difficult problems of Artificial Intelligence.

Acknowledgements

I wish to express my gratitude to Shu-Kun Lin and to Joel Ratsaby, who proposed my collaboration on this paper, and also to the anonymous reviewers, for their very wise advice.

References

  1. Rosen, J. Symmetry in Science: An Introduction to the General Theory; Springer-Verlag: New York, NY, USA, 1995. [Google Scholar]
  2. Weyl, H. Symmetry; Princeton University Press: Princeton, NJ, USA, 1952. [Google Scholar]
  3. Garrido, A. Searching Methods in Fuzzy Optimization. In Proceedings of the International Conference-EpsMsO (International Conference on Experiments/Process/System Modeling/Simulation/Optimization), Athens, Greece, 4–7 July 2007; pp. 904–910. [Google Scholar]
  4. Garrido, A. Symmetry versus Antisymmetry. Acta Univ. Apul. 2009, 17, 69–75. [Google Scholar]
  5. Garrido, A. Fusion modeling to analyze the asymmetry as a continuous feature. Electron. Int. J. Adv. Model. Opt. 2008, 10, 135–146. [Google Scholar]
  6. Garrido, A. Analysis of Asymmetry Measures. Electron. Int. J. Adv. Model. Opt. 2008, 10, 199–211. [Google Scholar]
  7. Garrido, A. Additivity and Monotonicity in Fuzzy Measures. Studii si Cercetari Stiintifice Universitatea din Bacau, Seria Matematica 2006, 16, 445–457. [Google Scholar]
  8. Garrido, A. Classifying Fuzzy Measures. Acta Univ. Apul. 2007, 14, 23–32. [Google Scholar]
  9. Lin, S.-K. Correlation of Entropy with Similarity and Symmetry. J. Chem. Inform. Comput. Sci. 1996, 36, 367–376. [Google Scholar] [CrossRef]
  10. Lewis, D. Counterfactuals; Wiley-Blackwell: Hoboken, NJ, USA, 2001. [Google Scholar]
  11. Hume, D. An Enquiry concerning Human Understanding; Oxford Classics Series; Oxford University Press: New York, NY, USA, 2008. [Google Scholar]
  12. Horwich, P. Asymmetries in Time: Problems in the Philosophy of Sciences; Series Bradford Books; The MIT Press: Cambridge, MA, USA, 1986. [Google Scholar]
  13. Hausman, D.M. Causal Asymmetries; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  14. Leyton, M. Symmetry, Causality, Mind; MIT Press: Cambridge, MA, USA, 1999. [Google Scholar]
  15. Leyton, M. A Generative Theory of Shape; Springer: New York, NY, USA, 2001. [Google Scholar]
  16. Zabrodsky, H.; Avnir, D. Measuring Symmetry in Structural Chemistry. Adv. Molec. Struct. Res. 1995, 1, 1–31. [Google Scholar]
  17. Zabrodsky, H.; Peleg, S.; Avnir, D. Continuous Symmetry Measures. J. Am. Chem. Soc. 1995, 117, 462–473. [Google Scholar] [CrossRef]
  18. Thomson, W. (Lord Kelvin). Baltimore Lectures 1884. In Kelvin's Baltimore Lectures and Modern Theoretical Physics: Historical and Philosophical Perspectives; Kargon, R.H., Achinstein, P., Eds.; MIT Press: Cambridge, MA, USA, 1987. [Google Scholar]
  19. Petitjean, M. Chirality and Symmetry Measures: A Transdisciplinary Review. Entropy 2003, 5, 271–312. [Google Scholar] [CrossRef]
  20. Petitjean, M. Chiral Mixtures. J. Math. Phys. 2002, 43, 4147–4157. [Google Scholar] [CrossRef]
  21. Petitjean, M. The Mathematical Theory of Chirality, Displayed online. 2009.
  22. Torra, V.; Narukawa, Y. Modeling Decision: Information Fusion and Aggregation Operators; Springer-Verlag: New York, NY, USA, 2007. [Google Scholar]
  23. Pasha, E.; Farnoosh, R.; Fatemi, A. Fuzzy Entropy as Cost Function in Image Processing. In Proceedings of the 2nd IMT-GT Regional Conference on Mathematics, Statistics and Applications, Penang, Malaysia, 13–15 June 2006; Universiti Sains: Penang, Malaysia; pp. 1–8. [Google Scholar]
  24. Tanaka, H.; Hayashi, I.; Watada, J. Possibilistic linear regression analysis for fuzzy data. Eur. J. Oper. Res. 1989, 40, 389–396. [Google Scholar] [CrossRef]
  25. Liu, Y. Computational Symmetry. In Proceedings of the Symmetry 2000, Wenner-Gren International Series, Stockholm, Sweden, 2000; Portland Press: London, UK, January 2002; pp. 231–245. [Google Scholar]
