Article

Max-C and Min-D Projection Auto-Associative Fuzzy Morphological Memories: Theory and an Application for Face Recognition

by Alex Santana dos Santos 1,† and Marcos Eduardo Valle 2,*,†
1 Centro de Ciências Exatas e Tecnológica (CETEC), Universidade Federal do Recôncavo da Bahia, Cruz das Almas 44380-000, Brazil
2 Instituto de Matemática, Estatística e Computação Científica (IMECC), Universidade Estadual de Campinas (UNICAMP), Campinas 13083-970, Brazil
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
AppliedMath 2023, 3(4), 989-1018; https://doi.org/10.3390/appliedmath3040050
Submission received: 13 October 2023 / Revised: 28 November 2023 / Accepted: 4 December 2023 / Published: 8 December 2023

Abstract:
Max-C and min-D projection auto-associative fuzzy morphological memories (max-C and min-D PAFMMs) are two-layer feedforward fuzzy morphological neural networks designed to store and retrieve finite fuzzy sets. This paper addresses the main features of these auto-associative memories: unlimited absolute storage capacity, fast retrieval of stored items, few spurious memories, and excellent tolerance to either dilative or erosive noise. Particular attention is given to the so-called Zadeh's PAFMM, which exhibits the most significant noise tolerance among the max-C and min-D PAFMMs besides performing no floating-point arithmetic operations. Computational experiments reveal that Zadeh's max-C PAFMM, combined with a noise masking strategy, yields a fast and robust classifier with a strong potential for face recognition tasks.

1. Introduction

Associative memory (AM) is a knowledge-based system inspired by the human brain’s ability to store and recall information by association [1,2]. Apart from a large storage capacity, an ideal AM model should exhibit a certain tolerance to noise. In other words, we expect to retrieve a stored item not only by presenting the original stimulus but also a similar input stimulus [1]. We speak of an auto-associative memory if stimulus and response coincide. For instance, our memory acts as an auto-associative model when we recognize a friend wearing sunglasses or a scarf. In other words, we obtain the desired output (recognize a friend) from a partial or noisy input (their occluded face). We speak of a hetero-associative memory model if at least one stimulus differs from its corresponding response.
Several associative memory models have been introduced in the literature, and their applications range from optimization [3,4] and prediction [5,6,7] to image processing and analysis [8,9,10]. Associative memory models have also been applied for pattern classification [11,12,13,14], including face recognition [15]. Moreover, the interest in associative memory models increased significantly in the last few years due to their relationship with the attention mechanism used in transformer models [16,17,18].
An AM model designed for storing and recalling fuzzy sets on a finite universe of discourse is called fuzzy associative memory (FAM) [19]. On the one hand, fuzzy sets can be interpreted as elements from a complete lattice [20]. On the other hand, mathematical morphology can be viewed as a theory on mappings between complete lattices [21]. Thus, many important FAM models from the literature belong to the broad class of fuzzy morphological associative memories (FMAMs) [5,22]. Briefly, FMAMs are implemented by fuzzy morphological neural networks. A morphological neural network is equipped with neurons that perform an elementary operation from mathematical morphology, possibly followed by a non-linear activation function [23]. The class of FMAMs includes, for example, the max-minimum and max-product FAMs of Kosko [19], the max-min FAM of Junbo et al. [24], the max-min FAM with a threshold of Liu [25], the fuzzy logical bidirectional associative memories of Belohlavek [26], and the implicative fuzzy associative memories (IFAMs) of Sussner and Valle [9].
The max-C and min-D auto-associative fuzzy morphological memories (AFMMs), synthesized using fuzzy learning by adjunction (FLA), are fuzzy versions of the matrix-based auto-associative morphological memories (AMMs) proposed by Ritter et al. [5,22,27]. The main features of max-C and min-D AFMMs are
  • unlimited absolute storage capacity,
  • one-step convergence when employed with feedback,
  • excellent tolerance to either erosive or dilative noise.
On the downside, matrix-based AFMMs with FLA have many spurious memories [5]. A spurious memory is an item that is unintentionally stored in the memory [1]. Furthermore, the information stored on an AFMM with FLA is distributed on a synaptic weight matrix. Consequently, these auto-associative fuzzy memories consume lots of computational resources when designed for storing and recalling large items [5,28].
Many auto-associative fuzzy memory models have been proposed in the literature to improve the noise tolerance or reduce the computational cost of AFMMs with FLA. For example, to increase the noise tolerance of the IFAMs, Bui et al. introduced so-called content-association associative memory (ACAM) [29]. Using a fuzzy preorder relation, Perfilieva and Vajgl proposed a novel theoretical justification for IFAMs [30]. They also introduced a fast algorithm for data retrieval based on an IFAM model with a binary fuzzy preorder [31]. Moreover, Vajgl reduced the computational cost of an IFAM by replacing its synaptic weight matrix with a sparse matrix [32]. Quantale-based associative memories (QAMs) generalize several lattice-based auto-associative memories. They have been effectively applied for storing and recalling large color images [33]. Li et al. increased the storage capacity of fuzzy associative memories using piecewise linear transformations [34]. Sussner and Schuster proposed interval-valued fuzzy morphological associative memories (IV-FMAMs) designed for storing and retrieving interval-valued fuzzy sets [6]. The novel IV-FMAMs have been effectively applied for time-series prediction.
Apart from the distributed models like the FMAMs with FLA and their variations, non-distributed associative memory models have received considerable attention partly due to their low computational effort and extraordinary successes in pattern recognition and image restoration tasks. Examples of non-distributed associative memories include models based on Hamming distance [35] and kernels [15] as well as subsethood and similarity measures [11,12,14]. In the context of non-distributed models, we introduced max-plus and min-plus projection auto-associative morphological memories (max-plus and min-plus PAMMs), which can be viewed as non-distributed versions of the auto-associative morphological memories of Ritter et al. [36]. Max-plus and min-plus PAMMs have fewer spurious memories than their corresponding distributed models. Thus, they are more robust to dilative or erosive noise than the original auto-associative morphological memories. Computational experiments revealed that PAMMs and their compositions are competitive with other methods from the literature on classification tasks [36].
In our conference paper [37], we introduced max-C projection auto-associative fuzzy morphological memories (max-C PAFMMs) as an alternative to AFMMs, building on the success of max-plus and min-plus PAMMs. Max-C PAFMMs were further discussed in [38], where some results concerning their implementation and storage capacity were given without proof. In a few words, a max-C PAFMM projects the input into the family of all max-C combinations of the stored items. Furthermore, we developed the dual version of max-C PAFMMs, the class of min-D PAFMMs which projects the input into the set of all min-D combinations of the stored items, in conference paper [39]. Although we addressed some theoretical results concerning both max-C and min-D PAFMM models in [39], the theoretical part of this paper can be viewed as an extended version of our previous papers [37,38,39]. In particular, this paper provides the mathematical background for developing the new PAFMM models, which are obtained by enriching a complete lattice with a residuated binary operation with a left identity. Also, we show in this paper that the most robust max-C PAFMM for inputs corrupted by dilative noise is based on Zadeh’s inclusion measure. The resulting model is referred to as Zadeh’s max-C PAFMM. Accordingly, the dual of Zadeh’s max-C PAFMM is the min-D PAFMM most robust to erosive noise. Finally, inspired by the work of Urcid and Ritter [40], the frail tolerance of the max-C and min-D PAFMMs for mixed noise can be improved significantly by masking the noise contained in the input [41]. Although some preliminary experiments can be found in conference paper [41], in this paper, we provide conclusive computational experiments concerning the application of Zadeh’s max-C PAFMM for face recognition.
The paper is organized as follows. The mathematical background, including some basic concepts on lattice theory and fuzzy logic, is presented in the next section. Section 2 also introduces an algebraic structure called residuated complete lattice-ordered groupoid with left identity (R-clogli), which provides the mathematical background for developing PAFMM models. Section 3 briefly reviews max-C and min-D AFMMs with FLA. Max-C and min-D PAFMMs are addressed subsequently in Section 4. Zadeh’s PAFMMs and the noise masking strategy are discussed in Section 5 and Section 6, respectively. The performance of Zadeh’s max-C PAFMM for face recognition tasks is addressed in Section 7. The paper finishes with some concluding remarks in Section 8 and Appendix A containing proofs of the main theorems.

2. Some Basic Concepts on Fuzzy Systems

This section reviews the mathematical background necessary for developing the new max-C and min-D projection associative memories, including lattice theory, residuated lattices, and related concepts. Readers familiar with complete lattices and mathematical morphology may skip Section 2.1 or read it later. Similarly, readers familiar with fuzzy logic operations may skip Section 2.3.

2.1. Complete Lattice and Mathematical Morphology

A non-empty partially ordered set $(\mathbb{L}, \leq)$ is called a complete lattice, denoted by $\langle \mathbb{L}, \wedge, \vee \rangle$, if every subset $X \subseteq \mathbb{L}$ has an infimum and a supremum in $\mathbb{L}$ [42]. The infimum and the supremum of $X \subseteq \mathbb{L}$ are denoted by $\bigwedge X$ and $\bigvee X$, respectively. The least and the greatest elements of the complete lattice $\mathbb{L}$ are denoted, respectively, by $0_{\mathbb{L}} = \bigwedge \mathbb{L}$ and $1_{\mathbb{L}} = \bigvee \mathbb{L}$. When $X = \{x_1, \ldots, x_n\}$ is a finite subset of $\mathbb{L}$, we write the infimum and the supremum of $X$ as $\bigwedge_{i=1}^{n} x_i$ and $\bigvee_{i=1}^{n} x_i$, respectively.
The unit interval $[0,1]$ is an example of a complete lattice with the usual order. The Cartesian product $\mathbb{L}^n = \mathbb{L} \times \mathbb{L} \times \cdots \times \mathbb{L}$ of a complete lattice $\mathbb{L}$ is also a complete lattice with the component-wise ordering defined as follows for $\mathbf{x} = [x_1, \ldots, x_n]^T \in \mathbb{L}^n$ and $\mathbf{y} = [y_1, \ldots, y_n]^T \in \mathbb{L}^n$:
$$\mathbf{x} \leq \mathbf{y} \iff x_i \leq y_i, \quad \forall i = 1, \ldots, n.$$
Mathematical morphology is a non-linear theory widely used for image processing and analysis [21,43,44,45]. The elementary operations from mathematical morphology are dilations and erosions. Dilations and erosions are operators that commute with the supremum and infimum operations, respectively [21,44]. Formally, given complete lattices $\mathbb{L}$ and $\mathbb{M}$, operators $\delta: \mathbb{M} \to \mathbb{L}$ and $\varepsilon: \mathbb{L} \to \mathbb{M}$ represent, respectively, a dilation and an erosion if
$$\delta\Big(\bigvee Y\Big) = \bigvee_{y \in Y} \delta(y) \quad \text{and} \quad \varepsilon\Big(\bigwedge X\Big) = \bigwedge_{x \in X} \varepsilon(x),$$
for all $X \subseteq \mathbb{L}$ and $Y \subseteq \mathbb{M}$. We recall that a dilation $\delta$ and an erosion $\varepsilon$ satisfy $\delta(0_{\mathbb{M}}) = 0_{\mathbb{L}}$ and $\varepsilon(1_{\mathbb{L}}) = 1_{\mathbb{M}}$ [43].
One key concept of mathematical morphology on complete lattices is the notion of adjunction [43]. Adjunctions arise naturally on complete lattices and are closely related to Galois connections and residuation theory [42,46,47,48]. We consider operators $\delta: \mathbb{M} \to \mathbb{L}$ and $\varepsilon: \mathbb{L} \to \mathbb{M}$ between complete lattices $\mathbb{L}$ and $\mathbb{M}$. We say that $(\varepsilon, \delta)$ is an adjunction between $\mathbb{L}$ and $\mathbb{M}$ if
$$\delta(y) \leq x \iff y \leq \varepsilon(x), \quad \forall x \in \mathbb{L}, \; \forall y \in \mathbb{M}.$$
Adjunctions can be used to define the elementary operations from mathematical morphology. In fact, if $(\varepsilon, \delta)$ forms an adjunction, then $\varepsilon$ is an erosion and $\delta$ is a dilation. Conversely, given a dilation $\delta: \mathbb{M} \to \mathbb{L}$, the residual operator
$$\varepsilon(x) = \bigvee \{y \in \mathbb{M} : \delta(y) \leq x\}, \quad x \in \mathbb{L},$$
is the unique erosion $\varepsilon: \mathbb{L} \to \mathbb{M}$ such that $(\varepsilon, \delta)$ is an adjunction [43,47]. Dually, given an erosion $\varepsilon: \mathbb{L} \to \mathbb{M}$, the residual operator
$$\delta(y) = \bigwedge \{x \in \mathbb{L} : y \leq \varepsilon(x)\}, \quad y \in \mathbb{M},$$
is the unique dilation $\delta: \mathbb{M} \to \mathbb{L}$ such that $(\varepsilon, \delta)$ is an adjunction.

2.2. Residuated Complete Lattice-Ordered Groupoids with Left Identity

This section reviews mathematical structures obtained by enriching a complete lattice with binary operations. Let us begin by introducing the following definition:
Definition 1 
(Residuated complete lattice-ordered groupoid with left identity). A residuated complete lattice-ordered groupoid with left identity (R-clogli), denoted by $\langle \mathbb{L}, \wedge, \vee, \otimes, \backslash \rangle$, is an algebra such that
1. $\langle \mathbb{L}, \wedge, \vee \rangle$ is a complete lattice.
2. $\langle \mathbb{L}, \otimes \rangle$ is a groupoid (or magma) where the binary operation $\otimes$ has a left identity, that is, there exists $e \in \mathbb{L}$ such that $e \otimes x = x$ for all $x \in \mathbb{L}$.
3. Operations $\otimes$ and $\backslash$ satisfy the following adjunction relationship:
$$x \otimes y \leq z \iff x \leq y \backslash z, \quad \forall x, y, z \in \mathbb{L}.$$
Similarly, a dual R-clogli, denoted by $\langle \mathbb{L}, \wedge, \vee, \oplus, \oslash \rangle$, is an algebra such that $\mathbb{L}$ is a complete lattice equipped with a binary operation $\oplus$ that has a left identity, and $\oplus$ and $\oslash$ satisfy
$$x \oplus y \geq z \iff x \geq y \oslash z, \quad \forall x, y, z \in \mathbb{L}.$$
We speak of an associative R-clogli and an associative dual R-clogli if the binary operations $\otimes$ and $\oplus$ are associative.
From the adjunction relationship, we conclude that $\otimes$ is a dilation while $\oplus$ is an erosion in the first argument. Specifically, given a fixed element $a \in \mathbb{L}$, the operators $\delta_a, \varepsilon_a: \mathbb{L} \to \mathbb{L}$ defined by $\delta_a(x) = x \otimes a$ and $\varepsilon_a(x) = x \oplus a$ for all $x \in \mathbb{L}$ are a dilation and an erosion, respectively. As a consequence, we have
$$0_{\mathbb{L}} \otimes a = 0_{\mathbb{L}} \quad \text{and} \quad 1_{\mathbb{L}} \oplus a = 1_{\mathbb{L}}, \quad \forall a \in \mathbb{L}.$$
Moreover, the following identities hold for all $y, z \in \mathbb{L}$:
$$y \backslash z = \bigvee \{x \in \mathbb{L} : x \otimes y \leq z\} \quad \text{and} \quad y \oslash z = \bigwedge \{x \in \mathbb{L} : x \oplus y \geq z\}.$$
In a word, $\backslash$ and $\oslash$ are the residuals of $\otimes$ and $\oplus$, respectively.
The following briefly addresses the relationship between R-clogli and other mathematical structures from the literature.
First, an R-clogli is a residuated complete lattice-ordered dilative monoid, R-clodim for short, if the binary operation is associative and has a two-sided identity. Sussner introduced the notion of R-clodim as an appropriate mathematical background for L -fuzzy mathematical morphology [49], and it is closely related to Maragos’ complete lattice-ordered double monoid [50].
Residuated lattices, introduced by Ward and Dilworth [51] and widely taken as an appropriate mathematical background for fuzzy logic [52,53], are also R-cloglis. In fact, a complete residuated lattice, denoted by $\langle \mathbb{L}, \wedge, \vee, \otimes, \to, 0_{\mathbb{L}}, 1_{\mathbb{L}} \rangle$, is an algebra such that
  • $\mathbb{L}$ is a complete lattice with least element $0_{\mathbb{L}}$ and greatest element $1_{\mathbb{L}}$.
  • Operation $\otimes$ is commutative, associative, and $1_{\mathbb{L}} \otimes x = x$ for all $x \in \mathbb{L}$.
  • Operations $\otimes$ and $\to$ are adjoint, that is, $x \otimes y \leq z$ if and only if $x \leq (y \to z)$.
Therefore, a complete residuated lattice is equivalent to an R-clogli in which $\otimes$ is associative and commutative, and the greatest element of $\mathbb{L}$ is its identity element.
Finally, an R-clogli in which $\otimes$ is associative and performs a dilation in both arguments (not only in the first) is a quantale. Quantales, introduced by Mulvey to provide a constructive formulation for the logic of quantum mechanics [54], have been used to develop quantale-based auto-associative memories (QAMs). QAMs include the original auto-associative morphological memories and auto-associative fuzzy implicative memories as particular instances [33].

2.3. Fuzzy Sets and Fuzzy Logic Operations

The associative memories considered in this paper are based on fuzzy set theory and operations from fuzzy logic. This section briefly reviews the most essential concepts of fuzzy systems. The reader is invited to consult [55,56,57,58,59] for a detailed review of fuzzy logic and fuzzy set theory.
A fuzzy set $A$ on a universe of discourse $X$ is defined by
$$A = \{(x, \mu_A(x)) : x \in X\}, \quad \text{where} \quad \mu_A: X \to [0,1]$$
is the membership function of fuzzy set $A$. The family of all fuzzy sets on $X$ is denoted by $\mathcal{F}(X)$. When the universe of discourse $X = \{x_1, \ldots, x_n\}$ is finite, fuzzy set $A$ can be identified with the vector $\mathbf{a} = [a_1, a_2, \ldots, a_n]^T \in [0,1]^n$, where $a_i = \mu_A(x_i)$ for all $i = 1, \ldots, n$ [19]. In this paper, we focus only on fuzzy sets defined on finite universes of discourse. Moreover, we identify them with vectors in the hypercube $[0,1]^n$.
Definition 2 
(Fuzzy conjunction and disjunction). An increasing mapping $C: [0,1] \times [0,1] \to [0,1]$ is a fuzzy conjunction if it satisfies $C(0,1) = C(1,0) = 0$ and $C(1,1) = 1$. A fuzzy disjunction is an increasing operator $D: [0,1] \times [0,1] \to [0,1]$ such that $D(0,1) = D(1,0) = 1$ and $D(0,0) = 0$.
Definition 3 
(Fuzzy implication and co-implication). A fuzzy implication is a mapping $I: [0,1] \times [0,1] \to [0,1]$, decreasing in the first argument and increasing in the second argument, that satisfies the identities $I(0,0) = I(1,1) = 1$ and $I(1,0) = 0$. A fuzzy co-implication is an operator $J: [0,1] \times [0,1] \to [0,1]$, decreasing in the first argument and increasing in the second argument, such that $J(0,0) = J(1,1) = 0$ and $J(0,1) = 1$.
We note that fuzzy logic connectives are generalizations of classical connectives. For the development of fuzzy morphological associative memories, we focus our attention on fuzzy conjunctions which have a left identity and satisfy the following adjunction relationship:
$$C(x, y) \leq z \iff x \leq R(y, z), \quad \forall x, y, z \in [0,1],$$
with respect to a binary operation $R: [0,1] \times [0,1] \to [0,1]$ called the residual of $C$. From the mathematical point of view, we assume that $\langle [0,1], \wedge, \vee, C, R \rangle$ is an R-clogli. We remark that $R$, the residual of a fuzzy conjunction $C$, is a fuzzy implication if and only if $C(x, 1) = 0$ implies $x = 0$. Indeed, suppose there exists $a > 0$ such that $C(a, 1) = 0$. From the adjunction relationship, we have $R(1, 0) = \bigvee \{x \in [0,1] : C(x, 1) \leq 0\} \geq a > 0$, and thus $R$ is not a fuzzy implication.
In a similar fashion, we focus on fuzzy disjunctions which have a left identity and satisfy
$$D(x, y) \geq z \iff x \geq S(y, z), \quad \forall x, y, z \in [0,1],$$
for a binary operation $S: [0,1] \times [0,1] \to [0,1]$ called the residual of $D$. Throughout the paper, we assume that $\langle [0,1], \wedge, \vee, D, S \rangle$ is a dual R-clogli. We remark that the residual $S$, derived from $D$ using (12), is a fuzzy co-implication if and only if $D(x, 0) = 1$ implies $x = 1$.
Examples of R-cloglis and dual R-cloglis include the unit interval $[0,1]$ equipped with the following:
  • Minimum fuzzy conjunction $C_M(x, y) = x \wedge y$ and Gödel's implication $I_M$.
  • Lukasiewicz's fuzzy conjunction $C_L(x, y) = 0 \vee (x + y - 1)$ and fuzzy implication $I_L(x, y) = 1 \wedge (1 - x + y)$.
  • Gaines' fuzzy conjunction and fuzzy implication defined, respectively, by
    $$C_G(x, y) = \begin{cases} 0, & x = 0, \\ y, & \text{otherwise}, \end{cases} \quad \text{and} \quad I_G(x, y) = \begin{cases} 1, & x \leq y, \\ 0, & x > y. \end{cases}$$
  • Maximum fuzzy disjunction $D_M(x, y) = x \vee y$ and Gödel's fuzzy co-implication
    $$J_M(x, y) = \begin{cases} 0, & x \geq y, \\ y, & x < y. \end{cases}$$
  • Lukasiewicz's disjunction $D_L(x, y) = 1 \wedge (x + y)$ and co-implication $J_L(x, y) = 0 \vee (y - x)$.
  • Gaines' fuzzy disjunction and fuzzy co-implication defined as follows:
    $$D_G(x, y) = \begin{cases} 1, & x = 1, \\ y, & \text{otherwise}, \end{cases} \quad \text{and} \quad J_G(x, y) = \begin{cases} 0, & x \geq y, \\ 1, & x < y. \end{cases}$$
We note that $C_G$ and $D_G$ are neither commutative nor have a two-sided identity. Thus, these fuzzy logical operators yield neither a complete residuated lattice nor a residuated complete lattice-ordered dilative monoid (R-clodim). Notwithstanding, the algebra $\langle [0,1], \wedge, \vee, C_G, I_G \rangle$ is an R-clogli while $\langle [0,1], \wedge, \vee, D_G, J_G \rangle$ is a dual R-clogli.
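For concreteness, the following is a minimal NumPy sketch of the fuzzy connectives listed above. The definitions mirror the formulas in this section; the function names and the vectorized NumPy style are our own choices for illustration.

```python
import numpy as np

def C_M(x, y):   # minimum fuzzy conjunction
    return np.minimum(x, y)

def I_M(x, y):   # Goedel implication: 1 if x <= y, else y
    return np.where(x <= y, 1.0, y)

def C_L(x, y):   # Lukasiewicz conjunction: 0 v (x + y - 1)
    return np.maximum(0.0, x + y - 1.0)

def I_L(x, y):   # Lukasiewicz implication: 1 ^ (1 - x + y)
    return np.minimum(1.0, 1.0 - x + y)

def C_G(x, y):   # Gaines conjunction: 0 if x = 0, else y
    return np.where(x == 0.0, 0.0, y)

def I_G(x, y):   # Gaines implication: 1 if x <= y, else 0
    return np.where(x <= y, 1.0, 0.0)

def D_M(x, y):   # maximum fuzzy disjunction
    return np.maximum(x, y)

def J_M(x, y):   # Goedel co-implication: 0 if x >= y, else y
    return np.where(x >= y, 0.0, y)

def D_L(x, y):   # Lukasiewicz disjunction: 1 ^ (x + y)
    return np.minimum(1.0, x + y)

def J_L(x, y):   # Lukasiewicz co-implication: 0 v (y - x)
    return np.maximum(0.0, y - x)

def D_G(x, y):   # Gaines disjunction: 1 if x = 1, else y
    return np.where(x == 1.0, 1.0, y)

def J_G(x, y):   # Gaines co-implication: 0 if x >= y, else 1
    return np.where(x >= y, 0.0, 1.0)
```

The adjunction relationships (11) and (12) can be checked numerically with these functions; for instance, $C_L(x, y) \leq z$ holds exactly when $x \leq I_L(y, z)$ for any $x, y, z \in [0,1]$.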
Apart from the adjunction relationship, fuzzy logical operators can be connected through a strong fuzzy negation. A strong fuzzy negation is a non-increasing mapping $\eta: [0,1] \to [0,1]$ such that $\eta(0) = 1$, $\eta(1) = 0$, and $\eta(\eta(x)) = x$ for all $x \in [0,1]$. The standard fuzzy negation $\eta_S(x) = 1 - x$ is a strong fuzzy negation.
A fuzzy logic operator $A: [0,1] \times [0,1] \to [0,1]$ can be connected to a fuzzy logic operator $B: [0,1] \times [0,1] \to [0,1]$ by means of a strong fuzzy negation $\eta$ as follows:
$$\eta(B(x, y)) = A(\eta(x), \eta(y)).$$
In this case, we say that the pair $(A, B)$ is dual with respect to $\eta$. For example, the pairs of fuzzy disjunction and fuzzy conjunction $(D_G, C_G)$, $(D_M, C_M)$, and $(D_L, C_L)$ are dual with respect to the standard fuzzy negation $\eta_S$. The pairs $(I_G, J_G)$, $(I_M, J_M)$, and $(I_L, J_L)$ of fuzzy implication and fuzzy co-implication are also dual with respect to the standard fuzzy negation. The commutative diagram shown in Figure 1 establishes the relationship between adjunction and negation [21,60].
Fuzzy logic operators can be combined with either the maximum or the minimum operation to yield matrix products. For instance, the max-C and the min-D matrix products of $A \in [0,1]^{m \times k}$ by $B \in [0,1]^{k \times n}$, denoted, respectively, by $G = A \circ B$ and $H = A \bullet B$, are defined by the following equations for all $i = 1, \ldots, m$ and $j = 1, \ldots, n$:
$$g_{ij} = \bigvee_{\xi=1}^{k} C(a_{i\xi}, b_{\xi j}) \quad \text{and} \quad h_{ij} = \bigwedge_{\xi=1}^{k} D(a_{i\xi}, b_{\xi j}).$$
In analogy to the concept of linear combination, we say that $\mathbf{z} \in [0,1]^n$ is a max-C combination of the vectors belonging to a finite set $\mathcal{A} = \{\mathbf{a}^1, \ldots, \mathbf{a}^k\} \subseteq [0,1]^n$ if
$$\mathbf{z} = \bigvee_{\xi=1}^{k} C(\lambda_\xi, \mathbf{a}^\xi) \iff z_i = \bigvee_{\xi=1}^{k} C(\lambda_\xi, a_i^\xi), \quad i = 1, \ldots, n,$$
where $\lambda_\xi \in [0,1]$ for all $\xi = 1, \ldots, k$. Similarly, a min-D combination of the vectors of $\mathcal{A}$ is given by
$$\mathbf{y} = \bigwedge_{\xi=1}^{k} D(\theta_\xi, \mathbf{a}^\xi) \iff y_i = \bigwedge_{\xi=1}^{k} D(\theta_\xi, a_i^\xi), \quad i = 1, \ldots, n,$$
where $\theta_\xi \in [0,1]$ for all $\xi = 1, \ldots, k$. The sets of all max-C combinations and min-D combinations of $\mathcal{A} = \{\mathbf{a}^1, \ldots, \mathbf{a}^k\} \subseteq [0,1]^n$ are denoted, respectively, by
$$\mathcal{C}(\mathcal{A}) = \Big\{\mathbf{z} = \bigvee_{\xi=1}^{k} C(\lambda_\xi, \mathbf{a}^\xi) : \lambda_\xi \in [0,1]\Big\}$$
and
$$\mathcal{D}(\mathcal{A}) = \Big\{\mathbf{z} = \bigwedge_{\xi=1}^{k} D(\theta_\xi, \mathbf{a}^\xi) : \theta_\xi \in [0,1]\Big\}.$$
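As an illustration, here is a minimal NumPy sketch of the max-C matrix product, the min-D matrix product, and max-C/min-D combinations; it reuses the connective functions from the earlier sketch, and the function names are ours.

```python
import numpy as np

def max_C_product(A, B, C=C_M):
    """Max-C product G with g_ij = max_xi C(a_{i,xi}, b_{xi,j})."""
    A3 = np.asarray(A, dtype=float)[:, :, None]   # (m, k, 1)
    B3 = np.asarray(B, dtype=float)[None, :, :]   # (1, k, n)
    return C(A3, B3).max(axis=1)                  # (m, n)

def min_D_product(A, B, D=D_M):
    """Min-D product H with h_ij = min_xi D(a_{i,xi}, b_{xi,j})."""
    A3 = np.asarray(A, dtype=float)[:, :, None]
    B3 = np.asarray(B, dtype=float)[None, :, :]
    return D(A3, B3).min(axis=1)

def max_C_combination(A_cols, lam, C=C_M):
    """Max-C combination of the columns of A_cols (n x k) with coefficients lam."""
    return C(np.asarray(lam)[None, :], np.asarray(A_cols, dtype=float)).max(axis=1)

def min_D_combination(A_cols, theta, D=D_M):
    """Min-D combination of the columns of A_cols (n x k) with coefficients theta."""
    return D(np.asarray(theta)[None, :], np.asarray(A_cols, dtype=float)).min(axis=1)
```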
The sets of max-C and min-D combinations play a major role in the projection auto-associative fuzzy morphological memories (PAFMMs) presented in Section 4. However, before introducing PAFMMs, let us briefly review the fuzzy auto-associative morphological memories, defined using fuzzy logical connectives and adjunctions.

3. Auto-Associative Fuzzy Morphological Memories

Let us briefly review the auto-associative fuzzy morphological memories (AFMMs). The reader interested in a detailed account of this subject is invited to consult [5,22]. To simplify the exposition, we assume that $\langle [0,1], \wedge, \vee, C, R \rangle$ and $\langle [0,1], \wedge, \vee, D, S \rangle$, where $C$ is a fuzzy conjunction and $D$ is a fuzzy disjunction, are, respectively, an R-clogli and a dual R-clogli.
As far as we know, most AFMMs are implemented by a single-layer network defined in terms of either the max-C or the min-D matrix products established by (17) [5]. Formally, max-C and min-D auto-associative fuzzy morphological memories (AFMMs) are mappings W , M : [ 0 , 1 ] n [ 0 , 1 ] n defined, respectively, by the following equations:
$$\mathcal{W}(\mathbf{x}) = W \circ \mathbf{x} \quad \text{and} \quad \mathcal{M}(\mathbf{x}) = M \bullet \mathbf{x}, \quad \forall \mathbf{x} \in [0,1]^n,$$
where W , M [ 0 , 1 ] n × n are called the synaptic weight matrices. Examples of AFMMs include the auto-associative version of the max-minimum and max-product fuzzy associative memories of Kosko [19], the max-min fuzzy associative memories with a threshold [25], and the implicative fuzzy associative memories [9].
We point out that using a strong fuzzy negation η , we can derive from a max-C AFMM W another AFMM called the negation of W and denoted by W * . Formally, the negation of W is defined by equation
$$\mathcal{W}^*(\mathbf{x}) = \eta\big(\mathcal{W}(\eta(\mathbf{x}))\big), \quad \forall \mathbf{x} \in [0,1]^n,$$
where the strong fuzzy negation η is applied in a component-wise manner. It is not hard to show that the negation of a max-C AFMM W is a min-D AFMM M , and vice-versa, where fuzzy conjunction C and fuzzy disjunction D are dual with respect to strong fuzzy negation η , i.e., they satisfy (16) [22].
Let us now turn our attention to a recording recipe called fuzzy learning by adjunction (FLA), which can be effectively used for the storage of vectors on AFMM W and M defined by (22) [22]. Given A = { a 1 , , a k } [ 0 , 1 ] n , called the fundamental memory set, FLA determines matrix W [ 0 , 1 ] n × n of a max-C AFMM and matrix M [ 0 , 1 ] n × n of the min-D AFMM by means of the following equations for all i , j = 1 , , n :
$$w_{ij} = \bigwedge_{\xi=1}^{k} R(a_j^\xi, a_i^\xi) \quad \text{and} \quad m_{ij} = \bigvee_{\xi=1}^{k} S(a_j^\xi, a_i^\xi),$$
where R and S are the residuals of C and D, respectively.
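The recording recipe (24) can be sketched in NumPy as follows, again reusing the connectives defined in Section 2.3; the function name fla_weights is ours. Using $J_M$ and the memories of Example 1 below, the sketch reproduces the matrix $M_M$ in (28).

```python
import numpy as np

def fla_weights(A_cols, R=I_M, S=J_M):
    """Fuzzy learning by adjunction for the memories stored as columns of
    A_cols (shape n x k): w_ij = min_xi R(a_j^xi, a_i^xi) and
    m_ij = max_xi S(a_j^xi, a_i^xi)."""
    A = np.asarray(A_cols, dtype=float)     # (n, k)
    Ai = A[:, None, :]                      # a_i^xi -> (n, 1, k)
    Aj = A[None, :, :]                      # a_j^xi -> (1, n, k)
    W = R(Aj, Ai).min(axis=2)               # (n, n)
    M = S(Aj, Ai).max(axis=2)               # (n, n)
    return W, M
```

The recall step (22) then amounts to, e.g., min_D_product(M, x.reshape(-1, 1)) for the min-D AFMM.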
The following proposition reveals that a min-D AFMM M and a max-C AFMM W , both synthesized using FLA, project input x into the set of their fixed points if C and D are both associative [5]. Furthermore, Proposition 1 shows that output M ( x ) of a min-D AFMM with FLA is the greatest fixed-point less than or equal to input x . Analogously, a max-C AFMM with FLA yields the least fixed-point greater than or equal to the input [5].
Proposition 1 
(Valle and Sussner [5]). We let $\langle [0,1], \wedge, \vee, C, R \rangle$ and $\langle [0,1], \wedge, \vee, D, S \rangle$ be an associative R-clogli and an associative dual R-clogli, respectively. The output of the min-D AFMM $\mathcal{M}$ defined by (22) with FLA given by (24) satisfies
$$\mathcal{M}(\mathbf{x}) = \bigvee \{\mathbf{z} \in \mathcal{I}(\mathcal{A}) : \mathbf{z} \leq \mathbf{x}\}, \quad \forall \mathbf{x} \in [0,1]^n,$$
where $\mathcal{I}(\mathcal{A})$ denotes the set of all fixed points of $\mathcal{M}$, which depends on and includes the fundamental memory set $\mathcal{A} = \{\mathbf{a}^1, \ldots, \mathbf{a}^k\}$. Dually, the output of the max-C AFMM $\mathcal{W}$ with FLA satisfies
$$\mathcal{W}(\mathbf{x}) = \bigwedge \{\mathbf{y} \in \mathcal{J}(\mathcal{A}) : \mathbf{y} \geq \mathbf{x}\}, \quad \forall \mathbf{x} \in [0,1]^n,$$
where $\mathcal{J}(\mathcal{A})$ denotes the set of all fixed points of $\mathcal{W}$, which also depends on and contains the fundamental memory set $\mathcal{A}$.
As a consequence of Proposition 1, AFMMs with FLA present the following properties: they can store as many vectors as desired; they have a large number of spurious memories; and an AFMM exhibits tolerance to either dilative noise or erosive noise, but it is susceptible to mixed (dilative+erosive) noise. We recall that a distorted version $\mathbf{x}$ of a fundamental memory $\mathbf{a}^\xi$ underwent a dilative change if $\mathbf{x} \geq \mathbf{a}^\xi$. Dually, we say that $\mathbf{x}$ underwent an erosive change if $\mathbf{x} \leq \mathbf{a}^\xi$ [27].
Example 1. 
We consider the fundamental memory set
$$\mathcal{A} = \left\{ \mathbf{a}^1 = \begin{bmatrix} 0.4 \\ 0.3 \\ 0.7 \\ 0.2 \end{bmatrix}, \; \mathbf{a}^2 = \begin{bmatrix} 0.1 \\ 0.7 \\ 0.5 \\ 0.8 \end{bmatrix}, \; \mathbf{a}^3 = \begin{bmatrix} 0.8 \\ 0.5 \\ 0.4 \\ 0.2 \end{bmatrix} \right\}.$$
Using Gödel's co-implication $J_M$ in (24), the synaptic weight matrix $M_M$ of the min-$D_M$ AFMM $\mathcal{M}_M$ with FLA is
$$M_M = \begin{bmatrix} 0.00 & 0.80 & 0.80 & 0.80 \\ 0.70 & 0.00 & 0.70 & 0.50 \\ 0.70 & 0.70 & 0.00 & 0.70 \\ 0.80 & 0.80 & 0.80 & 0.00 \end{bmatrix}.$$
Now, we consider the input fuzzy set
$$\mathbf{x} = [0.4 \;\; 0.3 \;\; 0.8 \;\; 0.7]^T.$$
We note that $\mathbf{x}$ is a dilated version of fundamental memory $\mathbf{a}^1$ because $\mathbf{x} = \mathbf{a}^1 + [0.0 \;\; 0.0 \;\; 0.1 \;\; 0.5]^T \geq \mathbf{a}^1$. The output of the min-$D_M$ AFMM with FLA is
$$\mathcal{M}_M(\mathbf{x}) = M_M \bullet_M \mathbf{x} = [0.40 \;\; 0.30 \;\; 0.70 \;\; 0.70]^T \neq \mathbf{a}^1,$$
where "$\bullet_M$" denotes the min-$D_M$ product defined in terms of the fuzzy disjunction $D_M$. According to Proposition 1, the output $[0.40 \;\; 0.30 \;\; 0.70 \;\; 0.70]^T$ is a fixed point of $\mathcal{M}_M$ that does not belong to the fundamental memory set $\mathcal{A}$. Thus, it is a spurious memory of $\mathcal{M}_M$. Similarly, we can use FLA to store the fundamental set $\mathcal{A}$ into the min-D AFMMs $\mathcal{M}_L$ and $\mathcal{M}_G$ obtained by considering, respectively, the Lukasiewicz and Gaines fuzzy disjunctions. Upon presentation of the input vector $\mathbf{x}$ given by (29), the min-D AFMMs $\mathcal{M}_L$ and $\mathcal{M}_G$ yield, respectively,
$$\mathcal{M}_L(\mathbf{x}) = M_L \bullet_L \mathbf{x} = [0.40 \;\; 0.30 \;\; 0.70 \;\; 0.40]^T \neq \mathbf{a}^1$$
and
$$\mathcal{M}_G(\mathbf{x}) = M_G \bullet_G \mathbf{x} = [0.40 \;\; 0.30 \;\; 0.80 \;\; 0.70]^T \neq \mathbf{a}^1.$$
Thus, like the min-$D_M$ AFMM $\mathcal{M}_M$, the auto-associative memories $\mathcal{M}_L$ and $\mathcal{M}_G$ failed to produce the desired output $\mathbf{a}^1$.

4. Max-C and Min-D Projection Auto-Associative Fuzzy Morphological Memories

Because min-D and max-C AFMMs are distributed, matrix-based auto-associative memories with an $n \times n$ synaptic weight matrix, they consume a great deal of computer memory when the length $n$ of the stored vectors is considerable. Furthermore, from Proposition 1, their tolerance to either dilative or erosive noise degrades as the number of fixed points increases.
Inspired by the fact that min-D and max-C AFMMs with FLA project the input vector into the set of their fixed points, we can improve the noise tolerance of these memory models by reducing their set of fixed points. Accordingly, we recently introduced the max-C projection auto-associative fuzzy memories (max-C PAFMMs) by replacing, in (25), the set $\mathcal{I}(\mathcal{A})$ by the set $\mathcal{C}(\mathcal{A})$ of all max-C combinations of vectors of $\mathcal{A}$ [37,38]. Formally, we let $\langle [0,1], \wedge, \vee, C, R \rangle$ be an R-clogli. Given a set $\mathcal{A} = \{\mathbf{a}^1, \ldots, \mathbf{a}^k\} \subseteq [0,1]^n$ of fundamental memories, a max-C PAFMM $\mathcal{V}: [0,1]^n \to [0,1]^n$ is defined by
$$\mathcal{V}(\mathbf{x}) = \bigvee \{\mathbf{z} \in \mathcal{C}(\mathcal{A}) : \mathbf{z} \leq \mathbf{x}\}, \quad \forall \mathbf{x} \in [0,1]^n,$$
where the set $\mathcal{C}(\mathcal{A})$ is defined in (20). A dual model, referred to as a min-D PAFMM, is obtained by replacing $\mathcal{J}(\mathcal{A})$ by the set $\mathcal{D}(\mathcal{A})$ of all min-D combinations of the fundamental memories in (26). Specifically, we let $\langle [0,1], \wedge, \vee, D, S \rangle$ be a dual R-clogli. A min-D PAFMM $\mathcal{S}: [0,1]^n \to [0,1]^n$ satisfies
$$\mathcal{S}(\mathbf{x}) = \bigwedge \{\mathbf{y} \in \mathcal{D}(\mathcal{A}) : \mathbf{y} \geq \mathbf{x}\}, \quad \forall \mathbf{x} \in [0,1]^n,$$
where set D ( A ) is given in (21). The following theorem is a straightforward consequence of these definitions.
Theorem 1. 
We let $\langle [0,1], \wedge, \vee, C, R \rangle$ and $\langle [0,1], \wedge, \vee, D, S \rangle$ be an R-clogli and a dual R-clogli, respectively. The max-C and min-D PAFMMs given, respectively, by (33) and (34) satisfy the inequalities $\mathcal{V}(\mathbf{x}) \leq \mathbf{x} \leq \mathcal{S}(\mathbf{x})$ for any input vector $\mathbf{x} \in [0,1]^n$. Furthermore, $\mathcal{V}(\mathcal{V}(\mathbf{x})) = \mathcal{V}(\mathbf{x})$ and $\mathcal{S}(\mathcal{S}(\mathbf{x})) = \mathcal{S}(\mathbf{x})$ for all $\mathbf{x} \in [0,1]^n$.
As a consequence of Theorem 1, a max-C PAFMM and a min-D PAFMM are, respectively, an opening and a closing from fuzzy mathematical morphology [61]. Like a min-D AFMM, a max-C PAFMM exhibits tolerance only to dilative noise. Also, it is susceptible to either erosive or mixed noise. In fact, a fundamental memory $\mathbf{a}^\xi$ cannot be retrieved by a max-C PAFMM from an input $\mathbf{x}$ unless $\mathbf{a}^\xi \leq \mathbf{x}$. Similarly, like a max-C AFMM, a min-D PAFMM $\mathcal{S}$ exhibits tolerance to erosive noise, but it is not robust to dilative or mixed noise.
Let us now address the absolute storage capacity of max-C and min-D PAFMMs. The following theorem shows that all fundamental memories are fixed points of V and S . Thus, max-C and min-D PAFMMs exhibit unlimited absolute storage capacity.
Theorem 2. 
We let $\langle [0,1], \wedge, \vee, C, R \rangle$ and $\langle [0,1], \wedge, \vee, D, S \rangle$ be, respectively, an R-clogli and a dual R-clogli and consider a fundamental memory set $\mathcal{A} = \{\mathbf{a}^1, \ldots, \mathbf{a}^k\} \subseteq [0,1]^n$. The max-C PAFMM given by (33) satisfies $\mathcal{V}(\mathbf{a}^\xi) = \mathbf{a}^\xi$ for all $\xi \in K = \{1, \ldots, k\}$. Dually, $\mathcal{S}(\mathbf{a}^\xi) = \mathbf{a}^\xi$ for all $\xi \in K$, where $\mathcal{S}$ denotes the min-D PAFMM given by (34).
The following theorem, which is a straightforward consequence of the adjunction relationships given by (11) and (12), provides effective formulas for the implementation of max-C and min-D PAFMMs.
Theorem 3. 
We let $\langle [0,1], \wedge, \vee, C, R \rangle$ and $\langle [0,1], \wedge, \vee, D, S \rangle$ be an R-clogli and a dual R-clogli, respectively. Also, we consider a fundamental memory set $\mathcal{A} = \{\mathbf{a}^1, \ldots, \mathbf{a}^k\} \subseteq [0,1]^n$ and let $\mathbf{x} \in [0,1]^n$ be an arbitrary input. The max-C PAFMM $\mathcal{V}$ given by (33) satisfies
$$\mathcal{V}(\mathbf{x}) = \bigvee_{\xi=1}^{k} C(\lambda_\xi, \mathbf{a}^\xi), \quad \text{where} \quad \lambda_\xi = \bigwedge_{j=1}^{n} R(a_j^\xi, x_j).$$
Similarly, the output of the min-D PAFMM $\mathcal{S}$ can be computed by
$$\mathcal{S}(\mathbf{x}) = \bigwedge_{\xi=1}^{k} D(\theta_\xi, \mathbf{a}^\xi), \quad \text{where} \quad \theta_\xi = \bigvee_{j=1}^{n} S(a_j^\xi, x_j).$$
Remark 1. 
Theorem 3 above gives a formula for the coefficient $\lambda_\xi$ that is used to define the output of a max-C PAFMM. In some sense, the coefficient $\lambda_\xi$ corresponds to the degree of inclusion of fundamental memory $\mathbf{a}^\xi$ in the input fuzzy set $\mathbf{x}$. Specifically, if the residual $R$ of the fuzzy conjunction $C$ is a fuzzy implication, then we have
$$\lambda_\xi = Inc_F(\mathbf{a}^\xi, \mathbf{x}), \quad \forall \xi \in K,$$
where $Inc_F$ denotes the Bandler–Kohout fuzzy inclusion measure [62].
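Theorem 3 translates directly into code. The sketch below assumes the connectives from Section 2.3 and stores the fundamental memories as the columns of an n x k array; the function names are our own. With Gödel's pair (I_M, C_M) and the data of Example 1, it reproduces the output of V_M reported in Example 2 below.

```python
import numpy as np

def max_C_pafmm(A_cols, x, C=C_M, R=I_M):
    """Recall of a max-C PAFMM following (35):
    lambda_xi = min_j R(a_j^xi, x_j) and V(x) = max_xi C(lambda_xi, a^xi)."""
    A = np.asarray(A_cols, dtype=float)           # (n, k), columns a^xi
    x = np.asarray(x, dtype=float)                # (n,)
    lam = R(A, x[:, None]).min(axis=0)            # (k,)
    return C(lam[None, :], A).max(axis=1)         # (n,)

def min_D_pafmm(A_cols, x, D=D_M, S=J_M):
    """Dual recall following (36):
    theta_xi = max_j S(a_j^xi, x_j) and S(x) = min_xi D(theta_xi, a^xi)."""
    A = np.asarray(A_cols, dtype=float)
    x = np.asarray(x, dtype=float)
    theta = S(A, x[:, None]).max(axis=0)
    return D(theta[None, :], A).min(axis=1)
```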
Regarding computational effort, PAFMMs are generally less expensive than their corresponding AFMMs because they are non-distributed memory models, which means that they do not require the storage of a synaptic weight matrix of size $n \times n$. Additionally, they involve fewer floating-point operations than their corresponding min-D and max-C AFMMs if $k < n$. To illustrate this remark, we consider a fundamental memory set $\mathcal{A} = \{\mathbf{a}^1, \ldots, \mathbf{a}^k\}$, where $\mathbf{a}^\xi \in [0,1]^n$ for all $\xi \in \{1, \ldots, k\}$ with $k < n$. On the one hand, to synthesize the synaptic weight matrix of a min-D AFMM $\mathcal{M}$, we perform $kn^2$ evaluations of the residual $S$ and $(2k-1)n^2$ comparisons. In addition, the resulting synaptic weight matrix consumes $O(n^2)$ of memory space. In the recall phase, the min-D AFMM $\mathcal{M}$ requires $n^2$ evaluations of a fuzzy disjunction and $(2n-1)n$ comparisons. On the other hand, to compute the parameters $\lambda$ of a max-C PAFMM $\mathcal{V}$, we perform $2nk$ evaluations of a residual operator and $(2n-1)k$ comparisons. The subsequent step of the max-C PAFMM $\mathcal{V}$ requires $2nk$ evaluations of a fuzzy conjunction and $(k-1)n$ comparisons. Lastly, it consumes $O(nk)$ of memory space to store the fundamental memories. Similar remarks hold for a max-C AFMM and a min-D PAFMM. Table 1 summarizes the computational effort in the recall phase of AFMMs and PAFMMs. Here, fuzzy operations refer to evaluations of fuzzy conjunctions or disjunctions and their residual operators.
Finally, different from min-D and max-C AFMMs, max-C and min-D PAFMMs are not dual models with respect to a strong fuzzy negation. The following theorem shows that the negation of a min-D PAFMM is a max-C PAFMM designed to store the negation of fundamental memories, and vice-versa.
Theorem 4. 
We let $\langle [0,1], \wedge, \vee, C, R \rangle$ and $\langle [0,1], \wedge, \vee, D, S \rangle$ be, respectively, an R-clogli and a dual R-clogli where the pairs $(C, D)$ and $(R, S)$ are dual operators with respect to a strong fuzzy negation $\eta$. Given a fundamental memory set $\mathcal{A} = \{\mathbf{a}^1, \ldots, \mathbf{a}^k\} \subseteq [0,1]^n$, we define $\mathcal{B} = \{\mathbf{b}^1, \ldots, \mathbf{b}^k\}$ by setting $b_i^\xi = \eta(a_i^\xi)$ for all $i = 1, \ldots, n$ and $\xi \in K$. Also, we let $\mathcal{V}$ and $\mathcal{S}$ be, respectively, the max-C and min-D PAFMMs designed for the storage of $\mathbf{a}^1, \ldots, \mathbf{a}^k$ and define their negations as follows for every $\mathbf{x} \in [0,1]^n$:
$$\mathcal{V}^*(\mathbf{x}) = \eta(\mathcal{V}[\eta(\mathbf{x})]) \quad \text{and} \quad \mathcal{S}^*(\mathbf{x}) = \eta(\mathcal{S}[\eta(\mathbf{x})]).$$
The negation $\mathcal{S}^*$ of $\mathcal{S}$ is the max-C PAFMM designed for the storage of $\mathbf{b}^1, \ldots, \mathbf{b}^k$, that is,
$$\mathcal{S}^*(\mathbf{x}) = \bigvee_{\xi=1}^{k} C(\lambda_\xi^*, \mathbf{b}^\xi), \quad \text{where} \quad \lambda_\xi^* = \bigwedge_{j=1}^{n} R(b_j^\xi, x_j).$$
Analogously, the negation $\mathcal{V}^*$ of $\mathcal{V}$ is the min-D PAFMM given by
$$\mathcal{V}^*(\mathbf{x}) = \bigwedge_{\xi=1}^{k} D(\theta_\xi^*, \mathbf{b}^\xi), \quad \text{where} \quad \theta_\xi^* = \bigvee_{j=1}^{n} S(b_j^\xi, x_j).$$
It follows from Theorem 4 that negations S * and V * fail to store fundamental memory set A = { a 1 , , a k } .
Example 2. 
We consider the fundamental memory set $\mathcal{A}$ given by (27). We let $C_M$ and $I_M$ be the minimum fuzzy conjunction and the fuzzy implication of Gödel, respectively. We synthesize the max-C PAFMM $\mathcal{V}_M$ designed for the storage of $\mathcal{A}$ using the adjunction pair $(I_M, C_M)$. From Theorem 2, the equation $\mathcal{V}_M(\mathbf{a}^\xi) = \mathbf{a}^\xi$ holds for $\xi = 1, 2, 3$. Given the input vector $\mathbf{x}$ defined by (29), we obtain from (35) the coefficients
$$\lambda_1 = 1.0, \quad \lambda_2 = 0.3, \quad \text{and} \quad \lambda_3 = 0.3.$$
Thus, the output of the max-C PAFMM $\mathcal{V}_M$ is
$$\mathcal{V}_M(\mathbf{x}) = C_M(\lambda_1, \mathbf{a}^1) \vee C_M(\lambda_2, \mathbf{a}^2) \vee C_M(\lambda_3, \mathbf{a}^3) = [0.40 \;\; 0.30 \;\; 0.70 \;\; 0.30]^T \neq \mathbf{a}^1.$$
We note that V M failed to retrieve fundamental memory a 1 .
Analogously, we can store fundamental memory set A into the max-C PAFMM V L using Lukasiewicz fuzzy conjunction and implication. Upon presentation of vector x , the max-C PAFMM V L produces
$$\mathcal{V}_L(\mathbf{x}) = [0.40 \;\; 0.30 \;\; 0.70 \;\; 0.40]^T \neq \mathbf{a}^1.$$
Like the min-D AFMM $\mathcal{M}_L$, memory $\mathcal{V}_L$ failed to recall fundamental memory $\mathbf{a}^1$. Nevertheless, the outputs of the max-C PAFMMs $\mathcal{V}_M$ and $\mathcal{V}_L$ are more similar to the desired vector $\mathbf{a}^1$ than those of the min-D AFMMs $\mathcal{M}_M$, $\mathcal{M}_L$, and $\mathcal{M}_G$ (see Example 1). Quantitatively, Table 2 shows the normalized mean squared error (NMSE) between the recalled vector and the desired output $\mathbf{a}^1$. We recall that the NMSE between $\mathbf{x}$ and $\mathbf{a}$ is given by
$$NMSE(\mathbf{x}, \mathbf{a}) = \frac{\|\mathbf{x} - \mathbf{a}\|_2^2}{\|\mathbf{a}\|_2^2} = \frac{\sum_{j=1}^{n} (x_j - a_j)^2}{\sum_{j=1}^{n} a_j^2}.$$
This simple example confirms that a max-C PAFMM can exhibit a better tolerance with respect to dilative noise than its corresponding min-D AFMM.
Table 2. Normalized mean squared error.
$NMSE(\cdot, \mathbf{a}^1)$:  $\mathbf{x}$: 0.33 | $\mathcal{M}_M(\mathbf{x})$: 0.32 | $\mathcal{M}_L(\mathbf{x})$: 0.05 | $\mathcal{M}_G(\mathbf{x})$: 0.33 | $\mathcal{V}_M(\mathbf{x})$: 0.01 | $\mathcal{V}_L(\mathbf{x})$: 0.05 | $\mathcal{V}_Z(\mathbf{x})$: 0.00
Let us conclude the section by emphasizing that we can only ensure optimal absolute storage capacity if C has a left identity.
Example 3. 
We consider the “compensatory and” fuzzy conjunction defined by
$$C_A(x, y) = \sqrt{(xy)(x + y - xy)}.$$
Fuzzy conjunction $C_A$ does not have a left identity. Moreover, the fuzzy implication that forms an adjunction with $C_A$ is
$$I_A(x, y) = \begin{cases} 1, & x = 0, \\ \dfrac{-x^2 + \sqrt{x^4 + 4x(1-x)y^2}}{2x(1-x)}, & 0 < x < 1, \\ y^2, & x = 1. \end{cases}$$
Now, we let $\mathcal{V}_A: [0,1]^4 \to [0,1]^4$ be the max-$C_A$ PAFMM designed for the storage of the fundamental memory set $\mathcal{A}$ given by (27). Upon presentation of fundamental memory $\mathbf{a}^1$ as input, we obtain from (35) the coefficients
$$\lambda_1 = 0.39, \quad \lambda_2 = 0.06, \quad \text{and} \quad \lambda_3 = 0.23.$$
Thus, the output vector of the max-C PAFMM $\mathcal{V}_A$ is
$$\mathcal{V}_A(\mathbf{a}^1) = C_A(\lambda_1, \mathbf{a}^1) \vee C_A(\lambda_2, \mathbf{a}^2) \vee C_A(\lambda_3, \mathbf{a}^3) = [0.40 \;\; 0.27 \;\; 0.47 \;\; 0.20]^T \neq \mathbf{a}^1.$$
In a similar fashion, using fundamental memories $\mathbf{a}^2$ and $\mathbf{a}^3$ as input, we obtain from $\mathcal{V}_A$ the outputs
$$\mathcal{V}_A(\mathbf{a}^2) = [0.10 \;\; 0.39 \;\; 0.30 \;\; 0.44]^T \neq \mathbf{a}^2 \quad \text{and} \quad \mathcal{V}_A(\mathbf{a}^3) = [0.52 \;\; 0.37 \;\; 0.40 \;\; 0.20]^T \neq \mathbf{a}^3.$$
We note that the inequality $\mathcal{V}_A(\mathbf{a}^\xi) \leq \mathbf{a}^\xi$ holds for $\xi = 1, 2, 3$. However, fundamental memories $\mathbf{a}^1$, $\mathbf{a}^2$, and $\mathbf{a}^3$ are not fixed points of the max-C PAFMM $\mathcal{V}_A$.

5. Zadeh's Max-C PAFMM and Its Dual Model

The noise tolerance of an auto-associative memory is usually closely related to its spurious memories. In general, the more spurious memories an auto-associative memory has, the less tolerant to noise it is. We recall that a spurious memory is a fixed point of an auto-associative memory that does not belong to the fundamental memory set [1]. For example, in Example 2, the fixed point $\mathbf{y} = [0.4, 0.3, 0.7, 0.3]^T$ is a spurious memory of the max-C PAFMM $\mathcal{V}_M$. Indeed, the set of fixed points of a max-C PAFMM corresponds to the set of all max-C combinations of the fundamental memories. Therefore, the noise tolerance of a max-C PAFMM increases as the family $\mathcal{C}(\mathcal{A})$ becomes smaller.
Interestingly, by considering the fuzzy conjunction of Gaines $C_G$ in (20), we can significantly reduce $\mathcal{C}(\mathcal{A})$, where $\mathcal{A}$ is a fundamental memory set $\{\mathbf{a}^1, \ldots, \mathbf{a}^k\}$. Indeed, from Theorem 3, the output of the max-C PAFMM based on Gaines' fuzzy conjunction is given by
$$\mathcal{V}_Z(\mathbf{x}) = \bigvee_{\xi=1}^{k} C_G(\lambda_\xi, \mathbf{a}^\xi),$$
where
$$\lambda_\xi = \bigwedge_{j=1}^{n} I_G(a_j^\xi, x_j) = Inc_Z(\mathbf{a}^\xi, \mathbf{x}), \quad \forall \xi \in K.$$
Here, $Inc_Z: [0,1]^n \times [0,1]^n \to [0,1]$ denotes the fuzzy inclusion measure of Zadeh defined as follows for all $\mathbf{a}, \mathbf{b} \in [0,1]^n$:
$$Inc_Z(\mathbf{a}, \mathbf{b}) = \begin{cases} 1, & a_j \leq b_j, \; \forall j = 1, \ldots, n, \\ 0, & \text{otherwise}. \end{cases}$$
In other words, coefficients λ ξ are determined using Zadeh’s fuzzy inclusion measure I n c Z . Hence, this max-C PAFMM is called Zadeh’s max-C PAFMM and is denoted by V Z .
From (52), the coefficient $\lambda_\xi = Inc_Z(\mathbf{a}^\xi, \mathbf{x})$ is either zero or one. Moreover, $\lambda_\xi = 1$ if and only if $a_j^\xi \leq x_j$ for all $j = 1, \ldots, n$. Also, we have $C_G(0, x) = 0$ and $C_G(1, x) = x$ for all $x \in [0,1]$. Therefore, for any input $\mathbf{x} \in [0,1]^n$, the output of Zadeh's max-C PAFMM is alternatively given by the equation
$$\mathcal{V}_Z(\mathbf{x}) = \bigvee_{\xi \in I} \mathbf{a}^\xi,$$
where
$$I = \{\xi : a_j^\xi \leq x_j, \; \forall j = 1, \ldots, n\}$$
is the set of indexes $\xi$ such that $\mathbf{a}^\xi$ is less than or equal to the input $\mathbf{x}$, i.e., $\mathbf{a}^\xi \leq \mathbf{x}$. Here, we have $\mathcal{V}_Z(\mathbf{x}) = \mathbf{0}$ if $I = \emptyset$, where $\mathbf{0}$ is a vector of zeros.
In a similar manner, from (36), the dual of Zadeh's max-C PAFMM is the min-D PAFMM defined by
$$\mathcal{S}_Z(\mathbf{x}) = \bigwedge_{\xi=1}^{k} D_G(\theta_\xi, \mathbf{a}^\xi), \quad \text{where} \quad \theta_\xi = \bigvee_{j=1}^{n} J_G(a_j^\xi, x_j).$$
Here, $D_G$ and $J_G$ denote the fuzzy disjunction and the fuzzy co-implication of Gaines, respectively. Alternatively, the output of Zadeh's min-D PAFMM is given by
$$\mathcal{S}_Z(\mathbf{x}) = \bigwedge_{\xi \in J} \mathbf{a}^\xi,$$
where
$$J = \{\xi : a_j^\xi \geq x_j, \; \forall j = 1, \ldots, n\}$$
is the set of indexes $\xi$ such that $\mathbf{a}^\xi \geq \mathbf{x}$. Here, we have $\mathcal{S}_Z(\mathbf{x}) = \mathbf{1}$ if $J = \emptyset$, where $\mathbf{1}$ is a vector of ones.
We note from (53) and (56) that no arithmetic operation is performed during the recall phase of Zadeh’s max-C PAFMM and its dual model; they only perform comparisons! Thus, both V Z and S Z are computationally cheap and fast associative memory models. In addition, Zadeh’s max-C PAFMM is exceptionally robust to dilative noise, while its dual model S Z exhibits excellent tolerance to erosive noise. The following theorem addresses the noise tolerance of these memory models.
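Because the coefficients of Zadeh's models are either zero or one, the recall phases (53) and (56) reduce to component-wise comparisons. A minimal NumPy sketch (with hypothetical function names) is given below.

```python
import numpy as np

def zadeh_max_pafmm(A_cols, x):
    """V_Z(x): join of all stored columns a^xi with a^xi <= x (componentwise);
    returns the zero vector when the index set I is empty."""
    A = np.asarray(A_cols, dtype=float)            # (n, k)
    x = np.asarray(x, dtype=float)                 # (n,)
    I = (A <= x[:, None]).all(axis=0)              # I = {xi : a^xi <= x}
    return A[:, I].max(axis=1) if I.any() else np.zeros_like(x)

def zadeh_min_pafmm(A_cols, x):
    """S_Z(x): meet of all stored columns a^xi with a^xi >= x;
    returns the all-ones vector when the index set J is empty."""
    A = np.asarray(A_cols, dtype=float)
    x = np.asarray(x, dtype=float)
    J = (A >= x[:, None]).all(axis=0)              # J = {xi : a^xi >= x}
    return A[:, J].min(axis=1) if J.any() else np.ones_like(x)
```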
Theorem 5. 
We consider a fundamental memory set $\mathcal{A} = \{\mathbf{a}^1, \ldots, \mathbf{a}^k\} \subseteq [0,1]^n$. The identity $\mathcal{V}_Z(\mathbf{x}) = \mathbf{a}^\gamma$ holds true if there exists a unique $\gamma \in K$ such that $\mathbf{a}^\gamma \leq \mathbf{x}$. Furthermore, if there exists a unique $\mu \in K$ such that $\mathbf{a}^\mu \geq \mathbf{x}$, then $\mathcal{S}_Z(\mathbf{x}) = \mathbf{a}^\mu$.
Example 4. 
We consider the fundamental memory set $\mathcal{A}$ given by (27) and the input fuzzy set $\mathbf{x}$ defined by (29). Clearly, $\mathbf{a}^1 \leq \mathbf{x}$, $\mathbf{a}^2 \not\leq \mathbf{x}$, and $\mathbf{a}^3 \not\leq \mathbf{x}$. Thus, the set of indexes defined by (54) is $I = \{1\}$. From (53), the output of Zadeh's max-C PAFMM is
$$\mathcal{V}_Z(\mathbf{x}) = \bigvee_{\xi \in I} \mathbf{a}^\xi = \mathbf{a}^1.$$
We note that the max-C PAFMM $\mathcal{V}_Z$ perfectly recalled the original fundamental memory. As a consequence, the NMSE is zero. From Table 2, the max-C PAFMM of Zadeh yielded the best NMSE, followed by the max-C PAFMMs $\mathcal{V}_M$ and $\mathcal{V}_L$.
Let us conclude this section by remarking that Zadeh’s max-C PAFMM also belongs to the class of Θ -fuzzy associative memories ( Θ -FAMs) proposed by Esmi et al. [11].
Remark 2. 
An auto-associative Θ-FAM is defined as follows: We consider a fundamental memory set $\mathcal{A} = \{\mathbf{a}^1, \ldots, \mathbf{a}^k\} \subseteq [0,1]^n$ and let $\Theta_\xi: [0,1]^n \to [0,1]$ be operators such that $\Theta_\xi(\mathbf{a}^\xi) = 1$ for all $\xi = 1, \ldots, k$. Given an input $\mathbf{x}$ and a weight vector $\mathbf{w} = [w_1, \ldots, w_k] \in \mathbb{R}^k$, the Θ-FAM $\mathcal{O}$ yields
$$\mathcal{O}(\mathbf{x}) = \bigvee_{\xi \in I_w(\mathbf{x})} \mathbf{a}^\xi,$$
where $I_w(\mathbf{x})$ is the following set of indexes:
$$I_w(\mathbf{x}) = \Big\{\gamma : w_\gamma \Theta_\gamma(\mathbf{x}) = \max_{\xi=1:k} w_\xi \Theta_\xi(\mathbf{x})\Big\}.$$
Now, the max-C PAFMM of Zadeh is obtained by considering $\mathbf{w} = [1, 1, \ldots, 1] \in \mathbb{R}^k$ and $\Theta_\xi(\cdot) = Inc_Z(\mathbf{a}^\xi, \cdot)$ for all $\xi = 1, \ldots, k$. Specifically, in this case, $I_w(\mathbf{x})$ coincides with the set of indexes $I$ defined by (54).

6. Noise-Masking Strategy for PAFMMs

Unfortunately, a max-C PAFMM cannot retrieve a fundamental memory $\mathbf{a}^\xi$ from an input $\mathbf{x}$ unless $\mathbf{a}^\xi \leq \mathbf{x}$. Hence, one of the major weaknesses of max-C PAFMMs is their limited tolerance to erosive or mixed noise, which restricts their application to real-world problems. Similar remarks hold for min-D PAFMMs, according to the duality principle. However, the noise tolerance of a PAFMM can be significantly enhanced by masking the noise contained in a corrupted input [40]. Specifically, the noise-masking strategy aims to transform an input degraded by mixed noise into a vector corrupted by either dilative or erosive noise. Inspired by the works of Urcid and Ritter [40], we present a noise-masking strategy for PAFMMs (which has been previously discussed in conference paper [41]). To simplify the presentation, we focus only on max-C PAFMMs.
Suppose we have a max-C PAFMM, denoted by $\mathcal{V}$, which was synthesized using the fundamental memory set $\mathcal{A} = \{\mathbf{a}^1, \ldots, \mathbf{a}^k\}$. Let $\mathbf{x}$ be a version of fundamental memory $\mathbf{a}^\gamma$ corrupted by mixed noise. In this case, $\mathbf{a}_d^\gamma = \mathbf{x} \vee \mathbf{a}^\gamma$ represents the masked input vector. This masked vector contains only dilative noise, meaning the inequality $\mathbf{a}_d^\gamma \geq \mathbf{a}^\gamma$ holds. Since the max-C PAFMM is robust to dilative noise, we can expect it to be able to retrieve the original fuzzy set $\mathbf{a}^\gamma$ when presented with the masked vector $\mathbf{a}_d^\gamma$.
Nevertheless, to mask the noise, we need to know beforehand which fundamental memory is corrupted. To overcome this practical shortcoming, Urcid and Ritter proposed comparing the masked vector $\mathbf{a}_d^\xi = \mathbf{x} \vee \mathbf{a}^\xi$ with the original input $\mathbf{x}$ for each fundamental memory $\mathbf{a}^\xi$, $\xi \in K$. The comparison is based on a meaningful measure, such as the normalized mean squared error (NMSE).
According to our previous study [41], we suggest using a fuzzy similarity measure to determine the masked vector. Specifically, a fuzzy similarity measure is a mapping $\sigma: [0,1]^n \times [0,1]^n \to [0,1]$ which yields the degree of similarity between $\mathbf{a} \in [0,1]^n$ and $\mathbf{b} \in [0,1]^n$ [63,64,65,66,67]. By using a fuzzy similarity measure, we can obtain the masked vector $\mathbf{a}_d^\gamma$ by computing the maximum similarity between the input $\mathbf{x}$ and the fundamental memories. In mathematical terms, we have $\mathbf{a}_d^\gamma = \mathbf{x} \vee \mathbf{a}^\gamma$, where $\gamma$ is an index such that
$$\sigma(\mathbf{x}, \mathbf{a}^\gamma) = \bigvee_{\xi=1}^{k} \sigma(\mathbf{x}, \mathbf{a}^\xi).$$
In summary, using noise masking to retrieve vectors through a max-C PAFMM $\mathcal{V}$ results in an auto-associative fuzzy morphological memory $\mathcal{V}^M: [0,1]^n \to [0,1]^n$ defined by
$$\mathcal{V}^M(\mathbf{x}) = \mathcal{V}(\mathbf{x} \vee \mathbf{a}^\gamma), \quad \forall \mathbf{x} \in [0,1]^n,$$
where $\gamma$ is an index that satisfies (59). Similarly, we can use the technique of noise masking for the recall of vectors using a min-D PAFMM $\mathcal{S}$. Formally, we denote by $\mathcal{S}^M$ the auto-associative fuzzy morphological memory given by
$$\mathcal{S}^M(\mathbf{x}) = \mathcal{S}(\mathbf{x} \wedge \mathbf{a}^\gamma),$$
where $\gamma$ is an index that satisfies (59) and $\mathcal{S}: [0,1]^n \to [0,1]^n$ is a min-D PAFMM.
Finally, we mentioned in the previous section that Zadeh's max-C PAFMM $\mathcal{V}_Z$ and its dual model do not perform floating-point arithmetic operations. However, some arithmetic operations may be required for computing the masked input fuzzy set. For example, if we use the Hamming similarity measure $\sigma_H$ defined by
$$\sigma_H(\mathbf{a}, \mathbf{b}) = 1 - \frac{1}{n} \sum_{i=1}^{n} |a_i - b_i|, \quad \forall \mathbf{a}, \mathbf{b} \in [0,1]^n,$$
then $\mathcal{V}_Z^M$ performs $(2n+1)k$ floating-point operations during the retrieval phase.
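The noise-masking strategy with the Hamming similarity measure can be sketched as follows; the sketch assumes the zadeh_max_pafmm function from the previous section, and the names masked_recall and hamming_similarity are ours. Applied to Example 5 below, it recovers a^1 from the input (63).

```python
import numpy as np

def hamming_similarity(a, b):
    """sigma_H(a, b) = 1 - (1/n) * sum_i |a_i - b_i|."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return 1.0 - np.abs(a - b).mean()

def masked_recall(A_cols, x, memory=zadeh_max_pafmm, sim=hamming_similarity):
    """V^M(x): pick the fundamental memory a^gamma most similar to x,
    mask the input as x v a^gamma, and present the masked vector to the memory."""
    A = np.asarray(A_cols, dtype=float)            # (n, k)
    x = np.asarray(x, dtype=float)                 # (n,)
    gamma = int(np.argmax([sim(x, A[:, xi]) for xi in range(A.shape[1])]))
    return memory(A, np.maximum(x, A[:, gamma]))   # dilative masking x v a^gamma
```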
Example 5. 
We consider the fundamental memory set $\mathcal{A}$ given by (27) but let the input fuzzy set be
$$\mathbf{x} = [0.3 \;\; 0.4 \;\; 0.8 \;\; 0.7]^T.$$
We point out that $\mathbf{x}$ is obtained by introducing mixed noise into fundamental memory $\mathbf{a}^1$. Specifically, we have $\mathbf{x} = \mathbf{a}^1 + [-0.1 \;\; 0.1 \;\; 0.1 \;\; 0.5]^T$. Because a max-C PAFMM exhibits tolerance only to dilative noise, it is not able to retrieve fundamental memory $\mathbf{a}^1$. For example, Zadeh's max-C PAFMM yields $\mathcal{V}_Z(\mathbf{x}) = [0.0 \;\; 0.0 \;\; 0.0 \;\; 0.0]^T$, which is none of the fundamental memories. In other words, Zadeh's max-C PAFMM fails to retrieve the desired fundamental memory $\mathbf{a}^1$. However, the Hamming similarity measure $\sigma_H$ given by (62) results in
$$\sigma_H(\mathbf{x}, \mathbf{a}^1) = 0.8, \quad \sigma_H(\mathbf{x}, \mathbf{a}^2) = 0.775, \quad \text{and} \quad \sigma_H(\mathbf{x}, \mathbf{a}^3) = 0.625.$$
Hence, $\gamma = 1$ satisfies (59). Furthermore, using the noise masking strategy, the max-C PAFMM $\mathcal{V}_Z^M$ yields
$$\mathcal{V}_Z^M(\mathbf{x}) = \mathcal{V}_Z(\mathbf{x} \vee \mathbf{a}^1) = \mathcal{V}_Z\big([0.4 \;\; 0.4 \;\; 0.8 \;\; 0.7]^T\big) = [0.4 \;\; 0.3 \;\; 0.7 \;\; 0.2]^T = \mathbf{a}^1.$$
In conclusion, the max-C PAFMM V Z with the noise masking strategy perfectly recalls original fundamental memory a 1 . In light of this example, we only consider fuzzy morphological associative memories with the noise masking strategy in the following section.

7. Computational Experiments

Inspired by the auto-associative memory-based classifiers described in [13,15], we propose the following auto-associative memory-based classifier for face images. Suppose we have a training dataset with $k_i$ different face images from an individual $i$, for $i = 1, \ldots, c$. Each face image is encoded into a column vector $\mathbf{a}^{\xi, i} \in [0,1]^n$, where $i \in \{1, \ldots, c\}$ and $\xi \in \{1, \ldots, k_i\}$. We address below two approaches to encode face images into $[0,1]^n$. For now, we let $\mathcal{M}_i$ denote an auto-associative memory designed for the storage of the fundamental memory set $\mathcal{A}_i = \{\mathbf{a}^{1,i}, \ldots, \mathbf{a}^{k_i,i}\} \subseteq [0,1]^n$ composed of all training images from individual $i \in \{1, \ldots, c\}$. Given an unknown face image, we also encode it into a column vector $\mathbf{x} \in [0,1]^n$ using the same procedure as for the training images. Then, we present $\mathbf{x}$ as input to the auto-associative memories $\mathcal{M}_i$. Finally, we assign the unknown face image to the first individual $\gamma$ such that
$$\sigma\big(\mathbf{x}, \mathcal{M}_\gamma(\mathbf{x})\big) \geq \sigma\big(\mathbf{x}, \mathcal{M}_i(\mathbf{x})\big), \quad \forall i = 1, \ldots, c,$$
where σ denotes a fuzzy similarity measure. In other words, x belongs to an individual such that the recalled vector is the most similar to the input.
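A compact sketch of the decision rule (66) is given below; it assumes one recall function per individual (for instance, Zadeh's max-C PAFMM with noise masking from Section 6) together with the Hamming similarity from the earlier sketches, and the function name classify_face is ours.

```python
def classify_face(memories, x, sim=hamming_similarity):
    """Assign the encoded face x to the individual whose auto-associative
    memory M_i recalls the vector most similar to x, as in (66).
    'memories' maps each individual label to a recall function M_i."""
    best_label, best_sim = None, -1.0
    for label, M_i in memories.items():
        s = sim(x, M_i(x))
        if s > best_sim:
            best_label, best_sim = label, s
    return best_label

# e.g., one Zadeh max-C PAFMM with noise masking per individual:
# memories = {i: (lambda A=A_i: lambda x: masked_recall(A, x))()
#             for i, A_i in training_sets.items()}
```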
In our experiments, we used in (66) the Hamming similarity measure defined by (62). Furthermore, a face image was encoded into a column-vector x [ 0 , 1 ] n using either one of the following two approaches:
  • A pre-processing approach given by the sequence of MATLAB-style commands: rgb2gray (This command is applied only if the input is a color face image in the RGB color space), im2double, imresize, and reshape. We point out that we resized the images according to the dimensions used by Feng et al. [68].
  • A convolutional neural network as a feature extractor followed by a data transformation. Specifically, we used the python package face_recognition (Available at https://github.com/ageitgey/face_recognition, accessed on 11 October 2023), which is based on the Dlib library [69]. Package face_recognition includes a pre-trained ResNet network designed to extract features for face recognition. The ResNet feature extractor maps a face image into a vector $\mathbf{v} \in \mathbb{R}^{128}$. We obtained $\mathbf{x} \in [0,1]^n$ by applying the following transformation, where $\mu_i$ and $\sigma_i$ denote the mean and standard deviation of the $i$th component of all 128-dimensional training vectors (see the sketch after this list):
    $$x_i = \frac{1}{1 + e^{-(v_i - \mu_i)/\sigma_i}}, \quad i = 1, \ldots, 128.$$
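The transformation in the second item can be sketched as follows, where V_train is assumed to be an array whose rows are the 128-dimensional training descriptors (the function name encode_features is ours).

```python
import numpy as np

def encode_features(V_train, v):
    """Logistic squashing of a ResNet descriptor v into [0,1]^128 using the
    component-wise mean and standard deviation of the training descriptors."""
    mu = V_train.mean(axis=0)
    sigma = V_train.std(axis=0)
    return 1.0 / (1.0 + np.exp(-(v - mu) / sigma))
```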
Apart from the literature models, we only consider the min-$D_L$ AFMM and Zadeh's max-C PAFMM. We recall that the AFMMs based on Lukasiewicz connectives outperformed many other AFMMs in experiments concerning the retrieval of gray-scale images [9]. Furthermore, the min-$D_L$ AFMM can be obtained from the morphological auto-associative memory of Ritter et al. using thresholds [9,27]. As pointed out in Section 5, Zadeh's PAFMM is expected to exhibit a larger tolerance to dilative noise than the other max-C PAFMMs. Thus, we synthesized the classifiers based on the min-D AFMM $\mathcal{M}_L^M$ and Zadeh's max-C PAFMM $\mathcal{V}_Z^M$, both equipped with the noise masking strategy described by (60) and (62). These classifiers, combined with the first encoding strategy listed above, are denoted, respectively, by Resized+$\mathcal{M}_L^M$ and Resized+$\mathcal{V}_Z^M$. Similarly, we refer to ResNet+$\mathcal{M}_L^M$ and ResNet+$\mathcal{V}_Z^M$ as the fuzzy associative memory-based classifiers combined with the second approach listed above.
Performances of the M L M - and V Z M -based classifiers have been compared with the following approaches from the literature: sparse representation classifier (SRC) [70], linear regression-based classifier (LRC) [71], collaborative representation-based classifier (CRC) [72], fast superimposed sparse parameter (FSSP(1)) classifier [68], and the deep ResNet network available at the python face_recognition package. We point out that face recognition methods SRC, LRC, CRC, and FSSP(1) are included for comparison purposes because they use subspace projection methods, which somewhat resemble the proposed PAFMM classifier. In contrast, the ResNet is among the state-of-the-art deep neural networks for image classification tasks. According to Dlib package developers, ResNet achieved a 99.38% accuracy on the standard “Labeled Faces in the Wild” benchmark, a performance comparable to that of other state-of-the-art approaches. Despite the many other face recognition methods in the literature, these are representative approaches for highlighting the potential application of the proposed projection auto-associative fuzzy morphological memories.

7.1. Face Recognition with Expressions and Pose

Face recognition has been an active research topic in pattern recognition and computer vision due to its applications in human–computer interaction, security, access control, and others [15,68,72]. To evaluate the performance of the V Z M -based classifier, we conducted experiments using three standard face databases, namely the Georgia Tech Face Database (GT) [73], the AT&T Face Database [74], and the AR Face Image Database [75]. These face databases incorporate pose, illumination, and gesture alterations.
  • The Georgia Tech (GT) Face Database contains face images of 50 individuals taken in two or three sessions at the Center for Signal and Image Processing at the Georgia Institute of Technology [73]. These images, up to 15 per individual, show frontal and tilted faces with different facial expressions, lighting conditions, and scales. In this paper, we used the cropped images in the GT dataset. Figure 2 presents 15 facial images of one individual from the GT database. As pointed out previously, the color images from the cropped GT database were converted into gray-scale face images and resized to 30 × 40 pixels before being presented to the classifiers SRC, LRC, CRC, FSSP(1), Resized+ M L M , and Resized+ V Z M .
  • The AT&T database, formerly known as the ORL database of faces, has 10 different images for each of 40 distinct individuals [74]. All face images are in up-right and frontal positions. The ten images of an individual are shown in Figure 3 for illustrative purposes. For classifiers SRC, LRC, CRC, FSSP(1), Resized+ M L M , and Resized+ V Z M , the face images of the AT&T database were resized to 28 × 23 pixels.
  • The AR face image database contains over 4000 facial images from 126 individuals [75]. For each individual, 26 images were taken in two sessions, separated by two weeks. The face images feature different facial expressions, illumination changes, and occlusions. Our experiments used a subset of the cropped AR face image database with face images of 100 individuals. Furthermore, we only considered each individual’s eight facial images with different expressions (normal, smile, anger, and scream). The eight face images of one individual of the AR database are shown in Figure 4. Finally, we point out that the images in the AR database were converted to gray-scale images and resized to 50 × 40 pixels for classifiers SRC, LRC, CRC, FSSP(1), Resized+ M L M , and Resized+ V Z M .
Figure 2. Images of one individual from the GT face database.
Figure 3. Images of one individual from the AT&T face database.
Figure 4. Some images of one individual from the AR face database.
For the GT and the AT&T face databases, we followed the "first N" scheme adopted by Feng et al. [68]. Each person's first N face images were used as the training set. The remaining face images of each individual were used for testing. The number N varied according to the computational experiments described in [68]. As for the AR face image database, we also followed the same evaluation protocol described in [68]: three facial expressions were used for training (e.g., normal, smile, and anger), while the remainder were used for testing (e.g., scream). Table 3, Table 4 and Table 5 list the recognition rates (RRs) yielded by the classifiers. These tables also provide the average recognition rate (ARR) of a given scenario. For a visual interpretation of the overall performance of the classifiers, Figure 5a shows the box plot comprising the normalized recognition rates listed in Table 3, Table 4 and Table 5. The normalized recognition rates were obtained by subtracting and dividing the values in Table 3, Table 4 and Table 5, respectively, by the column-wise mean and standard deviation. Furthermore, Figure 5b shows the Hasse diagram obtained from the outcome of the Wilcoxon signed-ranks test comparing any two classifiers with a confidence level of 95% using all the recognition rates listed in Table 3, Table 4 and Table 5 [76,77,78]. Specifically, two classifiers were connected by an edge if the test rejected the null hypothesis that the two classifiers perform equally well against the alternative hypothesis that the recognition rates of the classifier at the top are significantly larger than the recognition rates of the classifier at the bottom of the edge. In other words, the method at the top outperformed the method at the bottom of an edge. Also, we refrained from including the edges that can be derived from transitivity. For example, from Figure 5b, we see that ResNet+$\mathcal{V}_Z^M$ is above ResNet, and ResNet is above FSSP(1). Thus, we deduce that the ResNet+$\mathcal{V}_Z^M$-based classifier outperformed FSSP(1) in these experiments.
In conclusion, Figure 5 shows that the ResNet+$\mathcal{V}_Z^M$- and ResNet+$\mathcal{M}_L^M$-based classifiers outperformed all other classifiers, including the ResNet and FSSP(1) models, for the recognition of uncorrupted face images. We recall, however, that Zadeh's PAFMM $\mathcal{V}_Z^M$ is computationally cheaper than the AFMM $\mathcal{M}_L^M$. Let us now evaluate the performance of the classifiers in the presence of noise.

7.2. Face Recognition in the Presence of Noise

In many practical situations, captured images are susceptible to different levels of noise and blurring effects. According to Gonzalez and Woods [79], the principal noise sources in digital images arise during image acquisition or transmission. The performance of imaging sensors is affected by various factors, such as environmental conditions and the quality of the sensing elements. For instance, Gaussian noise arises in an image due to electronic circuit noise and sensor noise factors, while transmission errors cause salt and pepper noise. A blurred image may arise when the camera is out of focus, or relative motion exists between the camera and objects in the scene. Figure 6 displays undistorted and corrupted versions of an image from the AT&T face database. The noise images were obtained by introducing salt and pepper noise with probability ρ = 0.05 , Gaussian noise with a mean zero, variance σ 2 = 0.01 , and a horizontal motion of nine pixels (blurred images).
In order to simulate real-world conditions, we evaluated the performance of the classifiers when the training images were undistorted but the test images were corrupted by noise. Specifically, the test images were corrupted by the following kinds of noise (a minimal simulation of these corruption models is sketched after the list):
  • Salt and pepper noise with probability ρ [ 0 , 0.5 ] ;
  • Gaussian noise with mean 0 and variance σ 2 [ 0 , 0.5 ] ;
  • Horizontal motion whose number of pixels varied from 1 to 20.
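The sketch below, which is ours and assumes grayscale images stored as NumPy arrays with intensities in [0, 1], illustrates one simple way to generate the three corruption models; it is not necessarily the exact procedure used to produce the test images.

```python
import numpy as np

rng = np.random.default_rng(0)

def salt_and_pepper(img, rho=0.05):
    """Replace each pixel, with probability rho, by 0 or 1 (equally likely)."""
    out = img.copy()
    mask = rng.random(img.shape) < rho
    out[mask] = rng.integers(0, 2, size=int(mask.sum()))
    return out

def gaussian_noise(img, var=0.01):
    """Add zero-mean Gaussian noise with the given variance and clip to [0, 1]."""
    return np.clip(img + rng.normal(0.0, np.sqrt(var), img.shape), 0.0, 1.0)

def horizontal_motion(img, length=9):
    """Blur each row with a uniform horizontal kernel of the given length."""
    kernel = np.ones(length) / length
    return np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, img)

img = rng.random((112, 92))        # placeholder array with the AT&T image size
corrupted = [salt_and_pepper(img), gaussian_noise(img), horizontal_motion(img)]
```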
In each scenario, the first five images of each individual from the AT&T face database were used for training. The remaining images, corrupted by some noise, were used for testing. Figure 7 shows the average recognition rates (ARRs) produced by the eight classifiers in 30 experiments for each noise intensity. Furthermore, Figure 8 shows the Hasse diagram of the outcome of the Wilcoxon signed-rank test comparing every pair of classifiers at a 95% confidence level. In contrast to the previous experiment with undistorted face images, the classifiers ResNet, ResNet+ V Z M , and ResNet+ M L M exhibited the worst recognition rates for corrupted input images. Moreover, we concluded the following from Figure 7 and Figure 8:
  • FSSP(1) and Resized+ V Z M were, in general, the best classifiers for the recognition of images corrupted by salt and pepper noise.
  • FSSP(1) and SRC yielded the largest recognition rates in the presence of Gaussian noise.
  • Resized+ V Z M outperformed all the other classifiers for recognizing blurred input images.
In general, Resized+ V Z M and FSSP(1) were the most robust classifiers for corrupted input images.

7.3. Computational Complexity

Let us conclude this section by analyzing the computational complexity of the classifiers considered. To this end, we let c denote the number of individuals, k be the number of training images per individual, and n be either the number of pixels of the resized face image or n = 128 for vectors encoded by ResNet.
First of all, the following inequalities rank the computational complexity of classifiers SRC, LRC, CRC, and FSSP(1) [68]:
$$O_{\mathrm{LRC}} < O_{\mathrm{CRC}} < O_{\mathrm{FSSP}} < O_{\mathrm{SRC}}.$$
Let us now compare the computational complexity of the LRC and Resized+ V Z M classifiers. On the one hand, LRC is dominated by the solution of c least-squares problems with k unknowns and n equations each. Therefore, the computational complexity of the LRC method is
$$O_{\mathrm{LRC}} = O(cnk^2).$$
On the other hand, due to the noise-masking strategy, the computational complexity of the Resized+ V Z M classifier is
$$O_{\mathrm{Resized}+V_{ZM}} = O(cnk).$$
Thus, the Resized+ V Z M classifier is computationally cheaper than LRC. According to Table 1, the Resized+ V Z M classifier is also computationally cheaper than the Resized+ M L M classifier. Therefore, the Resized+ V Z M classifier is the cheapest among the classifiers based on resized versions of the face images.
The computational effort to encode a face image into a vector of length n = 128 by a pre-trained neural network is fixed. Discarding the encoding phase, the computational complexity of the ResNet+ V Z M classifier depends only on comparisons between the encoded input and the encoded training vectors, which results in $O_{\mathrm{ResNet}+V_{ZM}} = O(cnk)$. In contrast, the computational effort of ResNet+ M L M is $O(ckn^2)$, which is certainly more expensive than that of both ResNet and ResNet+ V Z M .
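To make these orders of magnitude concrete, the following back-of-the-envelope computation uses illustrative values that roughly match the AT&T setting of Section 7.2 (c = 40 individuals and k = 5 training images each); the vector lengths n = 1024 for resized images and n = 128 for ResNet encodings are our assumptions, and the constants hidden by the O-notation are ignored.

```python
# Rough operation counts under the illustrative values stated above.
c, k = 40, 5
n_resized, n_resnet = 1024, 128

print("LRC           ~", c * n_resized * k**2)   # O(c n k^2) -> 1,024,000
print("Resized+V_ZM  ~", c * n_resized * k)      # O(c n k)   ->   204,800
print("ResNet+V_ZM   ~", c * n_resnet * k)       # O(c n k)   ->    25,600
print("ResNet+M_LM   ~", c * k * n_resnet**2)    # O(c k n^2) -> 3,276,800
```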
In conclusion, V Z M -based classifiers are competitive with state-of-the-art approaches because they exhibit a graceful balance between accuracy and computational cost.

8. Concluding Remarks

In this paper, we investigated max-C and min-D projection auto-associative fuzzy morphological memories (max-C and min-D PAFMMs), which are defined on an algebraic structure called a residuated complete lattice-ordered groupoid with left identity (R-clogli) and its dual version. Briefly, PAFMMs are non-distributed versions of the well-known max-C and min-D auto-associative fuzzy morphological memories (max-C and min-D AFMMs) [5,22]. Specifically, a PAFMM projects the input vector onto the family of all max-C (or min-D) combinations of the stored items. Moreover, max-C and min-D PAFMMs are more robust to dilative or erosive noise than the AFMMs. In addition, PAFMMs are computationally cheaper than AFMMs whenever the number k of stored items is less than their length n.
Apart from a detailed discussion of the PAFMM models, we focused on the particular model referred to as Zadeh’s max-C PAFMM because it is obtained by considering Zadeh’s fuzzy inclusion measure. Zadeh’s max-C and min-D PAFMMs are the PAFMMs most robust to either dilative or erosive noise. On the downside, they are susceptible to mixed noise. In order to improve the tolerance of Zadeh’s PAFMMs to mixed noise, we proposed a variation of the noise masking strategy of Urcid and Ritter [40] using a fuzzy similarity measure.
Finally, experimental results using three well-known face databases confirmed the potential of Zadeh’s max-C PAFMM for face recognition. Specifically, using the Wilcoxon signed-rank test, we concluded that ResNet+ V Z M , which combines ResNet encoding with Zadeh’s max-C PAFMM classifier, outperformed important classifiers from the literature, including ResNet [80], LRC [71], CRC [72], and FSSP(1) [68], for the recognition of undistorted face images. Furthermore, the experiments revealed that the Resized+ V Z M classifier performs as well as the FSSP(1) method [68] but requires far fewer computational resources.
In the future, we intend to investigate further applications of max-C and min-D PAFMMs. In particular, we plan to study further the combinations of deep neural networks and these fuzzy associative memories. We also plan to generalize Zadeh’s PAFMMs to more general complete lattices.

Author Contributions

Conceptualization, A.S.d.S. and M.E.V.; methodology, A.S.d.S. and M.E.V.; software, A.S.d.S. and M.E.V.; validation, A.S.d.S. and M.E.V.; formal analysis, A.S.d.S. and M.E.V.; investigation, A.S.d.S. and M.E.V.; writing—original draft, A.S.d.S. and M.E.V.; writing—review and editing, A.S.d.S. and M.E.V.; supervision, M.E.V.; funding acquisition, A.S.d.S. and M.E.V. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) under Programa de Formação Doutoral Docente, National Council for Scientific and Technological Development (CNPq) under grant no. 315820/2021-7 and São Paulo Research Foundation (FAPESP) under grant no. 2019/02278-2 and 2022/01831-2.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Proof of the Theorems

To simplify the exposition, let us first prove Theorem 3.
Proof of Theorem 3. 
We prove only the first part of Theorem 3; the second part can be derived in a similar manner. Let $\mathbf{z} \in \mathcal{C}(A)$ be a max-C combination of $\mathbf{a}^1, \ldots, \mathbf{a}^k$ and consider the index sets $N = \{1, \ldots, n\}$ and $K = \{1, \ldots, k\}$. Since $\langle [0,1], \vee, \wedge, C, R \rangle$ is an R-clogli, we have
$$ \mathbf{z} \leq \mathbf{x} \iff \bigvee_{\xi=1}^{k} C(\lambda_\xi, a_j^\xi) \leq x_j, \quad \forall j \in N, $$
$$ \iff C(\lambda_\xi, a_j^\xi) \leq x_j, \quad \forall \xi \in K, \; \forall j \in N, $$
$$ \iff \lambda_\xi \leq R(a_j^\xi, x_j), \quad \forall j \in N, \; \forall \xi \in K, $$
$$ \iff \lambda_\xi \leq \bigwedge_{j=1}^{n} R(a_j^\xi, x_j), \quad \forall \xi \in K. $$
Thus, the largest max-C combination $\mathbf{z} = \bigvee_{\xi=1}^{k} C(\lambda_\xi, \mathbf{a}^\xi)$ such that $\mathbf{z} \leq \mathbf{x}$ is obtained by taking $\lambda_\xi = \bigwedge_{j=1}^{n} R(a_j^\xi, x_j)$ for all $\xi \in K$. □
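For illustration, the following minimal sketch, which is ours and not part of the original paper, instantiates this construction with the minimum fuzzy conjunction $C(\lambda, a) = \min(\lambda, a)$ and its residual implication $R(a, x) = 1$ if $a \leq x$ and $R(a, x) = x$ otherwise (the Gödel implication): the parameters $\lambda_\xi$ follow the formula just derived, and the recalled vector is the largest max-C combination of the stored items below the input.

```python
import numpy as np

def goedel_R(a, x):
    """Residual implication of the minimum conjunction: R(a, x) = 1 if a <= x, else x."""
    return np.where(a <= x, 1.0, x)

def max_c_pafmm_recall(A, x):
    """A: k x n array whose rows are the stored fuzzy sets a^xi; x: input in [0, 1]^n."""
    lam = goedel_R(A, x).min(axis=1)                # lambda_xi = min_j R(a_j^xi, x_j)
    return np.minimum(lam[:, None], A).max(axis=0)  # join of the C(lambda_xi, a^xi)

A = np.array([[0.2, 0.8, 0.5],
              [0.6, 0.3, 0.9]])
x = np.array([0.25, 0.75, 0.55])      # a corrupted version of the first stored item
y = max_c_pafmm_recall(A, x)
print(y)                              # componentwise <= x (anti-extensive, Theorem 1)
print(max_c_pafmm_recall(A, y))       # equal to y (idempotent, Theorem 1)
```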
Proof of Theorem 1. 
Given the fundamental memory set $A = \{\mathbf{a}^1, \ldots, \mathbf{a}^k\}$, we define $\delta \colon [0,1]^k \to [0,1]^n$ and $\varepsilon \colon [0,1]^n \to [0,1]^k$ by means of the equations
$$ \delta(\boldsymbol{\lambda}) = \bigvee_{\xi=1}^{k} C(\lambda_\xi, \mathbf{a}^\xi) \quad \text{and} \quad \varepsilon(\mathbf{x}) = \boldsymbol{\lambda}, $$
where the components of $\boldsymbol{\lambda} = [\lambda_1, \ldots, \lambda_k]^T$ are given by (35). From the proof of Theorem 3, we conclude that $(\varepsilon, \delta)$ is an adjunction. Furthermore, we have $V(\mathbf{x}) = \delta(\varepsilon(\mathbf{x}))$ for all $\mathbf{x} \in [0,1]^n$. Therefore, $V$ is an opening from mathematical morphology [21,43]. In particular, $V$ is anti-extensive and idempotent, that is, $V(\mathbf{x}) \leq \mathbf{x}$ and $V(V(\mathbf{x})) = V(\mathbf{x})$ for all $\mathbf{x} \in [0,1]^n$. In a similar fashion, we can show that $S$ is a closing from mathematical morphology. Therefore, we have $\mathbf{x} \leq S(\mathbf{x})$ and $S(S(\mathbf{x})) = S(\mathbf{x})$ for all $\mathbf{x} \in [0,1]^n$. □
Proof of Theorem 2. 
First of all, we recall that, in an R-clogli, the fuzzy conjunction $C$ has a left identity; that is, there exists $e \in [0,1]$ such that $C(e, x) = x$ for all $x \in [0,1]$. Moreover, the identity $C(0, x) = 0$ holds for all $x \in [0,1]$. Therefore, we can express a fundamental memory $\mathbf{a}^\xi$ as the following max-C combination:
$$ \mathbf{a}^\xi = C(0, \mathbf{a}^1) \vee \cdots \vee C(e, \mathbf{a}^\xi) \vee \cdots \vee C(0, \mathbf{a}^k). $$
In other words, we have $\mathbf{a}^\xi \in \mathcal{C}(A)$. From (33), we conclude that $V(\mathbf{a}^\xi) = \bigvee \{\mathbf{z} \in \mathcal{C}(A) : \mathbf{z} \leq \mathbf{a}^\xi\} = \mathbf{a}^\xi$ for all $\xi = 1, \ldots, k$. Similarly, a min-D PAFMM satisfies $S(\mathbf{a}^\xi) = \mathbf{a}^\xi$ for all $\xi = 1, \ldots, k$ if $\langle [0,1], \vee, \wedge, D, S \rangle$ is a dual R-clogli. □
Proof of Theorem 4. 
Let us only show (39); the second part of the theorem is derived in a similar manner. First, we recall that a strong negation is a decreasing operator. Thus, since the negation of the minimum is the maximum of the negations, we conclude from (36) and (16) that
$$ S^*(\mathbf{x}) = \eta\big(S(\eta(\mathbf{x}))\big) = \eta\left( \bigwedge_{\xi=1}^{k} D(\theta_\xi, \mathbf{a}^\xi) \right) = \bigvee_{\xi=1}^{k} \eta\big( D(\theta_\xi, \mathbf{a}^\xi) \big) = \bigvee_{\xi=1}^{k} C(\lambda_\xi^*, \mathbf{b}^\xi), $$
where $\lambda_\xi^* = \eta(\theta_\xi)$ satisfies the following identities:
$$ \lambda_\xi^* = \eta\left( \bigvee_{j=1}^{n} S\big(a_j^\xi, \eta(x_j)\big) \right) = \bigwedge_{j=1}^{n} \eta\big( S(a_j^\xi, \eta(x_j)) \big) = \bigwedge_{j=1}^{n} R\big( \eta(a_j^\xi), x_j \big) = \bigwedge_{j=1}^{n} R\big( b_j^\xi, x_j \big). $$
From the diagram depicted in Figure 1 and from (35), we conclude that $S^*$ is the max-C PAFMM designed for the storage of $\mathbf{b}^1, \ldots, \mathbf{b}^k$. □
Proof of Theorem 5. 
We consider the fundamental memory set $A = \{\mathbf{a}^1, \ldots, \mathbf{a}^k\} \subseteq [0,1]^n$ and $K = \{1, \ldots, k\}$. If there exists a unique $\gamma \in K$ such that $\mathbf{a}^\gamma \leq \mathbf{x}$, then the index set given by (57) is equal to
$$ I = \{\xi : a_j^\xi \leq x_j, \; \forall j = 1, \ldots, n\} = \{\gamma\}. $$
Therefore,
$$ V_Z(\mathbf{x}) = \bigvee_{\xi \in I} \mathbf{a}^\xi = \mathbf{a}^\gamma. $$
Analogously, we can prove the second part of this theorem. □
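As a small numeric illustration, which is ours, the sketch below encodes Zadeh’s max-C PAFMM recall as the join of the stored fuzzy sets lying below the input; when exactly one stored item $\mathbf{a}^\gamma$ satisfies $\mathbf{a}^\gamma \leq \mathbf{x}$, the memory retrieves $\mathbf{a}^\gamma$ exactly, as stated in Theorem 5.

```python
import numpy as np

def zadeh_pafmm_recall(A, x):
    """Join of the stored rows a^xi of A with a^xi <= x componentwise (the index set I)."""
    I = np.all(A <= x, axis=1)
    if not I.any():
        return np.zeros_like(x)   # empty index set: the join reduces to the zero vector
    return A[I].max(axis=0)

A = np.array([[0.2, 0.7, 0.4],
              [0.5, 0.1, 0.8]])
x = np.array([0.3, 0.8, 0.4])     # only the first stored item lies below x
print(zadeh_pafmm_recall(A, x))   # -> [0.2 0.7 0.4], i.e., a^1 is retrieved exactly
```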

References

  1. Hassoun, M.H.; Watta, P.B. Associative Memory Networks. In Handbook of Neural Computation; Fiesler, E., Beale, R., Eds.; Oxford University Press: Oxford, UK, 1997; pp. C1.3:1–C1.3:14. [Google Scholar]
  2. Kohonen, T. Self-Organization and Associative Memory, 3rd ed.; Springer: New York, NY, USA, 1989. [Google Scholar]
  3. Hopfield, J.; Tank, D. Neural computation of decisions in optimization problems. Biol. Cybern. 1985, 52, 141–152. [Google Scholar] [CrossRef] [PubMed]
  4. Serpen, G. Hopfield Network as Static Optimizer: Learning the Weights and Eliminating the Guesswork. Neural Process. Lett. 2008, 27, 1–15. [Google Scholar] [CrossRef]
  5. Valle, M.E.; Sussner, P. Storage and Recall Capabilities of Fuzzy Morphological Associative Memories with Adjunction-Based Learning. Neural Netw. 2011, 24, 75–90. [Google Scholar] [CrossRef] [PubMed]
  6. Sussner, P.; Schuster, T. Interval-valued fuzzy morphological associative memories: Some theoretical aspects and applications. Inf. Sci. 2018, 438, 127–144. [Google Scholar] [CrossRef]
  7. Kusumadewi, S.; Rosita, L.; Gustri Wahyuni, E. Implementation of fuzzy associative memory toward optimizing a neural network model to predict total iron binding capacity. Biomed. Signal Process. Control 2023, 86, 105297. [Google Scholar] [CrossRef]
  8. Grana, M.; Chyzhyk, D. Image Understanding Applications of Lattice Autoassociative Memories. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1920–1932. [Google Scholar] [CrossRef] [PubMed]
  9. Sussner, P.; Valle, M.E. Implicative Fuzzy Associative Memories. IEEE Trans. Fuzzy Syst. 2006, 14, 793–807. [Google Scholar] [CrossRef]
  10. Sussner, P.; Ali, M. Image filters as reference functions for morphological associative memories in complete inf-semilattices. Mathw. Soft Comput. 2017, 24, 53–62. [Google Scholar]
  11. Esmi, E.; Sussner, P.; Bustince, H.; Fernandez, J. Theta-Fuzzy Associative Memories (Theta-FAMs). IEEE Trans. Fuzzy Syst. 2015, 23, 313–326. [Google Scholar] [CrossRef]
  12. Esmi, E.; Sussner, P.; Sandri, S. Tunable equivalence fuzzy associative memories. Fuzzy Sets Syst. 2016, 292, 242–260. [Google Scholar] [CrossRef]
  13. Sussner, P.; Valle, M.E. Grayscale Morphological Associative Memories. IEEE Trans. Neural Netw. 2006, 17, 559–570. [Google Scholar] [CrossRef]
  14. Valle, M.E.; Souza, A.C. Pattern Classification using Generalized Recurrent Exponential Fuzzy Associative Memories. In Handbook of Fuzzy Sets Comparison: Theory, Algorithms and Applications; George, A., Papakostas, A.G.H., Kaburlasos, V.G., Eds.; Science Gate Publishing: Xanthi, Greece, 2016; Volume 6, Chapter 4; pp. 79–102. [Google Scholar] [CrossRef]
  15. Zhang, B.L.; Zhang, H.; Ge, S.S. Face Recognition by Applying Wavelet Subband Representation and Kernel Associative Memory. IEEE Trans. Neural Netw. 2004, 15, 166–177. [Google Scholar] [CrossRef] [PubMed]
  16. Ramsauer, H.; Schäfl, B.; Lehner, J.; Seidl, P.; Widrich, M.; Gruber, L.; Holzleitner, M.; Pavlović, M.; Sandve, G.K.; Greiff, V.; et al. Hopfield Networks is All You Need. arXiv 2020, arXiv:2008.02217. [Google Scholar]
  17. Salvatori, T.; Song, Y.; Hong, Y.; Frieder, S.; Sha, L.; Xu, Z.; Bogacz, R.; Lukasiewicz, T. Associative Memories via Predictive Coding. arXiv 2021, arXiv:2109.08063. [Google Scholar]
  18. Millidge, B.; Salvatori, T.; Song, Y.; Lukasiewicz, T.; Bogacz, R. Universal Hopfield Networks: A General Framework for Single-Shot Associative Memory Models. Proc. Mach. Learn Res. 2022, 162, 15561–15583. [Google Scholar] [PubMed]
  19. Kosko, B. Neural Networks and Fuzzy Systems: A Dynamical Systems Approach to Machine Intelligence; Prentice Hall: Englewood Cliffs, NJ, USA, 1992. [Google Scholar]
  20. Goguen, J.A. L-fuzzy sets. J. Math. Anal. Appl. 1967, 18, 145–174. [Google Scholar] [CrossRef]
  21. Heijmans, H. Morphological Image Operators; Academic Press: New York, NY, USA, 1994. [Google Scholar]
  22. Valle, M.E.; Sussner, P. A General Framework for Fuzzy Morphological Associative Memories. Fuzzy Sets Syst. 2008, 159, 747–768. [Google Scholar] [CrossRef]
  23. Sussner, P.; Esmi, E.L. Morphological Perceptrons with Competitive Learning: Lattice-Theoretical Framework and Constructive Learning Algorithm. Inf. Sci. 2011, 181, 1929–1950. [Google Scholar] [CrossRef]
  24. Junbo, F.; Fan, J.; Yan, S. A learning rule for fuzzy associative memories. In Proceedings of the IEEE International Joint Conference on Neural Networks, Orlando, FL, USA, 28 June–2 July 1994; Volume 7, pp. 4273–4277. [Google Scholar]
  25. Liu, P. The Fuzzy Associative Memory of Max-Min Fuzzy Neural Networks with Threshold. Fuzzy Sets Syst. 1999, 107, 147–157. [Google Scholar] [CrossRef]
  26. Bělohlávek, R. Fuzzy logical bidirectional associative memory. Inf. Sci. 2000, 128, 91–103. [Google Scholar] [CrossRef]
  27. Ritter, G.X.; Sussner, P.; de Leon, J.L.D. Morphological Associative Memories. IEEE Trans. Neural Netw. 1998, 9, 281–293. [Google Scholar] [CrossRef] [PubMed]
  28. Vajgl, M.; Perfilieva, I. Associative memory in combination with the F-Transform based image reduction. In Proceedings of the 2015 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Istanbul, Turkey, 2–5 August 2015; pp. 1–6. [Google Scholar] [CrossRef]
  29. Bui, T.D.; Nong, T.H.; Dang, T.K. Improving learning rule for fuzzy associative memory with combination of content and association. Neurocomputing 2015, 149, 59–64. [Google Scholar] [CrossRef]
  30. Perfilieva, I.; Vajgl, M. Autoassociative Fuzzy Implicative Memory on the Platform of Fuzzy Preorder. In Proceedings of the Conference of the International Fuzzy Systems Association and the European Society for Fuzzy Logic and Technology (IFSA-EUSFLAT-15), Gijón, Spain, 30 June 2015; pp. 1598–1603. [Google Scholar]
  31. Perfilieva, I.; Vajgl, M. Data Retrieval and Noise Reduction by Fuzzy Associative Memories. In Proceedings of the 13th International Conference on Concept Lattices and Their Applications, Moscow, Russia, 18–22 July 2016; pp. 313–324. [Google Scholar]
  32. Vajgl, M. Reduced IFAM Weight Matrix Representation Using Sparse Matrices. In Proceedings of the EUSFLAT-2017—The 10th Conference of the European Society for Fuzzy Logic and Technology and IWIFSGN’2017—The Sixteenth International Workshop on Intuitionistic Fuzzy Sets and Generalized Nets, Warsaw, Poland, 13–15 September 2017; Volume 3, pp. 463–469. [Google Scholar] [CrossRef]
  33. Valle, M.E.; Sussner, P. Quantale-based autoassociative memories with an application to the storage of color images. Pattern Recognit. Lett. 2013, 34, 1589–1601. [Google Scholar] [CrossRef]
  34. Li, L.; Pedrycz, W.; Li, Z. Development of associative memories with transformed data. Appl. Soft Comput. 2017, 61, 1141–1152. [Google Scholar] [CrossRef]
  35. Ikeda, N.; Watta, P.; Artiklar, M.; Hassoun, M.H. A two-level Hamming network for high performance associative memory. Neural Netw. 2001, 14, 1189–1200. [Google Scholar] [CrossRef] [PubMed]
  36. Santos, A.S.; Valle, M.E. Max-plus and min-plus projection autoassociative morphological memories and their compositions for pattern classification. Neural Netw. 2018, 100, 84–94. [Google Scholar] [CrossRef] [PubMed]
  37. Santos, A.S.; Valle, M.E. Uma introdução às memórias autoassociativas fuzzy de projeções max-C. In Proceedings of the Recentes Avanços em Sistemas Fuzzy. Sociedade Brasileira de Matemática Aplicada e Computacional, São Carlos, Brasil, 16–18 November 2016; Volume 1, pp. 493–502, ISBN 78-85-8215-079-5. [Google Scholar]
  38. Santos, A.S.; Valle, M.E. The Class of Max-C Projection Autoassociative Fuzzy Memories. Mathw. Soft Comput. Mag. 2017, 24, 63–73. [Google Scholar]
  39. Santos, A.S.; Valle, M.E. Some Theoretical Aspects of max-C and min-D Projection Fuzzy Autoassociative Memories. In Proceedings of the Series of the Brazilian Society of Computational and Applied Mathematics 2017 (CNMAC 2017), São José dos Campos, Brazil, 19–22 September 2017. [Google Scholar] [CrossRef]
  40. Urcid, G.; Ritter, G.X. Noise Masking for Pattern Recall Using a Single Lattice Matrix Associative Memory. In Computational Intelligence Based on Lattice Theory; Kaburlasos, V., Ritter, G., Eds.; Springer: Heidelberg, Germany, 2007; Chapter 5; pp. 81–100. [Google Scholar]
  41. Santos, A.S.; Valle, M.E. A Fast and Robust Max-C Projection Fuzzy Autoassociative Memory with an Application for Face Recognition. In Proceedings of the Brazilian Conference on Intelligent Systems 2017 (BRACIS 2017), Uberlândia, Brazil, 2–5 October 2017; pp. 306–311. [Google Scholar] [CrossRef]
  42. Birkhoff, G. Lattice Theory, 3rd ed.; American Mathematical Society: Providence, RI, USA, 1993. [Google Scholar]
  43. Heijmans, H.J.A.M. Mathematical Morphology: A Modern Approach in Image Processing Based on Algebra and Geometry. SIAM Rev. 1995, 37, 1–36. [Google Scholar] [CrossRef]
  44. Serra, J. Image Analysis and Mathematical Morphology, Volume 2: Theoretical Advances; Academic Press: New York, NY, USA, 1988. [Google Scholar]
  45. Soille, P.; Vogt, J.; Colombo, R. Carving and adaptive drainage enforcement of grid digital elevation models. Water Resour. Res. 2003, 39, 1366. [Google Scholar] [CrossRef]
  46. Davey, B.; Priestley, H. Introduction to Lattices and Order, 2nd ed.; Cambridge University Press: Cambridge, MA, USA, 2002. [Google Scholar]
  47. Blyth, T.; Janowitz, M. Residuation Theory; Pergamon Press: Oxford, UK, 1972. [Google Scholar]
  48. Belohlavek, R.; Konecny, J. Concept lattices of isotone vs. antitone Galois connections in graded setting: Mutual reducibility revisited. Inf. Sci. 2012, 199, 133–137. [Google Scholar] [CrossRef]
  49. Sussner, P. Lattice fuzzy transforms from the perspective of mathematical morphology. Fuzzy Sets Syst. 2016, 288, 115–128. [Google Scholar] [CrossRef]
  50. Maragos, P. Lattice Image Processing: A Unification of Morphological and Fuzzy Algebraic Systems. J. Math. Imaging Vis. 2005, 22, 333–353. [Google Scholar] [CrossRef]
  51. Ward, M.; Dilworth, R.P. Residuated Lattices. Trans. Am. Math. Soc. 1939, 45, 335–354. [Google Scholar] [CrossRef]
  52. Höhle, U. On the Fundamentals of Fuzzy Set Theory. J. Math. Anal. Appl. 1996, 201, 786–826. [Google Scholar] [CrossRef]
  53. Hájek, P. Metamathematics of Fuzzy Logic; Trends in Logic: Studia Logica Library; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1998; Volume 4. [Google Scholar]
  54. Mulvey, C.J. Second topology conference (Taormina, 1984). Rend. Circ. Mat. Palermo 1986, 12, 99–104. [Google Scholar]
  55. Barros, L.C.; Bassanezi, R.; Lodwick, W. First Course in Fuzzy Logic, Fuzzy Dynamical Systems, and Biomathematics: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 2017; Volume 347. [Google Scholar]
  56. Klir, G.J.; Yuan, B. Fuzzy Sets and Fuzzy Logic: Theory and Applications; Prentice Hall: Upper Saddle River, NJ, USA, 1995. [Google Scholar]
  57. Nguyen, H.T.; Walker, E.A. A First Course in Fuzzy Logic, 2nd ed.; Chapman & Hall/CRC: Boca Raton, FL, USA, 2000. [Google Scholar]
  58. Pedrycz, W.; Gomide, F. Fuzzy Systems Engineering: Toward Human-Centric Computing; Wiley-IEEE Press: New York, NY, USA, 2007. [Google Scholar]
  59. De Baets, B. Coimplicators, the forgotten connectives. Tatra Mt. Math. Publ. 1997, 12, 229–240. [Google Scholar]
  60. Sussner, P.; Valle, M.E. Classification of Fuzzy Mathematical Morphologies Based on Concepts of Inclusion Measure and Duality. J. Math. Imaging Vis. 2008, 32, 139–159. [Google Scholar] [CrossRef]
  61. Deng, T.; Heijmans, H. Grey-scale morphology based on fuzzy logic. J. Math. Imaging Vis. 2002, 16, 155–171. [Google Scholar] [CrossRef]
  62. Bandler, W.; Kohout, L. Fuzzy power sets and fuzzy implication operators. Fuzzy Sets Syst. 1980, 4, 13–30. [Google Scholar] [CrossRef]
  63. Couso, I.; Garrido, L.; Sánchez, L. Similarity and dissimilarity measures between fuzzy sets: A formal relational study. Inf. Sci. 2013, 229, 122–141. [Google Scholar] [CrossRef]
  64. De Baets, B.; De Meyer, H. Transitivity-preserving fuzzification schemes for cardinality-based similarity measures. Eur. J. Oper. Res. 2005, 160, 726–740. [Google Scholar] [CrossRef]
  65. De Baets, B.; Janssens, S.; De Meyer, H. On the transitivity of a parametric family of cardinality-based similarity measures. Int. J. Approx. Reason. 2009, 50, 104–116. [Google Scholar] [CrossRef]
  66. Fan, J.; Xie, W. Some notes on similarity measure and proximity measure. Fuzzy Sets Syst. 1999, 101, 403–412. [Google Scholar] [CrossRef]
  67. Xuecheng, L. Entropy, distance measure and similarity measure of fuzzy sets and their relations. Fuzzy Sets Syst. 1992, 52, 305–318. [Google Scholar] [CrossRef]
  68. Feng, Q.; Yuan, C.; Pan, J.S.; Yang, J.F.; Chou, Y.T.; Zhou, Y.; Li, W. Superimposed Sparse Parameter Classifiers for Face Recognition. IEEE Trans. Cybern. 2017, 47, 378–390. [Google Scholar] [CrossRef] [PubMed]
  69. King, D.E. Dlib-ml: A Machine Learning Toolkit. J. Mach. Learn. Res. 2009, 10, 1755–1758. [Google Scholar]
  70. Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust Face Recognition via Sparse Representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227. [Google Scholar] [CrossRef] [PubMed]
  71. Naseem, I.; Togneri, R.; Bennamoun, M. Linear Regression for Face Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 2106–2112. [Google Scholar] [CrossRef]
  72. Zhang, L.; Yang, M.; Feng, X. Sparse representation or collaborative representation: Which helps face recognition? In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 471–478. [Google Scholar] [CrossRef]
  73. Nefian, A.V. Georgia Tech Face Database. 2017. Available online: http://www.anefian.com/research/face_reco.htm (accessed on 11 October 2023).
  74. Cambridge, A.L. The AT&T Database of Faces. 1994. Available online: http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html (accessed on 11 October 2023).
  75. Martinez, A.M.; Benavente, R. The AR Face Database; Technical Report 24; CVC; The Ohio State University: Columbus, OH, USA, 1998. [Google Scholar]
  76. Burda, M. paircompviz: An R Package for Visualization of Multiple Pairwise Comparison Test Results. 2013. Available online: https://bioconductor.org/packages/release/bioc/html/paircompviz.html (accessed on 11 October 2023).
  77. Demšar, J. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 2006, 7, 1–30. [Google Scholar]
  78. Weise, T.; Chiong, R. An alternative way of presenting statistical test results when evaluating the performance of stochastic approaches. Neurocomputing 2015, 147, 235–238. [Google Scholar] [CrossRef]
  79. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 2nd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 2002. [Google Scholar]
  80. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
Figure 1. Relationship between fuzzy operations using adjunction and negation.
Figure 5. Box plot and Hasse diagram of the Wilcoxon signed-rank test for the face recognition task.
Figure 6. Original images from the AT&T face database and versions corrupted, respectively, by salt and pepper noise, Gaussian noise, and horizontal motion (blurred image).
Figure 7. Average recognition rate (ARR) versus noise intensity or horizontal motion.
Figure 8. Hasse diagram of the Wilcoxon signed-rank test for the recognition task from corrupted input images.
Table 1. Computational complexity in the recall phase of auto-associative memories.

Memory model       Fuzzy Operations   Comparisons   Memory Space
AFMMs M and W      O(n^2)             O(n^2)        O(n^2)
PAFMMs V and S     O(nk)              O(nk)         O(nk)
Table 3. RRs and ARRs of the classifiers on the GT face database with the “first N” scheme.

Classifier         N = 3    N = 4    N = 5    N = 6    N = 9    ARR
SRC                0.5367   0.5836   0.6240   0.7133   0.7867   0.6489
LRC                0.5183   0.5636   0.5980   0.6822   0.7833   0.6291
CRC                0.4683   0.5018   0.5420   0.6200   0.7200   0.5704
FSSP(1)            0.5600   0.6000   0.6300   0.7044   0.7800   0.6549
Resized+ M L M     0.5567   0.5709   0.6000   0.7089   0.7900   0.6453
Resized+ V Z M     0.5600   0.5782   0.5900   0.7222   0.8033   0.6507
ResNet             0.9533   0.9564   0.9560   0.9511   0.9600   0.9554
ResNet+ M L M      0.9533   0.9564   0.9580   0.9600   0.9633   0.9582
ResNet+ V Z M      0.9533   0.9600   0.9600   0.9578   0.9600   0.9582
Table 4. RRs and ARRs of the classifiers on the AT&T face database with the “first N” scheme.

Classifier         N = 3    N = 4    N = 5    N = 6    N = 7    ARR
SRC                0.8714   0.9167   0.9300   0.9500   0.9583   0.9253
LRC                0.8250   0.8583   0.9100   0.9625   0.9583   0.9028
CRC                0.8643   0.9000   0.9100   0.9187   0.9250   0.9036
FSSP(1)            0.9107   0.9417   0.9500   0.9437   0.9500   0.9392
Resized+ M L M     0.8929   0.9042   0.9300   0.9688   0.9667   0.9325
Resized+ V Z M     0.8929   0.9167   0.9500   0.9812   0.9750   0.9432
ResNet             0.9500   0.9625   0.9550   0.9625   0.9667   0.9593
ResNet+ M L M      0.9500   0.9708   0.9600   0.9625   0.9750   0.9637
ResNet+ V Z M      0.9500   0.9708   0.9650   0.9750   0.9750   0.9672
Table 5. RRs and ARRs of the classifiers on the AR face database with expressions.

Classifier         Smile    Anger    Scream   ARR
SRC                1.0000   0.9800   0.7900   0.9233
LRC                0.9950   0.9700   0.7650   0.9100
CRC                1.0000   0.9950   0.7550   0.9167
FSSP(1)            1.0000   0.9900   0.8600   0.9500
Resized+ M L M     0.9950   0.9850   0.9200   0.9667
Resized+ V Z M     0.9950   0.9800   0.9250   0.9667
ResNet             0.9950   1.0000   0.9350   0.9767
ResNet+ M L M      1.0000   1.0000   0.9200   0.9733
ResNet+ V Z M      0.9950   1.0000   0.9450   0.9800