Article

Integration of Manifold Learning and Density Estimation for Fine-Tuned Face Recognition

1 School of Automation, Jiangsu University of Science and Technology, Zhenjiang 212003, China
2 School of Information Engineering, Chang’an University, Xi’an 710000, China
3 School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(6), 765; https://doi.org/10.3390/sym16060765
Submission received: 10 April 2024 / Revised: 25 May 2024 / Accepted: 27 May 2024 / Published: 18 June 2024

Abstract:
With the rapid advancements in data analysis and the increasing complexity of high-dimensional datasets, traditional dimensionality reduction techniques like Local Linear Embedding (LLE) often face challenges in maintaining accuracy and efficiency. This research aims to overcome the limitations of LLE, specifically its reliance on the nearest neighbor concept, its inability to distinguish differences among manifold points, and its underutilization of data discrimination information. To address these issues, we propose an advanced LLE algorithm that integrates decision tree-based neighbor recognition with Gaussian kernel density estimation. Decision trees accurately determine neighboring relationships, which are then optimized using Gaussian kernel density estimation to better reflect the distribution of sample points on the manifold. The algorithm also incorporates data discrimination information to enhance classification accuracy and efficiency. Evaluations in facial recognition tasks using SVM classifiers demonstrate significant improvements. Integrating decision trees (LLE-DT) yielded accuracy gains, with LFW at 98.75%, CFP at 96.10%, and Olivetti at 92.18%. Gaussian density estimation (LLE-GDE) achieved further enhancements, most notably on LFW (99.13%), with CFP at 96.85% and Olivetti at 91.82%. Combining both methods (LLE-DT-GDE) led to substantial improvements: LFW at 99.61%, CFP at 97.23%, and Olivetti at 93.56%, highlighting the synergy between decision trees and Gaussian estimation. This advanced LLE algorithm effectively addresses the limitations of traditional approaches, showing promising results in complex data processing tasks such as facial recognition. These findings suggest its potential for broader applications in fields requiring robust data analysis and classification.

1. Introduction

The integration of manifold learning, density estimation, and sparse representation for face recognition is engineered to enhance both the robustness and precision of the identification process. By utilizing sparse representation, the system is tailored to emphasize critical features that are resilient to environmental changes and obstructions, while manifold learning elucidates the intrinsic geometric structure of facial data, improving adaptability to various facial expressions and orientations. This amalgamation aims to significantly mitigate the challenges posed by variations in lighting, pose, expression, and occlusion, ensuring a robust framework capable of accurate face recognition under diverse conditions. Density estimation further refines this process by precisely determining the class membership of test samples based on their distribution within the feature space, thereby improving classification accuracy.
Moreover, this approach addresses the computational challenges associated with high-dimensional face data by reducing their dimensionality, thus simplifying the computational process and enhancing the system’s efficiency. The adaptive feature selection mechanism ensures the system focuses on the most relevant features, making the recognition process not only discriminative but also efficient. The integrated method’s capability to distinguish between known and unknown faces, coupled with its scalability and generalization across different datasets and conditions, marks a significant advancement in face recognition technology. It promises a scalable, reliable, and versatile solution that adapts seamlessly to a wide range of applications, setting a new benchmark for accuracy and adaptability in the field of face recognition.
Amidst the big data era, the challenges brought by high-dimensional data in areas like machine learning, data compression, pattern recognition, and data visualization have emerged as key areas of focus [1,2]. Especially in advanced pattern recognition domains like facial recognition, the escalation in data dimensions leads to an exponential increase in the needed sample size for efficacious statistical learning and feature identification [3,4]. As a result, the challenge of effectively learning and processing with limited high-dimensional samples (such as facial images with various angles and expressions) for more precise facial classification, feature denoising, and identity tracking has become a focal point of research.
Manifold learning algorithms constitute a set of effective nonlinear dimensionality reduction techniques [5,6]. These algorithms usually include building a neighborhood graph of data samples, a critical step in their execution [7,8]. In the conventional approach to constructing neighborhood graphs, methods such as k-nearest neighbors and ε-nearest neighbors are frequently used [9,10,11]. However, a significant drawback of these methods is their dependence on a predefined fixed number of neighbors, uniformly applied across all samples without taking into account the distinct neighborhood distribution features of each sample [12,13]. The concept of adaptive neighborhoods was introduced to address this limitation [14]. It centers around dynamically determining the size of the neighborhood based on the data distribution around the samples. In applications like face recognition within unsupervised learning, typical approaches encompass dimensionality reduction and density estimation. Techniques for dimensionality reduction, including Principal Component Analysis [15,16], Kernel-based PCA [17], and Multidimensional Scaling [18], effectively transform high-dimensional facial data into lower-dimensional formats, reducing computational complexity and preserving the essential information of facial features [19,20,21]. Furthermore, density estimation methods, be they parametric or non-parametric, aim to uncover the inherent distribution of facial data, facilitating the process.
Parametric density estimation approaches are significantly effective in approximating straightforward density functions [22,23], but their precision might be constrained when handling complex features such as multimodal distributions in facial characteristics [24,25]. In contrast, non-parametric density estimation methods demonstrate increased adaptability in mimicking the complex distributions of facial data [26]. Nonetheless, these non-parametric kernel estimation methods encounter the so-called “curse of dimensionality” when applied to high-dimensional facial data [27].
More specifically, the requisite sample size for density estimation in high-dimensional spaces (like multidimensional facial feature spaces) grows substantially with increasing dimensions [28,29,30], challenging the efficiency and precision of facial recognition technologies.
In this research, the identification of data points’ neighbors is based on decision trees, subsequently optimized by Gaussian kernel density estimation. Initially, the method utilizes decision trees to analyze the features of samples to recognize each sample’s local neighbors [31]. Then, Gaussian kernel density estimation is applied to refine the selection of neighbors, aiding in more precisely capturing the local structure and distribution characteristics of the data. The integration of decision trees and Gaussian kernels not only elevates the precision in selecting neighbors but also boosts the classification capabilities of samples in low-dimensional spaces, leading to improved category segregation.
This method demonstrates significant advantages over traditional approaches due to several key factors:
(1) Precision in neighbor selection is greatly enhanced by employing decision trees to analyze sample features and recognize local neighbors. This results in higher accuracy in identifying relevant data points, addressing the limitations of traditional nearest neighbor methods, which often overlook subtle but significant relationships within the data.
(2) The method further refines neighbor selection through the application of Gaussian kernel density estimation. This optimization step ensures a more accurate capture of the local structure and distribution characteristics of the data, a critical improvement over methods that rely solely on nearest neighbor concepts.
(3) The integration of decision trees and Gaussian kernels not only enhances the accuracy of neighbor selection but also significantly boosts the overall classification capabilities of the algorithm. This dual approach effectively segregates categories in low-dimensional spaces, leading to improved classification performance.
(4) The method effectively incorporates data discrimination information, unlike traditional LLE methods, which do not fully leverage such information. This results in a more nuanced and efficient classification process, further enhancing the method’s robustness.
(5) The ability of this method to precisely capture local structures and optimize neighbor relationships makes it particularly suitable for handling complex and high-dimensional datasets. Traditional methods often fall short in these scenarios, whereas this approach excels.
Overall, the combination of decision tree-based neighbor recognition and Gaussian kernel density estimation provides a robust framework that addresses the inherent limitations of other dimensionality reduction and classification methods. This method offers superior performance in terms of precision, efficiency, and effectiveness in data analysis tasks, making it a powerful tool for complex data processing.

2. Related Work

In the field of machine learning, samples within high-dimensional spaces are inclined towards sparsity, resulting in overfitting and the dimensionality dilemma. In high-dimensional feature spaces, most samples are situated at the edges of hypercubes, complicating classification with hyperplanes. A clear method to determine an “overly large” dimension is lacking. Approaches to address this include augmenting the sample size and diminishing the feature dimensions. The reduction in feature dimensions can be realized via feature selection and extraction, the latter of which develops new features encapsulating fundamental information, aiding in classifier simplification and improved generalizability to new data.
The processing of voluminous high-dimensional data is simplified by employing dimension reduction techniques. While linear dimension reduction maintains global features, it has limitations in handling complex data. Manifold learning reinstates low-dimensional structures, uncovering the inherent laws of data. Data often transition from low to high dimensions, akin to a two-dimensional sheet being rolled into a three-dimensional form [32]. Principal Component Analysis (PCA) [33] and Singular Value Decomposition (SVD) [34] are two commonly employed linear dimension reduction methods in dimensionality reduction strategies. PCA converts data into linearly independent variables, known as principal components, via orthogonal transformation, and SVD diminishes the dimensionality of data by decomposing the original data matrix. These approaches simultaneously decrease data volume and maintain the maximum variance in the data. Linear Discriminant Analysis (LDA) [35] is an additional prevalent linear dimension reduction method, focusing on identifying the optimal projection direction for maximum differentiation between various data categories. LDA is especially appropriate for classification tasks in the realm of supervised learning. Multidimensional Scaling (MDS) [36] aims to preserve the relative distances among samples from the original high-dimensional space in a reduced-dimensional setting, thus enabling MDS to maintain the original data’s structure to the greatest extent during dimensionality reduction. Locally Linear Embedding (LLE) [37] and Laplacian Eigenmaps (LE) [38] are two types of dimension reduction techniques grounded in manifold learning.
Considering these dimensionality reduction methods collectively, it becomes evident that each possesses unique strengths and suitable contexts for use. PCA and SVD are appropriate for global linear reduction of data dimensions, LDA is superior for dimensionality reduction in supervised classification problems, MDS emphasizes preserving the relative distances among samples during the reduction process, and LLE and LE are better suited for revealing the local geometric and topological structures of data. Hence, in practical applications, the choice of a suitable dimensionality reduction algorithm depends on the data’s attributes and the specific needs at hand. For example, LLE and LE may be more fitting in cases where retaining local characteristics is crucial, while PCA and MDS could be preferable when the maintenance of the data’s global structure is essential. Proper utilization of these methodologies is crucial for the effective handling and analysis of high-dimensional data.
Face recognition involves recognizing and analyzing local facial features, including the eyes, nose, and mouth. LLE is especially apt for maintaining the geometric and topological structures of such local features, as it retains the linear relationships between each data point and its nearest neighbors throughout the dimension reduction process. In contrast to global dimension reduction methods such as PCA and MDS, LLE is more adept at handling data with intricate local structures, as it more effectively captures nonlinear characteristics. Furthermore, facial recognition demands extensive computational resources to process high-dimensional image data; utilizing LLE effectively reduces data dimensionality, alleviating the computational load while retaining crucial identification information. Nonetheless, LLE has its limitations: the selection of the number of nearest neighbors greatly influences the outcomes of its dimension reduction, yet finding the ideal number of nearest neighbors is challenging and necessitates trial and error. Consequently, this study utilizes decision trees to adaptively select nearest neighbors and applies Gaussian density estimation for data optimization, introducing a modified version of the LLE algorithm.
Sparse representation stands as a complementary approach to the aforementioned dimensionality reduction techniques, offering a unique perspective on handling high-dimensional data, particularly in the context of face recognition. This method posits that high-dimensional data can often be represented as a sparse linear combination of basis elements from a dictionary, effectively reducing the complexity of the data while preserving essential information. Unlike traditional dimensionality reduction methods that focus on linear transformations or manifold learning techniques emphasizing the preservation of local or global data structures, sparse representation targets the sparsity principle to achieve data compression and feature selection. By identifying a minimal set of dictionary elements that can accurately represent each data sample, sparse representation facilitates robust classification and recognition by enhancing the discriminative power of the features.
In the domain of face recognition, sparse representation has demonstrated exceptional prowess in dealing with variations in illumination, pose, and expression, as well as occlusion. The technique’s ability to selectively focus on the most relevant features for representing a face enables the construction of compact yet highly informative feature vectors. This selective focus not only mitigates the curse of dimensionality but also improves the efficiency and effectiveness of face recognition systems. Furthermore, the adaptability of sparse representation to different data and noise levels makes it a versatile tool in the arsenal of dimensionality reduction and feature extraction methods. By integrating sparse representation with other dimensionality reduction strategies such as manifold learning, researchers can leverage the strengths of both approaches to develop more sophisticated and capable face recognition systems that are both computationally efficient and highly accurate.

3. Theoretical Framework

3.1. Manifold Learning

Manifold learning in face recognition leverages the concept that faces, represented as high-dimensional data points (each pixel as a dimension), lie on a lower-dimensional manifold within this high-dimensional space. This approach is based on the observation that while images of faces can vary widely due to different lighting conditions, facial expressions, and angles, the intrinsic structure that defines a “face” is comparatively simple and can be captured in fewer dimensions. Manifold learning techniques reduce the dimensionality of face data to this intrinsic space, improving the efficiency and performance of face recognition algorithms by focusing on the most meaningful features.
Given a dataset $X = \{x_1, x_2, \ldots, x_n\}$ in a high-dimensional space $\mathbb{R}^D$, dimensionality reduction can be accomplished using an enhanced LLE algorithm. In the conventional k-nearest neighbors approach, the similarity between samples is determined based on their Euclidean distance, with each sample having an equal number of neighbors. The expression for the similarity coefficient is as follows:
$$S_{ij} = \begin{cases} \alpha_{ij}, & x_i \text{ and } x_j \text{ are neighbors of each other} \\ 0, & \text{otherwise} \end{cases} \tag{1}$$
where $\alpha_{ij} = \exp\left(-\|x_i - x_j\|^2 / \beta\right)$, $\|x_i - x_j\|$ refers to the Euclidean distance between samples, and $\beta$ denotes the squared mean of the Euclidean distances among all samples.
The k-nearest neighbors (k-NN) algorithm has three main limitations:
(1) Equal importance to all features: k-NN treats each feature with equal importance, not distinguishing which features are more crucial for classification or regression tasks. This can lead the algorithm to be influenced by irrelevant features, thereby reducing its performance.
(2) Distance-based neighbor selection: The algorithm determines the nearest neighbors based on distance metrics, typically assuming that neighboring points in the data space have similar properties. However, this approach might not consider the nonlinear relationships between features, resulting in poor performance in identifying nonlinear boundaries in complex data structures.
(3) Interpretability challenges: While the principle behind k-NN is simple and intuitive, its decision-making process can be less interpretable, especially in practical problems. This issue becomes more pronounced in high-dimensional spaces, where understanding why certain points are considered neighbors can be complex and non-intuitive.

3.1.1. Decision Tree-Based Nearest Neighbor Optimization

Decision trees are commonly employed for classification or regression issues, rather than for directly identifying the nearest neighbors of data points. Nonetheless, the outcomes of decision tree classifications can indirectly indicate the proximity relationships among data points. In the improved Local Linear Embedding (LLE) algorithm, decision trees help identify neighboring data points within the feature space, thus aiding in more efficient nonlinear dimensionality reduction.
Although decision trees are not traditionally used to determine nearest neighbors, we can indirectly use the results of decision tree classification to infer the neighbors of data points. The specific steps of this method are as follows:
(1) Develop a decision tree from the training dataset $D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$, with $x_i$ as the feature vector and $y_i$ as the target variable. The decision tree determines features for division by maximizing Information Gain (IG) or minimizing Gini Impurity. The equation for Information Gain (IG) is as follows:
$$IG(D_p, f) = I(D_p) - \sum_{j=1}^{m} \frac{N_j}{N_p} I(D_j) \tag{2}$$
(2) In this context, $D_p$ represents the dataset of the parent node, $f$ denotes a particular feature, $D_j$ is the dataset of the j-th child node created after splitting based on feature $f$, $N_p$ and $N_j$ are the numbers of samples in the parent node and the j-th child node, respectively, and $I$ is the function measuring impurity.
(3) Every leaf node symbolizes a set of training samples that are alike in the feature space. The identity of a leaf node is determined by the split decisions along the path from the root to that leaf. For a new query point $x_q$, the decision tree assigns it to a specific leaf node, and all training samples that fall into the same leaf node as $x_q$ are considered its neighbors.
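The leaf-grouping procedure above maps directly onto standard library tools. The following Python sketch illustrates steps (1)-(3), assuming a labeled training set; the function name `leaf_neighbors`, the depth setting, and the use of scikit-learn are illustrative choices, not the paper's exact implementation.

```python
# Hypothetical sketch of decision tree-based neighbor identification (steps (1)-(3)).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def leaf_neighbors(X_train, y_train, X_query, max_depth=8):
    """For each query point, return indices of training samples that land
    in the same leaf of a decision tree grown on (X_train, y_train)."""
    # criterion="entropy" corresponds to splitting by information gain (Eq. (2))
    tree = DecisionTreeClassifier(criterion="entropy", max_depth=max_depth)
    tree.fit(X_train, y_train)
    train_leaves = tree.apply(X_train)   # leaf id of every training sample
    query_leaves = tree.apply(X_query)   # leaf id of every query sample x_q
    return [np.flatnonzero(train_leaves == leaf) for leaf in query_leaves]
```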

3.1.2. Kernel Density Estimation for Enhanced Optimization

In the process of constructing neighbors for samples, those densely distributed in space demonstrate similar local features. For a given sample $x_i$, a higher probability density of other samples in its vicinity indicates a larger number of samples with local features akin to $x_i$, leading to a higher count of neighbors for $x_i$. Consequently, this paper suggests the use of Gaussian Kernel Density Estimation (GKDE) for building a neighborhood graph and further refining and adjusting adaptive neighbor results based on the mean similarity coefficient $M_i$. As a non-parametric probability density estimation technique, Gaussian kernel density estimation can deduce the overall distribution probability density from the samples’ own data, especially when prior knowledge of the sample distribution is lacking. In the high-dimensional space $\mathbb{R}^D$, the Gaussian kernel estimate of the neighborhood probability density for the sample $x_i$ is expressed as:
$$P(x_i) = \frac{1}{k_i^D} \sum_{j \in k_i} \left[ \frac{1}{(N-1)\sqrt{(2\pi)^D}} \exp\!\left( -\frac{d_{ij}^2}{2 k_i^2} \right) \right] \tag{3}$$
From Equation (3), the neighborhood probability density estimates $P(x_i)$ for all samples are derived, leading to the calculation of the average neighborhood probability density $\bar{p} = \left[ \sum_{i=1}^{N} P(x_i) \right] / N$ over all samples in the dataset $X$. The adjustment of the neighbor count for the sample $x_i$, based on the probability of the neighborhood distribution, proceeds as follows:
$$k_i^{p} = \operatorname{floor}\!\left( k_i \, \frac{P(x_i)}{\bar{p}} \right) \tag{4}$$
where ‘floor’ denotes rounding the data towards negative infinity.
According to Equation (4), it is evident that when the density of data samples around $x_i$ is high, the count of its neighbors is automatically increased, whereas when the nearby data distribution is sparse, the neighbor count is automatically reduced.
Assuming that there is a low-dimensional manifold in the data, the density is estimated by learning the local structure on the manifold through the calculation of local covariance. If the neighboring points of each sample can reflect the local manifold features of that sample, thereby improving the density estimation, the function for density estimation is as follows:
$$\hat{p}(x) = \frac{1}{l} \sum_{i=1}^{l} N_{x_i, c_i}(x) \tag{5}$$
In this equation, $N_{x_i, c_i}(x)$ is a multidimensional Gaussian kernel function, and $c_i$ denotes the covariance matrix. The multidimensional Gaussian density function used in Equation (5) is defined as follows:
$$N_{x_i, c_i}(x) = \frac{1}{\sqrt{(2\pi)^{n} \, |c_i|}} \, e^{-\frac{1}{2} (x - x_i)^{\top} c_i^{-1} (x - x_i)} \tag{6}$$
In this equation, $C_i$ is the rank of the covariance matrix $c_i$. For a given sample point $x_i$, its local geometric structure is disclosed through the distribution of $x_i$ and its nearest neighbors. For each sample point $x_i$, a local covariance matrix $c_i$ is calculated from its k-nearest neighbors. In practice, if $x_i$ and its surrounding points are densely distributed along a specific direction, the covariance values approach zero in the orthogonal directions.
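A simplified sketch of the density-driven neighbor adjustment in Equations (3) and (4) is given below. The bandwidth parameter and the omission of the dimension-dependent normalizing constant (which cancels in the ratio $P(x_i)/\bar{p}$) are simplifying assumptions for illustration.

```python
# Simplified sketch of GKDE-based adaptive neighbor counts (Eqs. (3)-(4)).
import numpy as np
from scipy.spatial.distance import cdist

def adjust_neighbor_counts(X, k0=10, h=1.0):
    """Rescale each sample's neighbor count by its estimated local density."""
    N = X.shape[0]
    dist = cdist(X, X)                             # pairwise Euclidean distances d_ij
    nn_dist = np.sort(dist, axis=1)[:, 1:k0 + 1]   # k0 nearest, excluding self
    # Gaussian-kernel density around each sample; the normalizing constant
    # cancels in the ratio P(x_i) / p_bar below, so it is omitted here.
    P = np.exp(-nn_dist ** 2 / (2 * h ** 2)).mean(axis=1)
    p_bar = P.mean()                               # average neighborhood density
    k = np.floor(k0 * P / p_bar).astype(int)       # Eq. (4): dense regions gain neighbors
    return np.clip(k, 1, N - 1)
```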

3.1.3. Improved LLE Method

The Locally Linear Embedding (LLE) algorithm is a well-known nonlinear feature extraction method. Its core objective is to uncover the critical structures within high-dimensional data and efficiently transfer them to a lower-dimensional space, emphasizing the essential properties of the data. Let $X = [X_1, X_2, \ldots, X_N] \in \mathbb{R}^{D \times N}$ be the sample set in the high-dimensional space, where $X_i = [X_{i1}, X_{i2}, \ldots, X_{iD}]^{\top} \in \mathbb{R}^{D \times 1}$ denotes the i-th sample ($i = 1, 2, \ldots, N$), $D$ is the feature dimension of the high-dimensional samples, and $N$ is the total number of samples. The detailed calculation steps are as follows:
(1) In the process of improving the Local Linear Embedding (LLE) algorithm through the integration of decision trees, we first employ decision trees, rather than the conventional Euclidean distance, to ascertain the nearest neighbors for each sample point. This method provides a more thorough consideration of the multidimensional features of the samples, not just their spatial positioning. Decision trees intelligently choose neighbors with similar characteristics for each point by analyzing their attributes. This approach allows for more precise identification of each sample’s local linear areas, even within complex datasets, thereby enhancing the efficacy of the LLE algorithm in feature extraction.
(2) Unearthing the local structure in the original data space. For instance, when examining the sample point $x_i$, its high-dimensional local reconstruction model can be formally represented as follows:
$$\min \frac{1}{2} \left\| x_i - \sum_{j=1}^{K} W_{ij} X_{ij} \right\|_2^2 = \frac{1}{2} \left\| x_i - \mathbf{x}_i W_i \right\|_2^2 \quad \text{s.t.} \quad e^{\top} W_i = 1 \tag{7}$$
In this context, $e = [1, 1, \ldots, 1]^{\top} \in \mathbb{R}^{k \times 1}$, $X_{ij}$ refers to the j-th nearest neighbor of sample point $X_i$, $\mathbf{x}_i = [x_{i1}, x_{i2}, \ldots, x_{ik}]$ represents the set of nearest neighbors of point $X_i$, and $W_{ij}$ is the reconstruction weight between point $X_i$ and its j-th nearest neighbor $X_{ij}$. Imposing constraints stabilizes the results, so the constrained model can be solved using the method of Lagrange multipliers. The Lagrangian is defined as follows:
$$L(W_i, \lambda_1) = \frac{1}{2} \left\| x_i - \mathbf{x}_i W_i \right\|_2^2 + \lambda_1 \left( e^{\top} W_i - 1 \right) \tag{8}$$
The method to compute the aforementioned equation involves taking the partial derivatives of the parameters on the left side of the equation and setting the results to zero:
$$-\mathbf{x}_i^{\top} \left( x_i - \mathbf{x}_i W_i \right) + \lambda_1 e = 0 \tag{9}$$
$$e^{\top} W_i - 1 = 0 \tag{10}$$
Here, $\lambda_1$ represents the Lagrange multiplier. The optimal solution $W_i$ of the objective function (7) is as follows:
$$W_i = \frac{G_i^{-1} e}{e^{\top} G_i^{-1} e} \tag{11}$$
where $G_i = [x_i - x_{i1}, x_i - x_{i2}, \ldots, x_i - x_{ik}]^{\top} [x_i - x_{i1}, x_i - x_{i2}, \ldots, x_i - x_{ik}] \in \mathbb{R}^{k \times k}$ is the local covariance matrix.
(3) Compute the low-dimensional embedding results for the samples. This is achieved by maintaining the local structure of the high-dimensional data unchanged and performing linear reconstruction in a low-dimensional space, with the reconstruction error function being
$$\min \sum_{i=1}^{N} \left\| Y_i - \sum_{j \in J_i} w_{ij} Y_{ij} \right\|_2^2 \quad \text{s.t.} \quad \sum_{i=1}^{N} Y_i = 0, \;\; \frac{1}{N} \sum_{i=1}^{N} Y_i Y_i^{\top} = I_d \tag{12}$$
In this context, $Y_i$ denotes the low-dimensional embedding result corresponding to $X_i$, $Y_{ij}$ is the j-th nearest neighbor of $Y_i$, $J_i$ signifies the index set of $X_i$’s nearest neighbors ($i, j = 1, 2, \ldots, N$), and $d$ is the dimension of the embedding result. Imposing two constraints on the objective function ensures the stability of the results. Solving the objective function (12) directly with related algorithms would increase computational complexity, so it can be rephrased as follows:
$$\min \sum_{i=1}^{N} \left\| Y_i - \sum_{j \in J_i} w_{ij} Y_{ij} \right\|_2^2 = \operatorname{tr}\!\left( Y (I_N - W)^{\top} (I_N - W) Y^{\top} \right) = \operatorname{tr}\!\left( Y M Y^{\top} \right) \tag{13}$$
where $Y = [Y_1, Y_2, \ldots, Y_N]$ and $W \in \mathbb{R}^{N \times N}$ is the local reconstruction weight matrix with elements $w_{ij}$ ($i, j = 1, 2, \ldots, N$). If $X_j$ is a neighboring point of $X_i$, $w_{ij}$ is calculated using the above equation; otherwise, $w_{ij} = 0$. Here $M = (I_N - W)^{\top}(I_N - W)$ is a large sparse symmetric matrix. Based on the relationship between eigenvalues and eigenvectors, it follows that
$$M P = P \operatorname{diag}\left[ \lambda_1, \lambda_2, \ldots, \lambda_d \right] \tag{14}$$
where $P$ is the matrix of eigenvectors of $M$, $\lambda_1, \ldots, \lambda_d$ are the corresponding eigenvalues, and the eigenvectors are mutually orthogonal, so left-multiplying by $P^{\top}$ yields
$$P^{\top} M P = \operatorname{diag}\left[ \lambda_1, \lambda_2, \ldots, \lambda_d \right] \tag{15}$$
Thus, it becomes evident that the matrix composed of the eigenvectors associated with the eigenvalues derived from the objective function (12) serves as the transformation matrix, using which the ultimate embedding result is achieved.
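For concreteness, the core computations of Equations (7)-(15) can be condensed as follows, with the neighbor lists supplied by the decision tree and GKDE stages above; the regularization term and the function signature are illustrative assumptions, not the paper's exact implementation.

```python
# Compact sketch of the improved LLE steps (Eqs. (7)-(15)); neighbor lists
# come from the decision tree / GKDE stages rather than plain k-NN.
import numpy as np

def lle_embed(X, neighbors, d=2, reg=1e-3):
    N = X.shape[0]
    W = np.zeros((N, N))
    for i, idx in enumerate(neighbors):            # idx: neighbor indices of x_i
        Z = X[idx] - X[i]                          # center neighbors on x_i
        G = Z @ Z.T                                # local covariance G_i
        G += reg * np.trace(G) * np.eye(len(idx))  # regularize for stability
        w = np.linalg.solve(G, np.ones(len(idx)))  # proportional to G_i^{-1} e
        W[i, idx] = w / w.sum()                    # Eq. (11): enforce e^T W_i = 1
    M = (np.eye(N) - W).T @ (np.eye(N) - W)        # Eq. (13)
    _, vecs = np.linalg.eigh(M)                    # eigen-decomposition (Eqs. (14)-(15))
    return vecs[:, 1:d + 1]                        # skip the constant eigenvector
```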

3.1.4. Support Vector Machine

The Support Vector Machine (SVM), simply put, is a classification model. Its basic model is defined as the linear classifier with the maximum margin in the feature space. Its learning strategy is to maximize the margin, which ultimately can be transformed into solving a convex quadratic programming problem. The classical solution method is the Lagrange multiplier method, as shown in Equation (16):
$$L(W, c, d) = \frac{1}{2} \| W \|^2 - \sum_{i=1}^{N} c_i \left\{ y_i \left( X_i^{\top} W + d \right) - 1 \right\} \tag{16}$$
in which W is the coefficient vector and d is a constant.
Taking partial derivatives with respect to $W$ and $d$ and substituting back yields the dual problem in Equation (17):
$$\max H(c) = \sum_{i=1}^{N} c_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} y_i y_j c_i c_j X_i^{\top} X_j \tag{17}$$
Solving this equation yields the vector W*, as shown in Equation (18):
$$W^{*} = \sum_{i=1}^{N} y_i c_i X_i \tag{18}$$
where $y_i$ and $y_j$ are the class labels of samples $i$ and $j$, $c_i$ and $c_j$ are the corresponding Lagrange multipliers, $X_i$ and $X_j$ are the image feature vectors, and $H(c)$ is the resulting dual objective function. Solving for the optimal values $c^*$ and $d^*$ yields the optimal discriminant function. The optimal $c^*$ is determined by the constraint in Equation (19), and $c^*$ and $W^*$ can be obtained using a quadratic programming algorithm. Then, by selecting a support vector $X_i$, the value of $d^*$ can be found, as shown in Equation (20), resulting in the final optimal discriminant function shown in Equation (21).
$$c_i^{*} \left[ y_i \left( W^{*\top} X_i + d^{*} \right) - 1 \right] = 0 \tag{19}$$
$$d^{*} = y_i - W^{*\top} X_i \tag{20}$$
$$f(X) = \sum_{i=1}^{N} y_i c_i^{*} \left( X^{\top} X_i \right) + d^{*} \tag{21}$$
where $W^*$ is the optimal weight vector obtained after differentiation, $c^*$ is the vector of optimal Lagrange multipliers, and $d^*$ is the optimal bias.
To address the difficulty of finding a corresponding classification hyperplane in a low-dimensional plane for the dataset, this article introduces a kernel function to transform the data into a high-dimensional plane, which is advantageous for finding a more effective classification plane. The specific process is illustrated in the diagram below (Figure 1).
In summary, the entire process involves three key steps: starting with the collection of high-dimensional sample input, applying manifold learning algorithms for dimensionality reduction (as depicted in Figure 2), and finally using an SVM classifier for accurate classification. This combination of techniques leverages the strengths of manifold learning to simplify the data and the robustness of SVM for effective classification, thereby improving the overall performance of the system.
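As a minimal sketch of this final stage, a kernel SVM can be trained directly on the embedded features; the synthetic data, kernel choice, and hyperparameters below are placeholders rather than the paper's configuration.

```python
# Minimal sketch of the classification stage with an RBF-kernel SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
Y = rng.normal(size=(400, 2))                    # stand-in for LLE-embedded features
labels = (Y[:, 0] + Y[:, 1] > 0).astype(int)     # stand-in identity labels

Y_tr, Y_te, l_tr, l_te = train_test_split(Y, labels, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # kernel maps data to a higher-dimensional space
clf.fit(Y_tr, l_tr)
print("test accuracy:", clf.score(Y_te, l_te))
```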

3.2. Sparse Representation Enhancement

Sparse representation refers to expressing a signal or data sample as a linear combination of a few essential elements or features from a given dictionary, where the majority of coefficients in the combination are zero. In other words, sparse representation aims to represent data efficiently by using only a small number of significant components from a larger set of possibilities. This approach is valuable in various fields, including signal processing, image analysis, and machine learning, as it helps in reducing data complexity, extracting essential information, and enhancing computational efficiency. The sparsest representation is naturally discriminative; among all subsets of base vectors, it selects the subset which most compactly expresses the input signal and rejects all other possible but less compact representations.
This approach assumes that if adequate training samples are available for each class, a test sample can be represented as a linear combination of only those training samples that belong to the same class, resulting in a naturally sparse representation. This sparsity involves only a small fraction of the training database and is posited to be the sparsest linear representation for the test sample within this dictionary, which can be efficiently identified through ℓ1-minimization.
The proposed classifier is considered a generalization of existing classifiers such as nearest neighbor (NN) and nearest subspace (NS), offering a more flexible approach that adaptively chooses the minimal number of training samples needed for representing each test sample. This method achieves a balance similar to the nearest feature line (NFL) algorithm but extends the concept by considering all possible combinations of training samples, both within and across classes, to achieve optimal sparse representation.
Further analysis involves evaluating the residuals $r_i(y)$ for a test image of subject 1, which are calculated with respect to the projected sparse coefficients $\delta_i(\hat{x})$ obtained through $\ell_1$-minimization. The comparison of the two smallest residuals reveals a ratio of approximately 1:8.6, highlighting the precision of the classification method in distinguishing between subjects based on sparse representation techniques. This ratio underscores the method’s ability to accurately identify and differentiate between subjects with a high degree of confidence.
The algorithmic process for enhancing face detection through sparse representation is a sophisticated approach that leverages mathematical and computational techniques to improve the accuracy and robustness of identifying faces within digital images. Here is a detailed explanation of each step in the process:

3.2.1. Dictionary Creation

The foundation of sparse representation is the construction of a dictionary composed of basis elements. These elements are essentially a set of vectors that span the space in which facial images reside. The dictionary can be derived from training images, capturing the variability of facial features, or generated through methods like principal component analysis (PCA) or independent component analysis (ICA), which identify statistically significant bases. A crucial aspect of the dictionary is its overcompleteness; it contains more elements than the dimensionality of the input space, enabling a more flexible and expressive representation of facial images. Overcompleteness is key to achieving sparsity, as it allows each image to be represented by a small subset of the dictionary elements.

3.2.2. Preprocessing of Input Images

Before representation, input images undergo preprocessing to standardize their format and enhance feature visibility. This typically involves normalizing the intensity values to ensure consistency across the dataset, resizing images to a uniform dimension to simplify processing, and converting them to grayscale to reduce the computational load. Optionally, feature extraction techniques can be applied to distill essential information from the images, focusing on characteristics crucial for face detection, such as edges or texture patterns. This step reduces the dimensionality of the data, making the subsequent sparse representation more computationally efficient.

3.2.3. Sparse Coding of Facial Images

Sparse coding is the core of this process, where each facial image is represented as a sparse linear combination of the dictionary elements. This involves solving an optimization problem to find a coefficient vector that combines dictionary elements to reconstruct the input image with minimal error, while enforcing sparsity constraints on the coefficients. Techniques like Lasso, Orthogonal Matching Pursuit (OMP), or Basis Pursuit are employed to achieve this balance, ensuring that the resulting representation uses the fewest possible dictionary elements. This sparsity is crucial for highlighting the unique aspects of each face and facilitating efficient classification.

3.2.4. Face Classification/Recognition

Once an image is represented in sparse form, the resulting coefficient vector serves as a feature vector for classification or recognition tasks. This vector’s sparsity pattern—which elements are non-zero and their magnitudes—provides a powerful discriminative signature for each face. A classifier trained on these sparse representations learns to distinguish between different individuals based on these signatures. Various machine learning models, including support vector machines (SVM), k-nearest neighbors (k-NN), or neural networks, can be used depending on the specific requirements of accuracy, speed, and scalability.

3.2.5. Detection and Localization in New Images

For detecting faces within new, unseen images, a sliding window technique is often employed. This method involves moving a fixed-size window across the image, applying the sparse coding and classification process to each windowed segment to determine whether it contains a face. This approach allows for the localization of faces within larger images by identifying and classifying regions based on their sparse representation.
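A bare-bones version of this detection loop might look as follows, assuming a fitted binary classifier `clf` and a sparse-coding function `sparse_code` from the preceding steps; both names, the window size, and the stride are hypothetical.

```python
# Hypothetical sliding-window face detection loop.
def detect_faces(image, clf, sparse_code, win=64, stride=16):
    """Slide a win x win window over a grayscale image and collect the
    boxes (x1, y1, x2, y2) that the classifier labels as faces."""
    boxes = []
    H, W = image.shape
    for top in range(0, H - win + 1, stride):
        for left in range(0, W - win + 1, stride):
            patch = image[top:top + win, left:left + win].ravel()
            code = sparse_code(patch)                 # sparse feature vector
            if clf.predict(code[None, :])[0] == 1:    # 1 = "face"
                boxes.append((left, top, left + win, top + win))
    return boxes
```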

3.2.6. Post-Processing

After initial detection, post-processing steps such as non-maximum suppression are used to refine the results (see Figure 3). This can involve eliminating redundant detections that overlap significantly, ensuring that each face is only marked once and that the bounding boxes accurately represent the location and extent of faces in the image. Adjusting the classification confidence threshold is another post-processing strategy, balancing the detection rate against the false positive rate to meet the specific needs of the application.
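Non-maximum suppression itself is a short, standard routine; the IoU-based sketch below is one common variant, with the overlap threshold as a tunable assumption.

```python
# Standard IoU-based non-maximum suppression for overlapping detections.
import numpy as np

def nms(boxes, scores, iou_thresh=0.3):
    """boxes: (N, 4) array of [x1, y1, x2, y2]; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]          # highest-confidence detections first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # intersection of the top box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thresh]  # drop heavily overlapping boxes
    return keep
```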
Sparse representation in image recognition involves representing images as linear combinations of a few basis elements from a large dictionary. The core mathematical formulations related to sparse representation in this context are centered around solving optimization problems that seek the sparsest coefficients to reconstruct the input images from the dictionary. Below are the key formulas and concepts used in sparse representation for image recognition:
  • Sparse Coding:
Given a dictionary $D \in \mathbb{R}^{m \times n}$, where $m$ is the dimensionality of the feature space and $n$ is the number of dictionary elements (typically $n > m$, making $D$ overcomplete), and an image (or image feature vector) $y \in \mathbb{R}^m$, sparse coding aims to find the sparsest coefficient vector $x \in \mathbb{R}^n$ such that $y \approx Dx$. The optimization problem can be formulated as follows:
$$\hat{x} = \arg\min_{x} \| x \|_0 \quad \text{subject to} \quad \| Dx - y \|_2^2 \leq \epsilon$$
where $\| x \|_0$ denotes the $\ell_0$ norm, which counts the number of non-zero entries in $x$, and $\epsilon$ is a tolerance level for the reconstruction error.

3.2.7. L1 Minimization

Since $\ell_0$ minimization is NP-hard and computationally infeasible for large problems, a common approximation is to use the $\ell_1$ norm instead, leading to the basis pursuit problem:
$$\hat{x} = \arg\min_{x} \| x \|_1 \quad \text{subject to} \quad \| Dx - y \|_2^2 \leq \epsilon$$
This problem promotes sparsity in $x$ due to the properties of the $\ell_1$ norm and can be solved more efficiently.
  • Lasso (Least Absolute Shrinkage and Selection Operator):
Another popular formulation for finding sparse representations is the Lasso, which directly incorporates the error term into the objective function.
$$\hat{x} = \arg\min_{x} \frac{1}{2} \| Dx - y \|_2^2 + \lambda \| x \|_1$$
where $\lambda$ is a regularization parameter that controls the trade-off between the reconstruction error and the sparsity of $x$. A code sketch contrasting the Lasso with OMP follows the OMP steps below.
  • Orthogonal Matching Pursuit (OMP):
OMP is a greedy algorithm that iteratively selects the dictionary elements most correlated with the current residual (the difference between the input image and its current sparse approximation). At each step, it adjusts the coefficients of the selected elements to best fit the input image:
(1) Initialize the residual $r_0 = y$ and the support set $S = \emptyset$.
(2) At each step $k$, find the dictionary element $d_j$ most correlated with the residual $r_{k-1}$ and add $j$ to $S$.
(3) Solve a least squares problem to update $x$ using only the elements in $S$.
(4) Update the residual $r_k = y - Dx$.
(5) Repeat steps 2-4 until a stopping criterion is met (e.g., a desired number of elements has been selected or the residual norm falls below a threshold).
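The sketch below contrasts the two routes to a sparse code on a synthetic problem: scikit-learn's Lasso for the $\ell_1$ formulation and a direct NumPy implementation of OMP steps (1)-(5); the dictionary, signal, and sparsity level are synthetic stand-ins.

```python
# Sparse coding two ways: Lasso (l1) and a direct OMP implementation.
import numpy as np
from sklearn.linear_model import Lasso

def omp(D, y, n_nonzero):
    """Greedy OMP following steps (1)-(5); stops at a fixed sparsity level."""
    r = y.copy()                                   # (1) residual r_0 = y
    S, x = [], np.zeros(D.shape[1])                #     empty support set
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ r)))        # (2) most correlated atom
        if j not in S:
            S.append(j)
        coef, *_ = np.linalg.lstsq(D[:, S], y, rcond=None)  # (3) least squares on S
        r = y - D[:, S] @ coef                     # (4) update residual
    x[S] = coef
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))                     # overcomplete dictionary (m=64 < n=256)
x_true = np.zeros(256)
x_true[[3, 40, 200]] = [1.0, -0.5, 0.8]            # 3-sparse ground truth
y = D @ x_true + 0.01 * rng.normal(size=64)

# Note: sklearn's Lasso objective differs from the text's by a 1/m scaling.
x_l1 = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(D, y).coef_
x_omp = omp(D, y, n_nonzero=3)
print("Lasso non-zeros:", np.count_nonzero(x_l1), "| OMP support:", np.flatnonzero(x_omp))
```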
These formulations and algorithms underlie the application of sparse representation in image recognition, providing a framework for representing images with a few significant features extracted from a comprehensive dictionary, thereby facilitating efficient and effective recognition.

4. Proposed Method

The proposed approach integrates manifold learning and sparse representation, presenting a sophisticated strategy for enhancing image recognition tasks, particularly in the domain of face recognition. Manifold learning’s ability to unveil the intrinsic low-dimensional structures hidden within high-dimensional data complements the efficiency of sparse representation, which encodes images as sparse linear combinations of basis elements from a pre-defined dictionary. This synergy not only significantly reduces the computational complexity associated with processing and analyzing high-dimensional datasets but also improves the accuracy and robustness of recognition algorithms. By effectively capturing the essential features of images through manifold reduction and then succinctly representing these features in a sparse format, this integrated approach offers a powerful solution for identifying and classifying images with high precision, even in the presence of variability and noise. The process is shown in Figure 4.
  • Step 1: Preparing the Dataset
Collect a Diverse Dataset: gather a large and varied set of facial images that include different expressions, lighting conditions, and angles to ensure the robustness of your model.
Preprocess the Images: Normalize the images to have the same size and scale. Convert them to grayscale to reduce computational complexity and apply other preprocessing techniques as necessary to enhance image quality.
  • Step 2: Applying Manifold Learning for Dimensionality Reduction
Select a Manifold Learning Algorithm: choose an algorithm like Locally Linear Embedding (LLE), Isometric Mapping (Isomap), or Uniform Manifold Approximation and Projection (UMAP) based on the specific needs of your dataset and task.
Reduce the Dimensionality: Apply the chosen manifold learning algorithm to the dataset to reduce its dimensionality. This step aims to retain the essential structure of the data while projecting them onto a lower-dimensional space.
  • Step 3: Constructing the Dictionary for Sparse Representation
Dictionary Creation: Build a dictionary using a subset of the reduced-dimensional data. This dictionary can also be learned from the data using algorithms like K-SVD.
Overcompleteness: ensure the dictionary is overcomplete (has more atoms than the dimensionality of the feature space) to allow for a rich representation of faces.
  • Step 4: Sparse Coding of Faces
Encode Faces: For each reduced-dimension face image, find the sparsest representation by selecting the minimal number of dictionary atoms that combine to approximate the face. This involves solving an optimization problem, often using L1 minimization (Lasso) or Orthogonal Matching Pursuit (OMP).
Feature Vector: the sparse coefficient vector resulting from this encoding serves as the feature vector for each face, encapsulating its most distinguishing characteristics.
  • Step 5: Classification and Recognition
Training a Classifier: Use the sparse representations as features to train a classifier. Support Vector Machines (SVM) or Nearest Neighbor classifiers are common choices.
Face Recognition: For a new face image, repeat the preprocessing, dimensionality reduction, and sparse coding steps to obtain its sparse representation. Then use the trained classifier to determine the identity of the face based on its sparse feature vector.
  • Step 6: Evaluation and Refinement
Test the System: Evaluate the performance of your face recognition system using a separate test dataset not seen during training. Common metrics include accuracy, precision, recall, and F1 score.
Refine the Model: Based on the evaluation, refine the model by adjusting parameters, choosing different manifold learning algorithms, or modifying the dictionary. Iteratively improve the system for better performance.
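Under the assumption that stock components stand in for the paper's customized stages, Steps 1-6 can be strung together as follows; the dataset loader, LLE parameters, and classifier settings are placeholders, and a K-SVD/OMP dictionary stage would slot in between Steps 2 and 5.

```python
# End-to-end sketch of Steps 1-6 with stock scikit-learn components.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

faces = fetch_olivetti_faces()                        # Step 1: 400 preprocessed faces
X, y = faces.data, faces.target

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=30)
Z = lle.fit_transform(X)                              # Step 2: manifold reduction

# Steps 3-4 (dictionary construction + sparse coding) are elided here; the
# reduced features Z feed the classifier directly in this simplified sketch.
Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, test_size=0.25,
                                          stratify=y, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale")        # Step 5: train the classifier
clf.fit(Z_tr, y_tr)
print("Step 6 - test accuracy:", clf.score(Z_te, y_te))
```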

5. Results

To thoroughly assess the effectiveness and practicality of the approach presented in this research, the experiments utilized three diverse datasets: (a) Labeled Faces in the Wild (LFW) [39], (b) Celebrities in Frontal-Profile (CFP) [40], and (c) the Olivetti Faces Dataset [41]. Comparative experiments were performed against six other algorithms. These datasets span multiple types and sizes to guarantee the extensive applicability and dependability of the experimental outcomes.

5.1. Datasets

The Labeled Faces in the Wild (LFW) dataset comprises over 13,000 facial images of more than 5000 distinct individuals. These images represent a range of ages, ethnicities, and genders. The diversity of the images, featuring variations in illumination, facial expressions, poses, and obstructions, renders it an excellent choice for evaluating face recognition algorithms in complex settings, which motivated its selection for this research. Introduced by Sengupta and others in 2016, the CFP (Celebrities in Frontal-Profile) dataset [40] aims to verify the accuracy of face recognition algorithms in the transition from frontal to profile postures. It contains 500 identities, each with 10 frontal and 4 profile images. Like the LFW dataset, it is segmented into 10 subsets, with each subset comprising 350 pairs of matching samples and 350 pairs of non-matching samples. Created by the Olivetti Research Laboratory, the Olivetti Faces Dataset features 400 grayscale facial images of 40 distinct individuals. Each individual is represented by 10 images, showcasing a variety of expressions and poses. A notable aspect of this dataset is the uniformity in posture. Samples from the LFW, CFP, and Olivetti datasets are shown in Figure 5.

5.2. Dimensionality Reduction Analysis

For a clearer demonstration of how the LLE Improvement Algorithm Based on Decision Trees and Gaussian Density Estimation (LLE-DT-GDE) method processes input images through feature extraction, visualization of the output feature maps from the PCA, SVD, LDA, MDS, LLE, LE, and LLE-DT-GDE methods was conducted. As depicted in Figure 6, the method proposed in this study captures and extracts finer detail information in comparison to other methods, thus contributing to improved facial expression recognition results [42]. The visualizations demonstrate that the method proposed, by employing both global enhancement and local attentive features, more effectively concentrates on distinguishable facial regions like the eyes, mouth, and eyebrows. This allows for the retention of crucial and supplementary information for recognition, consequently enhancing the algorithm’s capacity for accurate identification.
To showcase the face recognition abilities of our approach, ten distinct identities were chosen from the LFW dataset, each including images with either no obstruction or synthetic mask obstructions. Feature extraction was performed on these images to obtain 512-dimensional features, subsequently reduced to 2 dimensions via PCA, SVD, LDA, MDS, LLE, LE, and the LLE-DT-GDE method. Figure 7 illustrates the recognition process utilizing image-derived features, demonstrating how these features are processed through the sparse coding and dictionary learning framework to achieve accurate recognition. The visualization of these features, depicted in Figure 8, underscores the efficacy of our method. The results show that PCA, SVD, LDA, and MDS have smaller inter-class gaps and less compact intra-class grouping, which are not ideal for facial recognition. On the other hand, LE, LLE, and the method proposed in this study demonstrate the opposite, especially the latter, which distinctly separates different identities with clear boundaries and accurately clusters various images of the same identity, thus validating the effectiveness of our method.
For an objective evaluation of the effectiveness of the method presented in this study, we used an SVM classifier and devised six contrasting methods. Comparisons were made between the LLE-DT-GDE+SVM method and PCA+SVM, SVD+SVM, LDA+SVM, MDS+SVM, LLE+SVM, and LE+SVM algorithms to validate the efficacy of our approach.
From Table 1, it is clear that the method introduced in this study records average accuracies of 99.61%, 97.23%, and 93.56% on the LFW, CFP, and Olivetti datasets, respectively, surpassing the performance of other methods. The enhanced LLE method includes two components: one is the use of decision trees to calculate nearest neighbor relations, replacing traditional approaches, and the other is the application of Gaussian density estimation for further optimization of these relations. Relative to the original LLE, there are accuracy improvements of 1.09, 0.68, and 3.57 percentage points, respectively. The results demonstrate that the refined LLE algorithm improves facial recognition rates and fosters intra-class compactness.
To further validate our approach, we compared the improved graph representations of Sparse Representation-based Classification (SRC), linear SVM, nearest neighbor, and nearest subspace methods, as shown in Figure 9. The results demonstrate that our method outperforms these traditional approaches, providing clearer and more distinct class separations, thus enhancing recognition accuracy.

6. Comparison Study

To systematically assess the contributions of the suggested enhancements—decision tree integration and Gaussian kernel density estimation (GDE)—to the conventional Locally Linear Embedding (LLE) approach, we carried out a comparison study in this section. Our goal was to identify the relative contribution of each improvement to the overall performance of the upgraded LLE algorithm.

6.1. Decision Tree Integration

In order to isolate the effect of adding decision trees to the LLE algorithm, we ran tests where we only used decision trees to calculate the nearest neighbor—that is, we did not use Gaussian kernel density estimation. “LLE-DT” is the designation given to this algorithmic variation.
From Table 2, the error rates of LLE-DT on the LFW, CFP, and Olivetti datasets are 1.25%, 3.90%, and 7.82%, respectively, notably lower than those of the traditional LLE algorithm on LFW and Olivetti. The comparison demonstrates that decision trees are effective in capturing nearest neighbor interactions and boosting the discriminative capability of the LLE algorithm, although adding decision trees alone yields only a modest improvement in accuracy.

6.2. Gaussian Density Estimation (GDE)

As shown in Table 3, we then investigated how Gaussian kernel density estimation (GDE) affects the LLE algorithm by using GDE only for density estimation in place of decision trees. This variation has the designation “LLE-GDE”.
The findings in Table 3 show that Gaussian kernel density estimation makes a substantial contribution to the accuracy gain, which is especially noticeable on the LFW dataset, where the error rate is merely 0.87%, demonstrating a significant improvement in accuracy. This emphasizes how crucial density estimation is to improving the proximity associations that the LLE algorithm detects.

6.3. Combined Enhancement (LLE-DT-GDE)

As shown in Table 4, we assessed the overall enhancement attained through the combination of decision trees and Gaussian kernel density estimation within the LLE technique. The full suggested method is represented by this setup.
The comparison in Table 4 demonstrates how decision trees and Gaussian density estimation work well together to produce significant accuracy gains on all datasets. The effectiveness of the suggested improvements is confirmed by the method’s consistent superiority over the conventional LLE algorithm.

6.4. Discussion

In this section, we systematically assessed the contributions of integrating decision trees and Gaussian kernel density estimation (GDE) into the conventional Locally Linear Embedding (LLE) algorithm through an ablation study. Our primary goal was to identify the relative contributions of each enhancement to the overall performance of the upgraded LLE algorithm.
Our results indicate that the integration of decision trees into the LLE algorithm (referred to as LLE-DT) leads to a noticeable improvement in accuracy on the LFW and Olivetti datasets. Specifically, LLE-DT achieved accuracies of 98.75% on the LFW dataset, 96.10% on the CFP dataset, and 92.18% on the Olivetti dataset, compared to the traditional LLE’s accuracies of 98.52%, 96.55%, and 89.99%, respectively. The decision trees effectively capture nearest neighbor interactions, which enhances the discriminative capability of the LLE algorithm. These results demonstrate the utility of decision trees in improving classification tasks by providing robust mechanisms for nearest neighbor searches.
When examining the effect of Gaussian kernel density estimation (GDE) on the LLE algorithm (designated as LLE-GDE), we observed a substantial contribution to accuracy improvement. The LLE-GDE variation achieved accuracies of 99.13% on the LFW dataset, 96.85% on the CFP dataset, and 91.82% on the Olivetti dataset. This underscores the significance of effective density estimation in enhancing the proximity associations identified by the LLE algorithm. GDE’s ability to provide a smoother and more nuanced understanding of the data distribution contributes significantly to these accuracy gains, particularly in the LFW dataset, which is known for its challenging face recognition scenarios.
Integrating both decision trees and Gaussian kernel density estimation within the LLE technique (denoted as LLE-DT-GDE) demonstrated the most significant accuracy improvements across all datasets. This combined method achieved accuracies of 99.61% on the LFW dataset, 97.23% on the CFP dataset, and 93.56% on the Olivetti dataset, outperforming the traditional LLE algorithm, which achieved 98.52%, 96.55%, and 89.99%, respectively. The synergy between decision trees and GDE is evident, as they complement each other in capturing complex data structures and enhancing the overall performance of the LLE algorithm.
The ablation study confirms that both decision tree integration and Gaussian kernel density estimation individually contribute to improving the accuracy of the LLE algorithm. However, their combined application (LLE-DT-GDE) yields the most significant performance gains, demonstrating the potential of hybrid approaches in enhancing manifold learning techniques. These findings contribute to the broader understanding of how traditional algorithms can be effectively upgraded through strategic enhancements, offering valuable insights for future research and applications in machine learning and pattern recognition.

6.5. Future Directions

Subsequent investigations will concentrate on optimizing the suggested approach to improve computational efficiency, especially when working with extensive datasets. Furthermore, investigating different density estimation methods and integrating the approach with sophisticated machine learning frameworks could yield further performance improvements and broaden the algorithm’s applicability to a variety of real-world situations.

7. Future Directions and Conclusions

In this research, an advanced face recognition algorithm is proposed, which integrates decision tree and Gaussian kernel density estimation methods to more precisely capture the relationships between nearest data points, thus significantly enhancing the accuracy of facial recognition. More specifically, the algorithm initially employs the robust classification abilities of decision trees to accurately ascertain the nearest neighbor relationships among the data points. Decision trees, an effective machine learning approach, can swiftly and accurately classify and process data by developing a sequence of decision-making rules. Within this algorithm, the utilization of decision trees offers a distinct map of the proximity relationships among data points, a crucial step for comprehending and analyzing intricate data structures. The algorithm enhances these neighboring relationships through Gaussian kernel density estimation. Gaussian kernel density estimation is a non-parametric method for estimating probability density functions. It smoothes and optimizes the distribution of sample points, resulting in a finer and more faithful representation of the distribution of sample points on the manifold. In the context of facial recognition, this step is especially crucial as it can unveil latent non-linear structural features within the data, which are vital for understanding and recognizing faces.
Combining decision trees with Gaussian kernel density estimation enables the algorithm to model high-level features and complex relationships within facial data. This not only improves recognition accuracy but also strengthens the algorithm’s ability to generalize across different facial types and expressions. In tests on several standard datasets, the algorithm achieved significantly higher recognition accuracy than traditional facial recognition techniques. Despite these gains, the method has challenges and limitations: its added complexity can raise computational costs, which may become a burden on large-scale datasets. Future efforts will be directed toward improving efficiency and reducing computational resource requirements, making the algorithm practical for a broad spectrum of application scenarios.

Future Work

The integration of decision trees and Gaussian kernel density estimation, while enhancing the accuracy of the Local Linear Embedding (LLE) algorithm, introduces several significant challenges. Firstly, the increased computational complexity may result in higher processing times and memory usage, posing limitations for large-scale datasets and real-time applications [43]. Secondly, the scalability of the algorithm is a concern, as performance may degrade with increasing dataset size, necessitating efficient scaling techniques to maintain accuracy [44]. Additionally, the sensitivity of decision trees and Gaussian kernel density estimation to noise and outliers can adversely impact the algorithm’s robustness and accuracy [45]. Lastly, while the algorithm demonstrates strong performance on standard facial recognition datasets, its generalization to more diverse and complex datasets requires further validation and adaptation to ensure broader applicability.
Future work will focus on addressing these limitations to enhance the algorithm’s practicality and efficiency. Optimization efforts will target reducing computational resource requirements by refining decision tree construction and Gaussian kernel density estimation processes, making the algorithm more suitable for large-scale and real-time applications. Improving scalability will involve designing more efficient algorithms and data structures to handle larger datasets without compromising accuracy or efficiency [46]. Enhancing robustness to noise and outliers could be achieved by incorporating robust statistical techniques or regularization methods. Implementing automated parameter tuning techniques, such as grid search or Bayesian optimization, will help identify optimal settings efficiently, ensuring consistent performance across different datasets. Extensive testing on a wider range of datasets will ensure generalization capability, adapting the algorithm to various real-world scenarios and data types, such as video sequences and 3D facial data. Additionally, integrating this algorithm with advanced techniques like deep learning frameworks could further boost performance, combining the strengths of different approaches for more accurate and efficient facial recognition systems. Through these future endeavors, we aim to advance the field of facial recognition and contribute to the development of more robust and efficient data analysis and classification techniques.
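As one example of the automated tuning mentioned above, a grid search over an LLE-plus-SVM pipeline could look like the following sketch. The parameter grid and the use of scikit-learn’s stock LocallyLinearEmbedding are stand-ins for illustration, not the configuration tuned in this work.

```python
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

pipe = Pipeline([
    ("lle", LocallyLinearEmbedding(n_components=10)),
    ("svm", SVC(kernel="rbf")),
])
param_grid = {
    "lle__n_neighbors": [15, 25, 35],   # neighborhood size on the manifold
    "svm__C": [1, 10, 100],             # SVM regularization strength
    "svm__gamma": ["scale", 1e-3],      # RBF kernel width
}
search = GridSearchCV(pipe, param_grid, cv=5, n_jobs=-1)
# search.fit(X_train, y_train)   # X_train: face features, y_train: identities
# print(search.best_params_, search.best_score_)
```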
The integration of decision tree-based neighbor recognition and Gaussian kernel density estimation into the LLE algorithm significantly enhances its performance in various complex data processing tasks. This advanced algorithm has broad applications in fields such as healthcare and medical diagnostics, facial recognition and security, and autonomous vehicles, ultimately contributing to the improvement of human life.

Author Contributions

Conceptualization, H.G. and Z.Z.; methodology, H.G. and M.A.A.; software, H.G. and U.M.I.; validation, H.G., Z.Z., and J.O.; formal analysis, H.G.; investigation, Z.Q. and M.A.A.; resources, Z.Q. and U.M.I.; data curation, H.G.; writing—original draft preparation, H.G.; writing—review and editing, H.G.; visualization, H.G.; supervision, Z.Z.; project administration, H.G.; funding acquisition, Z.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 62006102).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bellman, R. Adaptive Control Processes: A Guided Tour; Princeton University Press: Princeton, NJ, USA, 1961. [Google Scholar]
  2. Ibtehaz, N.; Rahman, M.S. MultiResUNet: Rethinking the U-Net Architecture for Multimodal Biomedical Image Segmentation. Neural Netw. 2020, 121, 74–87. [Google Scholar] [CrossRef]
  3. Wong, K.K.L.; Fortino, G.; Abbott, D. Deep learning-based cardiovascular image diagnosis: A promising challenge. Future Gener. Comput. Syst. 2020, 110, 802–811. [Google Scholar] [CrossRef]
  4. Tang, Z.; Sui, M.; Wang, X.; Xue, W.; Yang, Y.; Wang, Z.; Ouyang, T. Theory-guided Deep Neural Network for boiler 3-D NOx concentration distribution prediction. Energy 2024, 299, 131500. [Google Scholar] [CrossRef]
  5. Wong, K.K.L. Cybernetical Intelligence: Engineering Cybernetics with Machine Intelligence, 1st ed.; The Institute of Electrical and Electronics Engineers, Inc.: Piscataway, NJ, USA; John Wiley & Sons, Inc.: Oxford, UK, 2024; ISBN 9781394217489. [Google Scholar]
  6. Raducanu, B.; Dornaika, F. A supervised non-linear dimensionality reduction approach for manifold learning. Pattern Recognit. 2012, 45, 2432–2444. [Google Scholar] [CrossRef]
  7. Law, M.H.C.; Jain, A.K. Incremental nonlinear dimensionality reduction by manifold learning. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 377–391. [Google Scholar] [CrossRef]
  8. Lunga, D.; Prasad, S.; Crawford, M.M.; Ersoy, O. Manifold-learning-based feature extraction for classification of hyperspectral data: A review of advances in manifold learning. IEEE Signal Process. Mag. 2013, 31, 55–66. [Google Scholar] [CrossRef]
  9. Wong, K.K.L. Bridging game theory and knapsack problem. J. Eng. Math. 2015, 91, 177–192. [Google Scholar] [CrossRef]
  10. Jarvis, R.A.; Patrick, E.A. Clustering using a similarity measure based on shared near neighbors. IEEE Trans. Comput. 1973, 100, 1025–1034. [Google Scholar] [CrossRef]
  11. Wala, M.; Klöckner, A. Optimization of fast algorithms for global Quadrature by Expansion using target-specific expansions. J. Comput. Phys. 2020, 403, 108976. [Google Scholar] [CrossRef]
  12. Wala, M.; Klöckner, A. A fast algorithm for quadrature by expansion in three dimensions. J. Comput. Phys. 2019, 388, 655–689. [Google Scholar] [CrossRef]
  13. Corsini, M.; Cignoni, P.; Scopigno, R. Efficient and flexible sampling with blue noise properties of triangular meshes. IEEE Trans. Vis. Comput. Graph. 2012, 18, 914–924. [Google Scholar] [CrossRef]
  14. Gjoka, M.; Kurant, M.; Butts, C.T.; Markopoulou, A. A walk in Facebook: Uniform sampling of users in online social networks. arXiv 2009, arXiv:0906.0060. [Google Scholar]
  15. Wong, K.K.L. A geometrical perspective for the bargaining problem. PLoS ONE 2010, 5, e10331. [Google Scholar] [CrossRef]
  16. Candes, E.J.; Li, X.; Ma, Y.; Wright, J. Robust Principal Component Analysis. J. ACM 2011, 58, 11. [Google Scholar] [CrossRef]
  17. Jolliffe, I.T.; Cadima, J. Principal Component Analysis: A Review and Recent Developments. Philos. Trans. R. Soc. A 2016, 374, 20150202. [Google Scholar] [CrossRef]
  18. Scornet, E. Random Forests and Kernel Methods. IEEE Trans. Inf. Theory 2016, 62, 1485–1500. [Google Scholar] [CrossRef]
  19. Cox, T.; Cox, M. Multidimensional Scaling; Chapman & Hall: London, UK, 1994. [Google Scholar]
  20. Hu, L.; Chen, G. A systematic review of visual representations for analyzing collaborative discourse. Educ. Res. Rev. 2021, 34, 100403. [Google Scholar] [CrossRef]
  21. Hu, L.; Chen, G. Exploring turn-taking patterns during dialogic collaborative problem solving. Instr. Sci. 2021, 50, 63–88. [Google Scholar] [CrossRef]
  22. Javeed, M.A.; Ghaffar, M.A.; Ashraf, M.A.; Zubair, N.; Metwally, A.S.M.; Tag-Eldin, E.M.; Bocchetta, P.; Javed, M.S.; Jiang, X. Lane Line Detection and Object Scene Segmentation Using Otsu Thresholding and the Fast Hough Transform for Intelligent Vehicles in Complex Road Conditions. Electronics 2023, 12, 1079. [Google Scholar] [CrossRef]
  23. Scott, D.W. Multivariate Density Estimation: Theory, Practice, and Visualization; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  24. Seifert, G.; Joswig, J.O. Density-functional tight binding—An approximate density-functional theory method. Wiley Interdiscip. Rev. Comput. Mol. Sci. 2012, 2, 456–465. [Google Scholar] [CrossRef]
  25. Samadiani, N.; Huang, G.; Cai, B.; Luo, W.; Chi, C.-H.; Xiang, Y.; He, J. A review on automatic facial expression recognition systems assisted by multimodal sensor data. Sensors 2019, 19, 1863. [Google Scholar] [CrossRef]
  26. Tong, Y.; Wang, Y.; Zhu, Z.; Ji, Q. Robust facial feature tracking under varying face pose and facial expression. Pattern Recognit. 2007, 40, 3195–3208. [Google Scholar] [CrossRef]
  27. Shokrzadeh, S.; Jozani, M.J.; Bibeau, E. Wind turbine power curve modeling using advanced parametric and nonparametric methods. IEEE Trans. Sustain. Energy 2014, 5, 1262–1269. [Google Scholar] [CrossRef]
  28. Lavergne, P.; Patilea, V. Breaking the curse of dimensionality in nonparametric testing. J. Econom. 2008, 143, 103–122. [Google Scholar] [CrossRef]
  29. Jani, J.H.; Desai, S. A Study on various Human Facial Feature Extraction Techniques in High Dimensional Spaces. Int. J. Eng. Res. Technol. 2021, 9, 160–164. [Google Scholar]
  30. Verleysen, M.; François, D. The Curse of Dimensionality in Data Mining and Time Series Prediction; International Work-Conference on Artificial Neural Networks; Springer: Berlin/Heidelberg, Germany, 2005; pp. 758–770. [Google Scholar]
  31. Strehl, A. Relationship-Based Clustering and Cluster Ensembles for High-Dimensional Data Mining; The University of Texas at Austin: Austin, TX, USA, 2002. [Google Scholar]
  32. McInnes, L.; Healy, J.; Melville, J. UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv 2018, arXiv:1802.03426. [Google Scholar]
  33. Zhang, Z.; Zhao, L. Locally linear embedding for classification. In Proceedings of the Fourth International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), Haikou, China, 24–27 August 2007. [Google Scholar]
  34. Bilal, M.; Javaid, A.; Amjad, F.; Youssif, T.A.; Afzal, S. An overview of prostate cancer (PCa) diagnosis: Potential role of miRNAs. Transl. Oncol. 2022, 26, 101542. [Google Scholar] [CrossRef]
  35. Falini, A. A review on the selection criteria for the truncated SVD in Data Science applications. J. Comput. Math. Data Sci. 2022, 5, 100064. [Google Scholar] [CrossRef]
  36. Kukreja, V. Recent trends in mathematical expressions recognition: An LDA-based analysis. Expert Syst. Appl. 2023, 213, 119028. [Google Scholar]
  37. Mair, P.; Groenen, P.J.F.; de Leeuw, J. More on multidimensional scaling and unfolding in R: Smacof version 2. J. Stat. Softw. 2022, 102, 1–47. [Google Scholar] [CrossRef]
  38. Ghojogh, B.; Ghodsi, A.; Karray, F.; Crowley, M. Theoretical Connection between Locally Linear Embedding, Factor Analysis, and Probabilistic PCA. In Proceedings of the Canadian Conference on Artificial Intelligence, Toronto, ON, Canada, 30 May–3 June 2022. [Google Scholar]
  39. Tai, M.; Kudo, M.; Tanaka, A.; Imai, H.; Kimura, K. Kernelized supervised Laplacian eigenmap for visualization and classification of multi-label data. Pattern Recognit. 2022, 123, 108399. [Google Scholar] [CrossRef]
  40. Huang, G.B.; Mattar, M.; Berg, T.; Learned-Miller, E. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In Proceedings of the Workshop on Faces in ’Real Life’ Images: Detection, Alignment, and Recognition, Marseille, France, 13 October 2008; pp. 617–624. [Google Scholar]
  41. Sengupta, S.; Chen, J.C.; Castillo, C.; Patel, V.M.; Chellappa, R.; Jacobs, D.W. Frontal to profile face verification in the wild. In Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, USA, 7–10 March 2016; pp. 1–9. [Google Scholar]
  42. Samaria, F.S.; Harter, A.C. Parameterisation of a stochastic model for human face identification. In Proceedings of the 1994 IEEE Workshop on Applications of Computer Vision, Sarasota, FL, USA, 5–7 December 1994; pp. 138–142. [Google Scholar]
  43. Liu, M.; Lv, J.; Du, S.; Deng, Y.; Shen, X.; Zhou, Y. Multi-resource constrained flexible job shop scheduling problem with fixture-pallet combinatorial optimisation. Comput. Ind. Eng. 2024, 188, 109903. [Google Scholar] [CrossRef]
  44. Zhou, Y.; Du, S.; Liu, M.; Shen, X. Machine-fixture-pallet resources constrained flexible job shop scheduling considering loading and unloading times under pallet automation system. J. Manuf. Syst. 2024, 73, 143–158. [Google Scholar] [CrossRef]
  45. Liu, C.; Chen, B.; Shao, W.; Zhang, W.J.; Zhang, Y.; Wong, K.K.L. Unraveling Attacks in Machine Learning-based IoT Ecosystems: A Survey and the Open Libraries Behind Them. arXiv 2024, arXiv:2401.11723. [Google Scholar]
  46. Tang, Z.; Wang, S.; Li, Y. Dynamic NOx emission concentration prediction based on the combined feature selection algorithm and deep neural network. Energy 2024, 292, 130608. [Google Scholar] [CrossRef]
Figure 1. A test image (left), (a) potentially occluded or (b) corrupted, represented as a sparse linear combination of all the training images (middle) plus sparse errors (right) due to occlusion or corruption. Green (darker) coefficients correspond to training images of the correct individual. The algorithm determines the true identity (indicated with a red box in the second row, third column) from 700 training images of 100 individuals (7 each) in the standard AR face database.
Figure 2. Flow diagram of the manifold learning algorithm.
Figure 3. Post-detection processing: refinement techniques such as non-maximum suppression are applied after initial detection to improve the results.
Figure 4. Summary of the proposed manifold learning and sparse representation methodology.
Figure 5. Sample images from the LFW, CFP, and Olivetti datasets used with the improved LLE method.
Figure 6. A case involving an irrelevant test image. (a) Sparse coefficients obtained for the irrelevant test image against the same training dataset used in Example 1; the test image was chosen at random and does not belong to the dataset. (b) Residuals of the irrelevant test image relative to the projection δ_i(x̂) from the sparse representation determined through ℓ1-minimization. The two smallest residuals have a ratio of approximately 1:1.2.
Figure 7. Illustration of recognition process utilizing image-derived features.
Figure 8. Feature map visualization.
Figure 9. Improved graph representations of SRC, linear SVM, nearest neighbor, and nearest subspace.
Table 1. Comparative performance of various algorithms on the LFW, CFP, and Olivetti face databases.

Methods    LFW      CFP      Olivetti
PCA+SVM    95.88%   92.45%   88.63%
SVD+SVM    94.22%   91.99%   87.12%
LDA+SVM    94.62%   92.03%   87.96%
MDS+SVM    95.83%   91.78%   88.52%
LLE+SVM    98.52%   96.55%   89.99%
LE+SVM     98.45%   96.88%   90.02%
Ours       99.61%   97.23%   93.56%
Table 2. Comparative performance of LLE-DT on the various face databases.

Methods           LFW Accuracy   CFP Accuracy   Olivetti Accuracy
LLE-DT            98.75%         96.10%         92.18%
Traditional LLE   98.52%         96.55%         89.99%
Table 3. Comparative performance of LLE-GDE on the various face databases.

Methods           LFW Accuracy   CFP Accuracy   Olivetti Accuracy
LLE-GDE           99.13%         96.85%         91.82%
Traditional LLE   98.52%         96.55%         89.99%
Table 4. Comparative performance of LLE-DT-GDE on the various face databases.

Methods             LFW Accuracy   CFP Accuracy   Olivetti Accuracy
LLE-DT-GDE (Ours)   99.61%         97.23%         93.56%
Traditional LLE     98.52%         96.55%         89.99%