Article

Efficient Information-Theoretic Large-Scale Semi-Supervised Metric Learning via Proxies

1 Drilling and Production Technology Research Institute, Liaohe Oilfield Company, PetroChina, Panjin 124010, China
2 College of Information and Science Technology, Dalian Maritime University, Dalian 116021, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(15), 8993; https://doi.org/10.3390/app13158993
Submission received: 12 July 2023 / Revised: 31 July 2023 / Accepted: 2 August 2023 / Published: 5 August 2023
(This article belongs to the Special Issue Algorithms and Applications of Multi-View Information Clustering)

Abstract

Semi-supervised metric learning intends to learn a distance function from limited labeled data as well as a large amount of unlabeled data, so as to gauge the similarity between any two instances better than a general-purpose distance function can. However, most existing semi-supervised metric learning methods rely on manifold assumptions to mine the rich discriminant information of the unlabeled data, which breaks the intrinsic connection between the manifold regularizer-building process and the subsequent metric learning. Moreover, these methods usually incur high computational or memory overhead. To solve these issues, we develop a novel method entitled Information-Theoretic Large-Scale Semi-Supervised Metric Learning via Proxies (ISMLP). ISMLP aims to simultaneously learn multiple proxy vectors as well as a Mahalanobis matrix, and formulates semi-supervised metric learning as the optimization of probability distributions parameterized by the Mahalanobis distance between each instance and each proxy vector. ISMLP maximizes the entropy of the labeled data and minimizes that of the unlabeled data, following the entropy regularization principle; in this way, the labeled and unlabeled parts can be integrated in a meaningful way. Furthermore, the time complexity of the proposed method depends linearly on the number of instances, so it can be extended to large-scale datasets without incurring excessive training time. Experiments on multiple datasets demonstrate the superiority of the proposed method over the compared methods.

1. Introduction

Distance Metric Learning (DML), which usually refers to learning a Mahalanobis matrix from given side information, has been an active field of study over the last two decades [1,2,3,4]. Compared to off-the-shelf distance functions, e.g., the Euclidean distance, DML takes the correlations and weights of the features into account when computing distances and is therefore more appropriate for various downstream tasks. Its effectiveness has been validated by a large spectrum of applications [5,6,7], for example, few-shot learning [8,9], face recognition [9,10], and fault detection [11,12]. Despite the success of existing DML methods, they rely on massive side information constructed from labeled data [13]. However, manually labeling data is a labor-intensive task [14], and it sometimes requires domain knowledge [15,16] to provide meaningful labels, e.g., when labeling the checkup samples of patients.
To solve this issue, researchers have devoted themselves to Semi-Supervised Distance Metric Learning (SSDML). SSDML intends to learn a Mahalanobis matrix from limited labeled data as well as a large amount of unlabeled data, such that under this metric, similar instances are brought closer together whereas dissimilar ones are pushed farther away. Inspired by unsupervised dimensionality reduction methods, which aim to preserve certain properties of the original data space, many SSDML approaches based on manifold regularization terms have been proposed over the past decades [1,17,18,19,20,21,22]. For example, Wang [1] proposed to project the data into a new space where the labeled data satisfy a maximum-margin constraint and the unlabeled data have maximum variance. Similarly, Baghshah and Shouraki [18] constructed a novel SSDML method by retaining locally linear relationships between close data points in the transformed space and proposed a regularization term based on Locally Linear Embedding (LLE) [23]. However, such regularization terms cannot directly boost the discriminative ability of the model. There are also SSDML methods based on the Laplacian graph [19,21,22,24,25,26,27]; e.g., Laplacian Regularized Metric Learning (LRML) [17] utilized the graph Laplacian to preserve the neighborhood relationships of the original space. However, this graph Laplacian construction process does not take the labeled information into consideration. To mitigate this issue, Dutta and Sekhar [21] proposed to utilize the Markov random walk technique to transform the limited labeled information into a Laplacian matrix. Ying et al. [20] also incorporated the density information of each instance into the Laplacian graph construction process. However, these methods rely on a default metric to determine the affinities among the samples, which contradicts the goal of metric learning: if the default metric were already appropriate, why should we still strive to search for another metric? There are also a few works that do not depend on manifold-based regularization, for example, the Semi-Supervised Metric Learning Paradigm with Hyper Sparsity (SERAPH) [28] and Semi-Supervised Regularized Large Margin Distance Metric Learning (S-RLMM) [29]. SERAPH is an information-theoretic metric learning method that maximizes the entropy on the labeled data while minimizing the entropy on the unlabeled data. However, the time complexity of these methods is at least quadratic in the number of training instances (Table 4 provides a brief time complexity analysis of some representative methods), which means they can hardly scale to large-scale datasets. Moreover, these methods rely on a fixed metric to mine the similarities between samples, which contradicts the goal of learning a metric from data.
To solve this issue, in this paper, we propose an efficient SSDML method called ISMLP. Rather than building the probability model via the instance–instance distances parameterized by the learned Mahalanobis matrix, we propose to learn a set of proxy vectors and convert the instance–instance relationships into instance–proxy relationships. We minimize the distances between the labeled instances and their corresponding proxy vectors and, to efficiently mine the information of the unlabeled data, we incorporate entropy regularization inspired by SERAPH. Importantly, the Mahalanobis matrix is constructed in a hierarchical form to further boost training efficiency. An Alternating Direction Method (ADM) strategy is adopted to seek a feasible solution for ISMLP, and the sub-problem concerning the Mahalanobis matrix can be efficiently solved by an iterative method on the product space of two Riemannian manifolds. The merits of using proxy vectors are twofold: on the one hand, the time complexity of ISMLP is linearly dependent on the number of instances, so it can be easily extended to large-scale datasets; on the other hand, the instance–instance distances may be corrupted by noisy instances in the dataset, whereas the proxy vectors can be viewed as aggregating the class/local information of the dataset and are therefore more stable than SERAPH. The main contributions of this paper are summarized as follows:
  • We propose a novel information-theoretic SSDML method called ISMLP, which simultaneously learns multiple proxy vectors as well as the Mahalanobis matrix. Specifically, we adopt entropy regularization to mine the discriminant information of the unlabeled data.
  • The merits of the proposed ISMLP are twofold: on the one hand, compared to manifold-based SSDML methods, ISMLP does not rely on manifold assumptions and can thus be applied to broader scenarios; on the other hand, the time complexity of ISMLP is linear with respect to the number of training instances, so it can be easily extended to large-scale datasets.
  • Extensive classification and retrieval experiments validate the superior performance of ISMLP, which, at the same time, can be trained more efficiently than the compared methods.
The rest of this paper is organized as follows: In Section 2, we briefly introduce the SERAPH framework. We then describe the construction of the proposed method in Section 3 and its optimization in Section 4, followed by extensive numerical experiments in Section 5. Finally, in Section 6, we conclude the paper and point out possible future directions for the proposed ISMLP.

2. Related Work

SERAPH Framework

Recently, Niu et al. proposed a semi-supervised metric learning framework called SERAPH, based on entropy regularization [28]. Given the probability distribution parameterized by the Mahalanobis distance between two instances, SERAPH maximizes the entropy of the probabilities on the labeled pairs and minimizes the entropy of those on the unlabeled pairs. The objective function of SERAPH can be constructed as follows:
\max_{\mathbf{A} \in \mathbb{S}_+^{d}} \; \sum_{(\mathbf{x}_i, \mathbf{x}_j) \in \mathcal{P}} \log \hat{p}_{ij}(y_{ij} \mid \mathbf{A}) + \mu \sum_{(\mathbf{x}_i, \mathbf{x}_j) \in \mathcal{U}} \sum_{y \in \{-1, 1\}} \hat{p}_{ij}(y \mid \mathbf{A}) \log \hat{p}_{ij}(y \mid \mathbf{A}) - \lambda \operatorname{Tr}(\mathbf{A}),
where λ > 0 and μ > 0 are two hyper-parameters. P = S ∪ D, with S (D) denoting the similar (dissimilar) set defined in Section 3.1, and U = {(x_i, x_j) | (x_i, x_j) ∉ P} is the set of unlabeled pairs. The trace regularization encourages A to be low-rank. y_ij (y) denotes the ground-truth (predicted) label of the pair (x_i, x_j); more specifically, y_ij = 1 when (x_i, x_j) ∈ S and y_ij = −1 when (x_i, x_j) ∈ D. p̂_ij(y | A) represents the predicted probability of a pair of examples (x_i, x_j) given the Mahalanobis matrix A, which is defined as:
\hat{p}_{ij}(y \mid \mathbf{A}) = \frac{1}{1 + \exp\left(y \left(d_{\mathbf{A}}^{2}(\mathbf{x}_i, \mathbf{x}_j) - \eta\right)\right)},
where η > 0 denotes the margin hyper-parameter.
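To make the probabilistic model above concrete, the following minimal NumPy sketch evaluates Equation (2) for a single pair; the function name and the default margin value are our own illustrative choices, not part of the original SERAPH implementation.

```python
import numpy as np

def seraph_pair_prob(x_i, x_j, A, y, eta=1.0):
    """Predicted probability p_hat_ij(y | A) of Equation (2).

    x_i, x_j : (d,) feature vectors
    A        : (d, d) PSD Mahalanobis matrix
    y        : +1 for a similar pair, -1 for a dissimilar pair
    eta      : margin hyper-parameter (illustrative value)
    """
    diff = x_i - x_j
    d2 = float(diff @ A @ diff)               # squared Mahalanobis distance
    return 1.0 / (1.0 + np.exp(y * (d2 - eta)))
```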

3. Information-Theoretic Large-Scale Semi-Supervised Metric Learning via Proxies

In this section, we first provide the detailed construction procedure of the proposed ISMLP method. Then, we derive the optimization strategy of ISMLP.

3.1. Notations and Problem Definition

Given a dataset X = [X_l, X_u] ∈ R^{d×n} composed of the labeled data X_l ∈ R^{d×n_l} and the unlabeled data X_u ∈ R^{d×n_u}, where n_l and n_u denote the number of labeled and unlabeled instances, respectively; in the semi-supervised setting, n_l is usually far smaller than n_u, and d is the dimensionality of the features. For the labeled dataset X_l, suppose that each instance is associated with a class label, i.e., X_l = {(x_1, y_1), (x_2, y_2), ..., (x_{n_l}, y_{n_l})}, with y_i ∈ {0, 1}^C, where C is the number of classes. The pairwise constraint sets S and D can be extracted from X_l:
\mathcal{S} = \{(\mathbf{x}_i, \mathbf{x}_j) \mid \mathbf{x}_i \text{ and } \mathbf{x}_j \text{ are semantically similar}\}, \qquad \mathcal{D} = \{(\mathbf{x}_i, \mathbf{x}_j) \mid \mathbf{x}_i \text{ and } \mathbf{x}_j \text{ are semantically dissimilar}\}.
Semi-supervised metric learning aims to learn a Mahalanobis matrix M from X such that the data can be transformed into a new space, where the semantic similarity between data points can be directly estimated from their distances in the transformed space. A typical formulation of metric learning requires the pairwise distances of the similar set to be smaller than l and those of the dissimilar set to be larger than u:
d_{\mathbf{M}}^{2}(\mathbf{x}_i, \mathbf{x}_j) < l, \;\forall (\mathbf{x}_i, \mathbf{x}_j) \in \mathcal{S}; \qquad d_{\mathbf{M}}^{2}(\mathbf{x}_i, \mathbf{x}_k) > u, \;\forall (\mathbf{x}_i, \mathbf{x}_k) \in \mathcal{D}.
The general form of a semi-supervised metric learning framework can be constructed as:
\min_{\mathbf{M}} \; \ell_{l}(\mathbf{M}, \mathcal{S}, \mathcal{D}) + \lambda\, \ell_{u}(\mathbf{M}, \mathbf{X}_u) + \mu\, \mathcal{R}(\mathbf{M}),
where ℓ_l and ℓ_u denote the pairwise loss and the unlabeled loss, respectively, and R is the regularization term concerning the structural information of the metric. λ and μ are two hyper-parameters that control the weights of ℓ_u and R.
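As a concrete illustration of how S and D are extracted from the labeled data, the sketch below builds the two index sets from class labels; the helper name is our own and is not taken from the paper.

```python
from itertools import combinations

def build_constraint_sets(labels):
    """Build index sets S (same class) and D (different class) from a label list."""
    S, D = [], []
    for i, j in combinations(range(len(labels)), 2):
        if labels[i] == labels[j]:
            S.append((i, j))   # semantically similar pair
        else:
            D.append((i, j))   # semantically dissimilar pair
    return S, D
```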

3.2. Learning From Proxy Vectors

The SERAPH approach relies on pairwise distances to construct the probability model, which has the following two drawbacks: (1) the time complexity of SERAPH depends quadratically on the number of unlabeled data, which means it cannot scale to large-scale datasets; (2) SERAPH is a global metric learning method and is sensitive to outliers in the dataset. To solve these problems, suppose that there are multiple proxy vectors anchored across the instance space, expressed as a set Z = {z_1, z_2, ..., z_m}, where m is the number of proxy vectors. A proxy vector can be viewed as the mean center of a class, or as an anchor that aggregates the local similarity information of nearby instances [30]. By aligning the relevant instances with their corresponding proxy vectors, both the problem of outlier instances in the dataset and the high time complexity can be effectively addressed. Similar to proxy-NCA [31], for any instance x, its corresponding proxy vector can be estimated by:
p(\mathbf{x}) = \arg\min_{\mathbf{z} \in \mathcal{Z}} d_{\mathbf{M}}^{2}(\mathbf{x}, \mathbf{z}),
where d_M²(x, z) is the Mahalanobis distance between x and z. Equation (5) states that the proxy vector assigned to an instance is the proxy vector closest to that instance under the metric M.
Instead of directly constructing the probabilistic model from the pairwise Mahalanobis distances, we utilize the distances between an instance and all the proxy vectors to form an NCA-style [32] probabilistic model. More specifically, for any instance x_i, the probability of it choosing the proxy vector z_j can be estimated via:
q_{ij} = \frac{\exp\left(-d_{\mathbf{M}}^{2}(\mathbf{x}_i, \mathbf{z}_j)\right)}{\sum_{k=1}^{m} \exp\left(-d_{\mathbf{M}}^{2}(\mathbf{x}_i, \mathbf{z}_k)\right)}.
q_ij reflects the closeness between x_i and z_j; the closer x_i is to z_j, the larger the value. All the q_ij (j = 1, 2, ..., m) form a valid discrete probability distribution:
\mathbf{q}_i = \left[q_{i1}, q_{i2}, \ldots, q_{im}\right]^{T}.
Suppose that the weakly supervised information is provided in the form of pairwise constraints. For any two instances (x_i, x_j) from S, we aim to minimize the distance between x_i and the proxy vector of x_j while, at the same time, keeping x_i far from the other proxy vectors. Such an idea can be expressed as:
\max_{\mathbf{M} \in \mathbb{S}_+^{d}} \; \frac{1}{|\mathcal{S}|} \sum_{(\mathbf{x}_i, \mathbf{x}_j) \in \mathcal{S}} \log \frac{\exp\left(-d_{\mathbf{M}}^{2}(\mathbf{x}_i, p(\mathbf{x}_j))\right)}{\sum_{k=1}^{m} \exp\left(-d_{\mathbf{M}}^{2}(\mathbf{x}_i, \mathbf{z}_k)\right)},
where |S| denotes the cardinality of S, and S_+^d is the set of all d × d PSD (Positive Semi-Definite) matrices. Maximizing Equation (8) will push x_i toward p(x_j). In contrast to the SERAPH algorithm, ISMLP converts the pairwise instance–instance distance into the distance between an instance and its corresponding proxy vector. The proxy vector can be viewed as aggregating the local similarity information of nearby instances; therefore, the proposed method exhibits more robust behavior than SERAPH.
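The following NumPy sketch illustrates Equations (6) and (8): it computes the instance-to-proxy probabilities and the (negated) labeled log-likelihood to be minimized. Array layouts (instances as rows) and function names are our own conventions, not the authors' code.

```python
import numpy as np

def proxy_probabilities(X, Z, M):
    """q[i, j] = exp(-d_M^2(x_i, z_j)) / sum_k exp(-d_M^2(x_i, z_k)), Equation (6).

    X : (n, d) instances, Z : (m, d) proxy vectors, M : (d, d) PSD matrix.
    """
    diff = X[:, None, :] - Z[None, :, :]                 # (n, m, d) pairwise differences
    d2 = np.einsum('nmd,de,nme->nm', diff, M, diff)      # squared Mahalanobis distances
    d2 -= d2.min(axis=1, keepdims=True)                  # numerical stabilization
    expo = np.exp(-d2)
    return expo / expo.sum(axis=1, keepdims=True)

def labeled_proxy_loss(Q, proxy_idx):
    """Negative of the objective in Equation (8): Q holds the proxy distribution of each
    labeled anchor x_i and proxy_idx[i] is the index of p(x_j) for the pair (x_i, x_j);
    minimizing this value pulls x_i toward the proxy of x_j."""
    picked = Q[np.arange(len(proxy_idx)), proxy_idx]
    return -np.mean(np.log(picked + 1e-12))
```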

3.3. Entropy Regularization

The above Equation (8) only considers the labeled data. In semi-supervised settings, the amount of labeled data is usually limited, and applying Equation (8) alone in these conditions can easily lead to over-fitting. To solve this issue, researchers propose to simultaneously mine the discriminant information of the unlabeled data. Several techniques have been adopted in the literature, such as manifold-based regularization [20,21,33] and entropy regularization [28]. However, these regularizations usually have high time or space complexity. By using the proxy vectors, we show that the time complexity of the entropy regularization can be significantly reduced.
In the field of information theory, the information entropy measures the degree of “uncertainty” of a given random variable. For a discrete distribution p̄ = (p_1, ..., p_m), the information entropy is defined as H_m(p̄) = −Σ_{i=1}^{m} p_i log p_i. When p_i = 1 for some i ∈ {1, ..., m} and p_j = 0 for all j ≠ i, H_m(p̄) reaches its minimal value; in this case, the system has minimal “uncertainty” [34].
The intuition behind the entropy regularization used in semi-supervised learning follows the low-density separation assumption [35,36], which encourages the unlabeled data to be predicted with high confidence. In ISMLP, it is desirable for an unlabeled instance to be located close to a particular proxy vector; in other words, the distribution in Equation (7) should be a peaked one. To achieve this goal, according to the above analysis of the minimization of information entropy, the following entropy regularization can be constructed:
\min_{\mathbf{M} \in \mathbb{S}_+^{d}} \; -\frac{1}{n_u} \sum_{i=n_l+1}^{n} \sum_{k=1}^{m} q_{ik} \log q_{ik},
where q_ik is the probability of the i-th instance choosing the k-th proxy vector, as defined in Equation (6). Minimizing Equation (9) shrinks the distances between the unlabeled instances and their corresponding proxy vectors while keeping them far from the other proxy vectors.
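A minimal sketch of the entropy regularizer of Equation (9), reusing the proxy probabilities of Equation (6); it is our own illustration rather than the authors' code.

```python
import numpy as np

def unlabeled_entropy(Q_u):
    """Average entropy of the proxy distributions of the unlabeled instances, Equation (9).

    Q_u : (n_u, m) rows q_i over the m proxy vectors. Minimizing this value makes each row
    peaked, i.e., drives every unlabeled instance toward a single proxy vector.
    """
    return -np.mean(np.sum(Q_u * np.log(Q_u + 1e-12), axis=1))
```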

3.4. Joint Dimensionality Reduction and Metric Learning

One can combine the labeled part and the entropy regularization to form the objective function of ISMLP. However, when the dimensionality of the features is large, adopting Projected Gradient Descent (PGD) to solve ISMLP incurs high time complexity. To solve this issue, inspired by the hierarchical way of building the Mahalanobis matrix [37], we propose to decompose M in the following form:
\mathbf{M} = \mathbf{P} \mathbf{R} \mathbf{P}^{T},
where P ∈ St(p, d), the Stiefel manifold of d × p column-orthogonal matrices, and R ∈ S_{++}^p, the set of p × p symmetric positive-definite matrices. In the experiments, p is usually much smaller than d; therefore, the running time can be significantly reduced. This decomposition can be understood as first projecting the original features into a lower-dimensional embedding space and then using R to learn the weights and correlations in that embedding space. Following the common practice in dimensionality reduction, we require P to be a column-orthogonal matrix, namely PᵀP = I_p.
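The practical benefit of the decomposition is that distances can be evaluated in the p-dimensional embedding without ever forming the d × d matrix M; a minimal sketch (our own helper) is given below.

```python
import numpy as np

def mahalanobis_sq(x, y, P, R):
    """d_M^2(x, y) with M = P R P^T (Equation (10)).

    P : (d, p) column-orthogonal projection, R : (p, p) SPD matrix.
    """
    v = P.T @ (x - y)        # project the difference vector into the embedding space
    return float(v @ R @ v)
```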
Integrating Equations (8)–(10) to form the objective function of ISMLP, we obtain:
\min_{\mathbf{P}, \mathbf{R}, \mathcal{Z}} \; -\frac{1}{|\mathcal{S}|} \sum_{(\mathbf{x}_i, \mathbf{x}_j) \in \mathcal{S}} \log \frac{\exp\left(-d_{\mathbf{M}}^{2}(\mathbf{x}_i, p(\mathbf{x}_j))\right)}{\sum_{k=1}^{m} \exp\left(-d_{\mathbf{M}}^{2}(\mathbf{x}_i, \mathbf{z}_k)\right)} - \frac{\lambda}{n_u} \sum_{i=n_l+1}^{n} \sum_{k=1}^{m} q_{ik} \log q_{ik} + \mu\, r(\mathbf{R}, \mathbf{R}_0)
\quad \text{s.t.} \quad \mathbf{M} = \mathbf{P} \mathbf{R} \mathbf{P}^{T}, \; \mathbf{P} \in \operatorname{St}(p, d), \; \mathbf{R} \in \mathbb{S}_{++}^{p},
where r : S_{++}^p × S_{++}^p → R_+ denotes the regularization term concerning R. Here, we aim to keep R close to a prior matrix R_0 and propose to utilize the Burg divergence [38]:
r(\mathbf{R}, \mathbf{R}_0) = \operatorname{Tr}\left(\mathbf{R} \mathbf{R}_0^{-1}\right) - \log\det\left(\mathbf{R} \mathbf{R}_0^{-1}\right) - p.
In our experiments, we set R 0 as I p for simplicity.
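For reference, the Burg (LogDet) divergence of Equation (12) can be evaluated as in the sketch below (our own helper); with R_0 = I_p it reduces to Tr(R) − log det(R) − p.

```python
import numpy as np

def burg_divergence(R, R0):
    """Burg divergence r(R, R0) of Equation (12)."""
    p = R.shape[0]
    S = R @ np.linalg.inv(R0)
    _, logdet = np.linalg.slogdet(S)
    return float(np.trace(S) - logdet - p)
```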
λ > 0 and μ > 0 are two hyper-parameters that control the importance of the entropy regularization term and of the structural prior on M, respectively, and p is the dimensionality of the latent space. The proposed ISMLP jointly learns the decomposition factors of the Mahalanobis matrix (P, R) and the set of proxy vectors Z. Minimizing Equation (11) pushes the labeled data toward their corresponding proxy vectors and encourages the unlabeled data to be assigned to their proxy vectors with high probability.
Since the first part can be viewed as maximizing the entropy on the labeled data and the second as minimizing the entropy on the unlabeled data, the two parts can be naturally combined in a meaningful way. Instead of directly building the probabilistic model from the pairwise Mahalanobis distances as SERAPH does, ISMLP takes advantage of the proxy vectors to convert semi-supervised metric learning into the optimization of m-class distributions. The merits of ISMLP are twofold: on the one hand, the proposed ISMLP is more robust than SERAPH and copes well with outliers in the dataset; on the other hand, the time complexity of the algorithm is significantly reduced.

4. Optimization for ISMLP

There are three types of parameters to be estimated in the objective function of ISMLP. In this section, an alternating-direction strategy is proposed to seek a feasible solution. More specifically, we keep the other variables fixed while updating the current one until the stopping condition is met.
Fix Z to solve P and R: the sub-problem concerning P and R can be expressed as the following Riemannian manifold-based optimization problem:
\min_{\mathbf{P}, \mathbf{R}} \; F(\mathbf{P}, \mathbf{R} \mid \mathbf{X}, \mathbf{M}_0) \quad \text{s.t.} \quad \mathbf{P} \in \operatorname{St}(p, d), \; \mathbf{R} \in \mathbb{S}_{++}^{p}.
The above minimization problem can be solved on the product space of the Stiefel and SPD manifolds. According to [39], the Stiefel and SPD manifolds are locally homogeneous spaces, and their product space is likewise smooth and differentiable. Therefore, M_p = St(p, d) × S_{++}^p admits a Riemannian structure.
Theorem 1.
The set (St(p, d) × S_{++}^p)/O(p) with the equivalence relation
(\mathbf{P}, \mathbf{R}) \sim (\mathbf{P}\mathbf{Q}, \mathbf{Q}^{T}\mathbf{R}\mathbf{Q}), \quad \forall \mathbf{Q} \in \mathrm{O}(p),
and the Riemannian metric
g_{(\mathbf{P},\mathbf{R})}\big((\xi_{\mathbf{P}}, \xi_{\mathbf{R}}), (\zeta_{\mathbf{P}}, \zeta_{\mathbf{R}})\big) = 2 \operatorname{Tr}\left(\xi_{\mathbf{P}}^{T} \zeta_{\mathbf{P}}\right) + \operatorname{Tr}\left(\mathbf{R}^{-1} \xi_{\mathbf{R}} \mathbf{R}^{-1} \zeta_{\mathbf{R}}\right)
forms a Riemannian quotient manifold.
Proof. 
We first prove that F is invariant under the equivalence relation, i.e., F(P, R | X, M_0) = F(P Q, Qᵀ R Q | X, M_0), since the following identity holds:
(\mathbf{P}\mathbf{Q})\left(\mathbf{Q}^{T}\mathbf{R}\mathbf{Q}\right)(\mathbf{P}\mathbf{Q})^{T} = \mathbf{P}\mathbf{R}\mathbf{P}^{T}, \quad \forall \mathbf{Q} \in \mathrm{O}(p).
Therefore, the invariance holds. To prove that M_p/O(p) is a valid quotient manifold, one can follow the proof in [40]. Lastly, as for the Riemannian metric, interested readers can refer to [37]. □
To perform Riemannian gradient descent on M_p, we follow the usual “projection and retraction” procedure. More specifically, the Euclidean gradient is first transformed into the Riemannian gradient and a gradient descent step is performed; the intermediate solution is then mapped back onto the manifold [39]. For the Stiefel manifold, the Riemannian gradient can be computed as:
\xi_{\mathbf{P}} = \nabla_{\mathbf{P}} F - \frac{1}{2} \mathbf{P}\left(\mathbf{P}^{T} \nabla_{\mathbf{P}} F + \nabla_{\mathbf{P}} F^{T} \mathbf{P}\right).
As for the SPD manifold, the Riemannian gradient has the following form:
\xi_{\mathbf{R}} = \frac{1}{2} \mathbf{R}\left(\nabla_{\mathbf{R}} F + \nabla_{\mathbf{R}} F^{T}\right) \mathbf{R},
where ∇_P F and ∇_R F denote the Euclidean partial gradients of F with respect to P and R, respectively.
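The two projections of Equations (17) and (18) are simple matrix operations; a minimal NumPy sketch (our own helpers, assuming the Euclidean gradients G_P and G_R are already available) is shown below.

```python
import numpy as np

def riemannian_grad_stiefel(P, G_P):
    """Equation (17): Riemannian gradient on St(p, d) from the Euclidean gradient G_P."""
    sym = 0.5 * (P.T @ G_P + G_P.T @ P)
    return G_P - P @ sym

def riemannian_grad_spd(R, G_R):
    """Equation (18): Riemannian gradient on the SPD manifold (affine-invariant metric)."""
    return 0.5 * R @ (G_R + G_R.T) @ R
```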
As for the quotient manifold M_p, the tangent space at Γ = (P, R) is divided into two complementary parts, namely a horizontal part H_Γ M_p and a vertical part V_Γ M_p. Importantly, the tangent space of the quotient manifold (denoted as T_Γ M_p) can be uniquely identified with its horizontal part.
The horizontal component of a tangent vector (ξ_P, ξ_R) on the proposed quotient manifold can be identified as:
\left(\xi_{\mathbf{P}} - \mathbf{P}\psi, \; \xi_{\mathbf{R}} - \mathbf{R}\psi + \psi\mathbf{R}\right),
where ψ is the solution of the following equation [37]:
\psi \mathbf{R}^{2} + \mathbf{R}^{2} \psi = \mathbf{R}\left(\xi_{\mathbf{P}}^{T}\mathbf{P} - \mathbf{P}^{T}\xi_{\mathbf{P}} + \mathbf{R}^{-1}\xi_{\mathbf{R}} - \xi_{\mathbf{R}}\mathbf{R}^{-1}\right)\mathbf{R}.
The retraction operation takes the following form:
\mathcal{R}_{(\mathbf{P},\mathbf{R})}(\xi_{\mathbf{P}}, \xi_{\mathbf{R}}) = \left(\operatorname{uf}(\mathbf{P} + \xi_{\mathbf{P}}), \; \mathbf{R}^{\frac{1}{2}} \exp\left(\mathbf{R}^{-\frac{1}{2}} \xi_{\mathbf{R}} \mathbf{R}^{-\frac{1}{2}}\right) \mathbf{R}^{\frac{1}{2}}\right),
where uf(B) = B(BᵀB)^{−1/2} and exp(·) denotes the matrix exponential.
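A minimal sketch of the retraction in Equation (21), using SciPy for the matrix square root and matrix exponential; the function name is illustrative and not from the original implementation.

```python
import numpy as np
from scipy.linalg import expm, sqrtm

def retract(P, R, xi_P, xi_R):
    """Retraction of Equation (21) mapping a tangent step back onto St(p, d) x S++(p)."""
    B = P + xi_P
    P_new = B @ np.linalg.inv(sqrtm(B.T @ B).real)       # uf(B) = B (B^T B)^{-1/2}
    R_half = sqrtm(R).real
    R_half_inv = np.linalg.inv(R_half)
    R_new = R_half @ expm(R_half_inv @ xi_R @ R_half_inv) @ R_half
    return P_new, R_new
```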
Lastly, the only missing components are ∇_P F and ∇_R F, which can be calculated as:
\nabla_{\mathbf{P}} F = \frac{1}{|\mathcal{S}|} \sum_{(\mathbf{x}_i, \mathbf{x}_j) \in \mathcal{S}} \left( \sum_{k=1}^{m} q_{ik} \frac{\partial d_{\mathbf{M}}^{2}(\mathbf{x}_i, \mathbf{z}_k)}{\partial \mathbf{P}} - \frac{\partial d_{\mathbf{M}}^{2}(\mathbf{x}_i, p(\mathbf{x}_j))}{\partial \mathbf{P}} \right) - \frac{\lambda}{n_u} \sum_{i=n_l+1}^{n} \sum_{k=1}^{m} (1 + \log q_{ik})\, q_{ik} \left( -\frac{\partial d_{\mathbf{M}}^{2}(\mathbf{x}_i, \mathbf{z}_k)}{\partial \mathbf{P}} + \sum_{j=1}^{m} q_{ij} \frac{\partial d_{\mathbf{M}}^{2}(\mathbf{x}_i, \mathbf{z}_j)}{\partial \mathbf{P}} \right),
with \frac{\partial d_{\mathbf{M}}^{2}(\mathbf{x}_i, \mathbf{z}_k)}{\partial \mathbf{P}} = 2 (\mathbf{x}_i - \mathbf{z}_k)(\mathbf{x}_i - \mathbf{z}_k)^{T} \mathbf{P} \mathbf{R}.
For ∇_R F, it can be expressed as:
\nabla_{\mathbf{R}} F = \frac{1}{|\mathcal{S}|} \sum_{(\mathbf{x}_i, \mathbf{x}_j) \in \mathcal{S}} \left( \sum_{k=1}^{m} q_{ik} \frac{\partial d_{\mathbf{M}}^{2}(\mathbf{x}_i, \mathbf{z}_k)}{\partial \mathbf{R}} - \frac{\partial d_{\mathbf{M}}^{2}(\mathbf{x}_i, p(\mathbf{x}_j))}{\partial \mathbf{R}} \right) - \frac{\lambda}{n_u} \sum_{i=n_l+1}^{n} \sum_{k=1}^{m} (1 + \log q_{ik})\, q_{ik} \left( -\frac{\partial d_{\mathbf{M}}^{2}(\mathbf{x}_i, \mathbf{z}_k)}{\partial \mathbf{R}} + \sum_{j=1}^{m} q_{ij} \frac{\partial d_{\mathbf{M}}^{2}(\mathbf{x}_i, \mathbf{z}_j)}{\partial \mathbf{R}} \right) + \mu\left(\mathbf{R}_0^{-1} - \mathbf{R}^{-1}\right),
with \frac{\partial d_{\mathbf{M}}^{2}(\mathbf{x}_i, \mathbf{z}_k)}{\partial \mathbf{R}} = \mathbf{P}^{T}(\mathbf{x}_i - \mathbf{z}_k)(\mathbf{x}_i - \mathbf{z}_k)^{T} \mathbf{P}.
Fix P and R to solve Z : The sub-problem with respect to Z can be stated as:
\min_{\mathcal{Z}} \; G(\mathcal{Z}) = -\frac{1}{|\mathcal{S}|} \sum_{(\mathbf{x}_i, \mathbf{x}_j) \in \mathcal{S}} \log \frac{\exp\left(-d_{\mathbf{M}}^{2}(\mathbf{x}_i, p(\mathbf{x}_j))\right)}{\sum_{k=1}^{m} \exp\left(-d_{\mathbf{M}}^{2}(\mathbf{x}_i, \mathbf{z}_k)\right)} - \frac{\lambda}{n_u} \sum_{i=n_l+1}^{n} \sum_{k=1}^{m} q_{ik} \log q_{ik}.
Firstly, we update the proxy assignment of each instance by recalculating Equation (5). Then, we solve for the proxy vectors one by one. More specifically, for the k-th proxy vector z_k, by taking the derivative of G with respect to z_k, we obtain:
\frac{\partial G}{\partial \mathbf{z}_k} = -\frac{1}{|\mathcal{S}|} \left( \sum_{\substack{(\mathbf{x}_i, \mathbf{x}_j) \in \mathcal{S} \\ p(\mathbf{x}_j) \neq \mathbf{z}_k}} \frac{1}{q_{i,p(\mathbf{x}_j)}} \frac{\partial q_{i,p(\mathbf{x}_j)}}{\partial \mathbf{z}_k} + \sum_{\substack{(\mathbf{x}_i, \mathbf{x}_j) \in \mathcal{S} \\ p(\mathbf{x}_j) = \mathbf{z}_k}} \frac{1}{q_{ik}} \frac{\partial q_{ik}}{\partial \mathbf{z}_k} \right) - \frac{\lambda}{n_u} \sum_{i=n_l+1}^{n} \left( \sum_{l \neq k} (1 + \log q_{il}) \frac{\partial q_{il}}{\partial \mathbf{z}_k} + (1 + \log q_{ik}) \frac{\partial q_{ik}}{\partial \mathbf{z}_k} \right),
where, for a given similar pair (x_i, x_j), q_{i,p(x_j)} denotes the probability of x_i choosing p(x_j) as its proxy vector, computed by Equation (6); \frac{\partial q_{ik}}{\partial \mathbf{z}_k} = 2 q_{ik}(1 - q_{ik}) \mathbf{M} (\mathbf{x}_i - \mathbf{z}_k), and \frac{\partial q_{il}}{\partial \mathbf{z}_k} = -2 q_{il} q_{ik} \mathbf{M} (\mathbf{x}_i - \mathbf{z}_k) for l \neq k.
By setting ∂G/∂z_k to zero, Equation (25) reduces to a linear equation:
w \mathbf{M} \mathbf{z}_k = \psi,
where   
w = \frac{1}{|\mathcal{S}|} \left( \sum_{\substack{(\mathbf{x}_i, \mathbf{x}_j) \in \mathcal{S} \\ p(\mathbf{x}_j) \neq \mathbf{z}_k}} \frac{1}{\sum_{j=1}^{m} \exp\left(-d_{\mathbf{M}}^{2}(\mathbf{x}_i, \mathbf{z}_j)\right)} - \sum_{\substack{(\mathbf{x}_i, \mathbf{x}_j) \in \mathcal{S} \\ p(\mathbf{x}_j) = \mathbf{z}_k}} (1 - q_{ik})\, q_{ik} \right) - \frac{\lambda}{n_u} \sum_{i=n_l+1}^{n} \left( (1 + \log q_{ik})(1 - q_{ik})\, q_{ik} + \sum_{l \neq k} \frac{(1 + \log q_{il})\, q_{il}}{\sum_{j=1}^{m} \exp\left(-d_{\mathbf{M}}^{2}(\mathbf{x}_i, \mathbf{z}_j)\right)} \right),
and ψ :
\psi = \frac{\mathbf{M}}{|\mathcal{S}|} \left( \sum_{\substack{(\mathbf{x}_i, \mathbf{x}_j) \in \mathcal{S} \\ p(\mathbf{x}_j) \neq \mathbf{z}_k}} \frac{\mathbf{x}_i}{\sum_{j=1}^{m} \exp\left(-d_{\mathbf{M}}^{2}(\mathbf{x}_i, \mathbf{z}_j)\right)} + \sum_{\substack{(\mathbf{x}_i, \mathbf{x}_j) \in \mathcal{S} \\ p(\mathbf{x}_j) = \mathbf{z}_k}} (1 - q_{ik})\, q_{ik}\, \mathbf{x}_i \right) + \frac{\lambda}{n_u} \sum_{i=n_l+1}^{n} \left( (1 + \log q_{ik})(1 - q_{ik})\, q_{ik} - \sum_{l \neq k} \frac{(1 + \log q_{il})\, q_{il}}{\sum_{j=1}^{m} \exp\left(-d_{\mathbf{M}}^{2}(\mathbf{x}_i, \mathbf{z}_j)\right)} \right) \mathbf{M} \mathbf{x}_i,
Then we obtain the closed-form solution of z_k:
\mathbf{z}_k = \left(\mathbf{M} + \eta \mathbf{I}_d\right)^{-1} \frac{\psi}{w},
where η > 0 is a small positive number that makes M + η I_d positive definite and hence invertible. In the experiments, we empirically set it to 10^{−6}, which works well.
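Once the scalar w and the vector ψ have been accumulated, the proxy update of Equation (29) is a single regularized linear solve, as in the sketch below (our own helper, using a solve instead of an explicit inverse).

```python
import numpy as np

def update_proxy(M, psi, w, eta=1e-6):
    """Closed-form proxy update of Equation (29): z_k = (M + eta * I_d)^{-1} (psi / w)."""
    d = M.shape[0]
    return np.linalg.solve(M + eta * np.eye(d), psi / w)
```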
To sum up, we propose an alternating-direction strategy to solve the minimization of Equation (11). The sub-problems concerning P and R are updated on the product of the Stiefel and SPD manifolds via the Riemannian gradient descent algorithm [41], while the sub-problem concerning Z has a closed-form solution. The main procedure of ISMLP is documented in Algorithm 1. It should be noted that we utilize a Gaussian Mixture Model (GMM) to initialize the set of proxy vectors as the means of the corresponding components. We also document the variation of the objective function value with respect to the iterations on three datasets in Figure 1. Clearly, the loss decreases with the iterations and levels off after several iterations, which indicates that the proposed algorithm converges within a limited number of iterations.
Algorithm 1: The optimization strategy of ISMLP.
Input: The labeled and unlabeled datasets X_l ∈ R^{d×n_l} and X_u ∈ R^{d×n_u}, the similar constraint set S, λ, the number of proxy vectors m, μ, and the dimensionality of the embedding space p;
1: Initialize P ∈ R^{d×p} as a column-orthonormal matrix, and set R to the identity matrix I_p;
2: Initialize the set of proxy vectors via the Gaussian mixture model with the number of components set to m;
3: while not converged do
   (inner update steps, shown as image Applsci 13 08993 i001 in the published version)
8: end
Output: The projection matrix P and the low-dimensional Mahalanobis matrix R.
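Steps 1–2 of Algorithm 1 can be realized as follows; the sketch assumes instances stored as rows and uses scikit-learn's GaussianMixture, which are our own implementation choices.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def initialize_ismlp(X, p, m, seed=0):
    """Initialization of Algorithm 1: column-orthonormal P, identity R, GMM-based proxies.

    X : (n, d) data matrix (labeled and unlabeled instances stacked as rows).
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    P, _ = np.linalg.qr(rng.standard_normal((d, p)))                        # P^T P = I_p
    R = np.eye(p)
    Z = GaussianMixture(n_components=m, random_state=seed).fit(X).means_    # (m, d) proxies
    return P, R, Z
```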

Time Complexity of ISMLP

In this section, we provide a brief analysis of the time complexity of the proposed ISMLP. Recall that the numbers of labeled and unlabeled data are n_l and n_u, respectively, the dimensionalities of the original and reduced data are denoted as d and p, and the number of proxy vectors is denoted as m. The main procedure of ISMLP consists of the following steps: (1) evaluating the loss function; (2) computing the Euclidean gradients of the loss function with respect to P and R; (3) projecting the Euclidean gradients of P and R onto the tangent space and then retracting them back to the manifold; (4) updating the set of proxy vectors.
  • Since solving the inverse of a p × p matrix costs O(p³), evaluating the objective function in Equation (11) takes O(npd + mnp² + p³).
  • Computing the Euclidean gradient of the loss function with respect to P by using Equation (22) takes O(mndp²), and computing ∇_R F via Equation (23) costs O(mnp² + p³).
  • According to [37], projecting the Euclidean gradients with respect to P and R by using Equations (17) and (18) costs O(4dp² + 3p³). Retracting the Riemannian gradient back to the manifold via Equation (21) costs O(4dp² + 14p³).
  • Solving all the proxy vectors by using Equation (29) costs O(d³ + m² n_l p²), where n_l is the number of labeled data.
Considering that the usual case in large-scale semi-supervised metric learning is d ≪ n, the overall time complexity of the proposed ISMLP is about O(mndp²), and the main cost lies in evaluating the Euclidean gradient with respect to P. To sum up, the time complexity of the proposed ISMLP has a linear dependence on the number of instances n; therefore, it can be effectively and efficiently extended to large-scale datasets.

5. Experiment

In this section, extensive visual classification and retrieval experiments are conducted to verify the efficacy and efficiency of the proposed ISMLP. Firstly, we provide a detailed description of the datasets, the evaluation protocol, and the compared methods used in the experiments. Then, the experimental results are presented.

5.1. Datasets, Evaluation Protocol and Compared Methods

Datasets: a total of five datasets are utilized, including MNIST [42], Fashion-MNIST [43], Corel 5K [44], CUB-200 [45], and Cars-196 [46]. MNIST contains 70,000 grayscale handwritten digit images from ten classes, whereas Fashion-MNIST consists of 70,000 images of ten fashion objects; both are widely used in the field of semi-supervised learning. The last two datasets, CUB-200 and Cars-196, are fine-grained visual recognition datasets. Detailed information on the datasets is listed in Table 1.
The raw pixel values of the MNIST and Fashion-MNIST datasets serve as the image features, which provides a 784-dimensional feature vector. For the other datasets, the VGG-19 network [47] pre-trained on ImageNet is utilized to extract the features. Since the dimensionality of these features is extremely high, PCA is adopted to reduce each feature to a 150-dimensional subspace.
Evaluation protocol: Given that each dataset comes with a default training/testing partition, we adopt the same partition for consistency. Additionally, for each dataset, we set aside 1000 instances from the training data to form the validation set (since Corel 5K already provides a default validation partition, it is excluded from this procedure). The specific numbers of instances used for validation can be found in Table 1. For the MNIST and Fashion-MNIST datasets, we adopt the 3-nearest-neighbor classifier to quantify the performance of each compared method, whereas for the CUB-200 and Cars-196 datasets, we report the Recall@K (abbreviated as R@K) performance, where R@K reflects the proportion of correct samples among the K returned samples. More specifically, the R@1, R@2, R@4, and R@8 indices are utilized to measure the performance of each method.
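For completeness, the sketch below computes the standard Recall@K commonly used on CUB-200 and Cars-196, where a query counts as a hit if any of its K nearest neighbours shares its class; the brute-force distance computation and the function name are our own choices rather than the authors' evaluation code.

```python
import numpy as np

def recall_at_k(features, labels, ks=(1, 2, 4, 8)):
    """Recall@K over a gallery in which every sample also serves as a query."""
    labels = np.asarray(labels)
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)                  # never retrieve the query itself
    order = np.argsort(d2, axis=1)                # neighbours sorted by distance
    hits = labels[order] == labels[:, None]       # (n, n) boolean match matrix
    return {k: float(np.mean(hits[:, :k].any(axis=1))) for k in ks}
```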
We report the performance of each method under different labeling rates, namely 5%, 10%, and 30%; the remaining samples in the training set serve as the unlabeled data.
Compared methods: We compare the proposed ISMLP with several state-of-the-art semi-supervised metric learning methods, including LSeMML [22], SERAPH [28], S-RLMM [29], LRML [33], SLRML [19], APIT [21], CMM [1], and APLLR [21]. One supervised metric learning method, LMNN [48], one deep semi-supervised metric learning method entitled SSML-DR [49], and the Euclidean distance (denoted as EUCLID) are also adopted as baselines. The hyper-parameters of all methods are tuned on the validation set, and we choose the parameters that achieve the best results on the validation set. For example, for LMNN, we tune λ from the set {0.1, 0.2, ..., 0.9}. For the proposed method, we empirically set p to 50, choose μ from the range {0.00001, 0.0001, ..., 1000}, tune λ from the range {0.0001, 0.001, ..., 100, 1000}, and choose the number of proxy vectors from {#Class, 2#Class, 3#Class}. For SSML-DR, to make a fair comparison, a three-layer fully connected neural network with 128, 256, and 128 nodes is used as the backbone network. The batch size is set to 100, and the number of epochs is set to 50.

5.2. Classification Experimental Result on MNIST and Fashion-MNIST Datasets

In this section, we test the classification performance of the compared methods based on 3-nearest neighbors on MNIST and Fashion-MNIST datasets. To mitigate the influence of the random partition of the dataset, we repeat each task 30 times, and the mean accuracy and standard deviation are recorded to quantify the performance of each method. Table 2 and Table 3 record the mean error rate and standard deviation of all the methods on MNIST and Fashion-MNIST datasets, respectively.
It is readily seen that the metric learning methods boost the performance of k-nearest-neighbor classification, and all the methods benefit from an increased amount of labeled data. The supervised metric learning method LMNN shows inferior performance compared to the semi-supervised methods, which proves the necessity of utilizing the information of the unlabeled data during the metric learning process. Both SERAPH and ISMLP utilize entropy regularization to preserve the discriminative information of the unlabeled data; unlike SERAPH, ISMLP adopts a set of proxy vectors to substitute the sample–sample probability assignment procedure, which is more robust. As a result, its performance surpasses SERAPH on all tasks. Compared to the Laplacian-graph-based methods (i.e., LSeMML, SLRML, and APIT), ISMLP is free from the unreliable Laplacian-graph construction process and thus usually achieves better performance; ISMLP obtains the best performance on 5/6 tasks. It is curious to see that CMM achieves the worst performance among all the semi-supervised methods; we surmise that its manifold-based regularization term may be the cause, since CMM intends to find a projection direction in which the unlabeled data have maximum variance, which may not increase the discriminative ability of the model. Owing to the powerful nonlinear feature extraction ability of the deep neural network, the performance of SSML-DR consistently surpasses the shallow Laplacian-graph-based methods; however, it still falls behind the proposed ISMLP. We believe this can be attributed to its two-stage Laplacian-graph construction process.
To provide a comprehensive analysis of the time complexity of each method, we list the time complexity of some representative methods in Table 4. Clearly, the time complexity of the proposed ISMLP is linear with respect to the number of instances, whereas the other compared methods exhibit at least quadratic dependence on n. Considering that the usual case in the large-scale semi-supervised setting is d ≪ n, the proposed ISMLP can be trained in a reasonable time. To verify this, we also conduct experiments on MNIST and Fashion-MNIST to compare the training time of each method. Figure 2 displays the results; clearly, the training time of ISMLP is significantly lower than that of the compared methods. More specifically, on the MNIST dataset, it takes about 400 s for ISMLP to train the model, a 6.5× improvement over the second-fastest approach, CMM, which proves the efficiency of the proposed ISMLP.

5.3. Retrieval Performance on Corel 5K Dataset

We also run a retrieval experiment on the Corel 5K dataset with a labeling rate of 30%. Figure 3 documents the performance of the proposed ISMLP and LSeMML. In the first sub-figure, it is evident that ISMLP effectively captures the semantic meaning of the query image, leading to the accurate retrieval of five relevant images. In contrast, LSeMML mistakes the “red” element for the key property of the query image and, unsurprisingly, returns irrelevant images as the 4th and 5th nearest neighbors. The major difference between ISMLP and LSeMML lies in the unlabeled regularization term: LSeMML utilizes the EUCLID metric to mine the manifold information of the unlabeled data, and when the EUCLID metric is not appropriate for measuring the correlations and weights of the features, the resulting graph will be inferior; therefore, sub-optimal results can be observed in the retrieval list. In contrast, the proposed ISMLP utilizes entropy regularization to mine the information of the unlabeled data and makes no assumption about the data distribution; therefore, it can be applied to broader scenarios.
Similar results can also be found in the other sub-figures. Therefore, the retrieval experiment on the Corel 5K dataset verifies the superior performance of ISMLP over the compared LSeMML approach.

5.4. Classification Performance on CUB-200 and Cars-196 Datasets

We further conduct image recognition experiments on two fine-grained datasets to test the classification ability of the proposed ISMLP and its compared methods. The R@K index is utilized to quantify the performance of each method. Table 5 and Table 6 show the classification results on the CUB-200 and Cars-196 datasets, respectively.
We can draw the following conclusions from the tables: (1) All metric learning methods benefit from an increased amount of labeled data; the more labeled data, the higher the recognition performance. Since LMNN can only utilize the labeled data, it easily falls into the trap of over-fitting when the labeling rate is low, and its performance is inferior to that of the semi-supervised metric learning methods under all tasks; this proves the superiority of utilizing the unlabeled data in metric learning. (2) Compared to the manifold-based semi-supervised approaches, the proposed ISMLP makes no assumptions about the smoothness or density of the data; thus, ISMLP can be applied to broader scenarios and achieves better performance. (3) Both SERAPH and ISMLP utilize entropy regularization to mine the discriminative information of the unlabeled data; ISMLP adopts the proxy vectors to construct the probability model, which is more robust than SERAPH, and it obtains better performance across all tasks on the CUB-200 and Cars-196 datasets. (4) SSML-DR obtains competitive results due to its strong hierarchical feature extraction ability. (5) The proposed ISMLP better mines the rich structural information of the unlabeled data; it achieves the best performance on 20 of all 24 tasks, which proves the effectiveness of adopting the proxy vectors as surrogate points.

5.5. Sensitivity Analysis

In this section, we conduct an experiment on the Cars-196 dataset to analyze the sensitivity of the proposed ISMLP to different hyper-parameters. To simplify the experiment, we keep the other parameters fixed when analyzing the current one.
Figure 4a depicts the R@1 accuracy of the proposed ISMLP with different λ when we set μ = 0.001, p = 50, and m = c, where c denotes the number of classes. One can observe that each curve has a turning point, and the smaller the amount of labeled data, the earlier the turning point appears. We conjecture that this can be attributed to the effect of the entropy regularization; either an excessively large or an excessively small λ will lead to a biased model.
Figure 4b shows the sensitivity of ISMLP to μ. We can see that ISMLP is insensitive to the change of μ to some extent. However, setting a large μ forces the learned R to stay close to the prior metric I_p, which prevents ISMLP from learning the correlations and weights of the features in the reduced space.
Figure 4c documents the results for p. Recall that p is the dimensionality of the reduced space, and setting a small p loses a large amount of the information in the original data. As a result, we observe inferior results with small p; however, as p increases, the performance becomes stable. To balance accuracy and computational efficiency, we set p to 50 in all experiments.
We further conduct an experiment to test the sensitivity of ISMLP to the number of proxy vectors and document the results in Figure 4d. The curve shows nearly the same tendency as Figure 4c; when we learn only a small number of proxy vectors, instances from different classes are mixed up together, which undoubtedly degrades the discriminative ability of the model. Increasing the number of proxy vectors boosts the performance to some extent, as it helps to discover the latent patterns within a class; such an idea is also utilized in some cluster-based multi-metric learning methods [30,50]. However, allocating too many proxy vectors costs additional computational resources.

6. Conclusions

In this paper, we propose an efficient information-theoretic semi-supervised metric learning method called ISMLP. By learning a hierarchical form of the Mahalanobis matrix as well as a set of proxy vectors, ISMLP casts the semi-supervised metric learning problem as a probability model. Importantly, the entropy regularization term is adopted to mine the rich information of the unlabeled data. We further show that ISMLP can be efficiently solved via the alternating direction method. Extensive experiments on five large-scale image datasets reveal that (1) the proposed probability model based on proxy vectors can accurately mine the rich information of unlabeled data and is thus beneficial for semi-supervised learning tasks; (2) ISMLP can be trained more efficiently than the semi-supervised learning methods used in the experiments; and (3) ISMLP is relatively insensitive to its hyper-parameters.
Despite its promising results, the proposed ISMLP assumes linear separability of the data, which is often unrealistic due to the presence of complex data structures. Kernel tricks or deep neural networks could be utilized to extract nonlinear features and enhance the performance. In the future, we plan to extend the proposed ISMLP to multi-modal settings to deal with multi-modal input [51,52,53].

Author Contributions

Conceptualization, P.C. and H.W.; methodology, H.W.; software, H.W.; validation, P.C. and H.W.; formal analysis, H.W.; investigation, H.W.; resources, P.C.; data curation, H.W.; writing—original draft preparation, H.W.; writing—review and editing, P.C. and H.W.; visualization, P.C.; supervision, P.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Dalian Science and Technology Innovation Fund 2021JJ12GX028.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All datasets are publicly available.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, F. Semisupervised Metric Learning by Maximizing Constraint Margin. IEEE Trans. Syst. Man Cybern. Part B Cybern. A Publ. IEEE Syst. Man Cybern. Soc. 2011, 41, 931–939. [Google Scholar] [CrossRef]
  2. Bellet, A.; Habrard, A.; Sebban, M. Metric Learning; Springer Nature: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
  3. Wang, H.; Feng, L.; Zhang, J.; Liu, Y. Semantic Discriminative Metric Learning for Image Similarity Measurement. IEEE Trans. Multimed. 2016, 18, 1579–1589. [Google Scholar] [CrossRef]
  4. Feng, L.; Wang, H.; Jin, B.; Li, H.; Xue, M.; Wang, L. Learning a Distance Metric by Balancing KL-Divergence for Imbalanced Datasets. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 2384–2395. [Google Scholar] [CrossRef]
  5. Wang, H.; Wang, Y.; Zhang, Z.; Fu, X.; Zhuo, L.; Xu, M.; Wang, M. Kernelized multiview subspace analysis by self-weighted learning. IEEE Trans. Multimed. 2020, 23, 3828–3840. [Google Scholar] [CrossRef]
  6. Wang, H.; Peng, J.; Chen, D.; Jiang, G.; Zhao, T.; Fu, X. Attribute-guided feature learning network for vehicle reidentification. IEEE Multimed. 2020, 27, 112–121. [Google Scholar]
  7. Wang, H.; Peng, J.; Zhao, Y.; Fu, X. Multi-path deep cnns for fine-grained car recognition. IEEE Trans. Veh. Technol. 2020, 69, 10484–10493. [Google Scholar] [CrossRef]
  8. Liu, Q.; Cao, W.; He, Z. Cycle optimization metric learning for few-shot classification. Pattern Recognit. 2023, 139, 109468. [Google Scholar] [CrossRef]
  9. Holkar, A.; Walambe, R.; Kotecha, K. Few-shot learning for face recognition in the presence of image discrepancies for limited multi-class datasets. Image Vis. Comput. 2022, 120, 104420. [Google Scholar]
  10. Gao, X.; Niu, S.; Wei, D.; Liu, X.; Wang, T.; Zhu, F.; Dong, J.; Sun, Q. Joint metric learning-based class-specific representation for image set classification. IEEE Trans. Neural Netw. Learn. Syst. 2022. [Google Scholar] [CrossRef]
  11. Huang, K.; Wu, S.; Sun, B.; Yang, C.; Gui, W. Metric learning-based fault diagnosis and anomaly detection for industrial data with intraclass variance. IEEE Trans. Neural Netw. Learn. Syst. 2022. [Google Scholar] [CrossRef]
  12. Gui, X.; Zhang, J.; Tang, J.; Xu, H.; Zou, J.; Fan, S. A Quadruplet Deep Metric Learning model for imbalanced time-series fault diagnosis. Knowl.-Based Syst. 2022, 238, 107932. [Google Scholar] [CrossRef]
  13. Schroff, F.; Kalenichenko, D.; Philbin, J. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 815–823. [Google Scholar]
  14. Peng, J.; Jiang, G.; Wang, H. Adaptive Memorization with Group Labels for Unsupervised Person Re-identification. IEEE Trans. Circuits Syst. Video Technol. 2023, 1. [Google Scholar] [CrossRef]
  15. Wang, H.; Peng, J.; Jiang, G.; Xu, F.; Fu, X. Discriminative feature and dictionary learning with part-aware model for vehicle re-identification. Neurocomputing 2021, 438, 55–62. [Google Scholar] [CrossRef]
  16. Wang, Y.; Peng, J.; Wang, H.; Wang, M. Progressive learning with multi-scale attention network for cross-domain vehicle re-identification. Sci. China Inf. Sci. 2022, 65, 160103. [Google Scholar] [CrossRef]
  17. Liu, W.; Ma, S.; Tao, D.; Liu, J.; Liu, P. Semi-supervised sparse metric learning using alternating linearization optimization. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 25–28 July 2010; pp. 1139–1148. [Google Scholar]
  18. Baghshah, M.S.; Shouraki, S.B. Semi-supervised metric learning using pairwise constraints. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, Pasadena, CA, USA, 11–17 July 2009; pp. 1217–1222. [Google Scholar]
  19. Liang, J.; Zhu, P.; Dang, C.; Hu, Q. Semisupervised Laplace-Regularized Multimodality Metric Learning. IEEE Trans. Cybern. 2020, 52, 2955–2967. [Google Scholar] [CrossRef]
  20. Ying, S.; Wen, Z.; Shi, J.; Peng, Y.; Peng, J.; Qiao, H. Manifold preserving: An intrinsic approach for semisupervised distance metric learning. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 2731–2742. [Google Scholar] [CrossRef]
  21. Kr Dutta, U.; Chandra Sekhar, C. Affinity Propagation Based Closed-Form Semi-supervised Metric Learning Framework. In Proceedings of the Artificial Neural Networks and Machine Learning–ICANN 2018: 27th International Conference on Artificial Neural Networks, Rhodes, Greece, 4–7 October 2018; Proceedings, Part I 27. Springer: Berlin/Heidelberg, Germany, 2018; pp. 556–565. [Google Scholar]
  22. Sun, P.; Yang, L. Low-rank supervised and semi-supervised multi-metric learning for classification. Knowl.-Based Syst. 2022, 236, 107787. [Google Scholar] [CrossRef]
  23. Roweis, S.T.; Saul, L.K. Nonlinear dimensionality reduction by locally linear embedding. Science 2000, 290, 2323–2326. [Google Scholar] [CrossRef] [Green Version]
  24. Wang, H.; Jiang, G.; Peng, J.; Deng, R.; Fu, X. Towards Adaptive Consensus Graph: Multi-view Clustering via Graph Collaboration. IEEE Trans. Multimed. 2022, 1–13. [Google Scholar] [CrossRef]
  25. Jiang, G.; Peng, J.; Wang, H.; Mi, Z.; Fu, X. Tensorial Multi-View Clustering via Low-Rank Constrained High-Order Graph Learning. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 5307–5318. [Google Scholar] [CrossRef]
  26. Yin, Y.; Shah, R.R.; Zimmermann, R. Learning and fusing multimodal deep features for acoustic scene categorization. In Proceedings of the 26th ACM International Conference on Multimedia, Seoul, Republic of Korea, 22–26 October 2018; pp. 1892–1900. [Google Scholar]
  27. Wang, H.; Yao, M.; Jiang, G.; Mi, Z.; Fu, X. Graph-Collaborated Auto-Encoder Hashing for Multiview Binary Clustering. IEEE Trans. Neural Netw. Learn. Syst. 2023, 1–13. [Google Scholar] [CrossRef]
  28. Niu, G.; Dai, B.; Yamada, M.; Sugiyama, M. Information-theoretic semi-supervised metric learning via entropy regularization. Neural Comput. 2014, 26, 1717–1762. [Google Scholar] [CrossRef]
  29. Li, Y.; Tian, X.; Tao, D. Regularized large margin distance metric learning. In Proceedings of the 2016 IEEE 16th International Conference on Data Mining, Barcelona, Spain, 12–15 December 2016; pp. 1015–1022. [Google Scholar]
  30. Ye, H.J.; Zhan, D.C.; Li, N.; Jiang, Y. Learning multiple local metrics: Global consideration helps. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 1698–1712. [Google Scholar] [CrossRef]
  31. Movshovitz-Attias, Y.; Toshev, A.; Leung, T.K.; Ioffe, S.; Singh, S. No fuss distance metric learning using proxies. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 360–368. [Google Scholar]
  32. Goldberger, J.; Hinton, G.E.; Roweis, S.; Salakhutdinov, R.R. Neighbourhood components analysis. Adv. Neural Inf. Process. Syst. 2004, 17. [Google Scholar]
  33. Hoi, S.C.; Liu, W.; Chang, S.F. Semi-supervised distance metric learning for collaborative image retrieval and clustering. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM) 2010, 6, 1–26. [Google Scholar] [CrossRef]
  34. Deng, H.; Meng, X.; Deng, F.; Feng, L. UNIT: A unified metric learning framework based on maximum entropy regularization. Appl. Intell. 2023, 1–21. [Google Scholar] [CrossRef]
  35. Chapelle, O.; Zien, A. Semi-supervised classification by low density separation. In Proceedings of the International Workshop on Artificial Intelligence and Statistics, PMLR, Bridgetown, Barbados, 6–8 January 2005; pp. 57–64. [Google Scholar]
  36. Grandvalet, Y.; Bengio, Y. Semi-supervised learning by entropy minimization. Adv. Neural Inf. Process. Syst. 2004, 17. [Google Scholar]
  37. Harandi, M.; Salzmann, M.; Hartley, R. Joint dimensionality reduction and metric learning: A geometric take. In Proceedings of the International Conference on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017; pp. 1404–1413. [Google Scholar]
  38. Davis, J.V.; Kulis, B.; Jain, P.; Sra, S.; Dhillon, I.S. Information-theoretic metric learning. In Proceedings of the 24th International Conference on Machine Learning, Corvalis, OR, USA, 20–24 June 2007; pp. 209–216. [Google Scholar]
  39. Absil, P.A.; Mahony, R.; Sepulchre, R. Optimization Algorithms on Matrix Manifolds; Princeton University Press: Princeton, NJ, USA, 2008. [Google Scholar]
  40. Lee, J.M.; Lee, J.M. Smooth Manifolds; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  41. Bonnabel, S. Stochastic gradient descent on Riemannian manifolds. IEEE Trans. Autom. Control 2013, 58, 2217–2229. [Google Scholar] [CrossRef] [Green Version]
  42. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  43. Xiao, H.; Rasul, K.; Vollgraf, R. Fashion-mnist: A novel image dataset for benchmarking machine learning algorithms. arXiv 2017, arXiv:1708.07747. [Google Scholar]
  44. Duygulu, P.; Barnard, K.; de Freitas, J.F.; Forsyth, D.A. Object recognition as machine translation: Learning a lexicon for a fixed image vocabulary. In Proceedings of the 7th European Conference on Computer Vision, Copenhagen, Denmark, 28–31 May 2002; pp. 97–112. [Google Scholar]
  45. Welinder, P.; Branson, S.; Mita, T.; Wah, C.; Schroff, F.; Belongie, S.; Perona, P. Caltech-UCSD Birds 200; California Institute of Technology: Pasadena, CA, USA, 2010. [Google Scholar]
  46. Krause, J.; Stark, M.; Deng, J.; Fei-Fei, L. 3d object representations for fine-grained categorization. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Sydney, NSW, Australia, 2–8 December 2013; pp. 554–561. [Google Scholar]
  47. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  48. Weinberger, K.Q.; Saul, L.K. Distance metric learning for large margin nearest neighbor classification. J. Mach. Learn. Res. 2009, 10, 207–244. [Google Scholar]
  49. Dutta, U.K.; Harandi, M.; Shekhar, C.C. Semi-supervised metric learning: A deep resurrection. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual Event, 2–9 February 2021; Volume 35, pp. 7279–7287. [Google Scholar]
  50. Nguyen, B.; Ferri, F.J.; Morell, C.; De Baets, B. An efficient method for clustered multi-metric learning. Inf. Sci. 2019, 471, 149–163. [Google Scholar] [CrossRef]
  51. Wang, Y.; Zhang, W.; Wu, L.; Lin, X.; Fang, M.; Pan, S. Iterative Views Agreement: An Iterative Low-Rank based Structured Optimization Method to Multi-View Spectral Clustering. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), New York, NY, USA, 9–15 July 2016; pp. 2153–2159. [Google Scholar]
  52. Wang, Y. Survey on deep multi-modal data analytics: Collaboration, rivalry, and fusion. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM) 2021, 17, 1–25. [Google Scholar]
  53. Deng, H.; Meng, X.; Wang, H.; Feng, L. Hierarchical multi-view metric learning with HSIC regularization. Neurocomputing 2022, 510, 135–148. [Google Scholar] [CrossRef]
Figure 1. The variations of the normalized objective function values of the proposed method with the number of iterations on MNIST, Corel 5K, and Cars-196. The loss value decreases with the number of iterations and tends to become stable after several iterations. (a) MNIST; (b) Corel 5K; (c) Cars-196.
Figure 2. The training time of different methods on two image datasets: (a) displays the training time of different methods on MNIST dataset; (b) shows the result on Fashion-MNIST dataset.
Figure 3. The typical retrieval result of the proposed ISMLP and LSeMML approach on the Corel 5K dataset with a training labeling rate of 30%. The leftmost figure denotes the query image and the first row of each sub-figure displays the result of the proposed method, whereas the second row shows the result of LSeMML. The green checkmark means the right retrieval result while the red cross means the wrong results.
Figure 4. The R@1 accuracy under different parameters. We keep the other parameters fixed when analyzing the current one. (a) λ ; (b) μ ; (c) Reduced dimensionality; (d) Number of proxy vectors.
Table 1. The detailed information of datasets used in the experiment.
Dataset | Type | Class | Instance | Feature | Train, Validation, Test
MNIST | Image | 10 | 70,000 | 784 | 50,000, 10,000, 10,000
Fashion-MNIST | Image | 10 | 70,000 | 784 | 50,000, 10,000, 10,000
Corel 5K | Image | 50 | 5000 | 2048 | 4000, 500, 500
CUB-200 | Image | 200 | 11,788 | 2048 | 4994, 1000, 5794
Cars-196 | Image | 196 | 16,185 | 2048 | 7144, 1000, 8041
Table 2. Classification performance (mean error rate and standard deviation) of all the compared methods based on 3-NN with varying labeling rates on the MNIST dataset; the best performance of each task is marked with boldface.
Labeling rate | EUCLID | LSeMML | SERAPH | S-RLMM | LRML | SLRML | APIT | CMM | APLLR | LMNN | SSML-DR | ISMLP
5% labeled data | 0.296 ± 0.005 | 0.228 ± 0.010 | 0.231 ± 0.014 | 0.241 ± 0.027 | 0.221 ± 0.011 | 0.208 ± 0.008 | 0.228 ± 0.010 | 0.245 ± 0.021 | 0.209 ± 0.022 | 0.261 ± 0.022 | 0.200 ± 0.013 | 0.211 ± 0.018
10% labeled data | 0.240 ± 0.007 | 0.191 ± 0.013 | 0.184 ± 0.011 | 0.211 ± 0.016 | 0.201 ± 0.010 | 0.221 ± 0.015 | 0.198 ± 0.011 | 0.227 ± 0.017 | 0.189 ± 0.009 | 0.231 ± 0.021 | 0.175 ± 0.012 | 0.170 ± 0.015
30% labeled data | 0.186 ± 0.006 | 0.131 ± 0.012 | 0.140 ± 0.007 | 0.127 ± 0.014 | 0.153 ± 0.012 | 0.130 ± 0.009 | 0.146 ± 0.008 | 0.150 ± 0.016 | 0.128 ± 0.007 | 0.143 ± 0.010 | 0.120 ± 0.016 | 0.116 ± 0.011
Table 3. Classification performance (mean error rate and standard deviation) of all the compared methods based on 3-NN with varying labeling rates on the Fashion-MNIST dataset; the best performance of each task is marked with boldface.
Labeling rate | EUCLID | LSeMML | SERAPH | S-RLMM | LRML | SLRML | APIT | CMM | APLLR | LMNN | SSML-DR | ISMLP
5% labeled data | 0.355 ± 0.008 | 0.281 ± 0.017 | 0.289 ± 0.014 | 0.291 ± 0.017 | 0.295 ± 0.012 | 0.278 ± 0.008 | 0.298 ± 0.012 | 0.302 ± 0.012 | 0.285 ± 0.013 | 0.292 ± 0.019 | 0.248 ± 0.010 | 0.252 ± 0.014
10% labeled data | 0.281 ± 0.009 | 0.242 ± 0.012 | 0.248 ± 0.011 | 0.237 ± 0.012 | 0.241 ± 0.017 | 0.239 ± 0.007 | 0.247 ± 0.013 | 0.258 ± 0.016 | 0.227 ± 0.012 | 0.261 ± 0.010 | 0.218 ± 0.009 | 0.210 ± 0.012
30% labeled data | 0.235 ± 0.006 | 0.180 ± 0.014 | 0.172 ± 0.009 | 0.178 ± 0.012 | 0.172 ± 0.011 | 0.181 ± 0.008 | 0.187 ± 0.011 | 0.191 ± 0.013 | 0.188 ± 0.012 | 0.192 ± 0.010 | 0.168 ± 0.011 | 0.162 ± 0.012
Table 4. The time complexity of several typical semi-supervised learning methods, where |P| = |S| + |D|, r is the number of iterations in each method, and c (in APIT) denotes the number of inner iterations. Clearly, the time complexity of the proposed ISMLP has a linear dependence on n.
Method | LSeMML | SERAPH | S-RLMM | LRML | SLRML | APIT | CMM | ISMLP
Time complexity | O(n² log n + |P|d²) | O((n²d + d³)r) | O((n²d² + d³)r) | O(n² log n + d³) | O(n²d + d³) | O(n³ + cd²) | O(n² + |P|d²) | O(mndp² + d³)
Table 5. The performance of the proposed ISMLP and compared methods on the CUB-200 dataset with varying labeling rates. The best performance under each index is marked in bold.
Method | 5% Labeled Data (R@1 / R@2 / R@4 / R@8) | 10% Labeled Data (R@1 / R@2 / R@4 / R@8) | 30% Labeled Data (R@1 / R@2 / R@4 / R@8)
EUCLID | 25.75 / 29.82 / 32.73 / 34.82 | 26.85 / 32.57 / 34.12 / 36.45 | 29.68 / 32.22 / 36.90 / 39.29
LSeMML | 32.84 / 34.54 / 36.90 / 38.73 | 34.53 / 36.72 / 38.13 / 40.65 | 35.90 / 37.81 / 39.72 / 42.20
SERAPH | 33.11 / 35.61 / 37.81 / 39.98 | 35.02 / 36.87 / 38.42 / 41.97 | 37.83 / 39.24 / 42.69 / 44.73
S-RLMM | 32.83 / 34.81 / 36.31 / 38.71 | 34.63 / 36.59 / 38.20 / 40.21 | 38.80 / 40.68 / 42.98 / 43.73
LRML | 31.10 / 33.68 / 36.83 / 38.50 | 33.76 / 35.80 / 37.81 / 39.69 | 37.33 / 39.84 / 42.38 / 44.16
SLRML | 33.63 / 34.19 / 37.24 / 39.42 | 35.82 / 37.67 / 39.29 / 42.19 | 37.90 / 39.19 / 40.57 / 42.78
APIT | 32.52 / 34.57 / 37.99 / 38.68 | 34.68 / 36.85 / 39.09 / 36.98 | 37.81 / 40.83 / 42.68 / 44.73
CMM | 32.13 / 34.11 / 36.73 / 38.10 | 34.82 / 37.40 / 37.49 / 39.80 | 37.86 / 40.13 / 41.68 / 42.68
APLLR | 31.16 / 33.29 / 35.73 / 37.96 | 33.24 / 36.76 / 39.85 / 42.48 | 36.66 / 39.83 / 42.08 / 45.71
LMNN | 29.71 / 32.90 / 35.84 / 37.80 | 31.84 / 33.59 / 37.85 / 39.84 | 33.83 / 35.49 / 39.90 / 41.83
SSML-DR | 34.00 / 35.95 / 38.34 / 40.50 | 36.84 / 39.00 / 43.19 / 45.54 | 38.26 / 42.18 / 45.40 / 48.21
ISMLP | 34.42 / 36.82 / 38.90 / 41.70 | 36.99 / 39.24 / 44.45 / 46.02 | 39.84 / 42.80 / 45.61 / 48.90
Table 6. The performance of the proposed ISMLP and compared methods on the Cars-196 dataset with varying labeling rates. The best performance under each index is marked in bold.
Method | 5% Labeled Data (R@1 / R@2 / R@4 / R@8) | 10% Labeled Data (R@1 / R@2 / R@4 / R@8) | 30% Labeled Data (R@1 / R@2 / R@4 / R@8)
EUCLID | 24.63 / 28.16 / 31.34 / 34.73 | 25.68 / 30.65 / 33.87 / 35.71 | 28.56 / 31.89 / 34.17 / 38.81
LSeMML | 33.26 / 34.54 / 37.15 / 39.27 | 34.58 / 37.18 / 39.46 / 42.17 | 36.19 / 39.43 / 41.87 / 44.98
SERAPH | 32.19 / 34.61 / 36.25 / 39.89 | 35.18 / 37.26 / 39.48 / 43.16 | 37.30 / 39.56 / 42.43 / 45.25
S-RLMM | 34.37 / 36.81 / 37.18 / 39.96 | 36.58 / 38.78 / 40.41 / 44.34 | 38.48 / 41.68 / 44.15 / 47.28
LRML | 32.82 / 35.33 / 36.83 / 37.41 | 33.72 / 35.41 / 37.19 / 39.71 | 38.18 / 41.29 / 43.21 / 45.87
SLRML | 34.37 / 36.19 / 38.98 / 40.57 | 35.42 / 38.42 / 40.76 / 43.87 | 37.57 / 39.58 / 42.81 / 46.10
APIT | 33.46 / 35.72 / 37.99 / 39.51 | 34.71 / 36.78 / 39.18 / 41.57 | 38.24 / 41.58 / 43.79 / 46.28
CMM | 34.19 / 35.45 / 36.73 / 39.10 | 35.88 / 38.04 / 40.00 / 43.06 | 37.60 / 39.87 / 41.81 / 45.28
APLLR | 32.67 / 35.34 / 35.73 / 37.76 | 33.26 / 36.87 / 39.19 / 42.62 | 36.62 / 39.89 / 42.28 / 45.78
LMNN | 31.30 / 33.53 / 35.84 / 36.92 | 32.19 / 34.48 / 36.49 / 38.84 | 34.39 / 36.62 / 40.19 / 44.17
SSML-DR | 35.35 / 37.53 / 39.47 / 41.38 | 37.01 / 39.46 / 42.87 / 44.59 | 39.84 / 42.69 / 45.92 / 47.25
ISMLP | 34.01 / 36.52 / 38.72 / 40.29 | 37.11 / 40.38 / 43.48 / 45.81 | 40.19 / 43.58 / 46.39 / 48.41
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Chen, P.; Wang, H. Efficient Information-Theoretic Large-Scale Semi-Supervised Metric Learning via Proxies. Appl. Sci. 2023, 13, 8993. https://doi.org/10.3390/app13158993
