*Article* **Fast Approximation for Sparse Coding with Applications to Object Recognition**

**Zhenzhen Sun and Yuanlong Yu \***

The College of Mathematics and Computer Science, Fuzhou University, Fuzhou 350116, China; zhenzhen\_sun@foxmail.com

**\*** Correspondence: yu.yuanlong@fzu.edu.cn

**Abstract:** Sparse Coding (SC) has been widely studied and has shown its superiority in the fields of signal processing, statistics, and machine learning. However, due to the high computational cost of the optimization algorithms required to compute the sparse feature, the applicability of SC to real-time object recognition tasks is limited. Many deep neural networks have been constructed to quickly estimate the sparse feature with the help of a large number of training samples, which is not suitable for small-scale datasets. Therefore, this work presents a simple and efficient fast approximation method for SC, in which a special single-hidden-layer neural network (SLNNs) is constructed to perform the approximation task, and the optimal sparse features of training samples, exactly computed by a sparse coding algorithm, are used as ground truth to train the SLNNs. After training, the proposed SLNNs can quickly estimate sparse features for testing samples. Ten benchmark datasets taken from the UCI databases and two face image datasets are used for the experiments, and the low root mean square error (RMSE) between the approximated sparse features and the optimal ones verifies the approximation performance of the proposed method. Furthermore, the recognition results demonstrate that the proposed method effectively reduces the computational time of the testing process while maintaining the recognition performance, and outperforms several state-of-the-art fast approximation sparse coding methods as well as the exact sparse coding algorithms.

### **1. Introduction**

Object recognition is a fundamental problem in machine learning and has been widely researched for many years. The performance of object recognition methods largely relies on feature representation. Traditional methods used handcrafted features to represent objects, e.g., the scale-invariant feature transform (SIFT) [1] and histograms of oriented gradients (HOG) [2]. Biological findings [3,4] suggest that learning a sparse representation is beneficial for object recognition, because mapping features from a low-dimensional space to a high-dimensional space makes them more likely to be linearly separable. Therefore, many sparse coding (SC) algorithms have been proposed to learn a good sparse representation for natural signals [5–7].

In general, SC is the problem of reconstructing an input signal as a linear combination of the atoms of an over-complete dictionary with sparse coefficients. That is, for an observed signal *x* ∈ *R<sup>p</sup>* and an over-complete dictionary *D* ∈ *R<sup>p×K</sup>* (*p* ≪ *K*), SC aims to find a representation *α* ∈ *R<sup>K</sup>* that reconstructs *x* using only a small number of atoms chosen from *D*. The problem of SC is formulated as

$$\min_{\alpha} \; ||x - D\alpha||_2^2 + \lambda ||\alpha||_0, \tag{1}$$

where the *l*0-norm is defined as the number of non-zero elements of *α*, and *λ* is the regularization factor. Several optimization algorithms have been proposed for the numerical solution of (1). However, the high computational cost induced by these optimization

**Citation:** Sun, Z.; Yu, Y. Fast Approximation for Sparse Coding with Applications to Object Recognition. *Sensors* **2021**, *21*, 1442. https://doi.org/10.3390/s21041442

Academic Editor: Antonio Guerrieri Received: 19 December 2020 Accepted: 17 February 2021 Published: 19 February 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

algorithms is a major drawback for real-time applications, especially when a large-sized dictionary is used.

To get rid of this problem, many works focusing on fast approximation for sparse coding have been proposed. Kavukcuoglu et al. [8] proposed a method named Predictive Sparse Decomposition (PSD) that uses a non-linear regressor to approximate the sparse feature, and applied it to object recognition. However, the predictor is simplistic and produces a crude approximation, and training the regressor is somewhat time-consuming because of the gradient descent training method. Recently, deep learning has shown widespread success on many inference problems, which provides another way to design fast approximation methods for sparse coding algorithms. The idea was first proposed by Gregor et al. [9], who constructed two deep learning networks to approximate the iterative soft thresholding and coordinate descent algorithms, leading to the so-called LISTA and LCoD methods, respectively. LISTA has shown its superiority in computation and approximation, and many recent variants of LISTA have been proposed for miscellaneous applications; see [10,11] for some examples. Inspired by [9], many fast approximation sparse coding methods based on deep learning have been proposed and have shown their effectiveness in unfolding the corresponding sparse coding algorithms, e.g., LAMP [12] and LVAMP [12].

Though these methods perform well on large-scale datasets, they have three defects. First, they are not suitable for small-scale datasets, in which the number of training samples is far less than ten thousand. The performance of a deep neural network is sensitive to the scale of the training data: when the number of training samples is small, the deep network model is over-parameterized and may over-fit. Second, deep networks involve many hyper-parameters; their training requires large computational and storage resources because of gradient-based back-propagation, and it easily gets stuck in a local optimum. Last but not least, each deep network architecture is designed only for its corresponding sparse coding algorithm and cannot be generalized to other algorithms. Therefore, the extendibility of these methods is limited.

To solve the problems mentioned above, a simple and effective fast approximation sparse coding method is proposed in this paper for object recognition on small-scale datasets. Differing from the deep learning-based methods, a special single-hidden-layer neural network (SLNNs) is constructed to perform the approximation task, and the training of this SLNNs can be easily implemented by the least squares method. The proposed method includes two steps. In the first step, the optimal sparse features of the training samples are exactly computed by a sparse coding algorithm (in this paper, the homotopy iterative hard thresholding (HIHT) algorithm [13] is used); in the second step, the optimal sparse features are used as ground truth to train the specially constructed SLNNs. After training, the input layer and hidden layer of this SLNNs implement the non-linear feature mapping from the input space to the sparse feature space, which involves only a simple inner product calculation followed by a non-linear activation function. Therefore, the sparse features of new samples can be estimated quickly. Ten benchmark datasets taken from the UCI databases and two face image datasets are used to validate the proposed method, and the root mean square error (RMSE) results on testing data verify its approximation performance. Furthermore, the approximated sparse features have been applied to the object recognition task, and the recognition results demonstrate that the proposed method is beneficial for object recognition in terms of recognition accuracy and testing time.

The main contributions of this paper are summarized as follows:


3. Experimental results on ten benchmark UCI datasets and two face image datasets show that our approach is more effective than current state-of-the-art deep learning-based fast approximation sparse coding methods in terms of RMSE, recognition accuracy, and testing time.

The remainder of this paper is organized as follows. Section 2 briefly reviews the sparse coding algorithms and fast approximation sparse coding methods. Section 3 details the proposed method. Section 4 describes implementation details and presents experimental results. Finally, conclusions are given in Section 5.

#### **2. Related Work**

#### *2.1. Sparse Coding Algorithms*

As described in Section 1, the problem of SC can be formulated as problem (1). However, problem (1) is NP-hard and difficult to solve directly. There are three common families of approximations/relaxations: (1) iterative greedy algorithms [14–16]; (2) *l*1-norm convex relaxation methods, known as basis pursuit (BP) [17]; (3) *lp*-norm (0 < *p* < 1) relaxation methods [18–22]. Among these, BP has been studied most widely; it replaces the *l*0-norm with the *l*1-norm to obtain a convex relaxation of problem (1), i.e.,

$$\min_{\alpha} \; ||x - D\alpha||_2^2 + \lambda ||\alpha||_1, \tag{2}$$

where the *l*1-norm is defined as the sum of absolute values of all elements of *α*.

BP methods were proven to give the same solutions as (1) when the dictionary satisfies the Restricted Isometry Property (RIP) condition [23,24]. Many works have been proposed to solve problem (2) efficiently; [25] provides a comprehensive review of five representative algorithms, namely *Gradient Projection* (GP) [26,27], *Homotopy* [28,29], the *Iterative Shrinkage-Thresholding Algorithm* (ISTA) [30–32], *Proximal Gradient* (PG) [33,34], and the *Augmented Lagrange Multiplier* (ALM) method [35]. Among these, ISTA is the most popular, and many heuristic strategies have been proposed to reduce its computational time, e.g., TwIST [36] and FISTA [33]. Recently, a pathwise coordinate optimization method called PICASSO [37–39] has been proposed to solve the *lp* (0 < *p* ≤ 1) regularized least squares problem, showing superior empirical performance compared with the other state-of-the-art sparse coding algorithms mentioned above.
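To make the ISTA iteration concrete, a minimal NumPy sketch for problem (2) is given below; the function names, fixed step size, and iteration count are our own illustrative choices, not taken from the cited works:

```python
import numpy as np

def soft_threshold(v, t):
    """Element-wise soft thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(x, D, lam, n_iter=200):
    """Minimize ||x - D a||_2^2 + lam * ||a||_1 by iterative soft thresholding."""
    L = 2 * np.linalg.norm(D, 2) ** 2   # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = 2 * D.T @ (D @ a - x)    # gradient of the squared-error term
        a = soft_threshold(a - grad / L, lam / L)
    return a
```

FISTA [33] accelerates this loop by adding a momentum step; the per-iteration cost is dominated by the two matrix-vector products with *D*.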

Although satisfactory results can be achieved by the approximation/relaxation methods, the *l*0-norm is more desirable from the sparsity perspective. In recent years, researchers have attempted to solve problem (1) directly, with iterative hard thresholding (IHT) [13,40,41] being the most popular method. The IHT methods have strong theoretical guarantees, and extensive experimental results show that they can improve sparse representation reconstruction.

#### *2.2. Fast Approximation for Sparse Coding*

The sparse coding algorithms mentioned in Section 2.1 involve many iterative operations, which induces high computational cost and prohibits their use in real-time applications. To get rid of this problem, some research focusing on fast approximation for sparse coding has been proposed. Kavukcuoglu et al. [8] proposed the PSD method to approximate sparse coding algorithms using a non-linear regressor. Inspired by this, Chalasani et al. [42] extended PSD to estimate convolutional sparse features. However, the approximation performance of a non-linear regressor is limited. With the development of deep learning, some researchers have constructed deep networks to solve the fast approximation sparse coding problem. Given a large set of training examples {(*xi*, *αi*)}<sup>*N*</sup><sub>*i*=1</sub>, a multi-layer neural network is optimized to minimize the mean squared error between the network outputs and {*αi*}<sup>*N*</sup><sub>*i*=1</sub>. After training, the approximate sparse representation of a new signal *xnew* can be quickly predicted by the deep network. The idea was first proposed by Gregor et al. [9], who constructed two deep learning networks to approximate the

iterative soft thresholding and coordinate descent algorithms, leading to the so-called LISTA and LCoD methods, respectively. Inspired by [9], Xin et al. [43] translated the iterative hard thresholding algorithm into a deep learning framework. Borgerding et al. [12] proposed two deep neural-network architectures to unfold the approximate message passing (AMP) algorithm [44] and the "vector AMP" (VAMP) algorithm [45], namely LAMP and LVAMP. In [46], the authors proposed a deep learning framework for approximating the sparse representation of a signal with the aid of a correlated signal, the so-called side information. The learned deep networks perform steps similar to those of the corresponding sparse coding algorithms; moreover, the trained networks effectively reduce the computational cost of calculating the sparse representation of new samples, which is critical in large-scale data settings and real-time applications.

#### **3. Materials and Methods**

#### *3.1. Homotopy Iterative Hard Thresholding Algorithm*

The homotopy iterative hard thresholding (HIHT) algorithm [13] is an extension of IHT for the *l*0-norm regularized problem

$$\min_{\alpha} \; \theta_{\lambda}(\alpha) = f(\alpha) + \lambda ||\alpha||_0, \tag{3}$$

where $f(\alpha) = \frac{1}{2}||x - D\alpha||_2^2$ is a differentiable convex function whose gradient $\nabla f(\alpha)$ is Lipschitz continuous with constant *Lf* > 0. Therefore, *α* can be iteratively updated by the projected gradient method

$$\alpha^{k+1} = \arg\min_{\alpha} \; f(\alpha^k) + \nabla f(\alpha^k)^T (\alpha - \alpha^k) + \frac{L}{2} ||\alpha - \alpha^k||_2^2, \tag{4}$$

where *L* is a constant that should satisfy *L* > *Lf*.

Adding *λ*||*α*||<sub>0</sub> to both sides of (4), the solution of (3) can be obtained by iteratively solving the subproblem

$$\begin{split} \alpha^{k+1} &= \arg\min_{\alpha} \; p_{L,\lambda}(\alpha^{k}, \alpha) \\ &= \arg\min_{\alpha} \; f(\alpha^{k}) + \nabla f(\alpha^{k})^{T}(\alpha - \alpha^{k}) + \frac{L}{2}||\alpha - \alpha^{k}||_{2}^{2} + \lambda||\alpha||_{0}. \end{split} \tag{5}$$

Problem (5) is equivalent to the following (after removing or adding constant terms independent of *α*):

$$\min_{\alpha} \; \frac{L}{2} \left[ ||\alpha - (\alpha^k - \frac{1}{L} \nabla f(\alpha^k))||_2^2 + \frac{2\lambda}{L} ||\alpha||_0 \right]. \tag{6}$$

If we denote

$$T_L(\alpha^{k+1}) = \underset{\alpha}{\arg\min} \; p_{L,\lambda}(\alpha^k, \alpha), \tag{7}$$

then the closed-form solution *TL*(*α<sup>k+1</sup>*) of (7) is given by the following lemma.

**Lemma 1.** *([32,41]). The solution TL*(*α<sup>k+1</sup>*) *of* (7) *is given by*

$$[T_L(\alpha^{k+1})]_i = \begin{cases} [s_L(\alpha^k)]_i, & \text{if } [s_L(\alpha^k)]_i^2 > \frac{2\lambda}{L}; \\ 0, & \text{if } [s_L(\alpha^k)]_i^2 \le \frac{2\lambda}{L}, \end{cases} \tag{8}$$

*where sL*(*α*) = *α* − (1/*L*)∇*f*(*α*)*, and* [·]*<sub>i</sub>* *refers to the i-th element of a vector.*

In (8), the parameter *L* needs to be tuned. Since an upper bound on the Lipschitz constant *Lf* is unknown or may not be easy to calculate, we use the line search method suggested in [41] to find *L* such that the objective value decreases.
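The update of Lemma 1 amounts to one gradient step on *f* followed by element-wise hard thresholding; a minimal NumPy sketch (the function name is ours) is:

```python
import numpy as np

def hard_threshold_step(a, x, D, L, lam):
    """One update T_L of Lemma 1: a gradient step on f(a) = 0.5*||x - D a||_2^2
    followed by element-wise hard thresholding at level 2*lam/L."""
    s = a - (D.T @ (D @ a - x)) / L     # s_L(a) = a - (1/L) * grad f(a)
    s[s ** 2 <= 2 * lam / L] = 0.0      # keep only entries with [s]_i^2 > 2*lam/L
    return s
```

Each call costs two matrix-vector products with *D*, so the per-iteration cost is the same as for soft-thresholding methods.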

*Homotopy Strategy*: many works [13,26,39] have verified that sparse coding approaches benefit from a good starting point. Therefore, we use a recursive process that automatically tunes the regularization factor *λ*. This process begins from a large initial value *λ*<sup>0</sup>. At the end of the *λ*-tuning iteration indexed by *k*, an optimal solution *α<sup>k</sup>* is obtained for the current *λ<sup>k</sup>*. Then *λ* is updated as *λ<sup>k+1</sup>* = *ρλ<sup>k</sup>*, where *ρ* ∈ (0, 1), and *α<sup>k</sup>* is used as the initial solution for iteration *k* + 1. The process stops once *λ* is small enough, i.e., *λ<sup>k</sup>* ≤ *λ<sub>target</sub>* for a given positive lower bound *λ<sub>target</sub>*. An outline of the HIHT algorithm is given in Algorithm 1.

**Algorithm 1** {*α*\*, *L*\*} ← HIHT(*L*<sub>0</sub>, *λ*<sub>0</sub>, *α*<sup>0</sup>)

```
Input:  L0, λ0, α^0, D, Lmin, Lmax              // L0 ∈ [Lmin, Lmax]
Output: α*, L*
initialize ρ ∈ (0, 1), η > 0, γ > 1, ε > 0, k ← 0
repeat                                           // a λ-tuning iteration indexed by k
    i ← 0;  α^{k,0} ← α^k;  L_{k,0} ← L_k
    repeat                                       // an L-tuning iteration indexed by i
        α^{k,i+1} ← T_{L_{k,i}}(α^{k,i})
        while θ_{λ_k}(α^{k,i}) − θ_{λ_k}(α^{k,i+1}) < (η/2)·‖α^{k,i} − α^{k,i+1}‖₂² do
            L_{k,i} ← min{γ·L_{k,i}, Lmax}
            α^{k,i+1} ← T_{L_{k,i}}(α^{k,i})
        end while
        L_{k,i+1} ← L_{k,i};  i ← i + 1
    until ‖α^{k,i−1} − α^{k,i}‖₂² ≤ ε
    α^{k+1} ← α^{k,i};  L_{k+1} ← L_{k,i}
    λ_{k+1} ← ρ·λ_k;  k ← k + 1
until λ_{k+1} ≤ λ_target
α* ← α^k;  L* ← L_k
```
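A compact Python sketch of the homotopy loop of Algorithm 1 follows; the default parameter values are illustrative assumptions, since the algorithm leaves them to the user:

```python
import numpy as np

def hiht(x, D, lam0=1.0, lam_target=1e-4, rho=0.5, L0=1.0,
         eta=1e-4, gamma=2.0, L_max=1e8, eps=1e-8, max_inner=500):
    """Homotopy IHT following Algorithm 1 (parameter defaults are our own)."""
    def f(a):
        return 0.5 * np.sum((x - D @ a) ** 2)

    def theta(a, lam):                      # objective of problem (3)
        return f(a) + lam * np.count_nonzero(a)

    def T(a, L, lam):                       # hard-thresholded gradient step
        s = a - (D.T @ (D @ a - x)) / L
        s[s ** 2 <= 2 * lam / L] = 0.0
        return s

    a, L, lam = np.zeros(D.shape[1]), L0, lam0
    while lam > lam_target:                 # homotopy loop: shrink lam by rho
        for _ in range(max_inner):          # inner IHT loop for the current lam
            a_new = T(a, L, lam)
            # line search: grow L until the objective decreases sufficiently
            while theta(a, lam) - theta(a_new, lam) < 0.5 * eta * np.sum((a - a_new) ** 2):
                if L >= L_max:
                    break
                L = min(gamma * L, L_max)
                a_new = T(a, L, lam)
            converged = np.sum((a - a_new) ** 2) <= eps
            a = a_new
            if converged:
                break
        lam *= rho                          # warm-start the next subproblem from a
    return a, L
```

The warm start is what the homotopy strategy buys: each subproblem begins from the previous solution, so the inner loop typically converges in very few iterations.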

#### *3.2. Proposed Method*

Figure 1 illustrates the schematic diagram of the proposed method. As can be seen, for a given training dataset *X* = {*x*1, *x*2, ..., *xN*} ∈ *R<sup>p×N</sup>* and an over-complete dictionary *D*, the HIHT algorithm described in Section 3.1 is used in the first step to compute the optimal sparse features *A* = {*α*1, *α*2, ..., *αN*} ∈ *R<sup>K×N</sup>* of the training data. These optimal sparse features are then used to train the SLNNs in the second step.

As Figure 1 shows, the architecture of the neural network consists of an input layer, a hidden (feature) layer, and an output layer. The number of hidden neurons is the same as that of output neurons, which is set to the dimension of the sparse feature. Each hidden neuron is connected only to its corresponding output neuron with weight 1. Our goal is to obtain an optimal input weight matrix *W*ˆ that makes the outputs of the hidden layer as close to *A* as possible, that is

$$||g(\hat{W}^T X) - A||_F \to 0, \tag{9}$$

where *g*(.) refers to a non-linear activation function.

**Figure 1.** The schematic diagram of this proposed method.

There are two strategies to optimize the input weights *W*:

(1) If the activation function is known, we choose the *tanh* function as the activation function, i.e., *g*(*x*) = *tanh*(*x*). We first calculate *arctanh*(*A*) and denote the result as *Z*, that is

$$\mathbf{Z} = \operatorname{arctanh}(\mathbf{A}),\tag{10}$$

then we formulate the objective function of the SLNNs as

$$\text{Minimize:} \quad \frac{1}{2} \| W \|_F^2 + \frac{C_1}{2} \| Z - W^T X \|_F^2, \tag{11}$$

where the constant *C*1 is a regularization factor used to control the trade-off between the smoothness of the mapping function and the closeness to *Z*.

Setting the derivative of (11) with respect to *W* to zero and solving yields the optimal solution:

$$\hat{W} = \Big(\frac{I}{C_1} + XX^T\Big)^{-1}XZ^T. \tag{12}$$

In addition, the sparse feature of a testing sample *xtest* can be quickly estimated as

$$\hat{\alpha}_{test} = \tanh\Big(\big(\big(\frac{I}{C_1} + XX^T\big)^{-1}XZ^T\big)^T x_{test}\Big). \tag{13}$$
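Steps (10)–(13) can be sketched in a few lines of NumPy. Note that this assumes the sparse features in *A* have been scaled into (−1, 1) so that arctanh is defined; zero entries map to zero, so sparsity is preserved (function names are ours):

```python
import numpy as np

def train_slnn_tanh(X, A, C1):
    """Closed-form input weights of the SLNNs, Eqs. (10)-(12).
    X: p x N training data; A: K x N optimal sparse features,
    assumed scaled into (-1, 1) so that arctanh is defined."""
    Z = np.arctanh(A)                       # Eq. (10); zeros stay zero
    p = X.shape[0]
    # W = (I/C1 + X X^T)^{-1} X Z^T, Eq. (12)
    return np.linalg.solve(np.eye(p) / C1 + X @ X.T, X @ Z.T)

def estimate_features(W, x_test):
    """Fast sparse-feature estimate for a new sample, Eq. (13)."""
    return np.tanh(W.T @ x_test)
```

After the one-shot least squares solve, estimating a feature costs a single *p* × *K* matrix-vector product plus the element-wise *tanh*.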

(2) If the activation function is unknown, a kernel trick based on Mercer's condition can be used to calculate the approximated sparse feature of a testing sample *xtest* directly, instead of training the weights *W*:

$$\hat{\alpha}_{test} = g(W^T x_{test}) = Ker(x_{test}, X)\Big(\frac{I}{C_1} + \Omega_{train}\Big)^{-1} A^T, \tag{14}$$

where *Ωtrain* = *Ker*(*X*, *X*) and *Ker* stands for the kernel function.

In this proposed method, the Gaussian function is used as the kernel function *Ker*:

$$\operatorname{Ker}(\mathfrak{x}\_1, \mathfrak{x}\_2) = \exp(-\frac{||\mathfrak{x}\_1 - \mathfrak{x}\_2||^2}{\sigma^2}),\tag{15}$$

where *σ* denotes the standard deviation of the Gaussian function.
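The kernel variant of Eqs. (14) and (15) can be sketched as follows; since *A* ∈ *R<sup>K×N</sup>*, the solve is taken against *A*<sup>T</sup> so the dimensions match, and the function names are ours:

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma):
    """Ker(x1, x2) = exp(-||x1 - x2||^2 / sigma^2), Eq. (15); rows are samples."""
    d2 = (np.sum(X1 ** 2, axis=1)[:, None]
          + np.sum(X2 ** 2, axis=1)[None, :] - 2 * X1 @ X2.T)
    return np.exp(-d2 / sigma ** 2)

def kernel_estimate(x_test, X, A, C1, sigma):
    """Kernel-trick feature estimate in the spirit of Eq. (14).
    X: N x p training samples (rows); A: K x N sparse features."""
    N = X.shape[0]
    Omega = gaussian_kernel(X, X, sigma)            # Omega_train = Ker(X, X)
    k = gaussian_kernel(x_test[None, :], X, sigma)  # 1 x N kernel row
    return (k @ np.linalg.solve(np.eye(N) / C1 + Omega, A.T)).ravel()
```

In practice the *N* × *N* solve would be factored once at training time, so each test sample costs only a kernel row and one matrix-vector product.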

#### **4. Results and Discussion**

#### *4.1. Data Sets Description*

Ten benchmark datasets taken from the *UCI Machine Learning Repository* [47] and two image datasets, the Extended YaleB [48] and the AR dataset [49], are used to validate the proposed method. The ten UCI datasets include 5 binary-classification cases and 5 multi-class cases. The details of these datasets are shown in Table 1, in which the column "Random Perm" shows whether the training and testing data are randomly assigned. If "Random Perm" is Yes, 2/3 of the samples per class are randomly selected for training and the rest are used for testing.

**Table 1.** UCI Data sets Used in Our Experiments.


The Extended YaleB dataset [48] contains 2414 frontal face images of 38 different people, with about 64 samples per class. This dataset is challenging due to varying expressions and illumination conditions; see Figure 2 for some examples. The random face feature descriptor from [7] is used as the raw feature, in which a cropped 192 × 168 image is projected onto a 504-dimensional vector by a random matrix with normally distributed entries. In the experiment, 50% of the samples per class are randomly selected for training and the rest are used for testing.

**Figure 2.** Extended YaleB.

The AR face dataset contains over 4000 face images of 126 people. There are 26 images per person, taken during two different sessions. The images have large variations in terms of disguise, facial expression, and illumination conditions; a few samples from the AR dataset are shown in Figure 3. A subset of 2600 images pertaining to 50 male and 50 female subjects is used for the experiment. For each subject, 20 samples are randomly chosen for training and the rest for testing. The 165 × 120 images were projected onto a 540-dimensional vector by a random projection matrix.

**Figure 3.** AR Face.

#### *4.2. Implementation Details*

The experiments are mainly divided into two parts: (1) The RMSE between the approximated sparse features and the optimal features of the testing data is calculated to verify the approximation performance of the proposed method, and the results of several state-of-the-art fast approximation sparse coding methods are reported for comparison. (2) Classification experiments are implemented to validate the recognition performance of the approximated sparse features estimated by the proposed SLNNs. The compared methods can be categorized as follows: (a) different representation learning methods: ELM [50] with random feature mapping, and ScELM [51] with optimal sparse features computed by HIHT; (b) different fast approximation sparse coding methods: PSD [8], LISTA [9], LAMP [12], and LVAMP [12]; detailed descriptions of these methods are provided in Section 2.

Implementations of ELM, ScELM, PSD, and the proposed method are based on Matlab code; the others are based on Python. A random matrix with normally distributed entries is used as the dictionary in each sparse coding algorithm, and the number of atoms or hidden nodes *K* is set to 100 if the dimension of the dataset is less than 100, and to 1000 otherwise. The parameter *C*1 is searched for in the grid {2<sup>−25</sup>, 2<sup>−20</sup>, ..., 2<sup>25</sup>}, and *σ* is searched for in {100, 200, ..., 1000}. The numbers of hidden layers of LISTA, LAMP, and LVAMP are set to 6, 5, and 4, respectively, if not stated otherwise. Other parameters are set to the defaults suggested by the authors. For the datasets with random training-testing splits, ten repeated trials are carried out in the following experiments, and the average result and standard deviation are recorded.

In the object recognition experiments, the trained network of each method is used to compute the approximated sparse features of the training and testing samples, which are then used as the input of the classifier. The ridge regression model is used as the classifier in our experiments, whose objective function is

$$\text{Minimize:} \quad \frac{1}{2} \| Y - \beta^T \hat{A} \|_F^2 + \frac{C_2}{2} \| \beta \|_F^2, \tag{16}$$

where *Y* is the label matrix of the training data *X*, and *β* is the weight matrix of the classifier model. For a testing sample *xtest*, the predicted label is calculated as

$$\text{identity}(x_{test}) = \arg\max_{i} \; (\beta^T \hat{\alpha}_{test})_i. \tag{17}$$

The classifier hyper-parameter *C*2 is searched for in the grid {2<sup>−25</sup>, 2<sup>−20</sup>, ..., 2<sup>25</sup>}, and the value with the best validation accuracy is selected. We compare our method with the others in terms of recognition accuracy and testing time, where recognition accuracy is defined as the ratio of the number of correctly classified testing samples to the total number of testing samples, and testing time refers to the total time spent on feature calculation and classification for the testing samples.
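The classifier of Eqs. (16) and (17) has a closed-form solution, obtained by setting the gradient of (16) to zero; a minimal sketch (function names ours):

```python
import numpy as np

def train_ridge_classifier(A_hat, Y, C2):
    """Closed-form ridge regression classifier from Eq. (16):
    beta = (A A^T + C2*I)^{-1} A Y^T, obtained by zeroing the gradient.
    A_hat: K x N approximated features; Y: c x N one-hot label matrix."""
    K = A_hat.shape[0]
    return np.linalg.solve(A_hat @ A_hat.T + C2 * np.eye(K), A_hat @ Y.T)

def predict(beta, a_test):
    """Predicted class index for one sample, Eq. (17)."""
    return int(np.argmax(beta.T @ a_test))
```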

A standard PC is used in our experiments; its hardware configuration is as follows:


#### *4.3. Root Mean Square Error Results*

For testing data *Xtest*, let *Atest* denote the optimal sparse features computed by the sparse coding algorithm, and *A*ˆ*test* the approximated sparse features computed by a fast approximation method. The RMSE between *Atest* and *A*ˆ*test* is defined as

$$RMSE(A_{test}, \hat{A}_{test}) = \sqrt{\frac{1}{N_{test}K} ||A_{test} - \hat{A}_{test}||_F^2}, \tag{18}$$

where *Ntest* denotes the number of testing samples.
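Eq. (18) reduces to a few lines of NumPy (the function name is ours):

```python
import numpy as np

def rmse(A_test, A_hat):
    """RMSE between optimal and approximated sparse features, Eq. (18).
    Both inputs are K x N_test matrices."""
    K, N_test = A_test.shape
    return np.sqrt(np.sum((A_test - A_hat) ** 2) / (N_test * K))
```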

Several UCI datasets are used in this experiment, and we report the results of our method, LISTA, LAMP, and LVAMP to compare their approximation performance; Table 2 shows the results. As can be seen, our approach achieves a lower RMSE than the other methods on most datasets, which indicates that the approximated sparse features estimated by our approach are closer to the optimal ones than those estimated by the compared methods. On the Glass dataset, our method achieves a significant improvement, and on LiverDisorders, though our result is not the best, it is very close to the best one.

**Table 2.** The root mean square error of compared methods on UCI datasets (bold one represents the best result).


#### *4.4. Object Recognition Results*

#### 4.4.1. The Evaluation of HIHT

The existing literature on sparse coding compares different algorithms only in terms of reconstruction error and convergence speed, not in terms of their classification performance when applied to object recognition. To show why this paper uses the HIHT algorithm to compute the optimal sparse features, we carried out experiments to validate the superiority of HIHT over several state-of-the-art sparse coding algorithms when used in object recognition. The compared methods are IHT, homotopy GPSR (HGPSR) [26], PGH [34], and PICASSO [39].

(1) *Effectiveness on Object Recognition*: the binary-classification datasets listed in Table 1 are used in this experiment. First, the sparse coding algorithms are used to compute sparse features for the experimental datasets using the same dictionary, and *cross entropy* is used to measure how different the sparse features are between class 1 and class 2. A higher value means that the sparse features computed by the corresponding algorithm are more discriminative and more beneficial for object recognition. The *cross entropy* is estimated as follows: we accumulate a histogram *h*(*α* | *v*) along the feature dimensions over all sparse features *α* that belong to the same class *v* (*v* ∈ {1, 2}), then normalize the histogram as the probability *p*(*v*) of class *v*. The *cross entropy* between class 1 and class 2 is estimated as

$$cross\ entropy(p(1), p(2)) = -\sum_{k=1}^{K} p_k(1)\log p_k(2), \tag{19}$$

where *p<sub>k</sub>*(*v*) is the *k*-th element of the probability vector *p*(*v*).
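This estimate, using the standard cross entropy −∑<sub>*k*</sub> *p<sub>k</sub>*(1) log *p<sub>k</sub>*(2), can be sketched as follows; accumulating absolute feature magnitudes per dimension is our own assumption, since the binning of the histogram is not specified:

```python
import numpy as np

def class_cross_entropy(A, labels, eps=1e-12):
    """Cross entropy between per-class feature histograms, Eq. (19).
    A: K x N sparse features; labels: length-N array with values 1 or 2.
    The histogram accumulates absolute feature magnitudes (an assumption)."""
    p = {}
    for v in (1, 2):
        h = np.abs(A[:, labels == v]).sum(axis=1)   # histogram over class v
        p[v] = h / (h.sum() + eps)                  # normalize to a distribution
    return -np.sum(p[1] * np.log(p[2] + eps))
```

When the two class distributions coincide, the value reduces to the entropy of the shared distribution; it grows as the classes concentrate on disjoint feature dimensions.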

Table 3 shows the *cross entropy* results. It can be seen that the HIHT algorithm achieves the best result on most datasets among the five algorithms. This indicates that the sparse features computed by HIHT distinguish different classes more effectively, which is more useful for classification, especially when a simple linear classifier is used.


**Table 3.** Cross entropy of sparse features between different classes.

Subsequently, we use these sparse coding algorithms to compute the optimal sparse features of the training data to train the proposed SLNNs, and compare the final recognition results, which are shown in Table 4. From this table it can be seen that these sparse coding algorithms achieve similar classification performance on most datasets when used in the proposed method, while HIHT significantly outperforms the other algorithms on some datasets (i.e., Glass and Vehicle). In terms of standard deviation, the results show that the optimal sparse features computed by HIHT are more robust for classification than those of the other algorithms.

**Table 4.** Recognition Results of the Proposed Method Using different sparse coding algorithms.


(2) *Parameter Sensitivity*: In the HIHT algorithm, different values of the regularization factor *λtarget* and different dictionaries *D* produce different sparse features, which cause the proposed method to estimate different approximated sparse features and influence the final recognition result. In this experiment, the sensitivity of the final recognition performance to *λtarget* and *D* is examined, with the two face image datasets used for testing.

First, the influence of *λtarget* is investigated. Fixing the other parameters (i.e., the dictionary and the parameters of the classifier), *λtarget* is searched for in the grid {10<sup>−10</sup>, 10<sup>−8</sup>, ..., 10<sup>2</sup>}, and the corresponding recognition accuracy is recorded. From the results in Figure 4, we can conclude that the final recognition result is not very sensitive to *λtarget*, so there is no need to spend much time tuning *λtarget* when HIHT is used to compute the optimal sparse features in the proposed method.

**Figure 4.** The influence of the target value *λtarget* of HIHT on the recognition performance. (**a**) Extended YaleB. (**b**) AR Face.

Subsequently, we investigate the influence of *D*. A dictionary learned in an unsupervised manner by the Lagrangian dual method [52] is compared with a random dictionary with normally distributed entries. The number of iterations in dictionary learning is set to 5, and 10 trials with random selections of training and testing data are repeated; the average accuracy is recorded for comparison. As Table 5 shows, the final recognition accuracy achieved with the learned dictionary is close to that with the random dictionary. However, the computational time of optimal sparse feature calculation with dictionary learning is five times (equal to the number of iterations) that with the random dictionary. Thus, in the following experiments we use a random dictionary to compute the optimal sparse features in the HIHT algorithm.

**Table 5.** Comparison of final recognition performance with or without dictionary learning in HIHT algorithm.


#### 4.4.2. Evaluation on UCI Datasets

The average recognition accuracies on the UCI datasets are listed in Table 6, and Table 7 presents the testing time. From these two tables we can conclude that the proposed approach outperforms the other methods in terms of both accuracy and testing time. For most datasets, the approximated sparse features estimated by our approach obtain the highest accuracy, and the approach is approximately 100 times faster than ScELM (an exact sparse coding algorithm), especially on high-dimensional datasets. Compared with the other approximation sparse coding methods, our approach achieves higher recognition accuracy with simpler network training, and the testing time of the proposed method and PSD is much less than that of LISTA, LAMP and LVAMP. It is worth noting that the *tanh* activation function and the kernel function versions of this approach perform similarly, but the kernel version outperforms *tanh* when the dataset is a little more complex (i.e., Satimage, Madelon), which is confirmed in the following experiments.


**Table 6.** The average accuracy of compared methods on UCI datasets (red is the best result and blue is the second one).

**Table 7.** The testing time of compared methods on UCI datasets.


Figure 5 (Tanh and Kernel denote the *tanh* and kernel versions of our method, respectively) shows the confusion matrices obtained by this proposed method, PSD, LISTA, LAMP and LVAMP on the Satimage dataset, in which the kernel version of this proposed method achieves a much better recognition result than the others. It can be seen from this figure that all methods except the kernel version of our method almost fail to correctly classify the test samples of class 4. This indicates that the features computed by this proposed method are more discriminative than those of the other approximation methods. Figure 6 shows two examples of the receiver operating characteristic (ROC) curves of the approximation methods, where the red lines report the performance of our approach. It is clear that the Areas Under the ROC Curves (AUC) of our approach are much higher than those of the others.

**Figure 5.** Confusion matrices on Satimage dataset. (**a**) PSD. (**b**) LISTA. (**c**) LAMP. (**d**) LVAMP. (**e**) Tanh. (**f**) Kernel.

**Figure 6.** The receiver operating characteristic (ROC) curves of the approximate methods. (**a**) Madelon. (**b**) Breast.
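The AUC values summarized by ROC curves such as those in Figure 6 can be computed without any plotting; a minimal sketch using the rank-based identity AUC = P(score of a random positive > score of a random negative):

```python
import numpy as np

def auc(scores, labels):
    """AUC via pairwise comparison of positive vs. negative scores."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # Count positive-negative pairs ranked correctly; ties count half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

A perfectly separating classifier (every positive scored above every negative) yields AUC = 1.0, while random scoring yields about 0.5.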

#### 4.4.3. Evaluation on Extended YaleB Dataset

Table 8 lists the recognition accuracies and testing time on the Extended YaleB dataset, in which the well-known sparse representation-based face recognition algorithm SRC [53] and collaborative representation-based classification (CRC) [54] are also used for comparison. Furthermore, the result obtained with raw features is used as a baseline (denoted as Baseline), and we set different numbers of layers (denoted as *T*) for LISTA to show their influence on object recognition performance. As Table 8 shows, all methods beat the Baseline, indicating the benefit of feature learning. The kernel version of this proposed method obtains the best result with a value of 98.33%, which is 1.89% higher than the second best. In the testing process, the proposed method is approximately 21 times faster than the deep learning-based approximation methods and 182 times faster than SRC, and is also much faster than CRC. For LISTA, if the number of layers is small (*T* = 2), the recognition performance degrades considerably, and as the number of layers increases, the recognition results tend to stabilize. Thus, the recognition performance is somewhat sensitive to the number of layers of the deep network.

Figure 7 shows the patterns of confusion across classes obtained by this proposed method, in which the coordinates on the X-axis and Y-axis represent the 38 face classes. The color at coordinates (*x*, *y*) represents the number of test samples whose ground-truth label is *x* and whose predicted label is *y*. From this figure it can be seen that our approach produces few points in the off-diagonal region (i.e., few false positives and false negatives), indicating that the proposed method classifies most testing samples correctly.
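A confusion matrix of the kind visualized in Figure 7 is built by counting (ground truth, prediction) pairs; a minimal sketch:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    # Entry (x, y) counts test samples whose ground-truth class is x
    # and whose predicted class is y; the diagonal holds correct
    # classifications, and off-diagonal entries are errors.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm
```

For Figure 7, `n_classes` would be 38 (the Extended YaleB identities), and a near-diagonal matrix indicates high per-class accuracy.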


**Table 8.** Average recognition accuracy with random-face features on the Extended YaleB database.

**Figure 7.** Patterns of confusion on Extended YaleB. (**a**) Tanh. (**b**) Kernel.

#### 4.4.4. Evaluation on AR Dataset

For the AR dataset, a protocol (either only five training samples per class or all training samples are used) is established in our experiments, and the corresponding results are listed in Table 9. As we can see, the kernel version of this proposed method achieves the best result in both cases. In addition, the *tanh* version of this method obtains results comparable to LAMP and ELM, and still better than SRC, when all training samples are used. In terms of testing time, the proposed method is approximately 12 times faster than the deep learning-based approximation methods, 300 times faster than SRC, and 24 times faster than CRC. It is worth noting that the kernel version is a little slower than the *tanh* version, since it needs to compute the kernel matrix between the testing and training samples, while it is still much faster than the deep learning-based approximation methods.

We use confusion matrices to give a detailed evaluation at the class level. Figure 8 shows the results, in which the coordinates on the *x*- and *y*-axes denote the 100 face classes. A red point at coordinates (*x*, *y*) represents misclassified test samples. It can be seen from this figure that this proposed method shows fewer points in the off-diagonal region than the other methods, indicating that it performs better in object recognition.


**Table 9.** Average recognition accuracy with random-face features on the AR Face database. The fourth column is the result when only 5 training samples per class are used.

**Figure 8.** Patterns of confusion on AR dataset. (**a**) SRC. (**b**) CRC. (**c**) ScELM. (**d**) PSD. (**e**) LISTA. (**f**) LAMP. (**g**) LVAMP. (**h**) Kernel.

To give an intuitive illustration, Figure 9 shows all misclassified images produced by the LAMP method (which achieves the second best result) and by this proposed method (kernel version). It can be seen that images with exaggerated facial expressions are the main cause of misclassification for both methods. Another interesting observation is that most images with facial "disguises" are misclassified by the LAMP method while they are correctly recognized by this proposed method. This indicates that the approximate sparse features estimated by this proposed method are more robust to facial occlusion and corruption than those of LAMP.

**Figure 9.** All misclassified images produced by (**a**) LAMP and (**b**) Kernel.

#### **5. Conclusions**

This paper proposes a simple fast approximation sparse coding method for object recognition tasks on small-scale datasets, in which the optimal sparse features of the training data computed by the HIHT algorithm are used as ground truth to train a succinct and special SLNNs, thus making representation learning in object recognition tasks more practical and efficient. Extensive experimental results on publicly available datasets show that this approach outperforms the compared approximation methods in terms of approximation performance, recognition accuracy and computational time. The high recognition accuracy and computational efficiency make the proposed method very promising for real-time applications. Moreover, the experimental results have demonstrated that the recognition performance of this proposed method is robust to its parameter settings, which makes it more practical. Future work includes supervised sparse coding algorithms and autonomously learning an over-complete dictionary.

**Author Contributions:** Conceptualization, Z.S. and Y.Y.; methodology, Z.S.and Y.Y.; validation, Z.S.; data curation, Z.S.; writing—original draft preparation, Z.S.; writing—review and editing, Z.S. and Y.Y.; funding acquisition, Y.Y. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the National Natural Science Foundation of China (NSFC) under Grant No. 61873067.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Publicly available datasets were analyzed in this study. These datasets can be found here: http://archive.ics.uci.edu/ml/index.php.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

