**1. Introduction**

The need to extract more detailed information from remote-sensing imagery has driven an expansion from multispectral images to hyperspectral images, which enable pixel-constituent-level analysis. Hyperspectral images have better spectral resolution than multispectral images due to their large number of narrow and contiguous spectral bands [1]. This detailed information, however, comes with a trade-off: the sensors often capture several distinct materials on the Earth's surface mixed within a single pixel. Such mixing arises from one of the following factors [2–4]. The first is the low spatial resolution of the sensors, whereby two or more separate materials occupy the same pixel. The second occurs when the sensors capture distinct substances that have merged into a homogeneous mixture on the Earth's surface. This condition motivates a compelling solution, namely spectral unmixing.

The procedure of spectral unmixing decomposes the measured hyperspectral data into a collection of spectral signatures (a spectral library) and a set of corresponding fractions (abundances) that represent the proportion of each spectral signature contained in the pixels [2,5–7]. The spectral signatures that exist in the mixed pixels are called endmembers. In general, endmembers correspond to familiar macroscopic objects in a scene, such as water, metal, and vegetation, as well as constituents of intimate mixtures at the microscopic scale. Hyperspectral unmixing can be formulated with either the linear mixture model (LMM) or a nonlinear mixture model [2,8–10]. With the LMM, the spectrum of each mixed pixel is assumed to be a linear combination of the endmembers contained in the pixel. Although this assumption holds only for macroscopic mixture conditions [8,11], the LMM is widely used due to its computational tractability and flexibility in various applications.
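For reference, the LMM for a single pixel can be written compactly as follows (a standard formulation; the notation here is illustrative, and the precise definitions are given in Section 2):

$$\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{n}, \qquad \mathbf{x} \geq \mathbf{0}, \quad \mathbf{1}^{T}\mathbf{x} = 1,$$

where **y** is the observed spectrum of the pixel, **A** is the matrix of endmember signatures (the spectral library), **x** is the abundance vector, **n** is noise, and the nonnegativity and sum-to-one constraints reflect the interpretation of abundances as physical proportions.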

With the LMM, several unmixing techniques have been introduced based on either geometry [12,13], statistics [12,14], nonnegative matrix factorization (NMF) [4,15–17], or sparse regression [12,18–21]. Although the geometrical and statistical techniques are unsupervised and require only a little prior information about the data, they assume that at least one pure pixel (a pixel containing only one endmember) exists for each endmember [22]. The NMF techniques do not require this assumption; however, they can produce virtual endmembers with no physical meaning [22,23]. In the sparse regression techniques, on the other hand, additional information is introduced as prior knowledge in the form of regularizers added to the objective functions of the optimization problems, e.g., terms promoting abundance sparsity [24–26], exploiting endmembers known to exist in the data [22], or penalizing total local spatial differences [27]. An abundance sparsity algorithm, called sparse unmixing by variable splitting and augmented Lagrangian (SUnSAL), was introduced by Iordache et al. [26]. They applied the *L*1 norm (the sum of the absolute values of the matrix entries) to the abundance matrix, substituting it for the *L*0 norm (the number of nonzero elements of the matrix) to impose sparsity. With the algorithm known as collaborative SUnSAL (CLSUnSAL), it is assumed that all pixels of a hyperspectral scene share the same active set of endmembers [28]. This assumption does not hold when an endmember is contained in only some of the pixels rather than all pixels in the scene, for example, when the hyperspectral scene captures a location containing locally homogeneous regions. Zhang et al. [29] proposed a local version of CLSUnSAL that exploits the fact that endmembers tend to be distributed uniformly in local spatial regions. Qu et al. [30] adopted joint sparsity combined with a low-rank model under the bilinear mixture model (BMM); the low-rank term corresponds to a small number of linearly independent columns of a matrix. They applied a local sliding window to the abundance matrix, since neighboring pixels tend to be homogeneous and composed of the same materials.
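For clarity, the sparsity measures mentioned above can be written as follows (standard definitions; here $\mathbf{X}$ denotes the abundance matrix with columns $\mathbf{x}_{i}$ (pixels) and rows $\mathbf{x}^{k}$ (endmembers), and the notation is illustrative):

$$\|\mathbf{X}\|_{0} = \#\{(k,i) : x_{k,i} \neq 0\}, \qquad \|\mathbf{X}\|_{1,1} = \sum_{i}\|\mathbf{x}_{i}\|_{1}, \qquad \|\mathbf{X}\|_{2,1} = \sum_{k}\|\mathbf{x}^{k}\|_{2}.$$

SUnSAL relaxes the *L*0 penalty to the *L*1 penalty, whereas CLSUnSAL uses the *L*2,1 mixed norm, which promotes row sparsity, i.e., a common active set of endmembers shared by all pixels.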

Iordache et al. [27] proposed a spatially regularized algorithm called sparse unmixing with the total variation regularizer (SUnSAL-TV), which augments sparse unmixing with a total variation (TV) term on the abundances and is more powerful than conventional unmixing techniques. Nevertheless, this semi-supervised algorithm may produce over-smoothed results and blurred edges. Spatial information has also been imposed on the sparse unmixing task in a nonlocal fashion [11]. Tang et al. [22] introduced an algorithm called sparse unmixing using a priori information (SUnSPI). The required prior knowledge is that some spectral signatures (endmembers) in the hyperspectral scene are known in advance. Although its performance is superior to that of conventional unmixing algorithms, it is difficult to guarantee that this assumption always holds; field investigation or prior hyperspectral-data analysis may be needed to provide such information.
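As an illustration, the nonisotropic TV term used in this context is typically the sum of absolute differences between the abundance vectors of neighboring pixels (the exact form adopted by SUnSAL-TV is given in [27]; the notation here is illustrative):

$$\mathrm{TV}(\mathbf{X}) = \sum_{\{i,j\} \in \varepsilon} \|\mathbf{x}_{i} - \mathbf{x}_{j}\|_{1},$$

where $\mathbf{x}_{i}$ is the abundance vector of pixel $i$ and $\varepsilon$ is the set of horizontally and vertically neighboring pixel pairs.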

In a region with high spatial similarity, e.g., a local spatial region, the correlation among the pixels' spectral signatures is reflected as linear dependence among their corresponding abundance vectors; the abundance matrix composed of these vectors should therefore be low rank. This low-rankness has recently been applied to hyperspectral image denoising and recovery tasks [31–33], resulting in superior performance. Furthermore, the low-rankness of the data also indicates high correlation among the abundance vectors corresponding to the pixels in such regions [30]. Giampouras et al. [34] proposed the ADSpLRU algorithm, which exploits the low-rankness of the abundance in the sparse unmixing problem to account for the spatial correlation of the abundances. However, they considered the low-rankness in a nonlocal fashion along the abundance dimension. To consider the local low-rankness of an image, Ono et al. [35] proposed the local color nuclear norm (LCNN); however, they locally applied the nuclear norm (the sum of the matrix singular values) only to the spatial dimension of RGB images. Yang et al. [36] also imposed a low-rank constraint for coupled sparse denoising and unmixing problems; however, their use of the nuclear norm is not local, and the performance gain is more pronounced in the denoising task than in the unmixing one. To the best of our knowledge, no sparse unmixing algorithm takes into account the low-rankness of local spectral signatures (endmembers) along the abundance dimension, even though the high correlation between spectral signatures can be assessed by the spectral angle (SA), a spectral similarity measure defined as the angle between two spectral vectors. In turn, one can observe the linearity of the data distribution in local regions along the spatial as well as the abundance dimension. This prior may lead to a novel approach to sparse unmixing.
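For completeness, the two quantities referred to above can be written as follows (standard definitions, with notation chosen here only for illustration):

$$\|\mathbf{X}\|_{*} = \sum_{k} \sigma_{k}(\mathbf{X}), \qquad \mathrm{SA}(\mathbf{y}_{i}, \mathbf{y}_{j}) = \arccos\!\left(\frac{\mathbf{y}_{i}^{T}\mathbf{y}_{j}}{\|\mathbf{y}_{i}\|_{2}\,\|\mathbf{y}_{j}\|_{2}}\right),$$

where $\sigma_{k}(\mathbf{X})$ are the singular values of $\mathbf{X}$ and $\mathbf{y}_{i}$, $\mathbf{y}_{j}$ are two spectral vectors; a small SA indicates high spectral similarity.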

In this study, we developed an algorithm called joint local abundance sparse unmixing (J-LASU), in which we propose a local abundance regularizer, embed it into the sparse unmixing problem as a nuclear norm over 3D local regions, and evaluate its effect. A 3D local block slides through the three dimensions of the abundance maps, and the nuclear norm is imposed to promote the low-rank structure of each local abundance cube. We retain the total variation (TV) regularizer to account for spatial smoothness. The proposed algorithm was tested on simulated data as well as real hyperspectral data and compared with other sparse unmixing algorithms, i.e., CLSUnSAL, SUnSAL-TV, and ADSpLRU. The major contribution of this study is the incorporation of our local abundance regularizer into a hybrid of state-of-the-art unmixing techniques that account for collaborative sparsity and spatial differences. We also applied the proposed J-LASU to several scenes with and without pure pixels.
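To make the local abundance idea concrete, the sketch below shows one way such a regularizer could be evaluated: a 3D block slides over the abundance cube, each block is unfolded into a matrix, and its nuclear norm is accumulated. This is a minimal illustration of the concept only, not the authors' optimization algorithm; the function name, block size, stride, and unfolding are assumptions made for this example.

```python
import numpy as np

def local_abundance_nuclear_norm(abundance, block=(5, 5, 4), stride=(5, 5, 4)):
    """Sum of nuclear norms of 3D blocks slid over an abundance cube.

    abundance : array of shape (rows, cols, n_endmembers), i.e., the abundance maps.
    block     : (assumed) block size along the two spatial and the abundance dimensions.
    stride    : (assumed) step of the sliding block in each dimension.
    """
    rows, cols, m = abundance.shape
    br, bc, bm = block
    sr, sc, sm = stride
    total = 0.0
    for r in range(0, rows - br + 1, sr):
        for c in range(0, cols - bc + 1, sc):
            for k in range(0, m - bm + 1, sm):
                cube = abundance[r:r + br, c:c + bc, k:k + bm]
                # Unfold the local cube into a (pixels x endmembers) matrix; its
                # nuclear norm (sum of singular values) is small when the local
                # abundance vectors are close to linearly dependent (low rank).
                mat = cube.reshape(br * bc, bm)
                total += np.linalg.norm(mat, ord='nuc')
    return total

# Example: a 100 x 100 scene with 6 abundance maps.
A = np.random.rand(100, 100, 6)
print(local_abundance_nuclear_norm(A, block=(5, 5, 6), stride=(5, 5, 6)))
```

A penalty of this form would be added to the sparse unmixing objective alongside the collaborative sparsity and TV terms, which is the combination the proposed J-LASU builds on.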

In Section 2, we discuss the problem formulation of hyperspectral unmixing as an introduction to the problem formulation of our proposed algorithm. In Section 3, we describe the proposed J-LASU algorithm, starting with evidence supporting the proposed concept. In Section 4, we describe the experiments and analysis. In Section 5, we discuss the results and findings. Finally, we conclude the paper in Section 6.

*Variables and notation:* Column vectors are represented as boldface lowercase letters, e.g., **y**, whereas matrices are represented as boldface uppercase letters, e.g., **Y**. The following variables are frequently used in this paper:

