Review

Conventional to Deep Learning Methods for Hyperspectral Unmixing: A Review

1 Equipment Management and Unmanned Aerial Vehicle Engineering School, Air Force Engineering University, Xi’an 710051, China
2 National Key Laboratory of Unmanned Aerial Vehicle Technology, Xi’an 710051, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(17), 2968; https://doi.org/10.3390/rs17172968
Submission received: 7 July 2025 / Revised: 16 August 2025 / Accepted: 21 August 2025 / Published: 27 August 2025
(This article belongs to the Special Issue Artificial Intelligence in Hyperspectral Remote Sensing Data Analysis)

Abstract

Hyperspectral images often contain many mixed pixels, primarily resulting from their inherent complexity and low spatial resolution. To enhance surface classification and improve sub-pixel target detection accuracy, hyperspectral unmixing technology has long been a topical research issue. This review provides a comprehensive overview of methodologies for hyperspectral unmixing, from traditional to advanced deep learning approaches. A systematic analysis of various challenges is presented, clarifying underlying principles and evaluating the strengths and limitations of prevalent algorithms. Hyperspectral unmixing is critical for interpreting spectral imagery but faces significant challenges: limited ground-truth data, spectral variability, nonlinear mixing effects, computational demands, and barriers to practical commercialization. Future progress requires bridging the gap to applications through user-centric solutions and integrating multi-modal and multi-temporal data. Research priorities include uncertainty quantification, transfer learning for generalization, neuromorphic edge computing, and developing tuning-free foundation models for cross-scenario robustness. This paper is designed to foster the commercial application of hyperspectral unmixing algorithms and to offer robust support for engineering applications within the hyperspectral remote sensing domain.

1. Introduction

With the emergence of the concept and technology of imaging spectrometry, optical remote sensing has taken a qualitative leap with hyperspectral imaging, following panchromatic, color, and multispectral imaging [1]. Hyperspectral remote sensing employs imaging spectrometers to collect electromagnetic spectra, spanning ultraviolet, visible, near-infrared, and mid-infrared regions, from a distance [2]. It combines imaging and spectral analysis, capturing narrow spectral information in several hundred or even thousands of bands for each pixel in an image [3]. As a result, it can simultaneously obtain geometric, radiometric, and spectral information, enabling three-dimensional imaging. It is worth mentioning that by sampling spectra finely per pixel, it allows for detailed analysis of the inherent material signatures across a scene. Hyperspectral imaging identifies materials through their characteristic spectral signatures, serving as a powerful tool for precise material discrimination when detectable features exist within the sensor’s wavelength range. Many features hidden in this abundant spectral information have been uncovered, making hyperspectral processing an international research hotspot that has attracted a vast number of researchers in remote sensing technology [4].
In recent decades, hyperspectral imaging technologies have witnessed tremendous improvements in terms of their structure, working methodology, sensors, and platform. For Earth observation applications, hyperspectral imaging can be accomplished using spaceborne, airborne, and ground-based sensing platforms [5,6,7].
Spaceborne platforms involve mounting hyperspectral imagers on satellites. These platforms offer advantages such as wide coverage and repetitive observation capabilities, albeit at the cost of lower spatial resolution. Representative spaceborne imagers include Hyperion, the Compact High Resolution Imaging Spectrometer (CHRIS), the Hyperspectral Imager Suite (HISUI), PRecursore IperSpettrale della Missione Applicativa (PRISMA), the Environmental Mapping and Analysis Program (EnMAP), the Advanced Hyperspectral Imager on the GaoFen-5 (GF-5) satellite, the Earth Surface Mineral Dust Source Investigation (EMIT), and the DLR Earth Sensing Imaging Spectrometer (DESIS) [8]. Airborne systems, in turn, serve as critical early testbeds and pilot sensors for later satellites, and they operate at much higher spatial resolution.
The airborne platform may be an aircraft, balloon, or helicopter that remains in the air. Missions may be flown from low altitudes up to tens of kilometers, contingent on aircraft and sensor specifications. Representative airborne imagers include the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), Hyperspectral Digital Imagery Collection Experiment (HYDICE), Hyperspectral Mapper (HyMap), Digital Airborne Imaging Spectrometer (DAIS-7915), Compact Airborne Spectrographic Imager (CASI), etc. [9]. Among them, typical HyMap sensors have achieved high performance levels regarding SNR, band-to-band registration, and image quality [10]. Kruse et al. compared spaceborne and airborne hyperspectral data for mineral mapping and indicated that SNR and spatial resolution improvements are required for future spaceborne sensors to allow the same level of mapping currently possible from airborne sensors [11]. However, airborne platforms are limited by their high costs for regional coverage and repetitive observations [1]. Compared to the expensive airborne data available until recently, spaceborne sensors make data easier to obtain. For instance, users can access DESIS data through the Teledyne Earth Sensor Portal, where they can task new data acquisitions [12,13].
Consequently, hyperspectral imaging based on spaceborne and airborne platforms faces significant challenges, including the demand for high-resolution, immediately available, and regionalized data [14]. This demands an alternative solution, one of which is the unmanned aerial vehicle [15]. It provides data with high spatial and temporal resolution, bridging the gap between airborne and orbital platforms. Additionally, compared to manned aircraft carrying hyperspectral imagers, it offers low cost, good safety, and the ability to access hazardous or remote environments within practical connectivity and regulatory limits. Fortunately, some researchers have made their data publicly available, allowing downloads from their research papers [16].
Owing to the abundant spectral–spatial information, hyperspectral imaging supports diverse applications, such as agricultural production [17,18,19], mineral investigation [20,21,22], land use [23,24,25], environmental monitoring [26], oceanography [27], and forestry operations [28]. Furthermore, military applications span landmine detection [29], military camouflage recognition [30], and coastal defense area navigation [31]. Moreover, comparisons of traditional methods applied to multispectral and hyperspectral imagery show the potential of hyperspectral spatial information in mineral mapping [32].
However, hyperspectral imaging improves spectral resolution at the expense of spatial resolution [33,34]. Mixed pixels commonly occur in hyperspectral imagery due to spatial resolution constraints and complex surface heterogeneity [35]. A mixed pixel contains multiple ground features, whose spectrum can be constructed from several pure components, called endmembers [36]. This mixing prevents direct material identification at the pixel level, reducing classification accuracy and target detection effectiveness. Fortunately, owing to the inherent spectral information of tens to hundreds of bands, hyperspectral remote sensing enables superior sub-pixel quantification of specific species abundance compared to panchromatic and multispectral images. Hyperspectral unmixing (HU) addresses this challenge by decomposing mixed pixels into their constituent materials (endmembers) and quantifying their abundance fractions within each pixel, as shown in Figure 1. Compared to classification, which assigns each pixel a category, unmixing aims to quantify material proportions at the sub-pixel level or to detect sub-pixel targets. Figure 2 shows partial statistics on hyperspectral unmixing from 2010 to 2025, sourced from the Web of Science via https://www.webofscience.com (accessed on 3 August 2025). The precise search terms are hyperspectral unmixing and endmember extraction, respectively, and the database filter is the Web of Science Core Collection. The statistics indicate that hyperspectral data processing has increased substantially in recent years, and research on unmixing and endmember extraction of hyperspectral data has grown year by year. HU enables precise identification of endmember signatures and their abundance fractions per pixel, facilitating sub-pixel applications like precision classification, target detection, and risk management, advancing quantitative hyperspectral remote sensing.
Recent years have yielded comprehensive reviews of traditional HU methods. Specifically, reference [37] surveys nonlinear HU techniques, reference [2] examines spatial information integration, and [38] synthesizes linear model-based approaches in three aspects. The work in [39,40] comprehensively reviews primary HU algorithms and applications. Specifically, the authors in [40] critically compare advanced linear unmixing techniques across three aspects: supervised, semi-supervised, and unsupervised unmixing. References [41,42] review linear and nonlinear HU models, respectively. Reference [43] presents a comprehensive survey of nonnegative matrix factorization-based HU techniques. The work in [44] divides HU solutions into physics-based methods and data-driven approaches, surveying recent advances in deep learning architecture, prior integration, and loss function design. Concurrently, references [45,46,47] address endmember variability mitigation strategies, while [48] examines HU from a signal processing perspective. Finally, reference [47] comprehensively reviews spectral variability mechanisms.
In addition, significant reviews also address deep learning in remote sensing generally and in hyperspectral imaging, such as [49,50]. Specifically, reference [51] synthesizes hyperspectral imaging–autoencoder integration, examining technical foundations, historical development, applications, hyperparameter optimization, and outstanding challenges. Meanwhile, reference [52] introduces an open-source Python 3.10 toolbox supporting supervised, semi-supervised, and blind hyperspectral unmixing; the authors also conduct experiments to illustrate the performance of these methods. In addition, reviews of deep learning HU are briefly discussed in [53,54]. The article [54] also provides detailed architectural comparisons for HU and critically evaluates eleven autoencoder methodologies. Finally, reference [55] comprehensively examines interpretable AI for hyperspectral data. However, these reviews focus either on blind autoencoder-based deep learning or on artificial intelligence in general and do not comprehensively survey HU from conventional through deep learning methods, especially the specific details of this combination.
To bridge the knowledge gap, this paper systematically addresses three core research questions:
  • What is the paradigm shift from conventional to deep learning models for key HU subproblems (e.g., endmember extraction, abundance estimation)?
  • What methodological frameworks and recent advances drive HU applications?
  • Where do critical research gaps lie, and what future directions could improve HU effectiveness?
This article aims to categorize HU techniques spanning traditional to deep learning-based approaches systematically. Additionally, it critically examines the benefits, limitations, and foundational frameworks of these methods. First, we synthesize core HU workflows, endmember number estimation, extraction, and abundance quantification, using traditional methodologies. Next, we analyze deep learning-based HU models. Through quantitative comparison of methods, we identify emerging opportunities and persistent challenges. Our contributions include the following:
(1) New taxonomy. This paper proposes a comprehensive new taxonomy that categorizes the primary traditional and modern HU methods of recent years into distinct categories.
(2) Comprehensive overview. This paper reviews progress in traditional HU along its three main workflows: endmember number estimation, endmember extraction, and abundance estimation. It categorizes the conventional unmixing approaches in each step according to their main characteristics and summarizes typical conventional HU algorithms from each category. It then comprehensively introduces progress in deep-learning-based HU across five main architectures: autoencoder, convolutional neural network, generative model, transformer, and recurrent neural network. For each architecture, we discuss the basic framework and the main processes or strategies used to implement HU.
(3) Future trends. We identify persistent challenges and propose research pathways to enhance HU robustness and performance, drawing insights from recent seminal studies in this field.
The article is structured as follows. In Section 2, we give a general introduction to traditional-based HU methods. Section 3 presents the deep learning-based HU methods. Section 4 points out the challenges that need to be addressed in future studies and presents an outlook on future trends. Section 5 draws comprehensive conclusions.

2. Traditional-Based HU Approaches

Generally, there are linear models [56], nonlinear models [57], and bilinear models [58] for spectral unmixing. The linear mixing model describes large-scale spectral mixing in which the incident light reflects from one dominant material before reaching the sensor. This model relies on three critical assumptions: (1) spectral invariance, where the spectral signals are linearly contributed by a finite number of endmembers within each instantaneous field of view, weighted by their covering percentage (abundance); (2) spatial homogeneity, where land covers comprise homogeneous, spatially distinct surfaces in which single scattering dominates (no multiple photon interactions); and (3) pixel independence, where electromagnetic energy interactions occur only within pixels (no inter-pixel influence). Nonlinear mixing arises from the interaction between multiple ground objects, which can be classical, close-range, micro-scale, or multi-level [2]. However, because nonlinear mixing models require prior knowledge or corresponding feature information of the endmembers, they generalize poorly across different interactions between ground objects. The nonlinear nature of mixing is governed not only by individual spectral distortions but also by nonlinear interactions of the materials [59]. Despite advances in nonlinear unmixing (e.g., bilinear models or deep learning), blind decomposition remains challenging due to model ambiguity, ill-posedness, and the lack of universal identifiability conditions [2,60]. The bilinear mixture model generalizes linear unmixing by treating pixel spectra as random variables, incorporating second-order endmember interactions [61,62]. However, due to the increased complexity introduced by the additional model variables, no direct solution is currently available. By contrast, the linear model, the earliest of the three, approximates the light-scattering mechanism well in several special scenarios. It is easy to understand, simple to implement, and highly adaptable, making it the most widely studied.
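For concreteness, the linear mixing model for a single pixel can be written as y = Ea + n, with nonnegative abundances that sum to one. The following minimal Python sketch synthesizes one mixed pixel under these constraints; all dimensions and variable names are illustrative:

```python
# A minimal, hedged sketch of the linear mixing model (LMM) on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
L, P = 200, 3                       # L spectral bands, P endmembers (illustrative)
E = rng.uniform(0.0, 1.0, (L, P))   # endmember matrix: columns are pure spectra

# Abundances obey the nonnegativity (ANC) and sum-to-one (ASC) constraints.
a = rng.dirichlet(np.ones(P))       # one pixel's abundance vector

n = 0.01 * rng.standard_normal(L)   # additive sensor noise
y = E @ a + n                       # observed mixed-pixel spectrum: y = Ea + n
```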
The unmixing procedure comprises three main steps: the estimation of endmember numbers, extraction of endmembers, and estimation of abundance. The hyperspectral unmixing process is illustrated in Figure 3.

2.1. Endmember Number Estimation

It is common knowledge that the accuracy of the unmixing result depends highly on the accuracy and completeness of the extracted endmembers. If the number of endmembers is underestimated, endmember extraction is impure: the extracted spectrum is still a mixture of different materials, which increases the reconstruction error between the original and synthesized hyperspectral data. Conversely, if the endmember number is overestimated, endmembers are extracted repeatedly, and the results are vulnerable to corruption from noise, atmospheric effects, and spectral variability, which propagates into abundance estimation errors. The endmember number is determined via prior knowledge or estimation algorithms. Primary algorithmic approaches include intrinsic dimensionality-based algorithms, likelihood function-based algorithms, eigenvalue analysis-based algorithms, geometry-based algorithms, etc.

2.1.1. Intrinsic Dimensionality-Based Algorithm (ID)

This approach determines the minimum number of parameters needed to optimally represent the observed data’s information content [63]. The classical methods include principal component analysis (PCA) [64,65] and maximum noise fraction (MNF) [66]. The core idea of these methods is essentially to equate the number of endmembers with the intrinsic dimension of the data. PCA finds principal component information in an image by solving for the variance contribution rate of each principal component. This method can remove redundancy in hyperspectral data, minimize information loss, and transform the data into uncorrelated components, making it well suited to processing multi-band data. However, when the band count significantly exceeds the endmember count, especially when there are small targets in the image data, PCA will discard endmembers in the subspace, which can easily overlook subtle spectral information and lead to significant errors.
MNF is similar to PCA and is equivalent to two consecutive PCA transformations; both perform linear transformations on the data. The difference is that after the MNF transformation, spectral bands are ranked by descending signal-to-noise ratio (SNR), whereas PCA sorts components in decreasing order of information content. Since both algorithms are based on linear models and consider the endmembers to be uncorrelated signals, the number of bands with a high SNR can be approximated as equal to the number of endmembers. In advanced processing stages, the indegree distribution (IDD) method [67] leverages intrinsic dimensionality to identify optimal matches between target and synthesized hyperspectral imagery, enabling robust endmember count estimation. Experimental results confirm IDD’s effectiveness in complex unmixing scenarios. However, the algorithm relies on a spectral library and some prior knowledge and cannot determine the endmembers automatically.
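As a simple illustration of the eigenvalue-thresholding idea behind these intrinsic dimensionality methods, the following hedged Python sketch counts the principal components needed to retain a chosen fraction of the total variance; the fixed energy cutoff is purely illustrative and not part of any published algorithm:

```python
# Hedged sketch: PCA-eigenvalue estimate of the endmember count.
import numpy as np

def estimate_num_endmembers_pca(Y, energy=0.999):
    """Y: (num_pixels, num_bands) hyperspectral data matrix."""
    Yc = Y - Y.mean(axis=0)                           # center the pixels
    eigvals = np.linalg.eigvalsh(np.cov(Yc.T))[::-1]  # descending eigenvalues
    ratio = np.cumsum(eigvals) / eigvals.sum()        # cumulative explained variance
    return int(np.searchsorted(ratio, energy) + 1)    # components to reach the cutoff
```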

2.1.2. Likelihood Function-Based Algorithms

These algorithms are based on likelihood functions, such as the Akaike information criterion (AIC) [68] and minimum description length (MDL) [69]. Because endmember number estimation here requires prior knowledge of the mixing model or the likelihood function, incorrect prior information may cause model mismatch errors. In addition, when applied to hyperspectral data, the AIC and MDL criteria frequently overestimate endmember counts due to high-dimensional noise amplification and model mismatch.

2.1.3. Eigenvalue Analysis-Based Method

It should be noted that the SNR of a hyperspectral imaging system has an evident influence on the identification result [70]. Eigenvalue-based methods determine the endmember count by identifying significant eigenvalues in the hyperspectral data’s covariance matrix, where the number of eigenvalues exceeding a noise-dependent threshold corresponds to the estimated endmembers. The most representative algorithm is the virtual dimensionality (VD) [71] algorithm, which takes the minimum number of spectrally distinct signals as the final number of endmembers. However, it requires the calculation of the covariance and correlation matrices, which is computationally intensive and sensitive to noise, and the results are easily affected by the chosen false alarm rate.
Many methods can be used to estimate the VD besides PCA. The literature uses the AIC, MDL, and Harsanyi–Farrand–Chang (HFC) algorithms to assist in the estimation and concludes that HFC achieves the best effect [63]. The HFC algorithm, based on Neyman–Pearson theory, determines the endmember count by statistically separating signal from noise. It computes and compares eigenvalues of the sample correlation and covariance matrices: a component is declared a signal source when its correlation eigenvalue significantly exceeds the corresponding covariance eigenvalue. However, this algorithm may produce random fluctuations in the results under different threshold choices and does not include a noise-whitening step, which often results in weak signals being overwhelmed by noise. Therefore, researchers have proposed the noise-whitened HFC method (NWHFC) and the noise subspace projection method (NSP) to address this problem [72]. Among them, NSP combines the advantages of HFC and NWHFC and is suitable for data processing in small-sample situations. The eigenvalue maximum likelihood function (ELM) [73] algorithm enhances the VD algorithm by exploiting the statistical distribution of adjacent eigenvalue differences from the covariance matrix. This parameter-free approach eliminates noise estimation requirements while improving endmember count accuracy, and it can still provide accurate estimates under low SNR conditions.
However, the maximum likelihood function is affected by noise, making the extreme values indistinct and the peak fluctuations small, which results in an underestimated endmember count. To overcome these problems, the authors of [74,75] propose approaches based on the clustering principle. In these works, each endmember dominates a distinct subset of pixels, enabling cluster-based determination of the endmember count and spectral signatures. The normal compositional model (NCM) and a layered Bayesian algorithm are used to determine the endmember count [76]; this algorithm is suitable when the number of endmembers and the amount of data are small. The hyperspectral signal identification by minimum error (HySime) algorithm estimates the signal and noise correlation matrices to identify an orthogonal basis that optimally represents the signal subspace under the minimum mean square error criterion [77]. Owing to its theoretical consideration of the high correlation between adjacent bands in hyperspectral images, this method can effectively estimate noise. The disadvantage is that the covariance matrix of the sample noise needs to be estimated, which requires a large amount of computation. Subsequently, the authors propose a hyperspectral Stein’s unbiased risk estimator (HySURE) to identify the hyperspectral signal subspace [78]. Unlike previous methods, it leverages spatial–spectral coherence to achieve robust signal subspace identification, maintaining performance with water absorption bands (1350–1500 nm, 1800–2000 nm) and noise-contaminated data (SNR ≥ 15 dB).
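To make the eigenvalue comparison behind HFC concrete, the following hedged Python sketch counts components whose correlation-matrix eigenvalue exceeds the corresponding covariance-matrix eigenvalue; the fixed gap threshold tau stands in for the Neyman–Pearson test of the published algorithm:

```python
# Illustrative sketch of the HFC eigenvalue comparison (not the full algorithm).
import numpy as np

def hfc_sketch(Y, tau=1e-3):
    """Y: (num_pixels, num_bands); returns a rough endmember-count estimate."""
    R = (Y.T @ Y) / Y.shape[0]            # sample correlation (second-moment) matrix
    K = np.cov(Y.T)                       # sample covariance matrix
    lr = np.sort(np.linalg.eigvalsh(R))[::-1]
    lk = np.sort(np.linalg.eigvalsh(K))[::-1]
    return int(np.sum(lr - lk > tau))     # components where signal energy exceeds noise
```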

2.1.4. Geometry-Based Algorithms

Endmember number estimation based on geometry employs two principal algorithms: the geometry convex hull algorithm (GENE-CH) and the geometry affine hull algorithm (GENE-AH) [79]. These methods operate on the fundamental principle that all observed pixel vectors reside within either (a) the convex hull of the endmembers (GENE-CH) or (b) the affine hull of the endmembers (GENE-AH). Consequently, sequentially extracted endmember vectors must lie within the hull structure defined by previously identified endmembers. The geometry-based endmember number estimation algorithm is therefore essentially a termination rule for the successive extraction of endmembers. This approach is mainly suitable for images containing pure pixels under noise-free conditions and can achieve stable estimates without randomly initializing the endmembers; the specific computational cost depends on the endmember extraction algorithm. Andreou et al. [80] propose an outlier detection method (ODM) for noise estimation and whitening preprocessing of hyperspectral image data. The signal is assumed to lie at the outliers of the noise hypersphere, and the endmembers extracted on this basis yield good results. Since most endmember extraction methods require prior knowledge of the endmember count, recent research has shifted toward simultaneous estimation approaches that jointly optimize the endmember number and spectral signatures [81].
In addition, Tao et al. combined endmember extraction algorithms with their proposed method to estimate the number of endmembers under unknown conditions [82]. Song et al. [83] use an iterative error minimization framework with spectral constraints, progressively reducing the endmember count from an overestimated initial value to the optimal number at which the reconstruction error stabilizes. However, due to the need for multiple iterations and the calculation of the mean error, the computational cost is high and the running speed slow. Accurate endmember count estimation remains challenging, yet most extraction methods require this prior knowledge, a circular dependency limiting practical application. Overcoming this limitation through robust endmember cardinality estimation would enable truly unsupervised unmixing pipelines, dramatically enhancing adaptability across diverse sensing scenarios [84].

2.2. Endmember Extraction

Endmember extraction aims to extract relatively pure spectral vectors from hyperspectral remote sensing images and use these vectors as the image endmembers. Many researchers have proposed various algorithms for endmember extraction, but there is still no clear and detailed classification. Depending on whether the endmembers are known, these algorithms can be categorized along two primary dimensions: (1) learning paradigm: supervised or unsupervised spectral unmixing; and (2) implementation architecture: simultaneous (joint endmember–abundance estimation) or sequential (dimensionality reduction first, then endmember extraction and abundance estimation). In this article, endmember extraction algorithms are categorized into six groups based on their main characteristics: geometric simplex volume methods, statistical error methods, spatial projection methods, algorithms incorporating spatial information, sparse regression algorithms, and other intelligent endmember extraction algorithms.

2.2.1. Geometric Simplex Volume Methods

At present, geometric simplex volume methods are the most representative and mature traditional endmember extraction algorithms, including minimum volume transformation (MVT) [85], N-FINDR [86], the iterative error analysis algorithm (IEA) [87], the simplex shrink-wrap algorithm (SSWA) [88], the sequential maximum angle convex cone method (SMACC) [89], iterated constrained endmembers (ICE) [90], vertex component analysis (VCA) [91], minimum volume constrained nonnegative matrix factorization (MVC-NMF) [92], minimum volume simplex analysis (MVSA) [93], simplex identification via split augmented Lagrangian (SISAL) [94], etc. These algorithms leverage convex geometry principles, modeling hyperspectral data as point clouds where endmembers form simplex vertices in feature space. This structure enables the identification of pure spectral signatures through vertex detection. The endmembers in the hyperspectral image correspond to the vertices of the simplex, and the pure pixels appear on the surface of the simplex, close to the vertices. The required endmembers are obtained by finding the external minimum-volume or internal maximum-volume simplex. However, these methods involve nonlinear programming in the solving process, and their reliance on non-convex objective functions (e.g., simplex volume maximization) often traps solutions in local minima, preventing convergence to the global optimum. In addition, most geometric endmember extraction methods assume that the image contains pure pixels, so when the ground cover in hyperspectral images is highly mixed, these methods are no longer applicable. To address these problems, constrained and, further, generative methods have been proposed, such as [95,96,97].
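The internal maximum-volume idea can be illustrated with a compact, hedged sketch in the spirit of N-FINDR: after PCA reduction to P-1 dimensions, candidate vertices are greedily swapped whenever a swap enlarges the simplex volume. This is a simplified illustration, not the published algorithm:

```python
# Hedged sketch of simplex-volume maximization in the spirit of N-FINDR.
import numpy as np

def simplex_volume(V):
    """V: (P-1, P) columns are candidate vertices in PCA-reduced space."""
    M = np.vstack([np.ones(V.shape[1]), V])   # augmented matrix; |det| ~ volume
    return abs(np.linalg.det(M))

def nfindr_sketch(Yr, P, n_iter=3, seed=0):
    """Yr: (P-1, N) PCA-reduced pixels; returns indices of endmember pixels."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(Yr.shape[1], P, replace=False)   # random initial vertices
    for _ in range(n_iter):
        for j in range(P):                            # try replacing each vertex
            for n in range(Yr.shape[1]):
                trial = idx.copy()
                trial[j] = n
                if simplex_volume(Yr[:, trial]) > simplex_volume(Yr[:, idx]):
                    idx = trial                       # keep the larger simplex
    return idx
```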

2.2.2. Statistical Error Methods

There are also many algorithms based on statistical error analysis, such as independent component analysis (ICA) [98], nonnegative matrix factorization (NMF) [99], nonnegative tensor factorization (NTF) [100], nonnegative matrix functional factorization (NMFF) [101], dependent component analysis (DECA) [102], Bayesian approaches [103], and robust collaborative nonnegative matrix factorization (R-CoNMF) [104]. These algorithms mainly introduce geometric constraints and statistical analysis to minimize the error and obtain spectral unmixing results. They can achieve better results than geometric methods in processing highly mixed hyperspectral images, but at the cost of high computational complexity. Because the endmember number and the endmembers’ associated spectral reflectance signatures are unknown in most cases, spectral unmixing under these conditions is known as blind signal separation. ICA is a blind signal separation method, but its model requires complete independence of the source signals, which contradicts the abundance constraints of hyperspectral data; therefore, in some cases, it can serve only as an approximate solution for HU. In constructing linear models for hyperspectral data, maximum a posteriori estimation is introduced into Bayesian analysis methods, and the previously mentioned MVC-NMF and IEA can also be classified as statistical error methods.
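As a hedged sketch of the NMF family named above, the following minimal loop applies the classical multiplicative updates to factor the pixel matrix into nonnegative endmember and abundance matrices; published unmixing variants (e.g., MVC-NMF, L1/2-NMF) add volume or sparsity terms, and the per-iteration sum-to-one normalization used here is only a common heuristic:

```python
# Hedged sketch of plain NMF-based unmixing with multiplicative updates.
import numpy as np

def nmf_unmix(Y, P, n_iter=200, eps=1e-9, seed=0):
    """Y: (L, N) pixels as columns; returns E (L, P) endmembers, A (P, N) abundances."""
    rng = np.random.default_rng(seed)
    L, N = Y.shape
    E = rng.uniform(size=(L, P))
    A = rng.uniform(size=(P, N))
    for _ in range(n_iter):
        A *= (E.T @ Y) / (E.T @ E @ A + eps)      # multiplicative abundance update
        E *= (Y @ A.T) / (E @ A @ A.T + eps)      # multiplicative endmember update
        A /= A.sum(axis=0, keepdims=True) + eps   # heuristic sum-to-one projection
    return E, A
```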

2.2.3. Spatial Projection Methods

Spatial projection methods include algorithms such as the pure pixel index (PPI) [105,106], orthogonal subspace projection (OSP) [107], the sequential projection algorithm (SPA) [108], and the support vector machine (SVM) [109], which extract endmembers based on simplex vector projection. The first three algorithms project hyperspectral data from high to low dimensions, which can significantly reduce computational complexity and processing time while enhancing the SNR. However, they offer no improvement in solving the ill-posed inverse problem of mixed pixel decomposition. The support vector machine handles nonlinear classification by projecting low-dimensional data into higher-dimensional feature spaces via kernel functions, enabling linear separation while maintaining strong generalization capabilities. However, the selection of kernel functions and optimal parameter combinations remains a challenging problem.
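The projection idea behind PPI can be sketched in a few lines: pixels are projected onto many random unit vectors ("skewers"), and pixels that repeatedly appear as projection extremes are flagged as candidate pure pixels. The following hedged Python sketch uses a simplified voting rule and illustrative names:

```python
# Illustrative sketch of the pure pixel index (PPI) voting scheme.
import numpy as np

def ppi_sketch(Y, n_skewers=1000, seed=0):
    """Y: (N, L) pixels as rows; returns a per-pixel extremity count."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(Y.shape[0], dtype=int)
    for _ in range(n_skewers):
        w = rng.standard_normal(Y.shape[1])
        proj = Y @ (w / np.linalg.norm(w))   # project all pixels onto the skewer
        counts[proj.argmax()] += 1           # the two extremes get a vote each
        counts[proj.argmin()] += 1
    return counts                            # high counts mark candidate pure pixels
```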

2.2.4. Integrated Spatial Information Method

Representative algorithms include automatic morphological endmember extraction (AMEE) [110], the sparse unmixing algorithm via variable splitting augmented Lagrangian and total variation (SUnSAL-TV) [111], spatial–spectral endmember extraction (SSEE) [112], the spatial preprocessing algorithm (SPP) [113], constrained NMF, etc. In [114], the spatial potential energy-weighted maximum simplex (SPEW) algorithm integrates spatial contextual constraints with spectral information through a potential energy model, optimizing endmember extraction via regularized simplex volume maximization. Compared with the previously classified methods, integrated spatial information methods incorporate spatial information as constraints in the objective function of the solution process. By fully considering the continuous smoothing characteristics and spatial correlation of abundance changes between adjacent pixels, they can more reasonably reflect variations in ground object content and avoid large jumps in abundance values between neighboring pixels. Smoothing techniques applied to imagery and endmember signatures to reduce noise effects include the Savitzky–Golay smoothing filter [115]. Integrating spatial constraints during spectral unmixing substantially increases computational complexity, necessitating parallel computing strategies to achieve practical processing speeds.

2.2.5. Sparse Regression Methods

Since the sparse regression-based endmember extraction approaches acquire endmember and abundance fraction matrix simultaneously, the specific algorithms will be introduced in detail in the “Abundance Estimation” section.

2.2.6. Intelligent Endmember Extraction Algorithms

Researchers have proposed intelligent endmember extraction algorithms such as discrete particle swarm optimization endmember extraction (DPSO) [116], ant colony optimization endmember extraction (ACO-EE) [117], the adaptive cuckoo search endmember extraction algorithm (ACSEE) [118], endmember bundle extraction [119,120], etc. These methods can help HU achieve good performance. However, these methodological advances incur substantial computational costs and introduce multiple hyperparameters requiring expert tuning, limiting operational deployment. Consequently, deep learning will be separately discussed in Section 3 as a branch of the emerging intelligent algorithms.
Figure 4 shows partial statistics of the categories of hyperspectral endmember extraction methods obtained from the Web of Science database for 2021 and 2024. The search term is hyperspectral endmember extraction. The others in the figure refer to unclassified algorithms, including deep learning methods.
The literature analysis for 2021 revealed that methods based on statistical errors, particularly nonnegative matrix factorization (NMF), are the most extensively researched endmember extraction techniques, followed by other methods and spatial projection techniques. Recently introduced sparse regression algorithms are also a focal point of current research. In contrast, the most classic geometric simplex volume algorithms still account for a significant proportion due to their simplicity and ease of implementation. It should be emphasized that the traditional intelligent endmember extraction category does not include deep learning-based methods, which can be seen to occupy a large proportion. By 2024, deep learning methods had become mainstream; crucially, deep learning must be effectively integrated with spatial constraints to achieve higher unmixing accuracy. Research on sparse regression methods and intelligent methods remains stable. In contrast, research based on statistical error methods shows a relative decreasing trend, and this is even more pronounced for spatial projection methods. Nevertheless, subspace approaches and non-convex surrogates are differentiable and analytically solvable, indicating their potential for efficient optimization. In the latest published literature [121], a novel approach called graph-regularized oblique projection weighted NMF is introduced, which is grounded in a more precise separation of signal and noise subspaces, aiming to enhance the accuracy and robustness of the analysis. It can therefore be inferred that the fusion of multiple algorithms and the introduction of new model algorithms will be key future research directions for endmember extraction technology.

2.3. Abundance Estimation

This stage involves determining the fraction of each component within a mixed pixel. Abundance estimation methods for generalized bilinear mixing models (nonlinear models) are mainly based on the Bayesian method, the gradient descent algorithm (GDA) [122], the semi-nonnegative matrix factorization method (semi-NMF) [123], superpixel-based low-rank tensor factorization (SLRTF) [124], etc. However, the Bayesian method has considerable computational complexity, semi-NMF is prone to becoming stuck in local minima and demands stringent initialization, and the GDA applies only to small datasets. Among abundance estimation algorithms for linear models, some can concurrently acquire the endmember and abundance matrices, such as the NMF, ICA, and IEA algorithms (introduced in the previous section), the orthogonal projection method [125], and sparse regression algorithms. This section mainly introduces the sparse regression algorithm and abundance estimation algorithms applied after the endmember matrix has been obtained.

2.3.1. Sparse Regression Algorithm

Sparse regression algorithms can simultaneously obtain the endmember matrix and the corresponding abundance information. In recent years, semi-supervised unmixing methods have emerged in addition to supervised and unsupervised decomposition of hyperspectral mixed pixels; the sparse regression method is one kind of semi-supervised unmixing [126]. Representative methods include the sparsity-promoting iterated constrained endmember extraction algorithm (SPICE) [127], the constrained sparse regression problem (CSR), sparse unmixing by variable splitting and augmented Lagrangian (SUnSAL) [128], constrained basis pursuit denoising (CBPDN) [128], the L1/2 sparsity-constrained nonnegative matrix factorization algorithm (L1/2-NMF) [129], the double-reweighted sparse regression unmixing algorithm (DRSR) [130], the local collaborative sparse regression method (LC-SR) [131], and the elastic reweighted sparse regularized unmixing method, termed ElSpaSU [132]. Fundamentally, the reconstruction residual, the induced sparsity, and regularization terms that include spatial information represent three competing objectives requiring minimization. Building on this, researchers have lately sought to address the hyperspectral sparse unmixing problem by employing multi-objective evolutionary algorithms (MOEAs), such as pruning-based multi-objective sparse unmixing (PMoSU) [133], multi-objective group sparse hyperspectral unmixing (MO-GSU) [134], and the evolutionary multitasking cooperative transfer (EMCT) framework [135]. Assuming that the spectral information within an image constitutes a linear combination of the spectra of several predefined pure pixels, the spectral subset most akin to the spectrum of each mixed pixel in the image is identified from the spectral library.
Unlike the methodologies above, the sparse regression approach initially constructs a spectral library comprising many spectral samples. Subsequently, it combines prior knowledge with the linear blending of spectral curves contained within the library, matching each pixel spectrum of a given hyperspectral image to ascertain the optimal subset. This process facilitates the estimation of both the endmember signatures and the endmember count within the image. Consequently, it has recently been one of the forefront algorithms for hyperspectral mixed pixel unmixing. If the constraint that the abundances sum to one is incorporated into the sparse regression algorithm, the problem translates to a fully constrained least-squares feasibility problem when pursuing the optimal solution. However, most of these approaches are nonconvex and sensitive to the regularization parameters. Furthermore, semi-supervised unmixing methods also face challenges caused by spectral variability. Subsequently, a groundbreaking semi-supervised unmixing algorithm grounded in a diffusion model was introduced; it acquires the spectral prior distribution from a specified spectral library and exhibits the capacity to adjust to the spectral discrepancies between the actual underlying signatures and those present in the library [136].
In essence, the mathematical basis of sparse unmixing is an NP-hard optimization problem, and in real-world applications, it is challenging to satisfy the constraint that the sum of abundances equals one. Nonetheless, despite issues like image noise and the disparity between the model and the image, the sparse regression algorithm proves superior to both constrained and fully constrained least squares in terms of unmixing performance. The success of hyperspectral image unmixing using sparse regression hinges on identifying the correct number of endmembers and establishing an appropriate spectral library. Due to the inherent intercorrelation within the spectral library, it is challenging to obtain it from the same dataset. Furthermore, acquiring the spectral library is both time-consuming and labor-intensive, and it also requires consideration of the calibration issues between the spectral library and the dataset [2].
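In practice, the NP-hard L0 formulation is usually relaxed to a convex L1-regularized least-squares problem over the library. The following hedged Python sketch solves the nonnegative version of this relaxation with a simple proximal (ISTA-style) iteration; SUnSAL solves a closely related problem with ADMM, and all names here are illustrative:

```python
# Hedged sketch of library-based sparse unmixing:
#   min_x  0.5 * ||y - D x||^2 + lam * ||x||_1   subject to  x >= 0
import numpy as np

def sparse_unmix_ista(D, y, lam=1e-3, n_iter=500):
    """D: (L, M) spectral library; y: (L,) mixed pixel; returns sparse abundances x."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2            # 1 / Lipschitz constant of D^T D
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)                      # gradient of the data-fidelity term
        x = np.maximum(x - step * (grad + lam), 0.0)  # nonnegative soft-thresholding
    return x
```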
To solve these problems, dictionary learning [137] has been introduced, learning directly from datasets without requiring prior knowledge. In addition, the sparse regression algorithm is applicable only to the linear mixing model, cannot achieve good results for highly mixed hyperspectral data, and makes insufficient use of spatial information. For this reason, the literature [138] extends the sparse regression algorithm from the linear model to the bilinear mixing model and establishes a linear dictionary and a bilinear dictionary. The generalized low-rank representation (LRR) model fully considers spatial information and can effectively estimate the abundances of hyperspectral images under bilinear mixing models. However, it applies only to bilinear mixing models and is sensitive to the regularization parameter under the abundance sum-to-one condition. Moreover, a series of bilinear mixture models based on unsupervised nonlinear spectral unmixing face challenges due to collinearity and the complexity of nonconvex problems. To overcome these obstacles, the authors propose a geometrical projection improved multi-objective particle swarm optimization (GPMOPSO) method, which enhances nonlinear unmixing through geometrical projection and multi-objective optimization. Two advanced multi-objective optimization strategies are adopted: one focuses on minimizing the distance between endmembers, while the other promotes the sparsity of the abundance values [139]. In contrast, existing techniques relying on dictionary learning are insufficiently resilient in the presence of noise. To tackle this issue, the authors suggest a method grounded in online robust dictionary learning, referred to as EEORDL [140], whose online approach decreases the computational time required for dictionary learning. In [141], the authors propose a spatial self-similarity regularization method, which can effectively use high-order spatial structure information, but its huge computational load restricts its practical application potential.

2.3.2. The Abundance Estimation Algorithm After Obtaining the Endmember Matrix

The algorithms utilized for abundance estimation after obtaining the endmember matrix primarily encompass the least-squares method and artificial neural network methods, among others. The least-squares algorithm is the most classical abundance estimation technique, wherein the abundance matrix is derived by minimizing the sum of squared errors. This family extends from unconstrained least squares (LS) to sum-to-one constrained least squares (SCLS), nonnegativity-constrained least squares (NCLS), and fully constrained least squares (FCLS) [142,143]. These four least-squares algorithms progressively add algebraic constraints on the abundances; although the computation increases, the overall unmixing accuracy gradually improves [142]. However, the least-squares algorithm has the following disadvantages: when there are many endmembers, its convergence rate is slow; it places high requirements on the accuracy of the endmember matrix, so a small amount of noise or change in the endmember matrix may cause large deviations in the abundance estimates [144]; and it neglects the high-order correlation between bands and cannot accommodate nonlinearly mixed image data.
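For illustration, FCLS is often implemented by augmenting the nonnegativity-constrained problem with a heavily weighted row of ones so that a standard NNLS solver approximately enforces the sum-to-one constraint. The following hedged Python sketch uses this common trick; the weight delta and all names are illustrative:

```python
# Hedged sketch of fully constrained least squares (FCLS) via weighted NNLS.
import numpy as np
from scipy.optimize import nnls

def fcls_sketch(E, y, delta=1e3):
    """E: (L, P) endmember matrix; y: (L,) pixel spectrum; returns abundances (P,)."""
    E_aug = np.vstack([E, delta * np.ones((1, E.shape[1]))])  # weighted ones row
    y_aug = np.append(y, delta)                               # forces sum(a) close to 1
    a, _ = nnls(E_aug, y_aug)                                 # NNLS enforces a >= 0
    return a
```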
In particular, mixture-tuned matched filtering (MTMF) has been proven to be an effective tool for unmixing with respect to a user-defined endmember [145]. A crucial step of the MTMF process is selecting threshold values for the infeasibility scores within the range of MF values, which ultimately determine the final classification of the target spectra at each pixel [146]. MTMF exhibits both a high degree of sub-pixel detection capability and outstanding false alarm removal. However, its accuracy depends on the accuracy of the provided endmember and on the degree of linearity of the spectral mixing involved [145]. Subsequently, researchers have utilized the discrete wavelet transform (DWT) [147], the genetic algorithm with least mean square error (GA-LSEM) [148], kernel fully constrained least squares [149], primal–dual interior-point optimization [150], a nonlinear unmixing algorithm with spatial regularization constraints [151], the vectorized code projection gradient descent method (VCPGD) [152], and the bounded projected optimal gradient method (BPOG) [153] to improve the least-squares abundance estimation accuracy. Specifically, a more flexible algorithm, Tucker tensor decomposition with rank estimation for sparse hyperspectral unmixing (TTDRE), has been proposed to differentiate the low-rankness of different dimensions of the abundance tensor while preserving structural integrity [154]. Additionally, artificial neural network methods, such as the radial basis function (RBF) neural network [155], the alternating direction method of multipliers (ADMM) neural network [156], and the recurrent neural network (RNN) [157], as well as other regression methods, such as the principal component analysis network (PCANet) [158] and support vector regression (SVR) [159], can also realize abundance estimation. Compared with linear unmixing methods, artificial neural network algorithms achieve better results in processing both linearly and nonlinearly mixed data.
Figure 5 shows the classification and summary of typical traditional HU algorithms for hyperspectral images.

3. Deep Learning-Based HU Approaches

Over the last decade, with the advancement of computer technology, deep learning has unveiled new avenues for processing data in data-intensive fields, such as hyperspectral unmixing and classification [176]. In deep learning networks, the underlying function is acquired through supervised training or by discovering the latent structures within the data. Initially, deep learning networks were applied to HU in a two-phase manner: features are first extracted from the data and subsequently utilized for the unmixing process, with the network trained on the accessible information, including the number of endmembers and their distinctive characteristics. Like conventional methods, deep learning-based HU techniques lack a precise categorization strategy. Rasti et al. distinguish them by supervised, unsupervised, or semi-supervised learning [52]. This paper examines HU by leveraging five fundamental deep learning architectures: autoencoders, CNNs, generative models, transformers, and RNNs.

3.1. Autoencoders

The autoencoder, a symmetrical neural network widely studied in the field of HU, stands out as the most researched of the five architectures due to its proficiency in unsupervised feature learning, accounting for over 40% of the research. The statistical results are sourced from the Web of Science Core Collection database, accessible at https://webofscience.clarivate.cn/ (accessed on 4 August 2025). The autoencoder is a fundamental building block in deep learning, commonly used for the hierarchical construction of complex models. Autoencoders can organize data, compress it efficiently, and extract high-level features crucial for analysis; these features facilitate unsupervised learning and make it possible to draw out intricate, nonlinear patterns from the data [177]. An autoencoder comprises an input layer, an encoder, a latent feature representation, a decoder, and an output layer, as depicted in Figure 6. Generally, the encoder produces a condensed version of its input, and the decoder then rebuilds the input from this condensed version. Introducing a bottleneck within the network enables it to uncover and exploit the inherent structure within the data, resulting in a lower dimensionality for the latent features compared to the input. For HU, the encoder translates the input spectrum into a latent code, which can be viewed as the abundance fractions. The linear decoder layer reassembles the input as a convex combination of the columns of its weight matrix, the endmembers. However, the simple autoencoder fails to satisfy the abundance sum-to-one and nonnegativity constraints. Hence, to improve the performance of HU using an autoencoder, a more sophisticated structure is needed to implement the linear mixing model fully. In recent times, thorough research on autoencoders and their uses has resulted in various spectral unmixing techniques, including nonnegative sparse autoencoders [178], stacked nonnegative sparse autoencoders [179], convolutional autoencoder unmixing [180], variational autoencoders [181], and so on.
Various approaches can be taken to fulfill the abundance nonnegativity constraint, which requires the encoder’s output to be nonnegative. One effective method is choosing an activation function for the final encoder layer that produces nonnegative values, such as the rectified linear unit (ReLU) or the sigmoid function. Another approach involves enforcing the sum-to-one constraint on the abundance fractions together with nonnegativity, which can be achieved by normalizing the latent code vector obtained from the encoder’s last layer. Additionally, these constraints can be imposed through regularization by incorporating a penalty term into the loss function; however, this approach increases training instability and convergence time.
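The following minimal PyTorch sketch shows one common way to realize these constraints: a softmax on the latent code enforces both nonnegativity and sum-to-one, while a single bias-free linear decoder holds the endmember matrix as its weights. Layer sizes and names are illustrative and do not reproduce any specific published network:

```python
# Hedged sketch of a linear-mixing unmixing autoencoder with ANC/ASC via softmax.
import torch
import torch.nn as nn

class UnmixingAE(nn.Module):
    def __init__(self, n_bands, n_endmembers):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, 64), nn.ReLU(),
            nn.Linear(64, n_endmembers),
        )
        # Bias-free linear decoder: weight columns play the role of endmembers.
        self.decoder = nn.Linear(n_endmembers, n_bands, bias=False)

    def forward(self, y):
        a = torch.softmax(self.encoder(y), dim=-1)  # abundances: >= 0 and sum to 1
        return self.decoder(a), a                   # reconstruction and latent code

    def clamp_endmembers(self):
        self.decoder.weight.data.clamp_(min=0.0)    # keep endmember spectra nonnegative
```

In training, the mean squared reconstruction error is minimized, and clamp_endmembers() can be called after each optimizer step to keep the learned endmember columns nonnegative.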

3.1.1. Cascade Autoencoder

A pioneering study employing a cascade of autoencoders for HU is [182]. It assumes the number of endmembers is known a priori, then extracts them and estimates abundances. It fundamentally relies on a marginalized denoising autoencoder that minimizes the expected reconstruction loss of the observed data to obtain optimal weights. The method is suitable for high-noise hyperspectral images but imposes no constraints to improve its performance. Subsequently, inspired by this technique, many constrained autoencoder-based models have been applied. In reference [183], the authors propose an entire autoencoder network in which the encoder and decoder are kept independent, with the decoder restricted to nonnegativity only; additionally, the technique applies a constraint to enhance both the estimation of the number of endmembers and the abundance fraction estimation. Gao et al. [184] propose a cycle-consistency unmixing network that employs two cascaded autoencoders to elevate unmixing efficiency. Furthermore, it incorporates a new self-perception loss function, which includes two spectral reconstruction terms and an abundance reconstruction term, to enhance the unmixing procedure further. Nevertheless, the method spends more time on extensive data and should consider spatial–spectral information to improve performance [184]. In reference [185], a cascaded hybrid convolutional autoencoder nonlinear unmixing network, called CHCANet, effectively leverages convolutional combinations to deeply investigate the spectral–spatial data within hyperspectral images and maintain the material details via self-awareness.

3.1.2. Sparse Autoencoder

Since the number of bands in a hyperspectral image is much greater than the number of endmembers to estimate, the sparse autoencoder (SAE) stands out due to its unique feature of restricting the simultaneous activation of neural nodes [54]. This attribute allows it to acquire a sparse representation of the input data by incorporating a sparsity constraint into the loss function. The main objective of the SAE is to reduce the difference between the input data and the reconstructed data while strictly adhering to the sparsity constraints on the latent representation. Accordingly, sparse autoencoders employ a sparse bottleneck layer: the bottleneck contains significantly fewer units than the input layer, and its activations are kept sparse via a regularization term in the loss function. Initial studies typically constructed such HU models with fully connected layers and processed pixels individually, neglecting the inherent spatial correlation in imaging. To overcome this limitation, autoencoder-based methods typically add spatial regularization terms, such as total variation, to the loss function. Another sparse autoencoder, named EndNet [186], is proposed to extract endmembers. It is based on a two-stage autoencoder network with additional layers and a projection metric, including the Kullback–Leibler divergence function, to achieve an optimal solution. A rectified activation function and a normalization layer effectively enforce sparsity. To avoid overfitting, because the network tends to constantly generate hidden abstract representations for highly correlated materials, the method uses hard-response selection and regularization, improving the sparsity of the fractional abundances. In addition, the authors also combine nonnegative sparse autoencoders to improve HU’s robustness [187].
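As a hedged illustration of such a sparsity-regularized loss, the sketch below augments the reconstruction error of the illustrative UnmixingAE above with a sub-linear (L1/2-style) penalty on the abundances; note that a plain L1 term would be constant under that model's softmax sum-to-one constraint and would therefore not promote sparsity:

```python
# Illustrative sparse-unmixing loss for the earlier UnmixingAE sketch.
import torch

def sae_loss(model, y, lam=1e-4, eps=1e-8):
    y_hat, a = model(y)
    recon = torch.mean((y_hat - y) ** 2)    # data-fidelity (reconstruction) term
    sparsity = torch.sqrt(a + eps).mean()   # L1/2-style penalty, minimized at vertices
    return recon + lam * sparsity
```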

3.1.3. Robust Autoencoder

The robust autoencoder (RAE) is utilized to improve the resilience of autoencoders in handling input data that is noisy or corrupted [177]. These robust models are especially advantageous when dealing with real-world datasets containing noise, outliers, or other imperfections [188]. Hyperspectral images frequently suffer from contamination by outliers or noise, which inevitably impacts their unmixing performance. To tackle this challenge, denoising autoencoders have been introduced, such as marginalized denoising autoencoders and autoencoder networks with regularization by denoising (RED) [189].

3.1.4. Denoising Autoencoder

The denoising autoencoder is an autoencoder variant with a similar structure, differing only in input data [190]. During training, noise is added to the input via stochastic mapping. The output, however, is the original, noise-free input signal. Thus, loss is calculated between the reconstructed and original inputs. Unlike traditional autoencoders, the denoising autoencoder can recover the original input from a noisy signal. Both denoising autoencoders and nonnegative sparse coding exhibit superior noise reduction and inherent self-adaptation capabilities. In [179], a stack of nonnegative sparse autoencoders (SNSA) was introduced, wherein the final autoencoder is tasked with unmixing, whereas the other parts of the network bolster robustness in the face of outliers.
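A denoising training step can be sketched in a few lines: noise is injected into the input spectrum, while the loss compares the reconstruction against the clean target. The hedged sketch below reuses the illustrative UnmixingAE from above; the noise level is arbitrary:

```python
# Hedged sketch of a denoising-autoencoder training objective.
import torch

def dae_step(model, y_clean, sigma=0.01):
    y_noisy = y_clean + sigma * torch.randn_like(y_clean)  # corrupt the input
    y_hat, _ = model(y_noisy)
    return torch.mean((y_hat - y_clean) ** 2)              # loss against the clean target
```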

3.1.5. Two-Stream Autoencoder

A two-stream autoencoder is a neural network architecture that consists of two parallel processing streams, each designed to capture different types of information from the input data. Many existing deep-learning-based unmixing methods are based on a linear mixing model, ignoring complex nonlinear light-scattering interactions [191]. In addition, these methods neglect characteristic differences between endmembers, hampering endmember separation. To circumvent these problems, the authors of [191] proposed BU-Net, a two-stream stacked autoencoder framework that enhances inter-class separability; this novel approach addresses the limitations of linear mixing models to improve the accuracy of HU. Similarly, the authors of [192] used a two-stream deep network model, the endmember-guided unmixing network (EGU-Net), which learns an extra network from pure or nearly pure endmembers to amend the weights of a separate unmixing network by incorporating spectrally meaningful constraints and sharing network parameters. Subsequently, the researchers of [193] present a two-stream deep network in which the two autoencoder networks consist of a spatial network and a spectral network, respectively, operating in an end-to-end manner; this network pioneered the integration of spectral and spatial information into HU through a two-stream model. Analogously, embracing a dual-stream architecture, references [194,195] concentrate on spatial attention to aid the unmixing procedure. In particular, inspired by [193], a spatial–spectral features integrated autoencoder network has been introduced to enhance unmixing performance by utilizing the spatial variance information and the spectral data between pseudo-endmembers [196]. To achieve a balance between exploring spatial information and physically interpreting endmembers, a novel multi-domain dual-stream network, known as MdsNet, has been introduced, facilitating the production of accurate abundance maps [197]. In addition, there are other nonlinear or bilinear autoencoder-based methods, such as [198,199,200,201,202]. Thus, the two-stream architecture can improve the performance of autoencoders by enabling them to grasp a broader spectrum of characteristics from the input data, leading to more accurate and robust models for various tasks.

3.1.6. Masked Autoencoder

The masked autoencoder (MAE) is a variant of the autoencoder architecture tailored to sequence modeling tasks, particularly in computer vision and natural language processing (NLP) [177]. The MAE takes in a sequence of data and randomly hides a portion of its elements; the model must then predict the hidden elements entirely from the contextual clues in the visible parts of the sequence. This forces the model to infer missing data and to learn complex patterns and correlations within the dataset. For HU, the authors of [203] propose a mixed-region mask strategy, implemented on a multiscale convolutional autoencoder as the backbone unmixing network, to enhance robustness against the ill-posedness of unmixing. The mask mechanism gives the autoencoder strong resistance to noise interference in highly mixed regions, achieving greater accuracy and stability than previous methods.
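The core mask-and-reconstruct step can be sketched in a few lines. The fragment below randomly hides half of the spectral bands and scores the reconstruction only on the hidden positions; the masking ratio and the tiny network are illustrative assumptions, not the mixed-region design of [203].

```python
import torch
import torch.nn as nn

n_bands, mask_ratio = 200, 0.5                   # hypothetical settings
net = nn.Sequential(nn.Linear(n_bands, 64), nn.ReLU(),
                    nn.Linear(64, n_bands))      # predicts the full spectrum

x = torch.rand(32, n_bands)                      # pixel spectra
mask = torch.rand_like(x) < mask_ratio           # True = band hidden from the model
recon = net(x.masked_fill(mask, 0.0))            # forward pass on visible bands only
loss = ((recon - x)[mask] ** 2).mean()           # error scored on masked bands only
```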

3.1.7. Convolutional Autoencoder

Traditional autoencoders process one spectrum at a time and require specific regularization to handle spectral data. In contrast, convolutional autoencoders replace fully connected layers with convolutional layers, whose learnable filters operate on input patches and produce feature maps that exploit the spatial correlations present in hyperspectral imagery. Various unmixing methods based on convolutional autoencoders have been shown to efficiently integrate the spatial correlation among contiguous pixels, such as [184,204,205]. Moving beyond linear models, the authors of [206] propose a multilinear mixing model realized with convolutional autoencoders that accounts for interactions between endmembers up to infinite order; the technique explicitly models the correlations among endmembers, abundances, and transition probability, and experimental results confirm its efficacy relative to the traditional multilinear mixing model. Elsewhere, the authors of [207] built an autoencoder network with a dual-stream CNN in the encoder to derive spectral and local spatial information independently; this network can manage a range of intricate nonlinear situations simultaneously. However, because convolutional operations are confined to local features determined by the kernel size, a significant amount of contextual information from the original image is discarded, and convolutional autoencoders therefore cannot retain much of the initial information.

3.1.8. Semi-Supervised Autoencoder

It is worth noting that the autoencoder-based methods discussed above perform blind unmixing; that is, the endmember and abundance matrices are obtained simultaneously. Non-blind unmixing methods also exist that estimate the abundances of endmembers when the endmembers are known a priori. In this setting, the known endmembers serve as the weights of a non-trainable linear decoder [54]; the decoder is kept static during training, and only the encoder is trained. The process then operates like nonlinear regression, computing the abundance fractions corresponding to the predefined endmembers. Representative algorithms can be found in [208,209,210], all of which use a convolutional encoder and a decoder with predetermined weights to perform non-blind HU. Separately, the work in [84] uses an autoencoder to estimate the number of endmembers, achieving favorable results.
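This fixed-decoder setup is straightforward to express. In the hedged sketch below, the decoder weight is pinned to a known endmember matrix E (bands x endmembers) and excluded from optimization, so training the encoder reduces to nonlinear regression onto abundances. E is random here purely for illustration; in practice it would come from a spectral library or an endmember extractor.

```python
import torch
import torch.nn as nn

n_bands, n_end = 200, 5
E = torch.rand(n_bands, n_end)                 # hypothetical known endmembers

encoder = nn.Sequential(nn.Linear(n_bands, 64), nn.ReLU(),
                        nn.Linear(64, n_end), nn.Softmax(dim=1))  # ANC + ASC
decoder = nn.Linear(n_end, n_bands, bias=False)
decoder.weight.data = E                        # fix the decoder to the endmembers
decoder.weight.requires_grad = False           # keep it static during training

opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
x = torch.rand(64, n_bands)                    # observed mixed spectra
for _ in range(200):
    opt.zero_grad()
    a = encoder(x)                             # abundance estimates
    loss = ((decoder(a) - x) ** 2).mean()      # linear-mixing reconstruction error
    loss.backward()
    opt.step()
```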
The authors of [54] critically compare various blind autoencoder-based HU approaches. In general, many strategies can enhance HU performance within autoencoder frameworks: increasing the depth of the feedforward network, selecting appropriate activation and loss functions, introducing sparsity or other constraints, applying batch normalization and other regularization techniques, utilizing spatial information, and careful hyperparameter selection [211]. Although autoencoder-based HU methods have exhibited strong predictive efficiency and generalization, more demanding HU tasks remain open. For example, the power of depth in feedforward neural networks is still not fully understood in deep learning theory [212]. Given that most of the methods described do not fully account for the combination of spatial and spectral features, additional strategies exploiting joint spatial–spectral characteristics are needed. Furthermore, high intra-class variance and inter-class correlation limit the generalizability of the unmixing process. Initialization also affects the outcome: at present, most endmember matrices are initialized with VCA or randomly, and improperly initialized weights may yield unstable unmixing results or cause the optimization to become stuck in a local minimum [213]. Thus, further work on pre-training, additional constraints, and adaptive neural network designs for autoencoder-based HU frameworks could be pursued.

3.2. Convolutional Neural Network

The CNN draws inspiration from the biological visual system described in [214]. Following the natural visual recognition mechanism introduced by Hubel and Wiesel, the Neocognitron [215] is acknowledged as the first hierarchical, position-invariant model for pattern recognition [216] and can be regarded as the predecessor of the CNN [217,218]. Today, CNNs are widely applied in hyperspectral classification and HU, benefiting from the automatic extraction of relevant features that are difficult or impossible to obtain with manual feature extractors. The fundamental strength of a CNN is its weight-sharing mechanism, which markedly reduces the number of network parameters, helping to prevent over-fitting while reducing computational complexity [219].
A CNN typically comprises one or more sets of convolutional, activation, and pooling layers, culminating in fully connected layers. The convolutional layer extracts features by convolving the input with learned kernels; because each kernel is shared spatially across the input of its layer, the model has fewer parameters to adjust and is easier to train. An activation layer then introduces nonlinearities so that the network can capture nonlinear features of the input. A pooling layer subsequently reduces the feature map's resolution, providing a degree of shift-invariance (the exact ordering of activation and pooling varies across architectures). Finally, the fully connected layer links every neuron of the preceding layer to every neuron in the current layer. In [220,221], the authors show that the fully connected layer can be replaced with a global average pooling layer.
Generally speaking, there are three distinct categories of CNN structures for HU: (1) spectral CNN frameworks, called 1D-CNN; (2) spatial CNN frameworks, called 2D-CNN; and (3) spectral–spatial CNN frameworks, called 3D-CNN. Figure 7 shows the overall structure of the three methods.
In [222], Qi et al. proposed a deep spectral convolution network equipped with a fixed spectral library to estimate fractional abundances from extracted spectral features. They also developed a novel loss function, encompassing pixel reconstruction error, abundance cross-entropy, and abundance sparsity, to train the network end-to-end. While this method improves the network's utility, it relies heavily on the quality of the spectral library and fails to incorporate spatial information. Because 1D-CNNs consider only spectral feature learning, Wan et al. [223] combine the traditional root mean square error with L1 regularization in the loss function to constrain the sparsity of the predicted abundances and minimize training error.
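To make the 1D-CNN formulation concrete, the sketch below estimates abundances from a spectrum treated as a 1D signal and combines RMSE with an L1 sparsity penalty on the predicted abundances, loosely in the spirit of [223]. The layer sizes, the weight lambda_l1, and the ReLU output (chosen so the L1 term is informative, since a sum-to-one output would make it constant) are all illustrative assumptions.

```python
import torch
import torch.nn as nn

n_bands, n_end, lambda_l1 = 200, 5, 1e-3       # hypothetical settings
cnn = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, n_end), nn.ReLU())           # nonnegative abundance outputs

x = torch.rand(32, 1, n_bands)                 # spectra as 1-channel 1D signals
a_true = torch.softmax(torch.rand(32, n_end), dim=1)  # stand-in supervision

a_pred = cnn(x)
rmse = torch.sqrt(((a_pred - a_true) ** 2).mean())
loss = rmse + lambda_l1 * a_pred.abs().sum(dim=1).mean()  # RMSE + L1 sparsity
```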
It is noted that in hyperspectral images, spectral–spatial features can also be acquired by CNN-based frameworks. To fully utilize spatial data, the authors of [224] first applied a 2D-CNN framework to learn spatial structures across the entire HU process, encompassing both endmember extraction and abundance estimation; their model broadens the linear mixing model to scenarios in which neighboring pixels participate in pixel reconstruction. However, 2D-CNN-based approaches ignore spectral correlations [225] and require a large number of training samples to fit their parameters adequately. Fortunately, 3D-CNNs have shown promising performance in hyperspectral classification across both the spatial and spectral domains. In [205], a 3D-CNN-based architecture exhibited excellent performance in integrating spectral and spatial priors; however, it assumes the endmembers are predetermined and fixed, disregarding potential spectral variability. Zhang et al. [226] then employed a 3D-CNN architecture to perform unmixing, thoroughly harnessing the spectral and spatial features intrinsic to hyperspectral images. More precisely, they also introduce an autoencoder network whose decoder is specifically designed to account for endmember variability, and the loss function incorporates structured sparsity to regulate the encoder's configuration.
However, relying solely on 2D-CNNs or 3D-CNNs has several drawbacks. A model using only 2D-CNNs cannot derive robust discriminative feature maps from spectral data; in particular, it fails to capture inter-band relationships without a more sophisticated design. Conversely, the computational complexity and cost of a deep 3D-CNN are substantially higher, and it may produce inferior results for classes with similar textures across a wide range of spectral bands. To address these limitations, Tao et al. [227] combine 2D and 3D convolutions in a method named CrossCUN, which can estimate abundance fractions without prior knowledge from endmember extraction. In addition, [228] proposes a multi-branch convolutional architecture for HU that benefits from early and late fusion of spectral and spatial features to estimate abundance fractions accurately; it uses ground-truth abundances in a supervised way and does not address endmember determination. The 1D and 2D branches extract spectral and spatial features, respectively, while the 3D branch achieves an early fusion of the features from these two domains. In contrast to networks that rely solely on early fusion of spatial–spectral features, this design is better positioned to extract valuable features. To overcome the practical challenge of expensive or difficult-to-acquire labels in real-world scenarios, the authors of [226] present CNN-SsN, which integrates a 3D-CNN with semi-supervised learning, specifically designed for unmixing tasks where labeled training samples are scarce.
Although various CNN-based unmixing techniques have been tailored for blind or supervised unmixing, most depend heavily on precise endmember spectra or malfunction when pure pixels are absent from the hyperspectral imagery. This stems from their failure to exploit the geometrical features of the linear simplex. Conversely, simplex volume minimization has been shown to benefit blind hyperspectral unmixing in situations lacking pure pixels [229]. To integrate the geometric features of the linear simplex with the spatial correlation among neighboring pixels, reference [230] proposes a minimum simplex convolution network for deep HU. Unlike [231,232], geometrical information is expressed explicitly via simplex-volume-penalized least-squares optimization, so the method encapsulates both spatial and geometrical information. Specifically, spatial information is integrated by a convolutional operator, coupled with an implicit regularizer on the abundances.
Similarly, sparse unmixing can be applied to CNN-driven hyperspectral unmixing, which employs sparse regression methods to estimate the fractional abundances, leveraging a comprehensive and meticulously crafted library of pure spectra. In [233], a deep learning algorithm is proposed using a CNN for sparse unmixing. However, the primary drawback of sparse unmixing lies in its reliance on the spectral library, which can significantly impact the accuracy of abundance estimation. Furthermore, a significant hurdle in utilizing sparse unmixing is addressing mismatches between the actual reflectance spectra and those in the library, stemming from variations in acquisition conditions.
In [234], the authors combine the ideas of model-based and data-driven approaches, introducing a deep interpretable fully convolutional neural network designed for sparse hyperspectral unmixing. Specifically, they unfold the iterations of the traditional sparse unmixing algorithm to offer guidance for the network architecture and to integrate prior knowledge within the network. Additionally, they employ 2D convolutional layers to extract spatial information across different scales automatically.
In summary, we have reviewed recent developments of CNNs for HU. Although CNN-based HU architectures have achieved great success in unmixing performance, some aspects still require further investigation. For example, a significant amount of contextual information in the original hyperspectral image is lost during convolution, which is restricted to local features determined by the kernel size [235]. In the setting of HU, this raises a considerably more severe problem, especially when a substantial volume of contextual information is lost because the final number of endmembers is much smaller than the initial count of spectra. Additionally, most methods utilize batch normalization to avert overfitting, which consumes expensive computational resources and extends the time required for gradient computations [236].

3.3. Generative Model

A generative model is an extension of deep learning-based HU, encompassing generative adversarial networks, adversarial autoencoders, variational autoencoders, and other generative models. Generative adversarial networks (GANs) address HU through a competitive training framework between two neural networks, a generator and a discriminator. The networks engage in a min–max game: the generator must produce the abundances of the corresponding endmembers convincingly enough to deceive the discriminator, while the discriminator must distinguish true abundances from generated ones [237]. Over many iterations, the generator learns to output realistic abundance maps.
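The min–max game can be written as a compact training loop. The sketch below alternates discriminator and generator updates on toy data; the architectures, learning rates, and the availability of reference abundances are assumptions for illustration, not the networks of [237].

```python
import torch
import torch.nn as nn

n_bands, n_end = 200, 5
G = nn.Sequential(nn.Linear(n_bands, 64), nn.ReLU(),
                  nn.Linear(64, n_end), nn.Softmax(dim=1))   # spectra -> abundances
D = nn.Sequential(nn.Linear(n_end, 32), nn.ReLU(),
                  nn.Linear(32, 1), nn.Sigmoid())            # real/fake score
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCELoss()

x = torch.rand(64, n_bands)                           # mixed spectra
a_real = torch.softmax(torch.rand(64, n_end), dim=1)  # reference abundances

for _ in range(100):
    # discriminator step: label real abundances 1, generated ones 0
    opt_d.zero_grad()
    d_loss = bce(D(a_real), torch.ones(64, 1)) + \
             bce(D(G(x).detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()
    # generator step: try to make generated abundances look real
    opt_g.zero_grad()
    g_loss = bce(D(G(x)), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```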
Adversarial autoencoders borrow concepts from GANs to turn a conventional autoencoder into a useful generative model [237,238]. Since the unmixing performance of traditional autoencoder-based methods is significantly affected by noise and initialization conditions, Jin et al. [239] propose an adversarial autoencoder network to tackle these issues; within this framework, the adversarial training phase conveys spatial information to the network, boosting the model's robustness. Notably, shadow contamination is a serious obstacle for unmixing applications. SC-CycleGAN [240] introduces a first-order total variation loss that further enhances the compensation of spectral information for shadowed pixels. Sun et al. [241] employ a generative adversarial autoencoder (GAA) to develop a supervised unmixing method that substantially reduces the impact of shadows on unmixing, although the GAA requires input endmembers as priors. Meanwhile, variability in spectral data affects the effectiveness of hyperspectral image analysis and should not be overlooked. To tackle this issue, a refined deep spectral convolutional neural network has been suggested for abundance estimation [242]. Another framework couples a multinomial mixture model for enhanced abundance estimation with a Wasserstein GAN (WGAN) for stable parameter optimization; the architecture explicitly addresses endmember spectral uncertainty caused by physical variability through a deep generative model trained directly on pure-pixel information from unlabeled data [243].
In [244], a variational autoencoder is used to create synthetic hyperspectral images with adjustable spectral variability. A GAN-based technique for the same purpose is introduced in [245], where the authors also employ a variational autoencoder to conduct blind HU on the generated images, demonstrating the effect of increasing spectral variability. Similarly, a probabilistic generative model for hyperspectral unmixing (PGMSU) has been proposed to address endmember variability [246]; however, it disregards spatial information. The deep generative model for spatial–spectral unmixing (DGMSSU) was subsequently proposed [181]: built on variational autoencoder techniques combined with Bayesian inference, its framework integrates CNNs, graph neural networks (GNNs), and a self-attention mechanism. Nonetheless, DGMSSU requires a considerable amount of computing resources and training data [247]. Apart from deep learning-based methods, traditional unmixing methods can also mitigate the accuracy loss caused by spectral variability; for details, readers may refer to [47].
With generative models persistently demonstrating their effectiveness at grasping intricate spatial and spectral patterns, one key avenue for exploration is the integration of attention mechanisms or new generative models to enhance the interpretability of the learned features [248]. As mentioned in [53], one could explore hidden algebraic structures and topological properties to create a more transparent network for HU. Moreover, beyond conventional model-based and data-driven learning approaches [249], merging traditional model-based techniques with deep neural networks allows physical models to be embedded in deep architectures through deep unrolling. The resulting unrolled networks inherit the strengths of both approaches, such as enhanced interpretability, superior generalization, potent learning capability, and efficient computation, and they mitigate both the difficulty of modeling the underlying physical processes and the need for numerous training samples.

3.4. Transformer

Recently, deep learning-based hyperspectral unmixing methods have frequently adopted visual models directly, often overlooking the distinctive characteristics of hyperspectral images. Transformers can capture global contextual feature dependencies and partially recover the lost information [250,251,252]. As the most recent trend, transformer models are drawing increasing attention in the domain of hyperspectral unmixing [253].
Generally, a transformer consists of several modules: patch embedding, positional encoding, multi-head attention, layer normalization, and MLP blocks. For hyperspectral data, the spectral cube is first divided into fixed-size band patches. Each patch undergoes a linear projection to create an embedded representation, while spectral order is preserved through positional encoding, which assigns unique wavelength-based identifiers using sine/cosine functions. The core mechanism is multi-head attention, which captures global contextual feature dependencies: the model computes weighted relationships among all bands through query–key–value matrices, i.e., compatibility scores that determine how much attention band A should pay to band B. The weighted outputs are normalized and processed through multilayer perceptrons to generate abundance maps. This global receptive field enables modeling of long-range band dependencies that convolutional networks miss due to their limited local focus. In many implementations, a classification token aggregates global spectral information for the final abundance prediction.
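The query–key–value weighting over band patches can be demonstrated in a few lines. The sketch below splits each spectrum into fixed-size band patches, embeds them, and applies multi-head self-attention; the patch size, embedding width, head count, and placeholder positional encoding are assumptions for illustration, not the settings of any cited transformer.

```python
import torch
import torch.nn as nn

n_bands, patch, d_model = 200, 10, 64
n_tokens = n_bands // patch                       # 20 band patches per pixel

embed = nn.Linear(patch, d_model)                 # patch embedding
pos = torch.zeros(1, n_tokens, d_model)           # placeholder positional encoding
attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

x = torch.rand(8, n_bands).view(8, n_tokens, patch)  # split spectra into patches
tokens = embed(x) + pos
out, weights = attn(tokens, tokens, tokens)       # query-key-value self-attention
# weights[b, i, j]: how much attention patch i pays to patch j for pixel b
```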
Building on these ideas, a double-aware transformer, UnDAT [254], was designed explicitly for hyperspectral unmixing to simultaneously exploit the region homogeneity and spectral correlation inherent in hyperspectral images. In contrast to CNNs, the transformer relies primarily on self-attention to examine and encode pixel positions, thereby effectively capturing the global information within hyperspectral images. In [255], the authors propose the multiscale aggregation transformer network (MAT-Net), which employs an encoder–decoder structure to fully utilize spectral and spatial data. Inspired by [256], the authors of [257] proposed a novel unmixing technique that integrates a convolutional autoencoder with a transformer, whose attention mechanism identifies global contextual features to guarantee a more accurate estimation of endmembers and abundances.
The vision transformer (ViT) has recently become a popular topic in hyperspectral unmixing. However, the conventional ViT practice of partitioning images into non-overlapping fixed-size patches disregards pixel-level spatial continuity; this fragmentation disrupts local structural relationships and hinders the model's capacity to grasp detailed spatial dependencies, yielding suboptimal feature representations for dense prediction tasks such as unmixing. To overcome these limitations, a self-supervised relation-aware ViT (SRViT) was introduced [258]. Yao et al. [259] achieve excellent unmixing performance by introducing DSSCR-ViT, a transformer with a cross-attention mechanism. In addition, the transformer architectures in existing models process only single-scale image patches, resulting in limited scale diversity and suboptimal representation capacity. To address these limitations, the authors of [260] propose a multiscale convolutional-crossing transformer (MSCC-ViT) incorporating convolutional cross-attention (CCA). This network extracts multiscale spatial–spectral features and enables efficient cross-scale fusion, with the integration of attention mechanisms and convolutional operations simultaneously capturing local context and non-local dependencies.
On the other hand, the spectral–spatial morphological attention transformer of [261] has been a significant source of inspiration: integrating attention mechanisms with spectral and spatial morphological convolution operations forms a trainable morphological network that strengthens the link between the structural and shape information of hyperspectral image tokens and classification tokens. Furthermore, standalone transformers may concentrate solely on contextual information, potentially missing finer details, and relying entirely on attention might hinder a comprehensive capture of global information. To address these problems, the authors of [262] propose TCCU-Net, a transformer–CNN collaborative unmixing network for HU. In addition, a novel attention-based residual network (a CNN) with scattering transform features has been proposed for hyperspectral unmixing to extract deep-level features and process the resulting high-order information, respectively [263]. In [264,265], a transformer and a convolutional autoencoder are combined to estimate endmembers and abundances effectively. Further, in [266], the authors present a transformer-enhanced CNN autoencoder designed to improve unmixing performance via a feature-rich transformation mechanism, the TCN, which merges the CNN's ability to model local features with the transformer's capacity for global context modeling. To address the loss of spatial feature information, a U-shaped transformer network with shifted windows, UST-Net, has been introduced [257]. UST-Net selectively emphasizes discriminative spatial information using multi-head self-attention blocks with shifted windows; distinct from patch-based unmixing methodologies, it handles full-resolution images end-to-end, efficiently reducing artifacts from spatial discontinuities at patch borders.

3.5. Recurrent Neural Network

More recently, multi-temporal analysis has attracted attention in HU because it leverages information from time sequences of hyperspectral image data, revealing the dynamic progression of endmembers and their corresponding abundances within a scene. Owing to the temporal correlation between endmembers and abundances, it can significantly enhance the quality of unmixing outcomes. In [267], a dynamical hyperspectral unmixing approach based on a variational RNN is proposed to derive a physically precise representation of abundance dynamics using a Gaussian distribution, together with a novel low-dimensional parameterization to tackle spatial and temporal endmember variability. Although this can substantially augment the input sequence for RNNs, it may raise over-fitting concerns [268]. Considering the constraints of a linear model, the authors of [200] propose a non-symmetric network to discover the nonlinear relationships within the data: a long short-term memory (LSTM) structure captures spectral correlation information, and a spatial regularization further improves the spatial continuity of the results.
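To illustrate the recurrent view of a spectrum, the sketch below treats each pixel's spectrum as an ordered sequence of band groups and lets an LSTM summarize inter-band correlation before an abundance head, loosely in the spirit of [200]; all sizes are illustrative assumptions, and the spatial regularization is omitted.

```python
import torch
import torch.nn as nn

n_bands, step, n_end = 200, 10, 5              # hypothetical settings
lstm = nn.LSTM(input_size=step, hidden_size=32, batch_first=True)
head = nn.Sequential(nn.Linear(32, n_end), nn.Softmax(dim=1))

x = torch.rand(8, n_bands).view(8, n_bands // step, step)  # band-group sequence
out, _ = lstm(x)                     # hidden states along the spectral sequence
abund = head(out[:, -1, :])          # abundances from the final hidden state
```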
Figure 8 shows the summary of deep learning-based HU algorithms.

4. Discussion

Significant advances have been made in HU in recent years, encompassing contributions with both experimental and theoretical motivations. Generally speaking, traditional methods employ relatively straightforward computational processes and consequently incur a low computational cost, whereas deep learning-based methods involve more complex models and more parameters to tune in pursuit of better results. It should be noted, however, that deep learning-based approaches are not inherently superior to traditional ones. For instance, a linear unsupervised nonnegative autoencoder using the mean square error loss performs similarly to conventional NMF-based methods [54], and some supervised deep learning-based methods still extract endmembers by traditional means. Rather, deep learning methods are best viewed as extensions of conventional methods, since they frequently employ the results of traditional methods as initial weights. A problem common to both families is that unmixing results vary widely across datasets owing to scenario differences, such as whether the pure-pixel assumption holds or whether noise and outliers are present. Furthermore, deep neural networks need ground-truth information for training to achieve good generalization, and it is widely recognized that securing an extensive array of labeled data is challenging.
Given an unmixing problem to handle, the first consideration is how to obtain the data and whether the data meets the demands of the application. Several public real hyperspectral images have reference endmembers and abundance maps, as described in [35,269]. In addition, many scientific institutions provide spectral libraries, as mentioned in [270], which are particularly suitable for mineralogical applications. The user's choice of hyperspectral data with appropriate core parameters significantly influences application results: the spectral resolution, spatial resolution, and SNR of hyperspectral imaging sensors all have an essential impact on downstream applications, and their interplay must be considered because of inherent tradeoffs [271]. Spectral resolution and SNR affect accuracy for datasets with numerous categories, with higher spectral resolution and SNR correlating with improved accuracy; however, the impact of SNR diminishes as the number of classes and the classification difficulty decrease [271]. Different application purposes place different demands on spatial resolution. A spatial resolution of 30 m can be used for mapping tundra peatland environments [272], but a resolution of less than 5 m is required to develop accurate maps [273], and mapping individual peatland plants requires centimeter-level resolution [274]. Chakraborty et al. compared the consistency of new-generation satellite hyperspectral sensors with a high-resolution airborne hyperspectral dataset for mineral exploration and found that applying the same analyses to data from different sensors can produce inconsistent results [275].
Next, one must decide which kind of method to use, depending on the application and on prior knowledge of the physics of the problem. For instance, linear models may be suitable for macroscopic Earth-observation problems. Linear models are more general than nonlinear models, but they may fail for datasets with many endmembers, and their effectiveness varies across datasets [40]. When dealing with microscopic problems or intimate mixtures, nonlinear models such as the Hapke model or deep learning-based methods should be considered; deep learning-based methods that integrate one or more different frameworks tend to obtain better results. Typically, GANs perform supervised nonlinear unmixing without explicitly knowing the nonlinear mixing model [237]. When prior information on the materials in the scene and a well-designed endmember library are available, semi-supervised or supervised unmixing can be helpful; semi-supervised methods are also suitable for capturing spectral variability, and their success hinges on the quality of the endmember library. Experiments show that supervised methods based on conventional approaches may outperform unsupervised autoencoder-based methods, especially in no-pure-pixel scenarios [40]. If we are not confident in the endmembers, we ought to employ unsupervised or blind unmixing to simultaneously estimate the number of endmembers, their spectra, and the corresponding abundances.
It is also important to highlight that the Spectral Image Processing System (SIPS) and Environment for Visualizing Images (ENVI), as early software promoters, are still widely used in unmixing implementation [276,277]. Moreover, there are also other Python or Matlab tools for hyperspectral imaging provided in [2,40,52]. Users can also download PySptools 0.15.0 from the PySptools Project Page hosted by Sourceforge.net or from the PyPI packages repository. Thus, exploring the interactions between core data parameters, methods, and applications can help bridge the knowledge gap between the front-end hyperspectral imager, middle model, and back-end applications.
Currently, HU technology has been preliminarily applied in various fields but has not been systematically applied in engineering. The challenges of HU technology are as follows.

4.1. Limited Availability of Ground Truth and Training Data

It is commonly recognized that collecting a large volume of labeled data or ground truth is expensive and time-consuming; this is arguably the biggest obstacle for the HU problem. Yet, given a considerable amount of training data, learning-based techniques can extract the most intrinsic properties of a scene and reveal material maps, which is why such approaches have become popular. Because of their supervised nature, deep neural networks require ground-truth information for training and generalization.
Researchers are exploring semi-supervised and unsupervised deep learning methods to address this critical issue. Meanwhile, active learning [278] and few-shot learning [279] have emerged as prominent research topics because they can accurately predict category labels with minimal labeled samples. Active learning systematically reduces the training sample size by selecting the most representative instances from unlabeled data using uncertainty measures and structural information [4]; the objective is to maximize informational content while minimizing redundancy, accelerating the learning process with fewer samples.
Another strategy for managing limited labeled data involves investigating cross-domain transfer learning in hyperspectral imaging. This approach will concentrate on optimizing domain adaptation techniques to address distribution discrepancies among various hyperspectral datasets. By designing more efficient adaptive methods, we can lessen the reliance on labeling within the target domain and improve the model’s generalization capabilities across different environments. Simultaneously, research into integrating domain-specific knowledge with transfer learning and developing a tailored transfer learning framework to enhance model performance in complex scenarios will represent a significant direction for future exploration.

4.2. Establishment of High-Precision Remote Sensing Mixing Model

HU is an inverse problem: for each mixed-pixel spectrum, the corresponding endmembers and abundance fractions must be recovered. Owing to environmental conditions, observation noise, variable endmembers, and imprecise algorithm models, endmember extraction results often suffer from poor accuracy and instability. For areas with complex terrain and fine ground features, hyperspectral images often exhibit nonlinear mixing, including classical, multi-level, micro-level, and close-range mixing. Although many researchers have proposed algorithms for nonlinear models, each can only handle a specific type of mixing; no single model adapts to multiple mixing effects [280]. When a nonlinear unmixing algorithm is built on a physical model, the inverse of radiative transfer theory (RTT) is often introduced in the solution process; this is a severely ill-conditioned problem whose parameters are difficult or even impossible to obtain [281]. Model inversion with well-constrained particle sizes can estimate common mineral abundance with an error margin below 10% [282], but specific model parameters are inaccessible outside lab settings or without thorough field research [283]. In addition, many images contain both linear and nonlinear mixing.
Consequently, it is essential to set up a suitable algorithmic model. For instance, the integration of spatial and spectral data can be employed to estimate the mixing model using numerous model clusters, thereby enhancing unmixing accuracy. In particular, the fully unsupervised nonlinear model remains an urgent problem in hyperspectral unmixing, as it would obtain endmembers and abundances directly from images without prior knowledge. Array signal processing is another direction for future research; for example, in sparse-representation regression unmixing, the spectral library can be effectively sorted to improve the algorithm's accuracy. In deep learning-based methodologies, two core research trends within HU are (1) the advancement of novel deep architectures and (2) efforts to address the challenge posed by limited ground-truth data [228]; the latter can be tackled with semi-supervised, weakly supervised, and unsupervised learning methods.

4.3. Endmember Spectral Variability

Spectral variability may be caused by fluctuating illumination, environmental influences, atmospheric effects, material degradation, object contamination, and additional factors [46]. It takes two forms: variability within an endmember group, termed intra-class variability, and similarity across endmember groups, known as inter-class variability [284]. In particular, the precision of sub-pixel abundance estimation decreases approximately linearly with intra-class variability. This is logical: as intra-class variability increases, so does the probability that the actual spectral characteristics of the endmembers within a given pixel will diverge from the fixed endmembers employed in the mixing model. Conversely, resemblance among endmembers (e.g., crops and weeds) gives rise to strong correlation among their spectral signatures, resulting in an unstable inverse matrix and a significant decline in estimation accuracy [285]. Taking mineral information as an example, the spectrum is determined not only by mineral crystal structure and chemical composition but also by particle size, shape, surface state, occurrence state in rocks, and mineral mixing, so the spectral information of the same mineral is complex and changeable. At the same time, owing to illumination and other factors, different ground objects can exhibit identical spectral information, undeniably heightening the challenge of endmember extraction and the subsequent processes of mineral mapping and identification. Furthermore, with current hyperspectral data, whether airborne or spaceborne, it is difficult to ensure spectral consistency at the same position across different bands, which often leads to discontinuity of information between bands and misidentification.
To solve this issue, minimizing or eliminating the spectral variability of endmembers is a valuable way to enhance unmixing precision. Interestingly, multi-temporal hyperspectral unmixing has garnered increased interest in scholarly works: it leverages sequential hyperspectral image data, captured across various seasons and under differing acquisition conditions, to uncover the dynamic evolution of endmembers [286,287,288,289]. However, addressing both spatial and temporal spectral variability remains challenging. Supervised techniques require prior knowledge of spectral-signature libraries, which are difficult or costly to collect; unsupervised approaches estimate endmembers and abundances at all time instants from the multi-temporal image sequence and, although quite useful in practice, are difficult to design. It is therefore necessary to further study the spectrum generation mechanism and gradually improve the quantitative application of hyperspectral remote sensing. Researchers can propose methods based on spectral features or multi-endmember approaches to enhance the accuracy of mixed-pixel classification in light of the spectral variability inherent in endmember spectra within the same category [290]. Moreover, various band ratios and manifold transformations can be helpful for unmixing [283,291].

4.4. Commercialization of Remote Sensing Products

Currently, most algorithms are developed from the perspective of hyperspectral image data modeling. While numerous new algorithms have emerged to address the limitations of conventional techniques, solutions remain scarce when viewed through an application-oriented lens. The processing of hyperspectral remote sensing data should be driven by user needs. Presently, the primary users of hyperspectral remote sensing are sectors such as agriculture, forestry, and mineral exploration, where HU also facilitates sub-pixel target detection [292,293]. Across these user communities, most practitioners still rely on classical algorithms without incorporating newer-generation methodologies [70,275]. Therefore, the challenges faced by HU technology are closely tied to user application requirements, and better combining new algorithms with application requirements to solve practical problems is the future direction of HU.
For instance, the application of multispectral images in agriculture is relatively mature [294]. Multispectral imagery captures the crop-sensitive green, red, and near-infrared regions in only a few broad bands, whereas hyperspectral data resolves each of these regions into numerous narrow bands; the 400–1000 nm range is the most common for agricultural applications [295]. Consequently, a pressing issue is how to leverage the abundant spectral information in hyperspectral imagery, exploiting the distinctiveness of sensitive bands while mitigating the influence of the others and emphasizing key details, so as to extract endmember information efficiently [296].
In forestry, owing to the significant influence of terrain, slope, and climate, as well as extensive mixed forests, mixed pixels are severe in low-spatial-resolution images. Research on identifying tree species groups in imagery is still developing and has not been widely applied. The authors of [297] present a unique dataset for the simultaneous learning of continuous and categorical forest variables from hyperspectral imagery, surpassing the constraints of the commonly used small-scale hyperspectral datasets.
In geologic remote sensing, near-infrared and shortwave infrared bands are significantly advantageous in extracting alteration information and mineral identification [298,299]. Moreover, due to the complexity of mineral spectral information, the ability to extract, identify, and distinguish endmembers of some minerals with minor spectral differences needs to be improved [300]. In terms of mineral mapping, while engineering capabilities and conditions are technically available for implementation, a significant challenge lies with limitations related to data sources that hinder broader promotion and large-scale application.
Consequently, with advancements in new hyperspectral payload technology, a substantial volume of hyperspectral data will meet user requirements for engineering applications. Concurrently, research into novel algorithms should prioritize user application needs as a guiding principle to address real-world challenges effectively and facilitate practical business applications utilizing hyperspectral remote sensing data. It is anticipated that with ongoing advancements in HU technology in the future, users across diverse industries will gain access to vast amounts of high-efficiency hyperspectral remote sensing data, thus paving the way for imminent commercial applications of algorithmic software.

4.5. Real-Time Performance of the Algorithm

With the swift advancement of hyperspectral imagers, spatial and spectral resolution have improved significantly, making it difficult for ordinary and small computers to process such large data volumes quickly and effectively. For this reason, in addition to parallel field-programmable gate arrays (FPGAs), graphics processing units (GPUs), and other hardware platforms, the National Aeronautics and Space Administration (NASA) and the University of Mississippi have adopted high-performance computing (HPC) parallel processors specifically for hyperspectral data [301,302]. Some researchers also use deeply pipelined reconfigurable processing elements for the compute-intensive tasks involved in spectral unmixing [303]. Therefore, to enable HU applications such as sub-pixel target detection and real-time environmental monitoring, hyperspectral data can be partitioned and processed on new HPC platforms (including GPUs, FPGAs, and DSPs), and the algorithms themselves can be refined to boost real-time performance [304,305]. This is a key development direction for hyperspectral mixed-pixel decomposition and is essential for real-time applications such as environmental disaster response and sub-pixel target recognition [306].
Figure 9 displays the development trend and application of mixed pixel decomposition in hyperspectral remote sensing.
As technology advances, the capabilities of hyperspectral imagers are expected to improve further. More management departments will apply hyperspectral data to promote business applications, one of the future development trends of hyperspectral remote sensing. Therefore, how to select hyperspectral data reasonably and effectively is closely tied to user requirements: based on varying business needs, choosing the appropriate hyperspectral data analysis method is crucial to attaining optimal processing outcomes, rather than haphazardly pursuing data with ever higher spatial resolution, finer spectral detail, and broader wavelength ranges. At the same time, richer hyperspectral data will inevitably lengthen processing times, and where current data cannot meet application requirements, mixed-pixel decomposition technology deserves further study. Future research could focus on developing more robust and generalizable models that perform well on various datasets and under different conditions. Moreover, data fusion techniques can merge the high spatial resolution of multispectral imagery with the fine spectral detail of hyperspectral data to achieve application objectives. Innovations such as big data and cloud computing also foster cloud platforms for hyperspectral processing, promoting public service applications.

5. Conclusions

HU has excellent potential in hyperspectral remote sensing image processing and interpretation and has made remarkable progress. This review systematically categorizes HU methodologies spanning conventional approaches to modern deep learning architectures. We critically evaluate the strengths and limitations of prevalent algorithms for endmember extraction, abundance estimation, and spectral variability mitigation. A novel taxonomy is introduced to classify traditional and deep learning-based HU methods, highlighting paradigm shifts toward autoencoders, CNNs, generative models, transformers, and RNNs. Furthermore, we analyze the synergistic relationship between high-precision hybrid inversion models and the commercialization of remote sensing products. Persistent challenges—including spectral variability, nonlinear mixing, and computational efficiency—underscore the need for robust, generalizable algorithms.
Due to the limited space of this paper, many algorithms are not included. In general, the collective effort of researchers has significantly improved hyperspectral mixing models, data-processing throughput, and accuracy; yet the wealth of algorithms all attempting the same task indicates a problem that has not truly been solved. As for remote sensing products, many algorithms are developed on simulated data and validated only on a few classical real datasets, so few directly usable remote sensing products have emerged. In addition, the lack of ground measurement data seriously hampers the development of inversion algorithms and product verification. HU's primary challenge lies in developing techniques with strong robustness and consistency across diverse datasets; developing new calculation and verification methods to improve field measurement accuracy is therefore a future research direction.
With the rapid development of hyperspectral satellites in all respects, higher requirements have been placed on mixed-pixel decomposition technology for hyperspectral remote sensing images, which plays a crucial role in enhancing Earth monitoring and management. This also creates an urgent demand for engineering-ready, real-time mixed-pixel decomposition algorithms. Future advancements must integrate multi-modal data fusion, uncertainty quantification, and real-time processing capabilities to bridge the gap between theoretical innovation and operational deployment in Earth observation. Here, we give some recommendations for future studies.

5.1. Improved Uncertainty Estimation

Future research should develop novel techniques to improve the accuracy and reliability of uncertainty estimation in hyperspectral imaging. The uncertainty associated with the outputs of models and/or algorithms applied to hyperspectral cubes is rarely reported [270]. However, this information is crucial for evaluating the performance of a model in situations where ground truth data is typically absent. One potential solution could involve adapting the work of White et al. [307], who have worked with similar datasets. This adaptation may include exploring advanced Bayesian methods, ensemble techniques, or integrating domain-specific knowledge to capture and quantify uncertainty more effectively. At each pixel, reflectance curves are observed; Bayesian modeling that accounts for both these functions and their spatial dependencies makes it possible to estimate their distribution along with quantifying the uncertainties associated with fitted parameters.
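One hedged, lightweight way to attach uncertainty to abundance estimates is Monte Carlo dropout, sketched below: dropout stays active at inference, and the spread across stochastic forward passes approximates predictive uncertainty. This illustrates the general idea only and is not the Bayesian functional model of White et al. [307]; the network, dropout rate, and sample count are assumptions.

```python
import torch
import torch.nn as nn

n_bands, n_end = 200, 5
net = nn.Sequential(nn.Linear(n_bands, 64), nn.ReLU(), nn.Dropout(p=0.2),
                    nn.Linear(64, n_end), nn.Softmax(dim=1))

x = torch.rand(16, n_bands)                    # spectra to unmix
net.train()                                    # keep dropout active at test time
with torch.no_grad():
    samples = torch.stack([net(x) for _ in range(50)])  # 50 stochastic passes
mean_abund = samples.mean(dim=0)               # point estimate of abundances
std_abund = samples.std(dim=0)                 # per-endmember uncertainty
```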

5.2. Transfer Learning for Unmixing

Future research could investigate the application of transfer learning in deep learning-based models, whereby pre-trained models on large datasets can be fine-tuned for specific tasks or domains. The conventional approach to solving inverse problems in low (spatial) resolution satellite imagery typically involves the consideration of multiple parameters, such as meteorological data and registration processes. In contrast, parameters learned by deep networks using natural images—characterized by high resolution—can be effectively leveraged to estimate the underlying function for extracting unmixed components from hyperspectral data. This represents a promising research direction for the HU problem, as numerous well-established pre-trained networks are readily available for natural images. Furthermore, it may offer a viable solution to address the challenge of limited training data availability.
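The fine-tuning pattern described above can be sketched as follows: load a backbone pre-trained on natural images, freeze its transferred weights, and retrain only a new abundance head. The ResNet-18 backbone, the three-band input (hyperspectral cubes would first need band selection or a modified input layer), and all sizes are illustrative assumptions, not a definitive recipe.

```python
import torch
import torch.nn as nn
from torchvision import models

n_end = 5
backbone = models.resnet18(weights="IMAGENET1K_V1")   # pre-trained on natural images
for p in backbone.parameters():
    p.requires_grad = False                           # freeze transferred weights
backbone.fc = nn.Sequential(nn.Linear(backbone.fc.in_features, n_end),
                            nn.Softmax(dim=1))        # new trainable unmixing head

opt = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)
patches = torch.rand(4, 3, 64, 64)                    # 3-band false-color patches
abund = backbone(patches)                             # (4, n_end) abundances
```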

5.3. Neuromorphic Computing for Edge Intelligence

A significant trend in utilizing GPUs and embedded systems for hyperspectral data analysis has emerged. Despite the numerous advantages, only a limited number of ground-based hyperspectral data recording platforms are capable of real-time analysis, primarily restricted to laboratory environments. Integrating FPGAs and GPUs into the analysis pipeline can significantly enhance processing speeds. Historically, the availability of open-source software for data acquisition and analysis has been limited, with most studies relying on proprietary solutions. There is a pressing need to develop open-source software, libraries, and toolboxes tailored for hyperspectral data acquisition. These tools should be designed to support sensors mounted on unmanned aerial vehicles and unmanned ground vehicles, ensuring compatibility with embedded systems such as the NVIDIA Jetson series and other high-performance portable platforms.

5.4. Multi-Modal and Multi-Temporal Integration

Integrating hyperspectral data with other remote sensing modalities, such as LiDAR, multispectral, or visible imagery, can yield richer feature representations and enhance the accuracy and robustness of unmixing results, compensating for the limitations of relying on a single data source. Future research will aim to develop advanced data fusion techniques capable of addressing the heterogeneity of multiple data sources, particularly in large-scale and real-time applications, while optimizing computational efficiency to meet practical deployment requirements. Additionally, multi-temporal hyperspectral datasets or time-series data can be explored as a novel approach to improve the performance of HU using deep learning. This approach offers the intrinsic advantage of providing more data for decision-making or recognition tasks, a fundamental requirement in deep learning-based methodologies.

5.5. Channel-Adaptive and Tuning-Free Foundation Large Models

Current approaches for hyperspectral data interpretation frequently rely on specialized models specifically designed for individual datasets or tasks. In specific applications, the availability of hyperspectral data is often constrained, which presents substantial challenges to model training due to insufficient data. Moreover, hyperspectral data’s complex and heterogeneous structure complicates knowledge transfer across different datasets, thereby significantly limiting cross-scenario generalization capabilities. To address this limitation, HU can benefit from developing large-scale models such as SpectralGPT [308] and HyperFree [309]. These models exhibit robust feature learning and generalization abilities, enabling them to more effectively capture intricate patterns in hyperspectral image (HSI) data. Concurrently, optimizing computational efficiency, reducing inference latency, and preserving high performance under resource-constrained conditions will constitute critical research directions.

Author Contributions

Conceptualization, J.Z.; methodology, J.Z.; investigation, J.Z., H.Q. and P.Z.; visualization J.Z., H.Q. and P.Z.; writing—review and editing, J.Z., H.Q. and P.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the Youth Innovation Team of Shaanxi University.

Acknowledgments

The authors would like to thank the editor and reviewers for their reviews, which improved the content of this paper. We would also like to thank B.X. Zhao of Air Force Engineering University for giving valuable suggestions on the paper revision.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mukhtar, S.; Arbabi, A.; Viegas, J. Advances in Spectral Imaging: A Review of Techniques and Technologies. IEEE Access 2025, 13, 35848–35902. [Google Scholar] [CrossRef]
  2. Bioucas-Dias, J.; Plaza, A.; Dobigeon, N.; Parente, M.; Du, Q.; Gader, P.; Chanussot, J. Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 354–379. [Google Scholar] [CrossRef]
  3. Aburaed, N.; Alkhatib, M.; Marshall, S.; Zabalza, J.; Ahmad, H. A Review of Spatial Enhancement of Hyperspectral Remote Sensing Imaging Techniques. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 2275–2300. [Google Scholar] [CrossRef]
  4. Song, Y.; Zhang, J.; Liu, Z.; Xu, Y.; Quan, S.; Sun, L.; Wang, X. Deep Learning for Hyperspectral Image Classification: A Comprehensive Review and Future Predictions. Inform. Fusion 2025, 123, 103285. [Google Scholar] [CrossRef]
  5. Lodhi, V.; Chakravarty, D.; Mitra, P. Hyperspectral Imaging for Earth Observation: Platforms and Instruments. J. Indian Inst. Sci. 2018, 98, 429–443. [Google Scholar] [CrossRef]
  6. Bioucas-Dias, J.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral Remote Sensing Data Analysis and Future Challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef]
  7. Zhang, B.; Wu, Y.; Zhao, B.; Chanussot, J.; Hong, D.; Yao, J.; Gao, L. Progress and Challenges in Intelligent Remote Sensing Satellite Systems. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 1814. [Google Scholar] [CrossRef]
  8. Qian, S.E. Hyperspectral Satellites, Evolution, and Development History. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7032–7056. [Google Scholar] [CrossRef]
  9. Jia, J.; Wang, Y.; Chen, J.; Guo, R.; Shu, R.; Wang, J. Status and Application of Advanced Airborne Hyperspectral Imaging Technology: A Review. Infrared Phys. Technol. 2020, 104, 103115. [Google Scholar] [CrossRef]
  10. Cocks, T.; Jenssen, R.; Stewart, W.I.; Shields, T. The HyMap Airborne Hyperspectral Sensor: The System, Calibration, and Performance. In Proceedings of the 1st EARSEL Workshop on Imaging Spectroscopy, Zurich, Switzerland, 6–8 October 1998; Available online: https://artefacts.ceda.ac.uk/neodc_docs/Hymap_specs.pdf (accessed on 7 August 2025).
  11. Kruse, F.A.; Boardman, J.W.; Huntington, J.F. Comparison of Airborne and Satellite Hyperspectral Data for Geologic Mapping. In Proceedings of the Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, Orlando, FL, USA, 1–5 April 2002. [Google Scholar]
  12. German Aerospace Center (DLR) and Teledyne Brown. TCloud: Teledyne Technologies. 2025. Available online: http://tcloudhost.com/ (accessed on 7 August 2025).
  13. Aneece, I.; Foley, D.; Thenkabail, P.; Oliphant, A.; Teluguntla, P. New Generation Hyperspectral Data from DESIS Compared to High Spatial Resolution PlanetScope Data for Crop Type Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 7846–7858. [Google Scholar] [CrossRef]
  14. Zhang, Y.; Wang, X.; Wang, S.; Zhang, L. Advances in Spaceborne Hyperspectral Remote Sensing in China. Geo-spat. Inf. Sci. 2021, 24, 95–120. [Google Scholar] [CrossRef]
  15. Adão, T.; Hruška, J.; Pádua, L.; Bessa, J.; Peres, E.; Morais, R.; Sousa, J.J. Hyperspectral Imaging: A Review on UAV-Based Sensors, Data Processing and Applications for Agriculture and Forestry. Remote Sens. 2017, 9, 1110. [Google Scholar] [CrossRef]
  16. Cerra, D.; Pato, M.; Alonso, K.; Köhler, C.; Schneider, M.; de los Reyes, R.; Carmona, E.; Richter, R.; Kurz, F.; Reinartz, P.; et al. DLR HySU-A Benchmark Dataset for Spectral Unmixing. Remote Sens. 2021, 13, 2559. [Google Scholar] [CrossRef]
  17. Barma, S.; Damarla, S.; Tiwari, S. Semi-Automated Technique for Vegetation Analysis in Sentinel-2 Multi-Spectral Remote Sensing Images Using Python. In Proceedings of the 4th International Conference on Electronics, Communication and Aerospace Technology, Coimbatore, India, 5–7 November 2020; pp. 946–953. [Google Scholar]
  18. Mills, S. Evaluation of Aerial Remote Sensing Techniques for Vegetation Management in Power-Line Corridors. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3379–3390. [Google Scholar] [CrossRef]
  19. Ciȩżkowski, W.; Sikorski, P.; Babańczyk, P.; Sikorska, D.; Chormański, J. Algorithm for Urban Spontaneous Green Space Detection Based on Optical Satellite Remote Sensing. In Proceedings of the IEEE International Geoscience Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 4430–4433. [Google Scholar]
  20. Benson, M.; Faundeen, J. The U.S. Geological Survey Remote Sensing and Geoscience Data: Using Standards to Serve US All. In Proceedings of the IEEE International Geoscience Remote Sensing Symposium (Taking the Pulse of the Planet: The Role of Remote Sensing in Managing the Environment), Honolulu, HI, USA, 24–28 July 2000; pp. 1202–1204. [Google Scholar]
  21. O’Connor, E.; McDonald, A. Applications of Remote Sensing for Geological Mapping in Eastern Egypt. In Proceedings of the IEEE International Geoscience Remote Sensing Symposium (Remote Sensing: Moving Toward the 21st Century), Edinburgh, UK, 12–16 September 1988; pp. 631–632. [Google Scholar]
  22. Fu, B.; Shi, P.; Fu, H.; Ninomiya, Y.; Du, J. Geological Mapping Using Multispectral Remote Sensing Data in the Western China. In Proceedings of the IEEE International Geoscience Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 5583–5586. [Google Scholar]
  23. Lin, Y.; Zhang, L.; Wang, N. A New Time Series Change Detection Method for Landsat Land Use and Land Cover Change. In Proceedings of the 10th International Workshop on the Analysis of Multitemporal Remote Sensing Images, Shanghai, China, 5–7 August 2019; pp. 1–4. [Google Scholar]
  24. Bounouh, O.; Essid, H.; Farah, I. Prediction of Land Use/Land Cover Change Methods: A Study. In Proceedings of the International Conference on Advanced Technologies for Signal and Image Processing, Fez, Morocco, 22–24 May 2017; pp. 1–7. [Google Scholar]
  25. Alem, A.; Kumar, S. Deep Learning Methods for Land Cover and Land Use Classification in Remote Sensing: A Review. In Proceedings of the 8th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions), Noida, India, 4–5 June 2020; pp. 903–908. [Google Scholar]
  26. Wei, L.; Wang, Z.; Huang, C.; Zhang, Y.; Wang, Z.; Xia, H.; Cao, L. Transparency Estimation of Narrow Rivers by UAV-Borne Hyperspectral Remote Sensing Imagery. IEEE Access 2020, 8, 168137–168153. [Google Scholar] [CrossRef]
  27. Wang, C.; Wang, X.; Silva, J. Studies of Internal Waves in the Strait of Georgia Based on Remote Sensing Images. In Proceedings of the IEEE International Geoscience Remote Sensing Symposium 2020, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 3549–3551. [Google Scholar]
  28. Jeong, S.W.; Lee, I.H.; Kim, Y.G.; Kang, K.S.; Shim, D.; Hurry, V. Spectral Unmixing of Hyperspectral Images Revealed Pine Wilt Disease Sensitive Endmembers. Physiol. Plant. 2025, 177, 70090. [Google Scholar] [CrossRef] [PubMed]
  29. Shimoni, M.; Haelterman, R.; Perneel, C. Hyperspectral Imaging for Military and Security Applications: Combining Myriad Processing and Sensing Techniques. IEEE Geosci. Remote Sens. Mag. 2019, 7, 101–107. [Google Scholar] [CrossRef]
  30. Zhao, J.; Zhou, B.; Wang, G.; Ying, J.; Liu, J.; Chen, Q. Spectral Camouflage Characteristics and Recognition Ability of Targets Based on Visible/Near-Infrared Hyperspectral Images. Photonics 2022, 9, 957. [Google Scholar] [CrossRef]
  31. Seo, D.; Lee, D.; Park, S.; Oh, S. Hyperspectral Image-Based Identification of Maritime Objects Using Convolutional Neural Networks and Classifier Models. J. Mar. Sci. Eng. 2025, 13, 6. [Google Scholar] [CrossRef]
  32. Kruse, F.A.; Perry, S.L. Improving Multispectral Mapping by Spectral Modeling with Hyperspectral Signatures. J. Appl. Remote Sens. 2009, 3, 33504. [Google Scholar]
  33. Ju, S.; Zou, J.; Ma, R. Research Progress in Unmanned Aerial Vehicle-Borne Hyperspectral Imaging Payload. In Proceedings of the SPIE 12797, Second International Conference on Geographic Information and Remote Sensing Technology (GIRST 2023), Qingdao, China, 21–23 July 2023; p. 1279723. [Google Scholar]
  34. Shi, Z.; Chen, C.; Xiong, Z.; Liu, D.; Wu, F. HSCNN+: Advanced CNN-Based Hyperspectral Recovery from RGB Images. In Proceedings of the IEEE/CVF Conference Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1052–10528. [Google Scholar]
  35. Jin, Q. Gaussian Mixture Model for Hyperspectral Unmixing with Low-Rank Representation. In Proceedings of the IEEE International Geoscience Remote Sensing Symposium 2019, Yokohama, Japan, 28 July–2 August 2019; pp. 294–297. [Google Scholar]
  36. Smith, M.O.; Adams, J.B.; Sabol, D.E. Spectral Mixture Analysis—New Strategies for the Analysis of Multispectral Data. In Imaging Spectrometry—A Tool for Environmental Observations. Eurocourses: Remote Sensing; Springer: Dordrecht, The Netherlands, 1994; Volume 4. [Google Scholar] [CrossRef]
  37. Heylen, R.; Parente, M.; Gader, P. A Review of Nonlinear Hyperspectral Unmixing Methods. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1844–1868. [Google Scholar] [CrossRef]
  38. Shi, C.; Wang, L. Incorporating Spatial Information in Spectral Unmixing: A Review. Remote Sens. Environ. 2014, 149, 70–87. [Google Scholar] [CrossRef]
  39. Quintano, C.; Fernández-Manso, A.; Shimabukuro, Y.; Pereira, G. Spectral Unmixing. Int. J. Remote Sens. 2012, 33, 5307–5340. [Google Scholar] [CrossRef]
  40. Rasti, B.; Zouaoui, A.; Mairal, J.; Chanussot, J. Image Processing and Machine Learning for Hyperspectral Unmixing: An Overview and the HySUPP Python Package. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–31. [Google Scholar] [CrossRef]
  41. Wei, J.; Wang, X. An Overview on Linear Unmixing of Hyperspectral Data. Math. Probl. Eng. 2020, 2020, 3735403. [Google Scholar] [CrossRef]
  42. Yang, B.; Wang, B. Review of Nonlinear Unmixing for Hyperspectral Remote Sensing Imagery. J. Infrared Millim. Waves 2017, 36, 173–185. [Google Scholar]
  43. Feng, X.; Li, H.; Wang, R. Hyperspectral Unmixing Based on Nonnegative Matrix Factorization: A Comprehensive Review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 4414–4436. [Google Scholar] [CrossRef]
  44. Chen, J.; Zhao, M.; Wang, X.; Richard, C.; Rahardja, S. Integration of Physics-Based and Data-Driven Models for Hyperspectral Image Unmixing: A Summary of Current Methods. IEEE Signal Process. Mag. 2023, 40, 61–74. [Google Scholar] [CrossRef]
  45. Zare, A.; Ho, K. Endmember Variability in Hyperspectral Analysis: Addressing Spectral Variability During Spectral Unmixing. IEEE Signal Process. Mag. 2014, 31, 95–104. [Google Scholar] [CrossRef]
  46. Somers, B.; Asner, G.; Tits, L.; Coppin, P. Endmember Variability in Spectral Mixture Analysis: A Review. Remote Sens. Environ. 2011, 115, 1603–1616. [Google Scholar] [CrossRef]
  47. Borsoi, R.; Imbiriba, T.; Bermudez, J.; Richard, C.; Chanussot, J. Spectral Variability in Hyperspectral Data Unmixing: A Comprehensive Review. IEEE Geosci. Remote Sens. Mag. 2021, 9, 223–270. [Google Scholar] [CrossRef]
  48. Ma, W.; Bioucas-Dias, J.; Chan, T.; Gillis, N.; Gader, P.; Plaza, A. A Signal Processing Perspective on Hyperspectral Unmixing: Insights from Remote Sensing. IEEE Signal Process. Mag. 2014, 31, 67–81. [Google Scholar] [CrossRef]
  49. Zhu, X. Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36. [Google Scholar] [CrossRef]
  50. Signoroni, A.; Savardi, M.; Baronio, A.; Benini, S. Deep Learning Meets Hyperspectral Image Analysis: A Multidisciplinary Review. J. Imag. 2019, 5, 52. [Google Scholar] [CrossRef] [PubMed]
  51. Garima, J.; Ritu, R.; Harshita, M.; Arun, S. Integration of Hyperspectral Imaging and Autoencoders: Benefits, Applications, Hyperparameter Tuning and Challenges. Comput. Sci. Rev. 2023, 50, 100584. [Google Scholar]
  52. Rasti, B.; Zouaoui, A.; Mairal, J.; Chanussot, J. HySUPP: An Open-Source Hyperspectral Unmixing Python Package. In Proceedings of the IGARSS 2023—2023 IEEE International Geoscience Remote Sensing Symposium, Pasadena, CA, USA, 16–21 July 2023. [Google Scholar]
  53. Bhatt, J.; Joshi, M. Deep Learning in Hyperspectral Unmixing: A Review. In Proceedings of the IEEE International Geoscience Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 2189–2192. [Google Scholar]
  54. Palsson, B.; Sveinsson, J.; Ulfarsson, M. Blind Hyperspectral Unmixing Using Autoencoders: A Critical Comparison. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 1340–1372. [Google Scholar] [CrossRef]
  55. Hong, D. Interpretable Hyperspectral Artificial Intelligence: When Nonconvex Modeling Meets Hyperspectral Remote Sensing. IEEE Geosci. Remote Sens. Mag. 2021, 9, 52–87. [Google Scholar] [CrossRef]
  56. Feng, X.; Li, H.; Li, J.; Du, Q.; Plaza, A.; Emery, W. Hyperspectral Unmixing Using Sparsity-Constrained Deep Nonnegative Matrix Factorization with Total Variation. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6245–6257. [Google Scholar] [CrossRef]
  57. Heylen, R.; Gader, P. Nonlinear Spectral Unmixing with a Linear Mixture of Intimate Mixtures Model. IEEE Trans. Geosci. Remote Sens. Lett. 2013, 11, 1195–1199. [Google Scholar] [CrossRef]
  58. Luo, W.; Gao, L.; Zhang, B. A New Algorithm for Bilinear Spectral Unmixing of Hyperspectral Images Using Particle Swarm Optimization. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 5776–5790. [Google Scholar] [CrossRef]
  59. Chen, J.; Richard, C.; Honeine, P. Nonlinear Unmixing of Hyperspectral Data Based on a Linear-Mixture/Nonlinear-Fluctuation Model. IEEE Trans. Signal Process. 2012, 61, 480–492. [Google Scholar] [CrossRef]
  60. Févotte, C.; Dobigeon, N. Nonlinear Hyperspectral Unmixing with Robust Nonnegative Matrix Factorization. IEEE Trans. Image Process. 2015, 24, 4810–4819. [Google Scholar] [CrossRef]
  61. Zhang, L.; Zhang, B.; Gao, L.; Li, J.; Plaza, A. Normal Endmember Spectral Unmixing Method for Hyperspectral Imagery. IEEE J. Sel. Top. Signal Process. 2014, 8, 2598–2606. [Google Scholar] [CrossRef]
  62. Luo, W.; Gao, L.; Zhang, R.; Marinoni, A.; Zhang, B. Bilinear Normal Mixing Model for Spectral Mixing. IET Image Process. 2019, 13, 344–354. [Google Scholar] [CrossRef]
  63. Chang, C.I.; Du, Q. Estimation of Number of Spectrally Distinct Signal Sources in Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2004, 42, 608–619. [Google Scholar] [CrossRef]
  64. Jolliffe, I. Principal Component Analysis; Springer Verlag: New York, NY, USA, 1986; pp. 111–137. [Google Scholar]
  65. Machidon, A.; Del Frate, F.; Picchiani, M.; Machidon, O.; Ogrutan, P. Geometrical Approximated Principal Component Analysis for Hyperspectral Image Analysis. Remote Sens. 2020, 12, 1698. [Google Scholar] [CrossRef]
  66. Green, A.; Berman, M.; Switzer, P.; Craig, M. A Transformation for Ordering Multispectral Data in Terms of Image Quality with Implications for Noise Removal. IEEE Trans. Geosci. Remote Sens. 1988, 26, 65–74. [Google Scholar] [CrossRef]
  67. Heylen, R.; Parente, M.; Scheunders, P. Estimation of Number of the Endmembers in a Hyperspectral Image via the Hubness Phenomenon. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2191–2200. [Google Scholar] [CrossRef]
  68. Akaike, H. A New Look at the Statistical Model Identification. IEEE Trans. Autom. Control 1974, 19, 716–723. [Google Scholar] [CrossRef]
  69. Rissanen, J. Modeling by Shortest Data Description. Automatica 1978, 14, 465–471. [Google Scholar] [CrossRef]
  70. Jia, J.; Chen, J.; Zheng, X.; Wang, Y.; Guo, S.; Sun, H.; Chen, Y. Tradeoffs in the Spatial and Spectral Resolution of Airborne Hyperspectral Imaging Systems: A Crop Identification Case Study. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–18. [Google Scholar] [CrossRef]
  71. Bajorski, P. Second Moment Linear Dimensionality as an Alternative to Virtual Dimensionality. IEEE Trans. Geosci. Remote Sens. 2011, 49, 672–678. [Google Scholar] [CrossRef]
  72. Halimi, A.; Honeine, P.; Kharouf, M.; Richard, C.; Tourneret, J.Y. Estimating the Intrinsic Dimension of Hyperspectral Images Using a Noise-Whitened Eigengap Approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3811–3821. [Google Scholar] [CrossRef]
  73. Luo, B.; Chanussot, J.; Douté, S.; Zhang, L. Empirical Automatic Estimation of the Number of Endmembers in Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. 2013, 10, 24–28. [Google Scholar]
  74. Prades, J.; Safont, G.; Salazar, A.; Vergara, L. Estimation of the Number of Endmembers in Hyperspectral Images Using Agglomerative Clustering. Remote Sens. 2020, 12, 3585. [Google Scholar] [CrossRef]
  75. Prades, J.; Salazar, A.; Safont, G.; Vergara, L. Determining the Number of Endmembers of Hyperspectral Images Using Clustering. In Proceedings of the International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 16–18 December 2020; pp. 1664–1668. [Google Scholar]
  76. Eches, O.; Dobigeon, N.; Tourneret, J. Estimating the Number of Endmembers in Hyperspectral Images Using the Normal Compositional Model and a Hierarchical Bayesian Algorithm. IEEE J. Sel. Top. Signal Process. 2010, 4, 582–591. [Google Scholar] [CrossRef]
  77. Bioucas-Dias, J.; Nascimento, J. Hyperspectral Subspace Identification. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2435–2445. [Google Scholar] [CrossRef]
  78. Rasti, B.; Ulfarsson, M.; Sveinsson, J. Hyperspectral Subspace Identification Using SURE. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2481–2485. [Google Scholar] [CrossRef]
  79. Ambikapathi, A.; Chan, T.; Chi, C.; Keizer, K. Hyperspectral Data Geometry Based Estimation of Number of Endmembers Using p-Norm-Based Pure Pixel Identification Algorithm. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2753–2769. [Google Scholar] [CrossRef]
  80. Andreou, C.; Karathanassi, V. Estimation of the Number of Endmembers Using Robust Outlier Detection Method. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 247–256. [Google Scholar] [CrossRef]
  81. Tao, X.; Paoletti, M.; Plaza, A. Endmember Estimation with Maximum Distance Analysis. Remote Sens. 2021, 13, 713. [Google Scholar] [CrossRef]
  82. Tao, X.; Cui, T.; Plaza, A.; Ren, P. Simultaneously Counting and Extracting Endmembers in a Hyperspectral Image Based on Divergent Subsets. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8952–8966. [Google Scholar] [CrossRef]
  83. Song, A.; Chang, A.; Choi, J.; Choi, S.; Kim, Y. Automatic Extraction of Optimal Endmembers from Airborne Hyperspectral Imagery Using Iterative Error Analysis (IEA) and Spectral Discrimination Measurements. Sensors 2015, 15, 2593–2613. [Google Scholar] [CrossRef] [PubMed]
  84. Shahid, K.; Schizas, I. Spatial-Aware Hyperspectral Nonlinear Unmixing Autoencoder with Endmember Number Estimation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 15, 20–41. [Google Scholar] [CrossRef]
  85. Craig, M. Minimum-Volume Transforms for Remotely Sensed Data. IEEE Trans. Geosci. Remote Sens. 1994, 32, 542–552. [Google Scholar] [CrossRef]
  86. Winter, M. N-FINDR: An Algorithm for Fast Autonomous Spectral Endmember Determination in Hyperspectral Data. In Proceedings of the Imaging Spectrometry V, Denver, CO, USA, 18–23 July 1999; p. 3753. [Google Scholar]
  87. Neville, R.; Staenz, K.; Szeredi, T.; Lefebvre, J.; Hauff, P. Automatic Endmember Extraction from Hyperspectral Data for Mineral Exploration. In Proceedings of the Fourth International Airborne Remote Sensing Conference and Exhibition/21st Canadian Symposium on Remote Sensing, Ottawa, ON, Canada, 21–24 June 1999. [Google Scholar] [CrossRef]
  88. Fuhrmann, D. Simplex Shrink-Wrap Algorithm. In Proceedings of the SPIE 3718, Automatic Target Recognition IX, Orlando, FL, USA, 7–9 April 1999; SPIE: Bellingham, WA, USA, 1999. [Google Scholar]
  89. Gruninger, J.; Ratkowski, A.; Hoke, M. The Sequential Maximum Angle Convex Cone (SMACC) Endmember Model. In Proceedings of the SPIE Volume 5425, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery X, Orlando, FL, USA, 12–16 April 2004; pp. 1–14. [Google Scholar]
  90. Berman, M.; Kiiveri, H.; Lagerstrom, R.; Ernst, A.; Dunne, R.; Huntington, J. ICE: A Statistical Approach to Identifying Endmembers in Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2004, 42, 2085–2095. [Google Scholar] [CrossRef]
  91. Nascimento, J.; Bioucas-Dias, J. Vertex Component Analysis: A Fast Algorithm to Unmix Hyperspectral Data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 898–910. [Google Scholar] [CrossRef]
  92. Miao, L.; Qi, H. Endmember Extraction from Highly Mixed Data Using Minimum Volume Constrained Nonnegative Matrix Factorization. IEEE Trans. Geosci. Remote Sens. 2007, 45, 765–777. [Google Scholar] [CrossRef]
  93. Li, J.; Bioucas-Dias, J. Minimum Volume Simplex Analysis: A Fast Algorithm to Unmix Hyperspectral Data. In Proceedings of the IEEE International Geoscience Remote Sensing Symposium, Boston, MA, USA, 7–11 July 2008; pp. 250–253. [Google Scholar]
  94. Bioucas-Dias, J. A Variable Splitting Augmented Lagrangian Approach to Linear Spectral Unmixing. In Proceedings of the 1st Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, Grenoble, France, 26–28 August 2009; IEEE: New York, NY, USA, 2009; pp. 1–4. [Google Scholar]
  95. Shah, D.; Zaveri, T. Dispersion Index Based Endmember Extraction for Hyperspectral Unmixing. IETE J. Res. 2023, 69, 2837–2845. [Google Scholar] [CrossRef]
  96. Zhang, X.; Wang, Y.; Xue, T. Quadratic Clustering-Based Simplex Volume Maximization for Hyperspectral Endmember Extraction. Appl. Sci. 2022, 12, 7132. [Google Scholar] [CrossRef]
  97. Shah, D.; Zaveri, T. Convex Geometry and K-Medoids Based Noise-Robust Endmember Extraction Algorithm. J. Appl. Remote Sens. 2020, 14, 034521. [Google Scholar] [CrossRef]
  98. Bayliss, J.; Gualtieri, J.; Cromp, R. Analyzing Hyperspectral Data with Independent Component Analysis. In Proceedings of the 26th AIPR Workshop: Exploiting New Image Sources and Sensors, Washington, DC, USA, 15–17 October 1997; SPIE: Bellingham, WA, USA, 1998; pp. 133–143. [Google Scholar]
  99. Zhao, M.; Gao, T.; Chen, J. Hyperspectral Unmixing via Nonnegative Matrix Factorization with Handcrafted and Learned Priors. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
  100. Qian, Y.; Xiong, F.; Zeng, S.; Zhou, J.; Tang, Y. Matrix-Vector Nonnegative Tensor Factorization for Blind Unmixing of Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1776–1792. [Google Scholar] [CrossRef]
  101. Wang, T.; Li, J.; Ng, M.K.; Wang, C. Nonnegative Matrix Functional Factorization for Hyperspectral Unmixing with Non-Uniform Spectral Sampling. IEEE Trans. Geosci. Remote Sens. 2023, 62, 1–13. [Google Scholar]
  102. Nascimento, J.; Bioucas-Dias, J. Hyperspectral Unmixing Algorithm via Dependent Component Analysis. In Proceedings of the IEEE International Geoscience Remote Sensing Symposium, Barcelona, Spain, 23–28 July 2007; IEEE: New York, NY, USA, 2007; pp. 4033–4036. [Google Scholar]
  103. Dobigeon, N.; Moussaoui, S.; Tourneret, J.; Carteret, C. Bayesian Separation of Spectral Sources Under Non-Negativity and Full Additivity Constraints. Signal Process. 2009, 89, 2657–2669. [Google Scholar] [CrossRef]
  104. Li, J.; Plaza, A.; Liu, L. Robust Collaborative Nonnegative Matrix Factorization for Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6076–6090. [Google Scholar] [CrossRef]
  105. Boardman, J. Automating Spectral Unmixing of AVIRIS Data Using Convex Geometry Concepts. In Proceedings of the 4th Annual JPL Airborne Geoscience Workshop, Washington, DC, USA, 25–29 October 1993; Colorado University: Boulder, CO, USA, 1993; pp. 11–14. [Google Scholar]
  106. Boardman, J.; Kruse, F.; Green, R. Mapping Target Signatures via Partial Unmixing of AVIRIS Data. In Proceedings of the 5th JPL Airborne Earth Science Workshop, Pasadena, CA, USA, 23–26 January 1995. [Google Scholar]
  107. Harsanyi, J.; Chang, C. Hyperspectral Image Classification and Dimensionality Reduction: An Orthogonal Subspace Projection Approach. IEEE Trans. Geosci. Remote Sens. 1994, 32, 779–785. [Google Scholar] [CrossRef]
  108. Araújo, M.; Saldanha, T.; Galvão, R.; Yoneyama, T.; Chame, H.; Visani, V. The Successive Projections Algorithm for Variable Selection in Spectroscopic Multicomponent Analysis. Chemom. Intell. Lab. Syst. 2001, 57, 65–73. [Google Scholar] [CrossRef]
  109. Wu, B.; Zhang, L.; Li, P. Unmixing Hyperspectral Imagery Based on Support Vector Nonlinear Approximating Regression. J. Remote Sens. 2006, 10, 312–318. [Google Scholar]
  110. Plaza, A.; Martínez, P.; Pérez, R.; Plaza, J. Spatial/Spectral Endmember Extraction by Multidimensional Morphological Operations. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2025–2041. [Google Scholar] [CrossRef]
  111. Rudin, L.; Osher, S.; Fatemi, E. Nonlinear Total Variation Based Noise Removal Algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  112. Rogge, D.; Rivard, B.; Zhang, J.; Sanchez, A.; Harris, J.; Feng, J. Integration of Spatial Spectral Information for the Improved Extraction of Endmembers. Remote Sens. Environ. 2007, 110, 287–303. [Google Scholar] [CrossRef]
  113. Zortea, M.; Plaza, A. Spatial Preprocessing for Endmember Extraction. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2679–2693. [Google Scholar] [CrossRef]
  114. Song, M.; Li, Y.; Xu, D. Spatial Potential Energy Weighted Maximum Simplex Algorithm for Hyperspectral Endmember Extraction. Remote Sens. 2022, 14, 1192. [Google Scholar] [CrossRef]
  115. Savitzky, A.; Golay, M.J.E. Smoothing and Differentiation of Data by Simplified Least Squares Procedures. Anal. Chem. 1964, 36, 1627–1639. [Google Scholar] [CrossRef]
  116. Gao, L.; Zhuang, L.; Wu, Y.; Sun, X.; Zhang, B. A Quantitative and Comparative Analysis of Different Preprocessing Implementations of DPSO: A Robust Endmember Extraction Algorithm. Soft Comput. 2014, 20, 4669–4683. [Google Scholar] [CrossRef]
  117. Gao, L.; Gao, J.; Li, J.; Plaza, A.; Zhuang, L.; Sun, X.; Zhang, B. Multiple Algorithm Integration Based on Ant Colony Optimization for Endmember Extraction from Hyperspectral Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2569–2582. [Google Scholar] [CrossRef]
  118. Zhao, H.; Jiang, Y.; Wang, T.; Cui, W.; Li, X. A Method Based on the Adaptive Cuckoo Search Algorithm for Endmember Extraction from Hyperspectral Remote Sensing Images. Remote Sens. Lett. 2016, 7, 289–297. [Google Scholar] [CrossRef]
  119. Somers, B.; Zortea, M.; Plaza, A.; Asner, G. Automated Extraction of Image-Based Endmember Bundles for Improved Spectral Unmixing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 396–408. [Google Scholar] [CrossRef]
  120. Ye, C.; Liu, S.; Xu, M.; Du, B.; Wan, J.; Sheng, H. An Endmember Bundle Extraction Method Based on Multiscale Sampling to Address Spectral Variability for Hyperspectral Unmixing. Remote Sens. 2021, 13, 3941. [Google Scholar] [CrossRef]
  121. Hashemi-Nazari, Y.; Tajaddini, A.; Saberi-Movahed, F.; Alonso-Fernandez, F.; Tiwari, P. Robust Oblique Projection and Weighted NMF for Hyperspectral Unmixing. Pattern Recogn. 2025, 170, 112029. [Google Scholar] [CrossRef]
  122. Halimi, A.; Altmann, Y.; Dobigeon, N.; Tourneret, J. Unmixing Hyperspectral Images Using the Generalized Bilinear Model. In Proceedings of the IEEE International Geoscience Remote Sensing Symposium, Vancouver, BC, Canada, 24–29 July 2011; pp. 1886–1889. [Google Scholar]
  123. Yokoya, N.; Chanussot, J.; Iwasaki, A. Nonlinear Unmixing of Hyperspectral Data Using Semi-Nonnegative Matrix Factorization. IEEE Trans. Geosci. Remote Sens. 2014, 52, 1430–1437. [Google Scholar] [CrossRef]
  124. Li, H.C.; Feng, X.R.; Wang, R.; Gao, L.; Du, Q. Superpixel-Based Low-Rank Tensor Factorization for Blind Nonlinear Hyperspectral Unmixing. IEEE Sens. J. 2024, 24, 13055–13072. [Google Scholar] [CrossRef]
  125. Tao, X.; Paoletti, M.; Han, L.; Haut, J.; Ren, P.; Plaza, J.; Plaza, A. Fast Orthogonal Projection for Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5523313. [Google Scholar] [CrossRef]
  126. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Sparse Unmixing of Hyperspectral Data. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2014–2039. [Google Scholar] [CrossRef]
  127. Zare, A.; Gader, P. Sparsity Promoting Iterated Constrained Endmember Detection in Hyperspectral Imagery. IEEE Geosci. Remote Sens. Lett. 2007, 4, 446–450. [Google Scholar] [CrossRef]
  128. Bioucas-Dias, J.; Figueiredo, M. Alternating Direction Algorithms for Constrained Sparse Regression: Application to Hyperspectral Unmixing. In Proceedings of the 2nd WHISPERS, Reykjavik, Iceland, 14–16 June 2010; pp. 1–4. [Google Scholar]
  129. Qian, Y.; Jia, S.; Zhou, J.; Robles-Kelly, A. Hyperspectral Unmixing via L1/2 Sparsity-Constrained Nonnegative Matrix Factorization. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4283–4297. [Google Scholar] [CrossRef]
  130. Zheng, C.; Li, H.; Wang, Q.; Chen, C. Reweighted Sparse Regression for Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2016, 52, 479–488. [Google Scholar] [CrossRef]
  131. Zhang, S.; Li, J.; Liu, K.; Deng, C.; Liu, L.; Plaza, A. Hyperspectral Unmixing Based on Local Collaborative Sparse Regression. IEEE Geosci. Remote Sens. Lett. 2016, 13, 631–635. [Google Scholar] [CrossRef]
  132. Wang, J.; Zhang, Q.; Zhang, Y. Elastic Reweighted Sparsity Regularized Sparse Unmixing for Hyperspectral Image Analysis. Digit. Signal Process. 2024, 155, 104723. [Google Scholar] [CrossRef]
  133. Xu, X.; Pan, B.; Chen, Z.; Shi, Z.; Li, T. Simultaneously Multiobjective Sparse Unmixing and Library Pruning for Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2021, 59, 3383–3395. [Google Scholar] [CrossRef]
  134. Wei, Y.; Xu, X.; Pan, B.; Li, T.; Shi, Z. A Multiobjective Group Sparse Hyperspectral Unmixing Method with High Correlation Library. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 7114–7127. [Google Scholar] [CrossRef]
  135. Li, J.; Gong, M.; Wei, J.; Zhang, Y.; Zhao, Y.; Wang, S.; Jiang, X. Evolutionary Multitasking Cooperative Transfer for Multiobjective Hyperspectral Sparse Unmixing. Knowl. Based Syst. 2024, 285, 111306. [Google Scholar] [CrossRef]
  136. Deng, K.; Qian, Y.; Nie, J.; Zhou, J. Diffusion Model Based Hyperspectral Unmixing Using Spectral Prior Distribution. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–6. [Google Scholar] [CrossRef]
  137. Elad, M.; Aharon, M. Image Denoising via Sparse and Redundant Representations Over Learned Dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745. [Google Scholar] [CrossRef]
  138. Qu, Q.; Nasrabadi, N.; Tran, T. Abundance Estimation for Bilinear Mixture Models via Joint Sparse and Low-Rank Representation. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4404–4423. [Google Scholar]
  139. Zhang, H.; Yang, B. Geometrical Projection Improved Multi-Objective Particle Swarm Optimization for Unsupervised Nonlinear Hyperspectral Unmixing. Int. J. Remote Sens. 2024, 45, 1850–1883. [Google Scholar] [CrossRef]
  140. Song, X.; Wu, L. A Novel Hyperspectral Endmember Extraction Algorithm Based on Online Robust Dictionary Learning. Remote Sens. 2019, 11, 1792. [Google Scholar] [CrossRef]
  141. Feng, R.; Zhong, Y.; Zhang, L. An Improved Nonlocal Sparse Unmixing Algorithm for Hyperspectral Imagery. IEEE Geosci. Remote Sens. Lett. 2015, 12, 915–919. [Google Scholar] [CrossRef]
  142. Heinz, D.; Chang, C.; Althouse, M. Fully Constrained Least-Squares Based Linear Unmixing Hyperspectral Image Classification. In Proceedings of the IEEE International Geoscience Remote Sensing Symposium, Hamburg, Germany, 28 June–2 July 1999; Volume 2, pp. 1401–1403. [Google Scholar]
  143. Heinz, D.C. Fully Constrained Least Squares Linear Spectral Mixture Analysis Method for Material Quantification in Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2001, 39, 529–545. [Google Scholar] [CrossRef]
  144. Gong, P.; Zhang, A. Noise Effect on Linear Spectral Unmixing. Ann. GIS 1999, 5, 52–57. [Google Scholar] [CrossRef]
  145. Boardman, J.W.; Kruse, F.A. Analysis of Imaging Spectrometer Data Using N-Dimensional Geometry and a Mixture-Tuned Matched Filtering Approach. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4138–4152. [Google Scholar] [CrossRef]
  146. Routh, D.; Seegmiller, L.; Bettigole, C.; Kuhn, C.; Oliver, C.D.; Glick, H.B. Improving the Reliability of Mixture Tuned Matched Filtering Remote Sensing Classification Results Using Supervised Learning Algorithms and Cross-Validation. Remote Sens. 2018, 10, 1675. [Google Scholar] [CrossRef]
  147. Li, J. Wavelet-Based Feature Extraction for Improved Endmember Abundance Estimation in Linear Unmixing of Hyperspectral Signals. IEEE Trans. Geosci. Remote Sens. 2004, 42, 644–649. [Google Scholar] [CrossRef]
  148. Farzam, M.; Beheshti, S.; Raahemifar, K. Calculation of Abundance Factors in Hyperspectral Imaging Using Genetic Algorithm. In Proceedings of the Canadian Conference on Electrical and Computer Engineering, Niagara Falls, ON, Canada, 4–7 May 2008. [Google Scholar]
  149. Broadwater, J.; Chellappa, R.; Banerjee, A.; Burlina, P. Kernel Fully Constrained Least Squares Abundance Estimates. In Proceedings of the IEEE International Geoscience Remote Sensing Symposium, Barcelona, Spain, 23–28 July 2007. [Google Scholar]
  150. Chouzenoux, E.; Legendre, M.; Moussaoui, S.; Idier, J. Fast Constrained Least Squares Spectral Unmixing Using Primal-Dual Interior-Point Optimization. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 59–69. [Google Scholar] [CrossRef]
  151. Chen, J.; Richard, C.; Honeine, P. Nonlinear Estimation of Material Abundances in Hyperspectral Images with L1-Norm Spatial Regularization. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2654–2665. [Google Scholar] [CrossRef]
  152. Kizel, F.; Shoshany, M.; Netanyahu, N.; Even-Tzur, G.; Benediktsson, J. A Stepwise Analytical Projected Gradient Descent Search for Hyperspectral Unmixing and Its Code Vectorization. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4925–4943. [Google Scholar] [CrossRef]
  153. Li, C.; Ma, Y.; Huang, J.; Mei, X.; Liu, C.; Ma, Y. GBM Based Unmixing of Hyperspectral Data Using Bound Projected Optimal Gradient Method. IEEE Geosci. Remote Sens. Lett. 2016, 13, 952–956. [Google Scholar] [CrossRef]
  154. Wu, L.; Huang, J.; Zhu, Z.Y. Tucker Tensor Decomposition with Rank Estimation for Sparse Hyperspectral Unmixing. Int. J. Remote Sens. 2024, 45, 3992–4022. [Google Scholar] [CrossRef]
  155. Guilfoyle, K.; Althouse, M.; Chang, C. A Quantitative and Comparative Analysis of Linear and Nonlinear Spectral Mixture Models Using Radial Basis Function Neural Networks. IEEE Trans. Geosci. Remote Sens. 2001, 39, 2314–2318. [Google Scholar] [CrossRef]
  156. Zhou, C.; Rodrigues, M. ADMM-Based Hyperspectral Unmixing Networks for Abundance and Endmember Estimation. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–18. [Google Scholar] [CrossRef]
  157. Elman, J. Finding Structure in Time. Cogn. Sci. 1990, 14, 179–211. [Google Scholar] [CrossRef]
  158. Chan, T.; Jia, K.; Gao, S.; Lu, J.; Zeng, Z.; Yi, M. PCANet: A Simple Deep Learning Baseline for Image Classification? IEEE Trans. Image Process. 2015, 24, 5017–5032. [Google Scholar] [CrossRef]
  159. Smola, A.; Schölkopf, B. A Tutorial on Support Vector Regression. Stat. Comput. 2004, 14, 199–222. [Google Scholar] [CrossRef]
  160. Palla, P.; Shetty, A.; Narasimhadhan, A. Subtractive Clustering and Phase Correlation Similarity Measure for Endmember Extraction. Infrared Phys. Technol. 2020, 110, 103452. [Google Scholar] [CrossRef]
  161. Wu, K.; Feng, X.; Zhang, Y. A Novel Endmember Extraction Method Using Sparse Component Analysis for Hyperspectral Remote Sensing Imagery. IEEE Access 2018, 6, 75206–75215. [Google Scholar] [CrossRef]
  162. Wang, L.; Wang, S.; Li, X. Twin Support Vector Machine-Based Hyperspectral Unmixing and Its Uncertainty Analysis. J. Appl. Remote Sens. 2020, 14, 046504. [Google Scholar] [CrossRef]
  163. Yang, K.; Liu, E.; Zhang, W.; Wang, G.; Xia, T. A Novel Algorithm on Endmember Extraction Based on SURF and Considering the Dimension of Hyperspectral Image. Sci. Technol. Eng. 2016, 16, 66–71. [Google Scholar]
  164. Shen, X.; Bao, W. Hyperspectral Endmember Extraction Using Spatially Weighted Simplex Strategy. Remote Sens. 2019, 11, 2147. [Google Scholar] [CrossRef]
  165. Zhao, Y.; Zhou, Z.; Wang, D. Group Endmember Extraction Algorithm Based on Gram-Schmidt Orthogonalization. J. Appl. Remote Sens. 2019, 13, 026504. [Google Scholar] [CrossRef]
  166. Su, Y.; Sun, X. Improved Discrete Swarm Intelligence Algorithms for Endmember Extraction from Hyperspectral Remote Sensing Images. J. Appl. Remote Sens. 2016, 10, 045018. [Google Scholar] [CrossRef]
  167. Zhao, H.; Hao, X.; Hu, X. The Spatial-Spectral-Environment Extraction Endmember Algorithm and Application in the MODIS Fractional Snow Cover Retrieval. Remote Sens. 2020, 12, 3693. [Google Scholar] [CrossRef]
  168. Liu, R.; Zhu, X. Endmember Bundle Extraction Based on Multiobjective Optimization. IEEE Trans. Geosci. Remote Sens. 2021, 59, 8630–8645. [Google Scholar] [CrossRef]
  169. Li, J.; Li, H.; Gong, M. Multi-Fidelity Evolutionary Multitasking Optimization for Hyperspectral Endmember Extraction. Appl. Soft Comput. 2021, 111, 107713. [Google Scholar] [CrossRef]
  170. Liu, R.; Wang, P.; Qu, B. Endmember Bundle Extraction Based on Improved Multiobjective Particle Swarm Optimization. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1–5. [Google Scholar] [CrossRef]
  171. Liu, R.; Du, B.; Zhang, L. Multiobjective Optimized Endmember Extraction for Hyperspectral Images. Remote Sens. 2017, 9, 558. [Google Scholar] [CrossRef]
  172. Xu, M.; Zhang, L.; Song, D. A Mutation Operator Accelerated Quantum-Behaved Particle Swarm Optimization Algorithm for Hyperspectral Endmember Extraction. Remote Sens. 2017, 9, 197. [Google Scholar] [CrossRef]
  173. Kale, K.V.; Solankar, M.M.; Nalawade, D.B. Hyperspectral Endmember Extraction Techniques, Processing and Analysis of Hyperspectral Data; IntechOpen: London, UK, 2020; pp. 1–136. [Google Scholar] [CrossRef]
  174. Yang, L.; Peng, J.; Su, H.; Xu, L.; Wang, Y.; Yu, B. Combined Nonlocal Spatial Information and Spatial Group Sparsity in NMF for Hyperspectral Unmixing. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1767–1771. [Google Scholar] [CrossRef]
  175. Rasti, B.; Zouaoui, A.; Mairal, J. SUnAA: Sparse Unmixing Using Archetypal Analysis. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1–5. [Google Scholar] [CrossRef]
  176. Deshpande, V.; Bhatt, J. A Practical Approach for Hyperspectral Unmixing Using Deep Learning. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar]
  177. Berahmand, K.; Daneshfar, F.; Salehi, E.S.; Li, Y.; Xu, Y. Autoencoders and Their Applications in Machine Learning: A Survey. Artif. Intell. Rev. 2024, 57, 28. [Google Scholar] [CrossRef]
  178. Lemme, A.; Reinhart, R.F.; Steil, J.J. Online Learning and Generalization of Parts-Based Image Representations by Non-Negative Sparse Autoencoders. Neural Netw. 2012, 33, 194–203. [Google Scholar] [CrossRef]
  179. Su, Y.; Marinoni, A.; Li, J.; Plaza, J.; Gamba, P. Stacked Nonnegative Sparse Autoencoders for Robust Hyperspectral Unmixing. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1427–1431. [Google Scholar] [CrossRef]
  180. Palsson, B.; Sigurdsson, J.; Sveinsson, J.R.; Ulfarsson, M.O. Hyperspectral Unmixing Using a Neural Network Autoencoder. IEEE Access 2018, 6, 25646–25656. [Google Scholar] [CrossRef]
  181. Shi, S.; Zhang, L.; Altmann, Y.; Chen, J. Deep Generative Model for Spatial-Spectral Unmixing with Multiple Endmember Priors. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
  182. Guo, R.; Wang, W.; Qi, H. Hyperspectral Image Unmixing Using Autoencoder Cascade. In Proceedings of the 7th WHISPERS, Tokyo, Japan, 2–5 June 2015; pp. 1–4. [Google Scholar]
  183. Qu, Y.; Qi, H. UDAS: An Untied Denoising Autoencoder with Sparsity for Spectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1698–1712. [Google Scholar] [CrossRef]
  184. Gao, L.; Han, Z.; Hong, D.; Zhang, B.; Chanussot, J. CyCU-Net: Cycle-Consistency Unmixing Network by Learning Cascaded Autoencoders. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
  185. Wu, J.; Zhao, J.; Long, H. Cascaded Hybrid Convolutional Autoencoder Network for Spectral-Spatial Nonlinear Hyperspectral Unmixing. Int. J. Remote Sens. 2024, 45, 9267–9286. [Google Scholar] [CrossRef]
  186. Ozkan, S.; Kaya, B.; Akar, G. EndNet: Sparse Autoencoder Network for Endmember Extraction and Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2018, 57, 482–496. [Google Scholar] [CrossRef]
  187. Su, Y.; Marinoni, A.; Li, J.; Plaza, A.; Gamba, P. Nonnegative Sparse Autoencoder for Robust Endmember Extraction from Remotely Sensed Hyperspectral Images. In Proceedings of the IEEE International Geoscience Remote Sensing Symposium, Fort Worth, TX, USA, 23–28 July 2017; pp. 205–208. [Google Scholar]
  188. Lindenbaum, O.; Aizenbud, Y.; Kluger, Y. Probabilistic Robust Autoencoders for Outlier Detection. arXiv 2021, arXiv:2110.00494. [Google Scholar]
  189. Zhao, M.; Chen, J.; Dobigeon, N. AE-RED: A Hyperspectral Unmixing Framework Powered by Deep Autoencoder and Regularization by Denoising. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5512115. [Google Scholar] [CrossRef]
  190. Li, P.; Pei, Y.; Li, J. A Comprehensive Survey on Design and Application of Autoencoder in Deep Learning. Appl. Soft Comput. 2023, 138, 110176. [Google Scholar] [CrossRef]
  191. Cao, C.; Song, W.; Xiang, H. A Two-Stream Stacked Autoencoder with Inter-Class Separability for Bilinear Hyperspectral Unmixing. IEEE Trans. Comput. Imag. 2024, 10, 357–371. [Google Scholar] [CrossRef]
  192. Hong, D.; Gao, L.; Yao, J. Endmember-Guided Unmixing Network (EGU-Net): A General Deep Learning Framework for Self-Supervised Hyperspectral Unmixing. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 6518–6531. [Google Scholar] [CrossRef]
  193. Qi, L.; Gao, F.; Dong, J. SSCU-Net: Spatial-Spectral Collaborative Unmixing Network for Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
  194. Qi, L.; Chen, Z.; Gao, F.; Dong, J.; Gao, X.; Du, Q. Multiview Spatial-Spectral Two-Stream Network for Hyperspectral Image Unmixing. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–16. [Google Scholar] [CrossRef]
  195. Qi, L.; Qin, X.; Gao, F.; Dong, J.; Gao, X. SAWU-Net: Spatial Attention Weighted Unmixing Network for Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1–5. [Google Scholar] [CrossRef]
  196. Wang, B.; Yao, H.; Song, D.; Zhang, J.; Gao, H. SSF-Net: A Spatial-Spectral Features Integrated Autoencoder Network for Hyperspectral Unmixing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 17, 1781–1794. [Google Scholar] [CrossRef]
  197. Hu, J.; Wang, T.; Jin, Q.; Peng, C.; Liu, Q. A Multi-Domain Dual-Stream Network for Hyperspectral Unmixing. Int. J. Appl. Earth Obs. Geoinf. 2024, 135, 104247. [Google Scholar] [CrossRef]
  198. Li, H.; Borsoi, R.; Imbiriba, T.; Closas, P.; Bermudez, J.; Erdogmus, D. Model-Based Deep Autoencoder Networks for Nonlinear Hyperspectral Unmixing. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
  199. Shahid, K.; Schizas, I. Unsupervised Hyperspectral Unmixing via Nonlinear Autoencoders. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–13. [Google Scholar] [CrossRef]
  200. Zhao, M.; Yan, L.; Chen, J. LSTM-DNN Based Autoencoder Network for Nonlinear Hyperspectral Image Unmixing. IEEE J. Sel. Top. Signal Process. 2021, 15, 295–309. [Google Scholar] [CrossRef]
  201. Su, Y.; Xu, X.; Li, J.; Qi, H.; Gamba, P.; Plaza, A. Deep Autoencoders with Multitask Learning for Bilinear Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2021, 59, 8615–8629. [Google Scholar] [CrossRef]
  202. Wang, M.; Zhao, M.; Chen, J.; Rahardja, S. Nonlinear Unmixing of Hyperspectral Data via Deep Autoencoder Networks. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1467–1471. [Google Scholar] [CrossRef]
  203. Xu, M.; Xu, J.; Liu, S.; Sheng, H.; Yang, Z. Multi-Scale Convolutional Mask Network for Hyperspectral Unmixing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 3687–3700. [Google Scholar] [CrossRef]
  204. Liu, B.; Kong, W.; Wang, Y. Deep Convolutional Asymmetric Autoencoder-Based Spatial-Spectral Clustering Network for Hyperspectral Image. Wirel. Commun. Mob. Comput. 2022, 2022, 2027981. [Google Scholar] [CrossRef]
  205. Khajehrayeni, F.; Ghassemian, H. Hyperspectral Unmixing Using Deep Convolutional Autoencoders in a Supervised Scenario. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 567–576. [Google Scholar] [CrossRef]
  206. Fang, T.; Zhu, F.; Chen, J. Hyperspectral Unmixing Based on Multilinear Mixing Model Using Convolutional Autoencoders. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–16. [Google Scholar] [CrossRef]
  207. Zhang, M.; Yang, M.; Xie, H.; Yue, P.; Zhang, W.; Jiao, Q.; Xu, L.; Tan, X. A Global Spatial-Spectral Feature Fused Autoencoder for Nonlinear Hyperspectral Unmixing. Remote Sens. 2024, 16, 3149. [Google Scholar] [CrossRef]
  208. Ozkan, S.; Akar, G. Deep Spectral Convolution Network for Hyperspectral Unmixing. In Proceedings of the 25th IEEE International Conference Image Processing, Athens, Greece, 7–10 October 2018; pp. 3313–3317. [Google Scholar]
  209. Su, Y.; Zhu, Z.; Gao, L.; Plaza, A.; Li, P.; Sun, X.; Xu, X. DAAN: A Deep Autoencoder-Based Augmented Network for Blind Multilinear Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–15. [Google Scholar] [CrossRef]
  210. Jin, D.; Yang, B. Graph Attention Convolutional Autoencoder-Based Unsupervised Nonlinear Unmixing for Hyperspectral Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 7896–7906. [Google Scholar] [CrossRef]
  211. Hua, Z.; Li, X.; Qiu, Q.; Zhao, L. Autoencoder Network for Hyperspectral Unmixing with Adaptive Abundance Smoothing. IEEE Geosci. Remote Sens. Lett. 2021, 18, 1640–1644. [Google Scholar] [CrossRef]
  212. Mhaskar, H.; Liao, Q.; Poggio, T. When and Why Are Deep Networks Better than Shallow Ones? In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 2343–2349. [Google Scholar] [CrossRef]
  213. Su, L.; Liu, J.; Yuan, Y. A Multi-Attention Autoencoder for Hyperspectral Unmixing Based on the Extended Linear Mixing Model. Remote Sens. 2023, 15, 2898. [Google Scholar] [CrossRef]
  214. Hubel, D.; Wiesel, T. Receptive Fields, Binocular Interaction and Functional Architecture in the Cat’s Visual Cortex. J. Physiol. 1962, 160, 106–154. [Google Scholar] [CrossRef]
  215. Fukushima, K. Neocognitron: A Self-Organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position. Biol. Cybern. 1980, 36, 193–202. [Google Scholar] [CrossRef]
  216. Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep Learning for Computer Vision: A Brief Review. Comput. Intel. Neurosc. 2018, 2018, 7068349. [Google Scholar] [CrossRef]
  217. Gu, J. Recent Advances in Convolutional Neural Networks. Pattern Recognit. 2018, 77, 354–377. [Google Scholar] [CrossRef]
  218. Ahmad, M.; Shabbir, S.; Roy, S. Hyperspectral Image Classification-Traditional to Deep Models: A Survey for Future Prospects. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 15, 968–999. [Google Scholar] [CrossRef]
  219. Zhang, X.; Sun, Y.; Zhang, J. Hyperspectral Unmixing via Deep Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1755–1759. [Google Scholar] [CrossRef]
  220. Lin, M.; Chen, Q.; Yan, S. Network in Network. arXiv 2013, arXiv:1312.4400. [Google Scholar]
  221. Gao, H.; Yang, Y.; Li, C.; Zhou, H.; Qu, X. Joint Alternate Small Convolution and Feature Reuse for Hyperspectral Image Classification. ISPRS Int. J. Geo- Inf. 2018, 7, 349. [Google Scholar] [CrossRef]
  222. Qi, L.; Li, J.; Wang, Y.; Lei, M.; Gao, X. Deep Spectral Convolution Network for Hyperspectral Image Unmixing with Spectral Library. Signal Process. 2020, 176, 107672. [Google Scholar] [CrossRef]
  223. Wan, L.; Chen, T.; Plaza, A. Hyperspectral Unmixing Based on Spectral and Sparse Deep Convolutional Neural Networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 11669–11682. [Google Scholar] [CrossRef]
  224. Palsson, B.; Ulfarsson, M.; Sveinsson, J. Convolutional Autoencoder for Spectral-Spatial Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2020, 59, 535–549. [Google Scholar] [CrossRef]
  225. Liu, L.; Awwad, E.M.; Ali, Y.A.; Al-Razgan, M.; Maarouf, A.; Abualigah, L.; Hoshyar, A.N. Multi-Dataset Hyper-CNN for Hyperspectral Image Segmentation of Remote Sensing Images. Processes 2023, 11, 435. [Google Scholar] [CrossRef]
  226. Zhao, J.; Zhang, X.; Fan, J.; Meng, H. A 3-D-CNN and Semi-Supervised Based Network for Hyperspectral Unmixing. Int. J. Remote Sens. 2022, 45, 168–191. [Google Scholar]
  227. Tao, X.; Paoletti, M.; Han, L. A New Deep Convolutional Network for Effective Hyperspectral Unmixing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 6999–7012. [Google Scholar] [CrossRef]
  228. Tulczyjew, L.; Kawulok, M.; Longépé, N. A Multibranch Convolutional Neural Network for Hyperspectral Unmixing. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  229. Lin, C.; Ma, W.; Li, W. Identifiability of the Simplex Volume Minimization Criterion for Blind Hyperspectral Unmixing: The No-Pure-Pixel Case. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5530–5546. [Google Scholar]
  230. Rasti, B.; Koirala, B.; Scheunders, P.; Chanussot, J. MiSiCNet: Minimum Simplex Convolutional Network for Deep Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
  231. Xu, M.; Du, B.; Fang, Y. Endmember Extraction from Highly Mixed Data Using Linear Mixture Model Constrained Particle Swarm Optimization. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5502–5511. [Google Scholar] [CrossRef]
  232. Zhuang, L.; Lin, C.; Bioucas-Dias, J. Regularization Parameter Selection in Minimum Volume Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9858–9877. [Google Scholar] [CrossRef]
  233. Rasti, B.; Koirala, B. SUnCNN: Sparse Unmixing Using Unsupervised Convolutional Neural Network. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar]
  234. Kong, F.; Chen, M.; Li, Y.; Li, D.; Zhang, Y. Deep Interpretable Fully CNN Structure for Sparse Hyperspectral Unmixing via Model-Driven and Data-Driven Integration. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–16. [Google Scholar] [CrossRef]
  235. Ghosh, P.; Roy, S.K.; Koirala, B.; Rasti, B.; Scheunders, P. Deep Hyperspectral Unmixing Using Transformer Network. arXiv 2022. [Google Scholar] [CrossRef]
  236. Zhao, M.; Shi, S.; Chen, J.; Dobigeon, N. A 3D-CNN Framework for Hyperspectral Unmixing with Spectral Variability. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar]
  237. Tang, M.; Qu, Y.; Qi, H. Hyperspectral Nonlinear Unmixing via Generative Adversarial Network. In Proceedings of the IEEE International Geoscience Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 2404–2407. [Google Scholar]
  238. Roy, S.; Haut, J.; Paoletti, M.; Dubey, S.; Plaza, A. Generative Adversarial Minority Oversampling for Spectral-Spatial Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–15. [Google Scholar] [CrossRef]
  239. Jin, Q.; Ma, Y.; Fan, F.; Huang, J.; Mei, X.; Ma, J. Adversarial Autoencoder Network for Hyperspectral Unmixing. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 4555–4569. [Google Scholar] [CrossRef]
  240. Zhao, M.; Yan, L.; Chen, J. Hyperspectral Image Shadow Compensation via Cycle-Consistent Adversarial Networks. Neurocomputing 2021, 450, 61–69. [Google Scholar] [CrossRef]
  241. Sun, B.; Su, Y.; Sun, H.; Bai, J.; Li, P.; Liu, F.; Liu, D. Generative Adversarial Autoencoder Network for Anti-Shadow Hyperspectral Unmixing. IEEE Geosci. Remote Sens. Lett. 2024, 21, 1–5. [Google Scholar] [CrossRef]
  242. Ozkan, S.; Akar, G. Improved Deep Spectral Convolution Network for Hyperspectral Unmixing with Multinomial Mixture Kernel and Endmember Uncertainty. arXiv 2019, arXiv:1808.01104. [Google Scholar] [CrossRef]
  243. Borsoi, R.; Imbiriba, T.; Bermudez, J. Deep Generative Endmember Modeling: An Application to Unsupervised Spectral Unmixing. IEEE Trans. Comput. Imag. 2019, 6, 374–384. [Google Scholar] [CrossRef]
  244. Palsson, B.; Ulfarsson, M.; Sveinsson, J. Synthetic Hyperspectral Images with Controllable Spectral Variability and Ground Truth. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  245. Palsson, B.; Ulfarsson, M.; Sveinsson, J. Synthetic Hyperspectral Images with Controllable Spectral Variability Using a Generative Adversarial Network. Remote Sens. 2023, 15, 3919. [Google Scholar] [CrossRef]
  246. Shi, S.; Zhao, M.; Zhang, L.; Altmann, Y.; Chen, J. Probabilistic Generative Model for Hyperspectral Unmixing Accounting for Endmember Variability. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–15. [Google Scholar] [CrossRef]
  247. Hu, S.; Li, H. Hyperspectral Unmixing with Multi-Scale Convolution Attention Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 17, 2531–2542. [Google Scholar] [CrossRef]
  248. Qi, L.; Yue, M.; Gao, F.; Cao, B.; Dong, J.; Gao, X. Deep Attention-Guided Spatial-Spectral Network for Hyperspectral Image Unmixing. IEEE Geosci. Remote Sens. Lett. 2024, 21, 1–5. [Google Scholar] [CrossRef]
  249. Xiong, F.; Zhou, J.; Tao, S.; Lu, J.; Qian, Y. SNMF-Net: Learning a Deep Alternating Neural Network for Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–16. [Google Scholar] [CrossRef]
  250. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  251. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S. An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  252. Guo, J.; Han, K.; Wu, H.; Xu, C.; Tang, Y.; Xu, C.; Wang, Y. CMT: Convolutional Neural Networks Meet Vision Transformers. arXiv 2021, arXiv:2107.06263. [Google Scholar]
  253. Bhakthan, S.M.; Loganathan, A. A Hyperspectral Unmixing Model Using Convolutional Vision Transformer. Earth Sci. Inform. 2024, 17, 2255–2273. [Google Scholar] [CrossRef]
  254. Duan, Y.; Xu, X.; Li, T.; Pan, B.; Shi, Z. UnDAT: Double-Aware Transformer for Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–12. [Google Scholar] [CrossRef]
  255. Wang, P.; Liu, R.; Zhang, L. MAT-Net: Multiscale Aggregation Transformer Network for Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–15. [Google Scholar] [CrossRef]
  256. Chen, C.; Fan, Q.; Panda, R. Crossvit: Cross-Attention Multiscale Vision Transformer for Image Classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 357–366. [Google Scholar]
  257. Yang, Z.; Xu, M.; Wan, J. UST-Net: A U-Shaped Transformer Network Using Shifted Windows for Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–15. [Google Scholar] [CrossRef]
  258. Su, Y.; Gao, L.; Plaza, A.; Sun, X.; Jiang, M.; Yang, G. SRViT: Self-Supervised Relation-Aware Vision Transformer for Hyperspectral Unmixing. IEEE Trans. Neural Netw. Learn. Syst. 2025, 1–14. [Google Scholar] [CrossRef]
  259. Yao, J.; Hong, D.; Chanussot, J.; Meng, D.; Zhu, X.; Xu, Z. Cross-Attention in Coupled Unmixing Nets for Unsupervised Hyperspectral Super-Resolution. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 208–224. [Google Scholar]
  260. Xu, C.; Ye, F.; Kong, F.; Li, Y.; Lv, Z. MSCC-ViT: A Multiscale Visual-Transformer Network Using Convolution Crossing Attention for Hyperspectral Unmixing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 18070–18082. [Google Scholar] [CrossRef]
  261. Roy, S.; Deria, A.; Shah, C.; Haut, J.; Du, Q.; Plaza, A. Spectral-Spatial Morphological Attention Transformer for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5503615. [Google Scholar] [CrossRef]
  262. Chen, J.; Yang, C.; Zhang, L.; Yang, L.; Bian, L.; Luo, Z.; Wang, J. TCCU-Net: Transformer and CNN Collaborative Unmixing Network for Hyperspectral Image. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 8073–8089. [Google Scholar] [CrossRef]
  263. Zeng, J.; Ritz, C.; Zhao, J.; Lan, J. Attention-Based Residual Network with Scattering Transform Features for Hyperspectral Unmixing with Limited Training Samples. Remote Sens. 2020, 12, 400. [Google Scholar] [CrossRef]
  264. Ghosh, P.; Roy, S.K.; Koirala, B.; Rasti, B.; Scheunders, P. Hyperspectral Unmixing Using Transformer Network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16. [Google Scholar] [CrossRef]
  265. Kong, F.; Zheng, Y.; Li, D.; Li, Y.; Chen, M. Window Transformer Convolutional Autoencoder for Hyperspectral Sparse Unmixing. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1–5. [Google Scholar] [CrossRef]
  266. Ge, Y.; Han, L.; Paoletti, M.E.; Haut, J.M.; Qu, G. Transformer-Enhanced CNN Based on Intensive Feature for Hyperspectral Unmixing. IEEE Geosci. Remote Sens. Lett. 2024, 21, 1–5. [Google Scholar] [CrossRef]
  267. Borsoi, R.; Imbiriba, T.; Closas, P. Dynamical Hyperspectral Unmixing with Variational Recurrent Neural Networks. IEEE Trans. Image Process. 2023, 32, 2279–2294. [Google Scholar] [CrossRef]
  268. Ullah, F.; Ullah, I.; Khan, R.U.; Khan, S.; Khan, K.; Pau, G. Conventional to Deep Ensemble Methods for Hyperspectral Image Classification: A Comprehensive Survey. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 3878–3916. [Google Scholar] [CrossRef]
  269. Remote Sensing Laboratory, University of Tehran. Available online: https://rslab.ut.ac.ir/data (accessed on 7 August 2025).
  270. Zapiola, A.G.; Boselli, A.; Menafoglio, A.; Vantini, S. Hyperspectral Unmixing Algorithms for Remote Compositional Surface Mapping: A Review of the State of the Art. arXiv 2025, arXiv:2507.14260. [Google Scholar] [CrossRef]
  271. Jia, J.; Zheng, X.; Wang, Y.; Chen, Y.; Karjalainen, M.; Dong, S.; Hyyppä, J. The Effect of Artificial Intelligence Evolving on Hyperspectral Imagery with Different Signal-to-Noise Ratio, Spectral and Spatial Resolutions. Remote Sens. Environ. 2024, 311, 114291. [Google Scholar] [CrossRef]
  272. Treat, C.C.; Marushchak, M.E.; Voigt, C.; Zhang, Y.; Tan, Z.; Zhuang, Q.; Virtanen, T.A.; Räsänen, A.; Biasi, C.; Hugelius, G.; et al. Tundra Landscape Heterogeneity, Not Interannual Variability, Controls the Decadal Regional Carbon Balance in the Western Russian Arctic. Glob. Change Biol. 2018, 24, 5188–5204. [Google Scholar] [CrossRef] [PubMed]
  273. Schneider, J.; Grosse, G.; Wagner, D. Land Cover Classification of Tundra Environments in the Arctic Lena Delta Based on Landsat 7 ETM+ Data and Its Application for Upscaling of Methane Emissions. Remote Sens. Environ. 2009, 113, 380–391. [Google Scholar] [CrossRef]
  274. Palace, M.; Herrick, C.; DelGreco, J.; Finnell, D.; Garnello, A.J.; McCalley, C.; McArthur, K.; Sullivan, F.; Varner, R.K. Determining Subarctic Peatland Vegetation Using an Unmanned Aerial System (UAS). Remote Sens. 2018, 10, 1498. [Google Scholar] [CrossRef]
  275. Chakraborty, R.; Rachdi, I.; Thiele, S.; Booysen, R.; Kirsch, M.; Lorenz, S.; Gloaguen, R.; Sebari, I. A Spectral and Spatial Comparison of Satellite-Based Hyperspectral Data for Geological Mapping. Remote Sens. 2024, 16, 2089. [Google Scholar] [CrossRef]
  276. Kruse, F.A.; Lefkoff, A.B.; Boardman, J.W.; Heidebrecht, K.B.; Shapiro, A.T.; Barloon, P.J.; Goetz, A.F. The Spectral Image Processing System (SIPS)-Interactive Visualization and Analysis of Imaging Spectrometer Data. Remote Sens. Environ. 1993, 44, 145–163. [Google Scholar] [CrossRef]
  277. NV5 Geospatial Software. Available online: https://www.nv5geospatialsoftware.com/ (accessed on 7 August 2025).
  278. Yanik, E.; Sezgin, T.M. Active Scene Learning. arXiv 2019, arXiv:1903.02832. [Google Scholar] [CrossRef]
  279. Zhang, Q.; Peng, J.; Sun, W.; Liu, Q. Few-Shot Learning with Mutual Information Enhancement for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–14. [Google Scholar]
  280. Rasti, B.; Koirala, B.; Scheunders, P. Hapkecnn: Blind Nonlinear Unmixing for Intimate Mixtures Using Hapke Model and Convolutional Neural Network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
  281. Hapke, B. Theory of Reflectance and Emittance Spectroscopy; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
  282. Mustard, J.; Pieters, C. Quantitative Abundance Estimates from Bidirectional Reflectance Measurements. J. Geophys. Res. 1987, 92, 617–626. [Google Scholar] [CrossRef]
  283. Siebels, K.; Goita, K.; Germain, M. Estimation of Mineral Abundance from Hyperspectral Data Using a New Supervised Neighbor-Band Ratio Unmixing Approach. IEEE Trans. Geosci. Remote Sens. 2020, 58, 6754–6756. [Google Scholar] [CrossRef]
  284. Zou, J.; Lan, J.; Shao, Y. A Hierarchical Sparsity Unmixing Method to Address Endmember Variability in Hyperspectral Image. Remote Sens. 2018, 10, 738. [Google Scholar] [CrossRef]
  285. Lan, J.; Zou, J.; Hao, Y.; Zeng, Y.; Zhang, Y.; Dong, M. Research Progress on Unmixing of Hyperspectral Remote Sensing Imagery. J. Remote Sens. 2018, 22, 13–27. [Google Scholar] [CrossRef]
  286. Henrot, S.; Chanussot, J.; Jutten, C. Dynamical Spectral Unmixing of Multitemporal Hyperspectral Images. IEEE Trans. Image Process. 2016, 25, 3219–3232. [Google Scholar] [CrossRef]
  287. Thouvenin, P.; Dobigeon, N.; Tourneret, J. Online Unmixing of Multitemporal Hyperspectral Images Accounting for Spectral Variability. IEEE Trans. Image Process. 2016, 25, 3979–3990. [Google Scholar] [CrossRef]
  288. Borsoi, R.; Imbiriba, T.; Closas, P.; Bermudez, J.; Richard, C. Kalman Filtering and Expectation Maximization for Multitemporal Spectral Unmixing. IEEE Geosci. Remote Sens. Lett. 2020, 19, 1–5. [Google Scholar] [CrossRef]
  289. Liu, H.; Lu, Y.; Wu, Z.; Du, Q.; Chanussot, J.; Wei, Z. Bayesian Unmixing of Hyperspectral Image Sequence with Composite Priors for Abundance and Endmember Variability. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–15. [Google Scholar] [CrossRef]
  290. Chen, J.; Ma, L.; Chen, X.; Rao, Y. Research Progress of Spectral Mixture Analysis. J. Remote Sens. 2016, 20, 1102–1109. [Google Scholar]
  291. Koirala, B.; Rasti, B.; Bnoulkacem, Z.; Scheunders, P. Nonlinear Spectral Unmixing Using Bézier Surfaces. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–16. [Google Scholar] [CrossRef]
  292. Sun, S.; Liu, J.; Sun, S. Hyperspectral Subpixel Target Detection Based on Interaction Subspace Model. Pattern Recog. 2023, 139, 109464. [Google Scholar] [CrossRef]
  293. Song, X.; Zou, L.; Wu, L. Detection of Subpixel Targets on Hyperspectral Remote Sensing Imagery Based on Background Endmember Extraction. IEEE Trans. Geosci. Remote Sens. 2021, 59, 2365–2377. [Google Scholar] [CrossRef]
  294. Zhang, Y.; Zhao, D.; Liu, H. Research Hotspots and Frontiers in Agricultural Multispectral Technology: Bibliometrics and Scientometrics Analysis of the Web of Science. Front. Plant Sci. 2022, 13, 955340. [Google Scholar] [CrossRef]
  295. Ram, B.G.; Oduor, P.; Igathinathane, C.; Howatt, K.; Sun, X. A Systematic Review of Hyperspectral Imaging in Precision Agriculture: Analysis of Its Current State and Future Prospects. Comput. Electron. Agric. 2024, 222, 109037. [Google Scholar] [CrossRef]
  296. Yu, F.; Zhao, D.; Xu, T. Characteristic Analysis and Decomposition of Mixed Pixels from UAV Hyperspectral Images in Rice Tillering Stage. Spectrosc. Spect. Anal. 2022, 42, 947–953. [Google Scholar]
  297. Mottus, M.; Pham, P.; Halme, E. IAIGA: A Novel Dataset for Multitask Learning of Continuous and Categorical Forest Variables from Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–11. [Google Scholar] [CrossRef]
  298. Sun, Y.; Tian, S.; Di, B. Extracting Mineral Alteration Information Using WorldView-3 Data. Geosci. Front. 2017, 8, 1051–1062. [Google Scholar] [CrossRef]
  299. Van der Meer, F. Near-Infrared Laboratory Spectroscopy of Mineral Chemistry: A Review. Int. J. Appl. Earth Obs. 2018, 65, 71–78. [Google Scholar]
  300. Zhu, L.; Qin, K.; Zhao, Y. Research on Improved Stacked Sparse Autoencoders for Mineral Hyperspectral Endmember Extraction. Spectrosc. Spect. Anal. 2021, 41, 1288–1293. [Google Scholar]
  301. Lee, C.; Gasster, S.; Plaza, A.; Chang, C.; Huang, B. Recent Developments in High Performance Computing for Remote Sensing: A Review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 4, 508–527. [Google Scholar] [CrossRef]
  302. Gonzalez, C.; Resano, J.; Plaza, A.; Mozos, D. FPGA Implementation of Abundance Estimation for Spectral Unmixing of Hyperspectral Data Using the Image Space Reconstruction Algorithms. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 248–261. [Google Scholar] [CrossRef]
  303. Lo, Y.C.; Wu, Y.C.; Yang, C.H. A 44.3-mW 62.4-fps Hyperspectral Image Processor for Spectral Unmixing in MAV Remote Sensing. IEEE J. Solid-State Circuits 2024, 60, 1818–1829. [Google Scholar] [CrossRef]
  304. Li, Z.; Chen, J.; Raharadja, S. GPU Implementation of Graph- Regularized Sparse Unmixing with Superpixel Structures. IEEE J. Sel. Topics Appl. Earth Obs. Remote Sens. 2023, 16, 3378–3389. [Google Scholar] [CrossRef]
  305. Gao, J.; Sun, Y.; Zhang, W. Multi-GPU Based Parallel Design of the Ant Colony Optimization Algorithm for Endmember Extraction from Hyperspectral Images. Sensors 2019, 19, 598. [Google Scholar] [CrossRef]
  306. Shen, X.; Bao, W.; Liang, H. Superpixel-Guided Preprocessing Algorithm for Accelerating Hyperspectral Endmember Extraction Based on Spatial-Spectral Analysis. J. Appl. Remote Sens. 2021, 15, 026514. [Google Scholar] [CrossRef]
  307. White, P.A.; Frye, H.; Christensen, M.F.; Gelfand, A.E.; Silander, J.A. Spatial Functional Data Modeling of Plant Reflectances. Ann. Appl. Stat. 2022, 16, 1919–1936. [Google Scholar] [CrossRef]
  308. Hong, D.; Zhang, B.; Li, X.; Li, Y.; Li, C.; Yao, J.; Yokoya, N.; Li, H.; Ghamisi, P.; Jia, X.; et al. SpectralGPT: Spectral Remote Sensing Foundation Model. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 5227–5244. [Google Scholar] [CrossRef] [PubMed]
  309. Li, J.; Liu, Y.; Wang, X.; Peng, Y.; Sun, C.; Wang, S.; Zhong, Y. Hyperfree: A Channel-Adaptive and Tuning-Free Foundation Model for Hyperspectral Remote Sensing Imagery. In Proceedings of the Computer Vision and Pattern Recognition Conference, Nashville, TN, USA, 10–17 June 2025; pp. 23048–23058. [Google Scholar]
Figure 1. Schematic of hyperspectral unmixing.
Figure 2. Publications in Web of Science since 2010 with keywords “hyperspectral unmixing” and “endmember extraction”.
Figure 3. A schematic diagram of the hyperspectral unmixing process.
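For reference alongside the schematic, most of the unmixing formulations surveyed above start from the linear mixing model, stated here in standard notation rather than drawn from the figure itself:

$$ \mathbf{y} = \mathbf{M}\mathbf{a} + \mathbf{n}, \qquad a_i \geq 0 \ \ \forall i, \qquad \sum_{i=1}^{p} a_i = 1, $$

where $\mathbf{y} \in \mathbb{R}^{B}$ is the observed pixel spectrum over $B$ bands, $\mathbf{M} \in \mathbb{R}^{B \times p}$ collects the $p$ endmember signatures column-wise, $\mathbf{a}$ is the abundance vector subject to the non-negativity (ANC) and sum-to-one (ASC) constraints, and $\mathbf{n}$ is additive noise.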
Figure 4. Partial statistics of the categories of hyperspectral endmember extraction methods via the Web of Science.
Figure 5. The summary of typical conventional HU algorithms [2,38,63,73,126,142,145,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175].
Figure 6. The general architecture of an autoencoder.
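To make the figure concrete, the following is a minimal PyTorch sketch of an autoencoder configured for linear unmixing, in which the encoder outputs abundances and the decoder weights play the role of endmembers. The layer widths, band count, and endmember count are illustrative assumptions, not the design of any specific network cited above.

```python
import torch
import torch.nn as nn

class UnmixingAutoencoder(nn.Module):
    """Minimal linear-unmixing autoencoder: encoder -> abundances,
    decoder weight matrix -> endmember signatures."""
    def __init__(self, n_bands: int, n_endmembers: int):
        super().__init__()
        # Encoder compresses a pixel spectrum into abundance estimates;
        # the softmax enforces non-negativity (ANC) and sum-to-one (ASC).
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, 128),  # hidden width is an arbitrary choice
            nn.ReLU(),
            nn.Linear(128, n_endmembers),
            nn.Softmax(dim=-1),
        )
        # Bias-free linear decoder: its (n_bands x n_endmembers) weight
        # matrix is interpreted as the learned endmember matrix M.
        self.decoder = nn.Linear(n_endmembers, n_bands, bias=False)

    def forward(self, x):
        abundances = self.encoder(x)
        reconstruction = self.decoder(abundances)
        return reconstruction, abundances

# Usage with hypothetical sizes: 200 bands, 4 endmembers, batch of 32 pixels.
model = UnmixingAutoencoder(n_bands=200, n_endmembers=4)
y = torch.rand(32, 200)
y_hat, a = model(y)
loss = torch.mean((y_hat - y) ** 2)  # plain MSE; SAD losses are also common
```

Training minimizes the reconstruction error alone, so endmembers and abundances are learned jointly without ground truth, which is why autoencoders are the dominant unsupervised deep unmixing backbone.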
Figure 7. General structure of CNN with 1D, 2D, and 3D.
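The distinction between the three convolution types can be illustrated with a short PyTorch snippet; the tensor sizes (200 bands, 64 x 64 pixels, batch of 8) and channel counts are hypothetical:

```python
import torch
import torch.nn as nn

# Illustrative inputs for a hyperspectral cube with 200 bands, 64 x 64 pixels.
spectra = torch.rand(8, 1, 200)          # per-pixel spectra for a 1D CNN
patches = torch.rand(8, 200, 64, 64)     # bands-as-channels patches for a 2D CNN
cube    = torch.rand(8, 1, 200, 64, 64)  # full spectral-spatial cube for a 3D CNN

conv1d = nn.Conv1d(1, 16, kernel_size=7, padding=3)    # spectral features only
conv2d = nn.Conv2d(200, 16, kernel_size=3, padding=1)  # spatial features across the band stack
conv3d = nn.Conv3d(1, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1))  # joint spectral-spatial

print(conv1d(spectra).shape)  # torch.Size([8, 16, 200])
print(conv2d(patches).shape)  # torch.Size([8, 16, 64, 64])
print(conv3d(cube).shape)     # torch.Size([8, 16, 200, 64, 64])
```

The 3D variant captures joint spectral-spatial structure at the cost of far more parameters and memory, which is the usual trade-off when choosing among the three.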
Figure 8. The summary of deep learning-based HU algorithms [54,177,182,184,188,189,190,192,196,203,206,207,222,223,225,226,227,235,237,241,245,254,257,262,267,268].
Figure 9. The development trend and application of hyperspectral unmixing.