Article

An Automated Machine Learning Framework for Adaptive and Optimized Hyperspectral-Based Land Cover and Land-Use Segmentation

by Ava Vali, Sara Comai * and Matteo Matteucci
Department of Electronics, Information and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milan, Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(14), 2561; https://doi.org/10.3390/rs16142561
Submission received: 20 May 2024 / Revised: 3 July 2024 / Accepted: 5 July 2024 / Published: 12 July 2024
(This article belongs to the Special Issue Deep Learning for the Analysis of Multi-/Hyperspectral Images II)

Abstract:
Hyperspectral imaging holds significant promise in remote sensing applications, particularly for land cover and land-use classification, thanks to its ability to capture rich spectral information. However, leveraging hyperspectral data for accurate segmentation poses critical challenges, including the curse of dimensionality and the scarcity of ground truth data, which hinder the accuracy and efficiency of machine learning approaches. This paper presents a holistic approach for adaptive optimized hyperspectral-based land cover and land-use segmentation using automated machine learning (AutoML). We address the challenges of high-dimensional hyperspectral data through a revamped machine learning pipeline, emphasizing feature engineering tailored to hyperspectral classification tasks. We propose a framework that dissects feature engineering into distinct steps, thus allowing for comprehensive model generation and optimization. This framework incorporates AutoML techniques to streamline model selection, hyperparameter tuning, and data versioning, thus ensuring robust and reliable segmentation results. Our empirical investigation demonstrates the efficacy of our approach in automating feature engineering and optimizing model performance, even without extensive ground truth data. By integrating automatic optimization strategies into the segmentation workflow, our approach offers a systematic, efficient, and scalable solution for hyperspectral-based land cover and land-use classification.

1. Introduction

Hyperspectral imaging (HSI) sensors gather extensive information through 3D hyperspectral images, whose spatial dimensions capture the visual characteristics of objects while the spectral dimension supports material identification. Extracting such information from hyperspectral images goes beyond what manual analysis can accomplish and requires advanced computer-aided techniques. Each hyperspectral image consists of numerous narrow-band images of the same scene, thus requiring tailored image processing methods for thorough analysis. Accordingly, while conventional image principles and processing techniques apply to hyperspectral data, their comprehensive utilization requires additional adaptation and effort.
Remote sensing not only serves as the origin and primary domain of HSI technology but also drives most of its advancements and applications. Its foremost application lies in earth observation, detecting and monitoring the physical characteristics of areas and objects on the earth. Monitoring changes in land cover is key for better formulating and managing regulations to prevent or mitigate damage that results from human activities [1]. Furthermore, monitoring subtle yet significant alterations in land cover aids in predicting and even preventing natural disasters and hazardous events [2]. The continuous temporal availability of remote sensing data can substantially facilitate the automatic extraction, mapping, and monitoring of terrestrial objects and land covers. Being sensitive to narrow spectral bands across a continuous spectral range, HSI arguably emerges as the most promising method for acquiring remote sensing data, as it is remarkably informative and holds the potential to revolutionize earth monitoring capabilities [3].
In recent decades, the exponential growth of computational power has shifted Artificial Intelligence (AI) to the forefront as the most influential and transformative technology of our time. AI harnesses the capabilities of computing systems for training and inference, which facilitates a broad spectrum of applications. Machine Learning (ML), a prominent subdiscipline of AI, employs statistical algorithms to emulate human learning processes by leveraging available data. Through this process, ML produces statistical models capable of making predictions for new, unseen data. A fundamental task within ML is classification, where objects are identified and categorized. Recent advancements in ML, particularly in image classification and segmentation, underscore the immense potential of these techniques in hyperspectral image analysis [4]. Research indicates that ML methods surpass traditional approaches in hyperspectral image analysis, which typically involves manual or semimanual examination of the spectral information to identify objects and materials [5]. Unlike conventional methods, ML autonomously explores the relationship between the spectral information and desired outcomes during the learning phase, thus exhibiting robustness against noise and outliers in the dataset. Among ML methodologies, supervised learning stands out as the preferred approach due to its simplicity, speed, cost-effectiveness, and reliability.
Semantic segmentation, the primary task in hyperspectral image analysis, entails assigning one or multiple labels to every pixel in a given image, thus generating segmented maps as output. This process utilizes both spectral and spatial information to exploit the physical and chemical characteristics of constituent objects and areas. Hyperspectral segmentation essentially performs pixel-level classification, thus distinguishing it from patch-level classification, which assigns labels to pixel patches. While Deep Learning (DL), a subgroup of ML methodologies, has significantly advanced semantic segmentation in RGB images in recent years [6], hyperspectral image segmentation presents additional complexity due to its spectral dimension [7], which usually contains the most informative details to distinguish the objects and areas within the image. Therefore, standard DL models need customization with additional considerations so that they can properly meet the requirements of the hyperspectral image segmentation task.
Despite the significant potential of ML in hyperspectral image analysis, several critical challenges persist that call for further research and development. One such challenge is the curse of dimensionality [8], a phenomenon wherein the computational complexity of a problem escalates drastically with the increase in variables, dimensions, or features. This challenge is particularly pronounced in hyperspectral image analysis due to its spectral dimension, which leads to sparsity and exacerbates the curse of dimensionality. Consequently, efficient exploitation of information from hyperspectral images requires adopting proper strategies for dimensionality reduction. Despite ongoing efforts, devising approaches that effectively reduce dimensionality while maximizing the preservation of valuable information remains an open challenge, and solving it is crucial for enhancing the practicality of ML solutions in real-world scenarios [9].
Another critical obstacle in hyperspectral image analysis, particularly for DL techniques, is ground truth scarcity [10]. DL methods often require extensive training data with a ground truth, which is typically challenging to obtain. This scarcity not only hampers model training but also leads to overfitting and low model performance. Consequently, classical ML techniques like Support Vector Machines (SVMs) may outperform DL in scenarios with limited training data [11]. Strategies such as data augmentation [12], semisupervised learning [13,14], and transfer learning [15,16,17] aim to address this challenge by, respectively, augmenting existing datasets, labeling unlabeled data, or transferring knowledge from pretrained models to new datasets, thereby partially mitigating the impact of ground truth scarcity on DL performance.
Ensuring the robustness, reliability, and generalizability of ML models poses another significant challenge [18,19]. Current datasets often lack the necessary variability to develop robust models capable of performing reliably under diverse conditions. Factors such as acquisition time, setup variations, sensor resolutions, and noise levels are frequently overlooked, thus leading to the development of models based on limited data scenarios. Building robust models that can accommodate a wide range of conditions remains a pressing challenge, which is essential for ensuring the practicality of ML solutions in real-world applications.
Furthermore, the accelerating pace of AI development raises concerns regarding computational limitations. Moore’s Law [20], which has historically explained computational progress, is nearing its physical boundaries, which calls for innovative alternative approaches to sustain future advancements. Packing more transistors onto a microchip is approaching the physical limits of miniaturization and will soon no longer be possible [21]. Meanwhile, memory production faces similar constraints, as demand, particularly driven by AI and the Internet of Things (IoT), outpaces production capacities [22]. Addressing these challenges requires concerted efforts in software optimization, algorithmic innovations, and architectural advancements to ensure the continued progress of AI and ML technologies [23,24].
One of the most significant developments in DL is the emergence of end-to-end pipeline structures. These structures integrate the feature engineering process with training–validation stages, thus consolidating the conventional pipeline’s four main steps: preprocessing, feature engineering, training–validation, and postprocessing. Despite the growing popularity of end-to-end DL models for hyperspectral image segmentation [25], several concerns and challenges persist, thus making the classic four-stage ML pipeline structure more suitable for real-world applications, as explained in [7]. Dimensionality reduction poses another critical challenge, thus requiring the careful preservation of valuable information. Classical ML pipelines address this by conducting feature engineering, which not only streamlines data but also enhances model robustness and reliability by projecting features into a more accurate space, thereby helping to mitigate overfitting issues caused by the limited availability of ground truth data. Although feature engineering improves classifier performance and optimizes resource consumption, its heuristic nature does not guarantee optimal solutions. In contrast, end-to-end models leverage global optimization to identify optimal features during training and validation, thus offering potential improvements in efficiency and performance.
This paper proposes a framework that integrates the classic four-stage ML pipeline structure with end-to-end optimization capabilities, specifically tailored to address challenges encountered in hyperspectral segmentation tasks in real-world applications. Feature engineering enables dimensionality reduction, hence lowering the impact of ground truth scarcity on model performance. We propose a strategy to decompose feature engineering into distinct inner steps, thus enabling the design and development of a framework that generates and optimizes multiple models through various combinations of these steps, including scenarios where feature engineering steps are omitted. This approach facilitates optimized model selection and enables comparative evaluations across different pipeline configurations.
Furthermore, we extend this framework concept into a prototype Automated Machine Learning (AutoML) system [26]. An AutoML framework automates the end-to-end process of applying machine learning to real-world problems, including data preprocessing, feature engineering, model selection, and hyperparameter tuning [27,28]. By incorporating various consolidated techniques at different stages of the pipeline, our system identifies the most suitable methods for specific prediction tasks and input data. Our holistic scheme addresses diverse optimization requirements, including data versioning, model selection, and hyperparameter tuning. This enhances the generalizability, reliability, robustness, repeatability, and tractability of the resulting models while also allowing for effective monitoring and the mitigation of overfitting. The efficient implementation of our optimization scheme facilitates resource management and minimizes the risk of system failure. Additionally, our integrated midprocess statistics reporting enables a systematic review of AutoML behavior and choices, thus providing deeper insights into the effectiveness of each step. Finally, we evaluate our framework using a well-established problem with a widely cited dataset in the literature, thus allowing readers to benchmark our results against state-of-the-art approaches.

2. Materials and Methods

2.1. Framework Overview

End-to-end DL approaches face challenges such as processing time inefficiencies and concerns regarding model robustness and generalizability, particularly with high-dimensional data like hyperspectral images. Wolpert’s “No Free Lunch” theorem [29,30] highlights the absence of a universally superior supervised learning algorithm, thus emphasizing the need to tailor approaches to individual classification problems. Moreover, although end-to-end DL models show a great capacity to generalize well in practice, the theoretical basis for this behavior remains unclear and is still being questioned [31,32,33,34]. DL models also require extensive training data, thus exacerbating the challenge of ground truth scarcity in hyperspectral image segmentation tasks. Despite attempts to mitigate these challenges through techniques like unsupervised learning, the curse of dimensionality and increasing model complexity hinder processing efficiency, thus making traditional four-stage machine learning pipelines more suitable for real-world applications.
Figure 1 shows the high-level workflow map we followed for designing and implementing the approach we propose in this research. It comprises three key phases: data engineering, model generation and training, and prediction and evaluation. In the data engineering phase, tasks involve preparing datasets by collecting, preprocessing, and splitting them into train and test sets. This phase is critical for ensuring model performance and the comparability of produced models. The model generation and training phase focuses on training models, tuning hyperparameters, and packaging models. This phase is resource-intensive and aims to enhance classifier performance through feature engineering methodologies. At its core is the model tweaking process, which systematically combines the different proposed steps of feature engineering with diverse classifiers to optimize hyperparameters. This iterative approach ensures that the resulting models are finely tuned for optimal performance across diverse evaluation metrics. The prediction and evaluation phase utilizes the optimized models to predict unseen data, thus extracting evaluation metrics to compare and analyze model performance and ultimately leading to conclusions.

2.2. Model Pipeline Configuration

As reasoned before, the proposed model pipeline configuration in this research adopts a four-stage ML structure comprising preprocessing, feature engineering, core classification/segmentation, and postprocessing steps, with the latter being optional and deferred for later inclusion. This configuration supports two modes: training and prediction. During training, a trained pipeline, or model, is generated and then utilized in the prediction mode to assign labels to new data.

2.2.1. Data Preprocessing and Feature Engineering

Data preprocessing is a critical stage that refines datasets by removing noise and validating data correctness. While some studies include feature engineering tasks within data preprocessing, in this study, we separated them for clarity and conducted preprocessing tasks before data versioning.
In general, feature engineering, as the core of the four-stage ML pipeline structure, aims to optimize features to improve model performance. In the context of high-dimensional data classification tasks, such as hyperspectral image segmentation, feature engineering becomes particularly crucial due to the mathematical complexities introduced by the data’s high dimensionality. By removing redundant information and reducing dimensionality, feature engineering enhances the computational efficiency, performance, and reliability of ML models. The proposed framework assesses different types of feature engineering methodologies, including feature transformation, feature selection, and feature extraction, with each serving distinct roles in enhancing the classification performance. Feature transformation involves mathematical operations to improve feature consistency, while feature selection reduces data dimensionality by selecting relevant subsets. Feature extraction projects data into a lower-dimensional feature space, thus further enhancing computational efficiency. By systematically optimizing the feature engineering steps, the framework aims to improve classification performance and optimize resource consumption, thus providing valuable insights about the dataset and its potential applications.
The framework employs a brute-force assembly process within an iterative loop for hyperparameter tuning, thus generating a set of models with different combinations of feature engineering steps, which are schematically shown in Figure 2. These combinations are strategically positioned within the model pipeline to maximize effectiveness. Feature transformation is placed at the beginning to preprocess data effectively, while feature extraction, which incorporates a form of feature selection, is positioned last. This positioning ensures the optimal utilization of feature engineering methodologies and avoids redundancy in the pipeline. By delineating clear categories of feature engineering and their respective roles, the framework offers a systematic approach to assess and optimize feature engineering for diverse ML tasks, thus contributing to enhanced model performance and resource efficiency. Our chosen approach and methodology for each type of feature engineering are elucidated as follows (a minimal code sketch of the resulting assembly appears after the list):
  • Feature transformation (FT): In this study, we employed two main techniques for feature transformation, Normalization (Min–Max scaling) and Standardization (standard scaling), due to their effectiveness in enhancing classification accuracy. Normalization is suitable for datasets with small standard deviations and non-normal feature distributions, while standardization helps transform data to a normal distribution, thus improving convergence and classification performance. Other scaling techniques like Maximum Absolute and Robust scalers were excluded due to the dataset characteristics, while Quantile and Power transformer scalers were deemed unsuitable for the study’s purposes [35,36,37].
  • Feature selection (FS): The automatic feature selection approaches typically involve a combination of feature subset search methods and evaluation techniques to rank or prioritize features based on their correlations or importance in predictive tasks. These methods are categorized into Wrappers, Filters, and Embedded methods [38]. Wrappers utilize a predictor to assess the usefulness of feature subsets, thus often leading to overfitting, while Filters rely on correlation coefficients or Mutual Information among features, though they may fail to find the best feature set in certain scenarios due to insufficient sample size. Embedded methods, on the other hand, embed the feature subset generator within model training to reduce overfitting and increase efficiency.
Embedded feature selection methods aim to optimize both the goodness of fit and the number of features, which is often achieved through direct objective optimization or thresholding techniques. Linear models with LASSO regularization and Random Forest are examples of direct objective optimization approaches, while thresholding methods like Ridge regularization offer an alternative solution. However, selecting the optimal threshold or number of features remains a challenge, thus often requiring empirical tuning. Another group of embedded methods utilizes nested subsets to manipulate feature subset search, thus employing forward selection or backward elimination techniques. While forward selection is computationally efficient, backward elimination provides a more accurate subset in a general context. Therefore, we employed the Recursive Feature Elimination with Crossvalidation approach (RFECV), which is a common backward elimination approach incorporating crossvalidation for robust feature selection.
For this study, we incorporated Random Forest (RF), Logistic Regression (LR) using Lasso (L1) and Ridge (L2) regularization, the Linear Support Vector Machine (LinearSVM), and K-Nearest Neighbors (KNN) into the RFECV structure as the base estimators [39,40]. RF assesses feature importance by evaluating the decrease in node impurity within its decision trees, thus using Gini impurity to quantify the likelihood of misclassifying a random observation. The LinearSVM and LR determine feature importance based on the coefficients assigned to each variable within their linear model. Although KNN does not inherently offer a measure of feature importance, it can be extracted using the Permutation Feature Importance technique [41], which involves permuting feature values to gauge their impact on model precision, thereby integrating feature importance within the KNN model.
  • Feature extraction (FE): Feature extraction techniques primarily focus on reducing dimensionality by transforming data from a high-dimensional feature space to a lower-dimensional one. Unlike feature selection, which discards certain features, feature extraction aims to summarize information while highlighting important details and suppressing less relevant ones. While convolutional neural networks excel at feature extraction, the computational complexity and demand for extensive training data pose challenges, thus aligning with concerns regarding end-to-end pipelines. Therefore, in this study, we employed alternative feature extraction techniques such as Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), Kernel Fisher’s Discriminant Analysis (KFDA), and Locally Linear Embedding (LLE).
PCA computes principal components for linear dimensionality reduction by maximizing variance via eigenvectors [42], while KPCA extends this by using a kernel matrix to enable nonlinear dimensionality reduction [43,44]. Generalizing the PCA approach, ICA aims to extract maximally independent components, instead of principal components, from the original features, thus relying on the assumption of mutual statistical independence and non-Gaussian distribution of the components [45]. LDA, unlike PCA and ICA, is a supervised technique that employs a discriminant rule to project data into a lower-dimensional space, typically reducing the dimensions to K − 1, where K represents the number of classes. It can further reduce the dimensions by projecting data into a subspace with dimension L, if L < K − 1, which is similar to PCA’s approach of selecting the first L eigenvectors of the projection [46]. KFDA focuses on maximizing the ratio of between-class variance to within-class variance using Fisher’s Discriminant Analysis in the feature space [47], while LLE preserves the distances within local neighborhoods, thus mapping data to a low-dimensional space based on optimal linear reconstructions from nearby data points [48].
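To make the brute-force assembly concrete, the following minimal sketch, assuming scikit-learn, builds one candidate pipeline per feature engineering combination; the estimator choices, parameter values, and names such as build_pipelines are our own illustration, not the framework’s released code.

```python
# Minimal sketch of the brute-force pipeline assembly (illustrative).
from itertools import product

from sklearn.base import clone
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.feature_selection import RFECV
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Candidate implementations per feature engineering step; None means
# the step is omitted from that pipeline configuration.
transformers = {"none": None, "std": StandardScaler(), "minmax": MinMaxScaler()}
selectors = {"none": None,
             "rfecv_rf": RFECV(RandomForestClassifier(n_estimators=100), cv=3)}
extractors = {"none": None, "pca": PCA(n_components=30)}

def build_pipelines(classifier):
    """One candidate pipeline per FT/FS/FE combination (brute force)."""
    pipelines = {}
    for (ft, t), (fs, s), (fe, e) in product(
            transformers.items(), selectors.items(), extractors.items()):
        steps = [(name, clone(step)) for name, step in
                 (("ft", t), ("fs", s), ("fe", e)) if step is not None]
        steps.append(("clf", clone(classifier)))
        pipelines[(ft, fs, fe)] = Pipeline(steps)
    return pipelines

candidates = build_pipelines(SVC(kernel="rbf"))  # 3 x 2 x 2 = 12 pipelines
```

Each resulting pipeline, including the configurations with all feature engineering steps omitted, can then be trained and tuned under the same optimization loop, which is what enables the comparative evaluations described above.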

2.2.2. Core Image Segmentation Task

Hyperspectral image analysis typically involves retrieving valuable information from both the spectral and spatial characteristics of image pixels. Segmentation methods for hyperspectral images vary, thus ranging from spectral-based approaches—often referred to as pixelwise classification approaches—to those integrating both spatial and spectral data—which are known as spatial–spectral classification. Common methods for incorporating spatial information include superpixel approaches [49], spatial filtering [50], and utilizing 3D CNN structures [51]. In scenarios where the spatial information is less relevant, such as material identification or component detection, the spectral information becomes paramount.
While some studies advocate for spatial–spectral approaches in hyperspectral image segmentation, they often fail to demonstrate advantages over spectral pixelwise classification methods. Coarse spatial resolution can further diminish the usefulness of spatial details, particularly in land-use and land cover segmentation problems. Accordingly, pixelwise classification for the hyperspectral segmentation task has been employed in this study. We recommend integrating spatial features through image processing tools in the preprocessing and postprocessing phases for future cases where the spatial characteristics are more meaningful, as in high-resolution scenarios. In the evaluation of the proposed framework, two popular classifiers, the Nonlinear Support Vector Machine with Radial Basis Function Kernel (KernelSVM-RBF) and Multilayer Perceptron (MLP), were selected to assess the impact of optimized feature engineering on classification performance, thus ensuring comparability with state-of-the-art results. For MLP, we adopted a shallow, wide network with a single hidden layer of size 1000 to simplify the framework evaluation by reducing the classifier complexity and highlighting the impact of feature engineering steps. However, our chosen classifiers for this framework may not represent the optimal choice, and better classifiers could further improve the results.
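A minimal sketch of these two core classifiers, assuming scikit-learn, is shown below; the MLP’s single hidden layer of 1000 units follows the text, while the remaining parameter values are illustrative seeds for the grid search.

```python
# Sketch of the two core classifiers used in the framework evaluation.
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

kernel_svm = SVC(kernel="rbf", C=1.0, gamma="scale")
mlp = MLPClassifier(hidden_layer_sizes=(1000,), max_iter=500)

# Pixelwise classification treats each pixel as one sample whose
# features are its spectral bands, so an (H, W, B) hyperspectral cube
# is reshaped to an (H * W, B) design matrix before fitting.
```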

2.3. Optimization Strategies

Arguably, the most critical research challenges in using ML techniques concern how to solve, verify, validate, and compare the employed models. Accordingly, our AutoML framework is designed to incorporate different optimization strategies to solve the model’s inner mathematical problem, tune its parameters, and validate the results. It is specifically designed to ensure the solution’s reliability, generalizability, reproducibility, and comparability. Typically, optimization involves regularization, hyperparameter tuning, and model selection. In addition, we adopted data versioning to facilitate managing the iterative process of model development by creating distinct versions of datasets to track changes and updates.
As a primary step, data versioning was achieved through stratified k-folding—a sampling approach for crossvalidation—where the dataset was divided into representative folds for training and testing, with k = 3 . These versions were stored separately, thus ensuring reproducibility and aiding in model comparison and selection. The testing portion remained untouched for final evaluation and model selection, while the rest was used for training and hyperparameter tuning. We performed random shuffling before k-folding to ensure unbiased samples.
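A minimal sketch of this data-versioning step, assuming scikit-learn and NumPy, follows; the file naming is our own illustration.

```python
# Data versioning: shuffled stratified 3-fold splitting, with each
# train/test fold pair persisted as its own dataset version.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def make_versions(X, y, k=3, seed=0):
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
    for v, (train_idx, test_idx) in enumerate(skf.split(X, y)):
        np.savez(f"dataset_version_{v}.npz",
                 X_train=X[train_idx], y_train=y[train_idx],
                 X_test=X[test_idx], y_test=y[test_idx])
```

Storing the versions separately, as described above, keeps the test portions untouched for final evaluation while making every split reproducible for model comparison.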
Regularization is an added objective, specific to each ML methodology, that helps to avoid overfitting by defining a loss function to optimize the model’s inner parameters. On the other hand, hyperparameter tuning optimizes input parameters through an iterative process, often conducted through heuristic approaches like Grid Search and Random Search. We adopted Grid Search to ensure the reproducibility and comparability of the models, which is especially crucial for small datasets, as Random Search’s high variance and lack of reproducibility can hinder performance evaluation.
Returning to crossvalidation, the same stratified k-folding with k = 3 was also used for hyperparameter tuning. Crossvalidation helps monitor the possibility of overfitting by systematically rotating through different subsets of the data for training and validation. To prevent bias in evaluating the model performance, we adopted the Nested Crossvalidation (or Double Crossvalidation) approach, as depicted in Figure 3. This involves outer iterations for model selection and inner iterations for hyperparameter tuning, with each using stratified crossvalidation. The best-performing model from the inner iterations was retrained on its entire training version subset to produce the final model.
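The loop below is a minimal sketch of this nested strategy, assuming scikit-learn; X, y, and pipeline are assumed from the earlier sketches, and the parameter grid is illustrative.

```python
# Nested crossvalidation: outer folds for model selection, inner
# folds for grid-search hyperparameter tuning.
from sklearn.model_selection import GridSearchCV, StratifiedKFold

outer = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=1)
param_grid = {"clf__C": [1, 10, 100], "clf__gamma": ["scale", 0.01]}

outer_scores = []
for train_idx, test_idx in outer.split(X, y):
    search = GridSearchCV(pipeline, param_grid, cv=inner, n_jobs=-1)
    # refit=True (the default): the best inner model is retrained on
    # the entire outer-training portion, as described in the text.
    search.fit(X[train_idx], y[train_idx])
    outer_scores.append(search.score(X[test_idx], y[test_idx]))
```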
The feature engineering proposed in this study emphasizes the independence of feature transformation and selection from the classifier, thus ensuring flexibility and adaptability across different ML tasks. While techniques like PCA and LDA allow for the flexible adjustment of feature components, others like LLE, ICA, and KFDA pose challenges due to the varying interpretations of component numbers. Because feature extraction’s effectiveness depends on the classifier’s performance, the number of components is tuned as a pipeline hyperparameter across different classifiers to determine its optimal value, as depicted in the schematic diagram in Figure 4.
As previously described, we implemented an embedded approach known as backward elimination for feature selection optimization. This method ensured the independence of the feature selection process, thus leading to a notable reduction in the computational load and enabling unbiased model inference. Accordingly, each outer–inner iteration combination of feature transformation and selection technique was conducted separately before model training and hyperparameter tuning. The best-performing feature subset across iterations was selected through maximum voting. Figure 4 illustrates this strategy, thus demonstrating how the chosen feature subset is integrated into the pipeline configuration to streamline dataset shrinkage.
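The snippet below sketches one plausible reading of this maximum-voting rule, assuming per-feature majority voting over the boolean support masks produced by the RFECV runs; the function name and threshold are our own illustration.

```python
# Maximum voting over RFECV support masks: a feature is retained when
# it is selected in the majority of outer-inner fold runs.
import numpy as np

def vote_feature_subset(support_masks):
    """support_masks: list of boolean arrays (rfecv.support_ per run)."""
    votes = np.sum(support_masks, axis=0)        # per-feature vote count
    return votes > (len(support_masks) / 2.0)    # strict majority wins
```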
Finally, we used the framework to select the most accurate model for hyperspectral classification tasks, thus implicitly determining the most effective techniques. As explained previously, the framework incorporates several combinations of pipeline steps, including the end-to-end pipelines (without feature engineering) for explanatory purposes, thus allowing us to compare and demonstrate the efficacy of feature engineering. Accordingly, the primary evaluation compared the outcomes of the proposed four-stage pipeline with an end-to-end structure, thus assessing the computational time and predictive performance. The performance evaluation involved calculating the predictive accuracy using the test portion of the dataset from the outer iterations, with the accuracies averaged for comparison. Additional metrics like F1 Score, Precision, and Recall were also calculated to provide further statistical insight into model performance.
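A minimal sketch of the per-fold metrics gathered for this comparison, assuming scikit-learn, is given below; macro averaging over classes is our assumption for the multiclass setting.

```python
# Per-fold evaluation metrics, averaged over the outer folds later.
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score)

def fold_metrics(y_true, y_pred):
    return {"accuracy": accuracy_score(y_true, y_pred),
            "f1": f1_score(y_true, y_pred, average="macro"),
            "precision": precision_score(y_true, y_pred, average="macro"),
            "recall": recall_score(y_true, y_pred, average="macro")}
```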
The secondary objective emphasizes the stand-alone nature of feature engineering, thus reducing the computational burden and providing insights for robust dimension reduction. As mentioned earlier, unlike other feature engineering steps, feature extraction’s parameter optimization depends on the choice of the classifier. Therefore, we can assess if tuning the feature extraction parameters improves the prediction and whether this improvement is influenced by the classifier choice.

2.4. Implementation

The implementation of the framework is based on Python 3.6.x, chosen for its widespread support and compatibility with all utilized packages. Various libraries, including Pandas, NumPy, Matplotlib, mpl_toolkits, scikit-learn, scikit-image, kfda, Pillow, pickle, SciPy, math, and others, were employed for tasks such as data analysis and result visualization. The deployment and execution occurred on a Docker-managed server utilizing CPU cores exclusively. The server’s multiuser nature dictated the CPU core allocation: 10 cores were designated for feature transformation–selection, and the grid search was split into 16 parallel executions, each utilizing 5 cores. Communication with the server and result visualization were facilitated through the Jupyter Notebook web service, enabling remote access via SSH tunnelling for efficient management and access to execution results.

2.5. Testing Dataset

To evaluate the framework, we used the Indian Pines [52] dataset, a hyperspectral image capturing a scene from the Indian Pines test site in northwest Tippecanoe County, Indiana, US, covering a 2 × 2 mile portion that includes the Purdue University Agronomy farm and its surroundings. Captured by the AVIRIS sensor aboard a NASA aircraft on 12 June 1992, the image comprises 145 × 145 pixels and 224 spectral reflectance bands in the range of 0.4–2.5 × 10⁻⁶ m (0.4–2.5 μm). Accessible through the Purdue University Research Repository [53], the dataset has 220 spectral bands after noise removal. It is already calibrated, and the pixel values are proportional to radiance. The ground truth contains 16 classes, predominantly related to agriculture and some to forests and natural perennial vegetation, and also features elements like highways, a rail line, housing, and built structures, with some unlabeled areas. Figure 5 shows the number of labeled pixels per class and their percentage in the ground truth set.
The choice of the dataset was intentionally small in volume to align with the primary aim of this research: tackling the ground truth scarcity issue. In real-world applications of remote sensing data, collecting accurate and reliable ground truth information for each potential class on the ground is not only costly and labor-intensive but also often difficult due to limited accessibility to regions and the temporal variability of on-ground objects. Consequently, ground truth scarcity is an inevitable issue with hyperspectral datasets. Therefore, using the Indian Pines dataset, which encapsulates the challenges of high-dimensional data with limited labeled samples, allowed us to effectively evaluate the proposed framework’s ability to handle these constraints. The inherent complexity of the dataset, due to the variety of classes and the presence of mixed pixels, further tested the robustness and reliability of our AutoML framework in hyperspectral image segmentation tasks.
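For readers reproducing the setup, the sketch below loads the cube and flattens it for pixelwise classification; the .mat file and key names follow commonly redistributed copies of the dataset and may differ from the Purdue repository release.

```python
# Loading the Indian Pines cube for pixelwise classification.
import numpy as np
from scipy.io import loadmat

cube = loadmat("Indian_pines_corrected.mat")["indian_pines_corrected"]
gt = loadmat("Indian_pines_gt.mat")["indian_pines_gt"]

H, W, B = cube.shape                      # 145 x 145 spatial, B bands
X = cube.reshape(-1, B).astype(np.float64)
y = gt.reshape(-1)

labeled = y > 0                           # label 0 marks unlabeled pixels
X, y = X[labeled], y[labeled]
```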

3. Results

3.1. Part1: Feature Transformation–Selection

The distribution and value range of the dataset features are illustrated in Figure 6, which showcases the impact of feature transformation techniques on data normalization and scaling. This figure presents box–whisker plots of the original, standardized, and min–max-scaled versions of the Indian Pines dataset’s feature distribution, thus offering insights into the population and distribution based on quartiles. The x axis denotes spectral bands/features, while the y axis represents intensity values, with green boxes indicating the Interquartile Range (IQR) and slim black lines depicting the range from minimum to maximum feature values. White circles represent outliers, and red points denote feature medians. Notably, the original data exhibit wide variations in the feature value ranges, potentially stemming from inherent issues with the hyperspectral scanners. Standardization and min–max scaling were proposed to address these issues, with standardization proving effective in achieving consistent range scaling while remaining largely unaffected by outliers. Accordingly, we expected our proposed framework to retain the feature transformation step, and specifically standardization, in its optimized final model.
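A plot in the style of Figure 6 can be reproduced with the sketch below, assuming Matplotlib and the X matrix from the dataset-loading sketch; all styling choices here are illustrative.

```python
# Per-band box-whisker plots comparing the original, standardized,
# and min-max-scaled feature distributions.
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler, StandardScaler

versions = {"original": X,
            "standardized": StandardScaler().fit_transform(X),
            "min-max scaled": MinMaxScaler().fit_transform(X)}

fig, axes = plt.subplots(len(versions), 1, figsize=(12, 9), sharex=True)
for ax, (name, data) in zip(axes, versions.items()):
    ax.boxplot(data)                  # one box-whisker plot per band
    ax.set_title(name)
    ax.set_ylabel("intensity")
axes[-1].set_xlabel("spectral band")
plt.tight_layout()
plt.show()
```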
As explained previously, the execution of combined feature transformation and feature selection was carried out separately, before the whole pipeline execution. Figure 7 shows how these combinations performed on each outer fold. As a reminder, we adopted the nested crossvalidation structure, where the outer folds, aiming to avoid biased decisions, serve the model selection purpose, and the inner folds are used for hyperparameter tuning. The figure illustrates the resulting score/accuracy, denoted on the y axis, for each recursion of the RFECV models, denoted on the x axis. Each recursion handled a specific subset of features reached by gradual elimination. The results highlight the inefficacy of RFECV–KNN, leading us to eliminate this feature selection option from further framework execution. On the other hand, the RFECV models with embedded RF and the linear SVM showed promising performance across all feature selection cases. Looking deeper into the results, scaling the data led to less biased feature selections for the RFECV–LR models. In contrast, in the case of the embedded RF and linear SVM, all the feature transformation cases performed almost equally well. Moreover, the results indicate that RFECV–RF allowed more dimensionality reduction, and RFECV–LR with Ridge regularization maintained the highest number of channels.

3.2. Part 2: The Whole Framework

Benchmarking is essential for evaluating the framework’s performance against established standards. It helps compare the framework’s capabilities with state-of-the-art methods, thus giving a clear idea of its contribution to the field. The proposed framework provides all statistical details per pipeline configuration. The results notably reveal the low validation accuracy of the pipelines without feature engineering across our classifier choices. Including feature transformation techniques within the pipeline configuration shows a significant improvement, thus proving the impact of data preparation on model performance. Notably, standardization emerged as a superior choice over min–max scaling, particularly enhancing the performance of distance-based techniques like SVM-RBF. Integrating feature extraction after feature transformation, but omitting feature selection, led to a modest improvement in the accuracy without significant dimensionality reduction. In contrast, relying solely on the combination of feature transformation and feature selection techniques enhanced the classification performance more effectively than the latter combination. We will demonstrate and discuss these results later in the Discussion section.
As explained previously, due to the dependence of feature extraction on the classifier’s performance, its hyperparameter (the number of components) was tuned as part of the pipeline hyperparameters across different classifiers to determine the optimal value. Accordingly, the final evaluation of the framework’s optimized models based on feature extraction techniques for the SVM-RBF and MLP core classifiers is presented in Table 1 and Table 2, respectively. The five best-performing configurations are highlighted in yellow. Comparing the results across tables, it becomes evident that models incorporating feature selection without feature extraction yielded comparable or superior performance to those with feature extraction, thus suggesting that the latter may not necessarily enhance accuracy. Figure 8 presents the classification maps generated by the top-performing models highlighted in Table 1 and Table 2 and visually showcases their comparable performance.

4. Discussion

In ML, performance is a trade-off between the accuracy and the time of the process. This stems from the fact that there is no absoluteness in any AI task, nor in ML. Specifically, ML adopts statistical learning techniques that rely on error minimization, which never reaches zero when dealing with real-world datasets. The efficiency of an ML model is likewise defined as a compromise between these two factors, accuracy and time, which are restricted by the requirements of the specified application. Accordingly, the primary objective of our proposed framework is to improve hyperspectral image segmentation through feature engineering, which also involves striking a balance between accuracy and inference time. A reduced inference time is particularly crucial for real-time applications, where reducing latency and increasing throughput are of concern.
As previously elucidated, our framework encompasses various models constructed from all conceivable pipeline configurations, thus facilitating a benchmarking setup to assess the impact of feature engineering on hyperspectral image segmentation tasks. Figure 9 illustrates a comparison of the mean accuracy and mean inference time across different models, incorporating or excluding various feature engineering steps. To establish a benchmark for comparing the effect of feature engineering, we categorized the pipeline structures based on state-of-the-art approaches into five reference categories: the end-to-end pipeline or “no feature engineering steps”, “only with feature transformation (FT) step”, “with FT and feature extraction (FE) steps”, “with FT and feature selection (FS) steps”, and “with all feature engineering steps” pipelines. Figure 9 depicts the mean accuracy and inference time of the best-performing model across all these configuration categories. It is important to note that, for clarity, all values are presented relative to the “only with FT step” category, whose mean accuracy and inference time are therefore depicted as zero. Notably, standardization consistently emerged as the optimal feature transformation technique, with PCA being the preferred feature extraction method across all cases. Additionally, RFECV–LR with L1 regularization was identified as the optimal feature selection technique.
The results reveal a consistent improvement in the prediction performance and a reduction in the inference time with the inclusion of feature engineering steps. Notably, while feature transformation and feature selection significantly enhanced performance, feature extraction had a relatively minor impact. This underscores the efficacy of the proposed feature transformation–selection approach for dimensionality reduction while improving performance. It is suggested that focusing on feature transformation and selection, independent of the core classifier choice, can greatly enhance AutoML performance.
We also assessed the framework using another hyperspectral dataset from an industrial setup, which is reported thoroughly in [54]. This dataset focuses on a specific application: detecting and measuring residual contaminants in the production of washing machine cabinets for zero-defect manufacturing. The choice of this dataset is significant because it not only involves a different sensor and setup but also targets classes with less distinct spectral signatures, thus providing a more stringent evaluation of the framework’s capabilities. Overall, the results are consistent: the AutoML framework produced an optimized, robust model and improved the overall performance. The findings underscore the importance of tailored feature engineering strategies in optimizing model performance and efficiency across diverse datasets and scenarios. The industrial assessment provides a comprehensive and thorough test of the framework’s transferability, highlighting its capability to adapt to different contexts and confirming its generalizability beyond similar case studies.

5. Conclusions

In this study, we introduced an AutoML framework tailored for hyperspectral image segmentation, highlighting the effectiveness of a classic four-stage ML pipeline structure that integrates feature engineering to address data challenges. Through feature engineering and dimensionality reduction techniques, we mitigated the necessity for a large quantity of labeled data, which is a particular challenge for end-to-end deep learning models. Additionally, the proposed framework not only improved overall performance but also generated models with more accurate predictions within shorter inference times. Through a multilateral optimization approach, the framework ensured model robustness and reliability by mitigating overfitting and bias concerns. The transparency of the framework’s process, allowing access to all models and statistical information, facilitated the validation of the proposed approach’s effectiveness. Overall, the study successfully achieved its objectives in addressing challenges with hyperspectral image-based machine learning solutions.
The hyperspectral-based AutoML framework presented in this study offers a streamlined solution for developing supervised ML models tailored to specific hyperspectral image segmentation tasks. Automating the task-specific feature engineering process simplifies what is typically a complex and expertise-intensive endeavor. Notably, the proposed feature selection component, identified as a key factor in enhancing predictive performance, can pinpoint irrelevant features, thus enabling informed decisions regarding data collection, transmission, and storage efficiency.
In considering future directions for AutoML, it is crucial to prioritize sustainability, efficiency, scalability, and inclusiveness. Addressing the resource-intensive nature and carbon emissions associated with AutoML processes is essential, and efforts should be made to minimize footprints through optimization criteria like overall runtime, energy consumption, and CO2 emissions. Enhancing efficiency remains a key focus, particularly through algorithmic improvements and resource consumption optimization, and there is an opportunity to assess and refine frameworks for better performance. Additionally, scalability and inclusiveness are critical for making AutoML more accessible across different fields and applications. The proposed framework offers problem-specific solutions and can be scaled up independently or integrated into other AutoML frameworks. Future research should explore alternative feature engineering sequences and incorporate multiple techniques within each category to improve dimensionality reduction effectiveness. Evaluating additional ML and image processing methodologies, along with exploring new evaluation metrics and model selection techniques, can provide a more comprehensive analysis of AutoML frameworks. Moreover, extending the evaluation to new applications will help establish the generalizability of the proposed approach across diverse hyperspectral datasets and applications.

Author Contributions

Conceptualization, A.V., S.C. and M.M.; methodology, A.V., S.C. and M.M.; investigation, A.V.; resources, S.C. and M.M.; data curation, A.V.; writing—original draft preparation, A.V.; writing—review and editing, S.C. and M.M.; visualization, A.V.; supervision, S.C. and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data derived from public domain resources, available at https://engineering.purdue.edu/~biehl/MultiSpec/hyperspectral.html (accessed on 1 July 2024).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. De Leeuw, J.; Georgiadou, Y.; Kerle, N.; De Gier, A.; Inoue, Y.; Ferwerda, J.; Smies, M.; Narantuya, D. The function of remote sensing in support of environmental policy. Remote Sens. 2010, 2, 1731–1750. [Google Scholar] [CrossRef]
  2. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858. [Google Scholar] [CrossRef]
  3. Goetz, A.F. Three decades of hyperspectral remote sensing of the Earth: A personal view. Remote Sens. Environ. 2009, 113, S5–S16. [Google Scholar] [CrossRef]
  4. Camps-Valls, G.; Tuia, D.; Bruzzone, L.; Benediktsson, J.A. Advances in hyperspectral image classification: Earth monitoring with statistical learning methods. IEEE Signal Process. Mag. 2013, 31, 45–54. [Google Scholar] [CrossRef]
  5. Reichstein, M.; Camps-Valls, G.; Stevens, B.; Jung, M.; Denzler, J.; Carvalhais, N.; Prabhat. Deep learning and process understanding for data-driven Earth system science. Nature 2019, 566, 195–204. [Google Scholar] [CrossRef] [PubMed]
  6. Garcia-Garcia, A.; Orts-Escolano, S.; Oprea, S.; Villena-Martinez, V.; Garcia-Rodriguez, J. A review on deep learning techniques applied to semantic segmentation. arXiv 2017, arXiv:1704.06857. [Google Scholar]
  7. Vali, A.; Comai, S.; Matteucci, M. Deep learning for land use and land cover classification based on hyperspectral and multispectral earth observation data: A review. Remote Sens. 2020, 12, 2495. [Google Scholar] [CrossRef]
  8. Bellman, R. Dynamic programming. Science 1966, 153, 34–37. [Google Scholar] [CrossRef] [PubMed]
  9. Jia, W.; Sun, M.; Lian, J.; Hou, S. Feature dimensionality reduction: A review. Complex Intell. Syst. 2022, 8, 2663–2693. [Google Scholar] [CrossRef]
  10. Signoroni, A.; Savardi, M.; Baronio, A.; Benini, S. Deep learning meets hyperspectral image analysis: A multidisciplinary review. J. Imaging 2019, 5, 52. [Google Scholar] [CrossRef]
  11. Liu, P.; Choo, K.K.R.; Wang, L.; Huang, F. SVM or deep learning? A comparative study on remote sensing image classification. Soft Comput. 2017, 21, 7053–7065. [Google Scholar] [CrossRef]
  12. Yu, X.; Wu, X.; Luo, C.; Ren, P. Deep learning in remote sensing scene classification: A data augmentation enhanced convolutional neural network framework. Giscience Remote Sens. 2017, 54, 741–758. [Google Scholar] [CrossRef]
  13. Triguero, I.; García, S.; Herrera, F. Self-labeled techniques for semi-supervised learning: Taxonomy, software and empirical study. Knowl. Inf. Syst. 2015, 42, 245–284. [Google Scholar] [CrossRef]
  14. Han, W.; Feng, R.; Wang, L.; Cheng, Y. A semi-supervised generative framework with deep learning features for high-resolution remote sensing image scene classification. ISPRS J. Photogramm. Remote Sens. 2018, 145, 23–43. [Google Scholar] [CrossRef]
  15. Marmanis, D.; Datcu, M.; Esch, T.; Stilla, U. Deep learning earth observation classification using ImageNet pretrained networks. IEEE Geosci. Remote Sens. Lett. 2015, 13, 105–109. [Google Scholar] [CrossRef]
  16. Chen, Z.; Zhang, T.; Ouyang, C. End-to-end airplane detection using transfer learning in remote sensing images. Remote Sens. 2018, 10, 139. [Google Scholar] [CrossRef]
  17. Hong, D.; Yokoya, N.; Xia, G.S.; Chanussot, J.; Zhu, X.X. X-ModalNet: A semi-supervised deep cross-modal network for classification of remote sensing data. ISPRS J. Photogramm. Remote Sens. 2020, 167, 12–23. [Google Scholar] [CrossRef] [PubMed]
  18. Jiang, T.; Gradus, J.L.; Rosellini, A.J. Supervised machine learning: A brief primer. Behav. Ther. 2020, 51, 675–687. [Google Scholar] [CrossRef] [PubMed]
  19. Zhou, L.; Pan, S.; Wang, J.; Vasilakos, A.V. Machine learning on big data: Opportunities and challenges. Neurocomputing 2017, 237, 350–361. [Google Scholar] [CrossRef]
  20. Moore, G.E. Cramming more components onto integrated circuits. Proc. IEEE 1998, 86, 82–85. [Google Scholar] [CrossRef]
  21. Theis, T.N.; Wong, H.S.P. The end of moore’s law: A new beginning for information technology. Comput. Sci. Eng. 2017, 19, 41–50. [Google Scholar] [CrossRef]
  22. Gholami, A.; Yao, Z.; Kim, S.; Hooper, C.; Mahoney, M.W.; Keutzer, K. Ai and memory wall. arXiv 2024, arXiv:2403.14123. [Google Scholar] [CrossRef]
  23. Shalf, J. The future of computing beyond Moore’s Law. Philos. Trans. R. Soc. A 2020, 378, 20190061. [Google Scholar] [CrossRef] [PubMed]
  24. Lundstrom, M.S.; Alam, M.A. Moore’s law: The journey ahead. Science 2022, 378, 722–723. [Google Scholar] [CrossRef] [PubMed]
  25. Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep learning for hyperspectral image classification: An overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709. [Google Scholar] [CrossRef]
  26. Feurer, M.; Klein, A.; Eggensperger, K.; Springenberg, J.; Blum, M.; Hutter, F. Efficient and robust automated machine learning. Adv. Neural Inf. Process. Syst. 2015, 28. [Google Scholar]
  27. Hutter, F.; Kotthoff, L.; Vanschoren, J. Automated Machine Learning: Methods, Systems, Challenges; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  28. Baratchi, M.; Wang, C.; Limmer, S.; van Rijn, J.N.; Hoos, H.; Bäck, T.; Olhofer, M. Automated machine learning: Past, present and future. Artif. Intell. Rev. 2024, 57, 1–88. [Google Scholar] [CrossRef]
  29. Wolpert, D.H. The lack of a priori distinctions between learning algorithms. Neural Comput. 1996, 8, 1341–1390. [Google Scholar] [CrossRef]
  30. Wolpert, D.H.; Macready, W.G. Coevolutionary free lunches. IEEE Trans. Evol. Comput. 2005, 9, 721–735. [Google Scholar] [CrossRef]
  31. Zhang, C.; Bengio, S.; Hardt, M.; Recht, B.; Vinyals, O. Understanding deep learning requires rethinking generalization. arXiv 2016, arXiv:1611.03530. [Google Scholar] [CrossRef]
  32. Kawaguchi, K.; Kaelbling, L.P.; Bengio, Y. Generalization in deep learning. arXiv 2017, arXiv:1710.05468. [Google Scholar]
  33. Saxe, A.M.; Bansal, Y.; Dapello, J.; Advani, M.; Kolchinsky, A.; Tracey, B.D.; Cox, D.D. On the information bottleneck theory of deep learning. J. Stat. Mech. Theory Exp. 2019, 2019, 124020. [Google Scholar] [CrossRef]
  34. Dinh, L.; Pascanu, R.; Bengio, S.; Bengio, Y. Sharp minima can generalize for deep nets. In Proceedings of the 34th International Conference on Machine Learning (ICML), Sydney, Australia, 6–11 August 2017; Volume 70, pp. 1019–1028. [Google Scholar]
  35. Steinbrecher, G.; Shaw, W.T. Quantile mechanics. Eur. J. Appl. Math. 2008, 19, 87–112. [Google Scholar] [CrossRef]
  36. Zheng, A.; Casari, A. Feature Engineering for Machine Learning: Principles and Techniques for Data Scientists; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2018. [Google Scholar]
  37. Kuhn, M.; Johnson, K. Feature Engineering and Selection: A Practical Approach for Predictive Models; Chapman and Hall/CRC: Boca Raton, FL, USA, 2019. [Google Scholar]
  38. Venkatesh, B.; Anuradha, J. A review of feature selection and its methods. Cybern. Inf. Technol. 2019, 19, 3–26. [Google Scholar] [CrossRef]
  39. Sung, J.; Han, S.; Park, H.; Hwang, S.; Lee, S.J.; Park, J.W.; Youn, I. Classification of stroke severity using clinically relevant symmetric gait features based on recursive feature elimination with cross-validation. IEEE Access 2022, 10, 119437–119447. [Google Scholar] [CrossRef]
  40. Misra, P.; Yadav, A.S. Improving the classification accuracy using recursive feature elimination with cross-validation. Int. J. Emerg. Technol. 2020, 11, 659–665. [Google Scholar]
  41. Altmann, A.; Toloşi, L.; Sander, O.; Lengauer, T. Permutation importance: A corrected feature importance measure. Bioinformatics 2010, 26, 1340–1347. [Google Scholar] [CrossRef] [PubMed]
  42. Abdi, H.; Williams, L.J. Principal component analysis. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 433–459. [Google Scholar] [CrossRef]
  43. Schölkopf, B.; Smola, A.; Müller, K.R. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. 1998, 10, 1299–1319. [Google Scholar] [CrossRef]
  44. Schölkopf, B.; Smola, A.; Müller, K.R. Kernel principal component analysis. In Proceedings of the International Conference on Artificial Neural Networks, Lausanne, Switzerland, 8–10 October 1997; pp. 583–588. [Google Scholar]
  45. Hyvarinen, A. Fast and robust fixed-point algorithms for independent component analysis. IEEE Trans. Neural Netw. 1999, 10, 626–634. [Google Scholar] [CrossRef]
  46. Hastie, T.; Tibshirani, R.; Friedman, J.H.; Friedman, J.H. The Elements of Statistical Learning: Data Mining, Inference, and Prediction; Springer: Berlin/Heidelberg, Germany, 2017; Volume 2. [Google Scholar]
  47. Ghojogh, B.; Karray, F.; Crowley, M. Fisher and kernel Fisher discriminant analysis: Tutorial. arXiv 2019, arXiv:1906.09436. [Google Scholar]
  48. Roweis, S.T.; Saul, L.K. Nonlinear dimensionality reduction by locally linear embedding. Science 2000, 290, 2323–2326. [Google Scholar] [CrossRef] [PubMed]
  49. Jia, S.; Deng, B.; Zhu, J.; Jia, X.; Li, Q. Local binary pattern-based hyperspectral image classification with superpixel guidance. IEEE Trans. Geosci. Remote Sens. 2017, 56, 749–759. [Google Scholar] [CrossRef]
  50. He, L.; Li, J.; Plaza, A.; Li, Y. Discriminative low-rank Gabor filtering for spectral–spatial hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2016, 55, 1381–1395. [Google Scholar] [CrossRef]
  51. Li, Y.; Zhang, H.; Shen, Q. Spectral–spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sens. 2017, 9, 67. [Google Scholar] [CrossRef]
  52. Baumgardner, M.F.; Biehl, L.L.; Landgrebe, D.A. 220 band AVIRIS hyperspectral image data set: June 12, 1992 Indian Pine Test Site 3. Purdue Univ. Res. Repos. 2015, 10, R7RX991C. [Google Scholar]
  53. Hyperspectral Images. Available online: https://engineering.purdue.edu/~biehl/MultiSpec/hyperspectral.html (accessed on 1 July 2024).
  54. Vali, A. Hyperspectral Image Analysis and Advanced Feature Engineering for Optimized Classification and Data Acquisition. Ph.D. Thesis, Politecnico di Milano, Milan, Italy, 2022. [Google Scholar]
Figure 1. The general scheme of the ML workflow; each semirectangular node schematically represents a task within the workflow. The aim is to assess different types of feature engineering for a given ML problem and to evaluate the impact of the best-fitted set of feature engineering steps on prediction performance and efficiency.
Figure 2. Overview of the feature engineering assessment setup (FT = feature transformation, FS = feature selection, and FE = feature extraction).
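For readers who want a concrete picture of the setup in Figure 2, the following is a minimal sketch, not the authors' exact code, of how FT, FS, and FE steps can be enumerated as scikit-learn pipelines around a core classifier. The specific transformers, selectors, and extractors listed are illustrative assumptions drawn from the configurations reported in Tables 1 and 2.

```python
# Sketch: enumerate FT x FS x FE combinations as pipelines (illustrative choices).
from itertools import product

from sklearn.base import clone
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.feature_selection import RFECV
from sklearn.decomposition import PCA, KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

ft_options = {"none": "passthrough", "std": StandardScaler(), "minmax": MinMaxScaler()}
fs_options = {
    "RFECV-LR-l1": RFECV(LogisticRegression(penalty="l1", solver="liblinear"), cv=3),
    "RFECV-SVM": RFECV(SVC(kernel="linear"), cv=3),
}
fe_options = {"none": "passthrough", "PCA": PCA(), "KPCA-rbf": KernelPCA(kernel="rbf")}

pipelines = {}
for (ft_n, ft), (fs_n, fs), (fe_n, fe) in product(
    ft_options.items(), fs_options.items(), fe_options.items()
):
    steps = [
        ("ft", ft if isinstance(ft, str) else clone(ft)),   # feature transformation
        ("fs", clone(fs)),                                  # feature selection
        ("fe", fe if isinstance(fe, str) else clone(fe)),   # feature extraction
        ("clf", SVC(kernel="rbf")),                         # core classifier (SVM-RBF case)
    ]
    pipelines[(ft_n, fs_n, fe_n)] = Pipeline(steps)
```

Each candidate pipeline can then be scored and tuned inside the cross-validation scheme described next.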
Figure 3. An overview of the adopted nested cross-validation strategy, with k = 3.
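As a concrete illustration of the nested cross-validation in Figure 3, the sketch below (an assumption, not the authors' code) uses a GridSearchCV for the inner loop inside cross_val_score for the outer loop, with k = 3 on both levels. The grid itself is illustrative; the best C values reported in Table 1 range between 10 and 10,000.

```python
# Sketch of nested CV: the inner loop tunes hyperparameters, the outer loop
# gives an unbiased estimate of generalization performance.
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

def nested_cv_scores(X, y):
    inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
    outer = StratifiedKFold(n_splits=3, shuffle=True, random_state=1)
    search = GridSearchCV(SVC(kernel="rbf"), {"C": [10, 100, 1000, 10000]}, cv=inner)
    return cross_val_score(search, X, y, cv=outer)  # one score per outer fold
```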
Figure 4. An overview of the feature transformation–selection strategy adopted within the proposed framework.
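A hedged sketch of the transformation-selection idea behind Figure 4 follows: each candidate feature transformation is scored with inner cross-validation, and the best one is kept for the rest of the pipeline. The candidate set and the probe classifier are assumptions for illustration.

```python
# Sketch: pick the feature transformation with the best inner-CV score.
from sklearn.base import clone
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.svm import SVC

def pick_transformation(X, y):
    candidates = {"none": "passthrough", "std": StandardScaler(), "minmax": MinMaxScaler()}
    scores = {}
    for name, ft in candidates.items():
        step = ft if isinstance(ft, str) else clone(ft)
        pipe = Pipeline([("ft", step), ("clf", SVC(kernel="rbf"))])
        scores[name] = cross_val_score(pipe, X, y, cv=3).mean()
    best = max(scores, key=scores.get)  # e.g., "std" in most rows of Tables 1 and 2
    return best, scores
```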
Figure 5. Overview of the class distribution in the Indian Pines dataset. The left panel shows the number of samples (pixels) per class, and the right panel shows each class's ratio with respect to the total number of pixels.
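The statistics plotted in Figure 5 can be computed from the ground-truth map with a few lines; the sketch below is a hypothetical helper, assuming the usual Indian Pines convention that label 0 marks unlabeled background pixels.

```python
# Sketch: per-class pixel counts and ratios for a ground-truth label map.
import numpy as np

def class_distribution(gt):
    """gt: 2D integer ground-truth map; returns {class: (count, ratio)}."""
    labels, counts = np.unique(gt[gt > 0], return_counts=True)  # skip background (0)
    ratios = counts / gt.size  # ratio w.r.t. the total number of pixels
    return {int(c): (int(n), float(r)) for c, n, r in zip(labels, counts, ratios)}
```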
Figure 6. Overview of the impact of feature transformation on the Indian Pines dataset: box–whisker plots of the feature distributions for the original data and for the versions transformed via standardization and min–max scaling.
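For reference, a minimal sketch of the two transformations compared in Figure 6, applied to a flattened hyperspectral cube of shape (n_pixels, n_bands); the function name is a hypothetical helper, not part of the framework's API.

```python
# Sketch: produce the three data versions whose distributions Figure 6 compares.
from sklearn.preprocessing import MinMaxScaler, StandardScaler

def transformed_versions(X):
    return {
        "original": X,
        "std": StandardScaler().fit_transform(X),    # zero mean, unit variance per band
        "minmax": MinMaxScaler().fit_transform(X),   # each band rescaled to [0, 1]
    }
```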
Figure 7. Complete overview of the RFECV models' performance at each recursion. Each recursion retains a subset of features after gradually eliminating the least important ones; each recursion is therefore identifiable by its number of remaining features (along the x axis).
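The curve in Figure 7 is the kind of output scikit-learn's RFECV produces directly. The sketch below shows the "RFECV-LR-l1" configuration that appears in Tables 1 and 2; the step, cv, and scoring values are illustrative assumptions.

```python
# Sketch: recursive feature elimination with cross-validation over band subsets.
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression

def select_bands(X, y):
    inner = LogisticRegression(penalty="l1", solver="liblinear", max_iter=1000)
    selector = RFECV(inner, step=1, cv=3, scoring="accuracy").fit(X, y)
    # In recent scikit-learn versions, selector.cv_results_["mean_test_score"]
    # holds the per-subset-size curve of Figure 7; selector.n_features_ is the
    # chosen subset size, and selector.support_ masks the retained bands.
    return selector
```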
Figure 8. The classification maps for the Indian Pines dataset produced by the best models obtained by the framework, as listed in Table 1 and Table 2. The upper row shows the dataset, the ground truth, and the class legend. The lower 2 × 5 grid presents the classification maps, each predicted by the model whose core classifier is indicated on the left and whose feature extraction technique is noted at the bottom. For each configuration, the best model version among the three folds was used.
Figure 9. The performance measures (inference accuracy and time) of the framework's final optimized model compared to the optimized models incorporated within it, used as baseline references, on the Indian Pines dataset. The bars represent the average difference from the extended version of the end-to-end reference (FT step only), and the error bars represent the standard deviations.
Table 1. Performance results for the framework's output optimized models using the SVM-RBF core classifier, executed separately for each choice of feature extraction and for its absence from the pipeline configuration (to allow the process to be repeated in case of system failure). The table details the optimized models after hyperparameter tuning on each outer fold of the framework, together with the corresponding final evaluation results. Highlighted rows indicate the best-performing model configurations.
Type | n_comp. | FT | Inner Model | n_feat. | Fold | Best C | CV Mean | CV std | Fit Mean (ms) | Fit std (ms) | Score Mean (ms) | Score std (ms) | Train Acc. | Test Acc. | Mean Acc.
None | - | std | RFECV-LR-l1 | 144 | 1 | 100 | 0.9192 | 0.0013 | 7.4338 | 0.5305 | 8.1113 | 0.1113 | 0.9867 | 0.9309 | 0.9300
None | - | std | RFECV-SVM | 152 | 2 | 100 | 0.9191 | 0.0031 | 6.8188 | 0.2807 | 4.2570 | 0.2871 | 0.9792 | 0.9254 |
None | - | std | RFECV-LR-l1 | 144 | 3 | 100 | 0.9195 | 0.0039 | 12.8243 | 1.9853 | 9.6307 | 0.4263 | 0.9772 | 0.9338 |
PCA | 140 | std | RFECV-LR-l1 | 144 | 1 | 100 | 0.9210 | 0.0049 | 3.6325 | 1.0745 | 3.3119 | 0.4358 | 0.9862 | 0.9309 | 0.9291
PCA | 150 | std | RFECV-SVM | 152 | 2 | 100 | 0.9199 | 0.0052 | 6.1206 | 0.2424 | 6.0043 | 1.1208 | 0.9791 | 0.9242 |
PCA | 82 | std | RFECV-LR-l1 | 144 | 3 | 1000 | 0.9191 | 0.0008 | 4.7658 | 1.0875 | 7.0312 | 4.4078 | 0.9955 | 0.9321 |
KPCA (kernel = poly) | 90 | std | RFECV-RF | 91 | 1 | 10,000 | 0.8940 | 0.0045 | 42.4327 | 7.1176 | 4.2325 | 0.9557 | 0.9873 | 0.9183 | 0.9136
KPCA (kernel = poly) | 148 | std | RFECV-SVM | 152 | 2 | 10,000 | 0.8924 | 0.0032 | 40.3297 | 2.7449 | 5.2146 | 0.3618 | 0.9621 | 0.9057 |
KPCA (kernel = poly) | 148 | std | RFECV-SVM | 152 | 3 | 10,000 | 0.8882 | 0.0007 | 29.8639 | 1.4377 | 4.0141 | 0.1835 | 0.9753 | 0.9166 |
KPCA (kernel = rbf) | 150 | std | RFECV-SVM | 152 | 1 | 10,000 | 0.8858 | 0.0118 | 22.7631 | 1.7372 | 2.6080 | 0.3169 | 0.9985 | 0.9204 | 0.9200
KPCA (kernel = rbf) | 152 | std | RFECV-SVM | 152 | 2 | 10,000 | 0.8872 | 0.0065 | 39.4927 | 2.5375 | 4.0813 | 0.5238 | 0.9975 | 0.9177 |
KPCA (kernel = rbf) | 150 | std | RFECV-SVM | 152 | 3 | 10,000 | 0.8812 | 0.0047 | 23.1344 | 2.2375 | 2.4789 | 0.2873 | 0.9980 | 0.9218 |
FastICA | 90 | none | RFECV-LR-l1 | 91 | 1 | 10,000 | 0.8378 | 0.0043 | 13.2319 | 4.6881 | 2.0039 | 0.1606 | 0.9775 | 0.8982 | 0.8943
FastICA | 138 | std | RFECV-SVM | 152 | 2 | 10,000 | 0.8102 | 0.0046 | 112.5495 | 76.0340 | 3.7106 | 0.1691 | 0.9498 | 0.8940 |
FastICA | 134 | std | RFECV-LR-l1 | 144 | 3 | 10,000 | 0.8016 | 0.0088 | 66.8760 | 25.2353 | 5.1666 | 1.1551 | 0.9530 | 0.8908 |
LDA | 14 | std | RFECV-SVM | 152 | 1 | 100 | 0.8617 | 0.0046 | 3.8267 | 0.1692 | 2.7674 | 0.4969 | 0.9312 | 0.8721 | 0.8710
LDA | 10 | none | RFECV-LR-l1 | 144 | 2 | 100 | 0.8608 | 0.0041 | 1.1577 | 0.2938 | 0.6132 | 0.0795 | 0.9098 | 0.8630 |
LDA | 12 | none | RFECV-LR-l1 | 144 | 3 | 100 | 0.8576 | 0.0050 | 0.9377 | 0.3076 | 0.6177 | 0.2036 | 0.9230 | 0.8779 |
KFDA (kernel = poly) | 18 | std | RFECV-SVM | 152 | 1 | 10 | 0.8708 | 0.0050 | 24.5791 | 3.3779 | 2.8568 | 0.2584 | 0.9997 | 0.8879 | 0.8866
KFDA (kernel = poly) | 24 | std | RFECV-SVM | 152 | 2 | 10 | 0.8541 | 0.0082 | 34.0298 | 6.8903 | 2.5398 | 0.2063 | 0.9956 | 0.8854 |
KFDA (kernel = poly) | 20 | std | RFECV-SVM | 152 | 3 | 10 | 0.8488 | 0.0061 | 50.1823 | 8.0026 | 1.9981 | 0.1023 | 0.9961 | 0.8866 |
KFDA (kernel = rbf) | 46 | std | RFECV-LR-l1 | 144 | 1 | 10 | 0.9097 | 0.0067 | 75.7540 | 9.0190 | 2.1115 | 0.1897 | 0.9994 | 0.9257 | 0.9232
KFDA (kernel = rbf) | 58 | std | RFECV-LR-l1 | 144 | 2 | 10 | 0.9075 | 0.0058 | 47.7175 | 14.7411 | 1.3764 | 0.0251 | 0.9991 | 0.9236 |
KFDA (kernel = rbf) | 50 | std | RFECV-LR-l1 | 144 | 3 | 10 | 0.9032 | 0.0069 | 52.8902 | 10.9802 | 1.7634 | 0.1559 | 0.9991 | 0.9202 |
LLE (nn = 3) | 150 | std | RFECV-SVM | 152 | 1 | 10,000 | 0.7381 | 0.0046 | 33.3914 | 0.1052 | 8.0919 | 0.4409 | 0.8378 | 0.7562 | 0.7641
LLE (nn = 3) | 148 | std | RFECV-SVM | 152 | 2 | 10,000 | 0.7334 | 0.0050 | 41.8622 | 0.8712 | 4.8391 | 1.3178 | 0.8196 | 0.7567 |
LLE (nn = 3) | 152 | std | RFECV-SVM | 152 | 3 | 10,000 | 0.7295 | 0.0037 | 45.8610 | 4.5508 | 11.6997 | 1.4774 | 0.8481 | 0.7793 |
(Mean Acc. is the average test accuracy over the three outer folds and is reported once per feature extractor.)
Table 2. Performance results for the framework's output optimized models using the MLP core classifier, executed separately for each choice of feature extraction and for its absence from the pipeline configuration (to allow the process to be repeated in case of system failure). The table details the optimized models after hyperparameter tuning on each outer fold of the framework, together with the corresponding final evaluation results. Best configurations are highlighted in yellow.
Type | n_comp. | FT | Inner Model | n_feat. | Fold | Activation | α | CV Mean | CV std | Fit Mean (ms) | Fit std (ms) | Score Mean (ms) | Score std (ms) | Train Acc. | Test Acc. | Mean Acc.
None | - | std | RFECV-LR-l1 | 144 | 1 | logistic | 0.001 | 0.9246 | 0.0017 | 105.6994 | 7.9159 | 0.0405 | 0.0175 | 0.9817 | 0.9248 | 0.9327
None | - | std | RFECV-LR-l1 | 144 | 2 | logistic | 0.001 | 0.9273 | 0.0054 | 133.5125 | 20.9237 | 0.0294 | 0.0092 | 0.9864 | 0.9321 |
None | - | std | RFECV-LR-l1 | 144 | 3 | logistic | 0.001 | 0.9254 | 0.0041 | 164.0306 | 13.2748 | 0.0790 | 0.0363 | 0.9890 | 0.9412 |
PCA | 144 | std | RFECV-SVM | 152 | 1 | logistic | 0.01 | 0.9280 | 0.0026 | 48.3415 | 2.4717 | 0.0247 | 0.0025 | 0.9952 | 0.9277 | 0.9338
PCA | 144 | std | RFECV-LR-l1 | 144 | 2 | logistic | 0.01 | 0.9273 | 0.0034 | 119.4399 | 7.7521 | 0.0264 | 0.0009 | 0.9963 | 0.9309 |
PCA | 116 | std | RFECV-LR-l1 | 144 | 3 | tanh | 0.1 | 0.9286 | 0.0039 | 57.4218 | 3.9009 | 0.0187 | 0.0013 | 0.9782 | 0.9429 |
KPCA (kernel = poly) | 106 | std | RFECV-SVM | 152 | 1 | relu | 0.01 | 0.9232 | 0.0041 | 66.4434 | 2.8286 | 1.9985 | 0.2336 | 0.9704 | 0.9295 | 0.9328
KPCA (kernel = poly) | 144 | std | RFECV-SVM | 152 | 2 | relu | 0.01 | 0.9207 | 0.0046 | 56.4857 | 1.2998 | 1.3047 | 0.2012 | 0.9871 | 0.9365 |
KPCA (kernel = poly) | 148 | std | RFECV-SVM | 152 | 3 | relu | 0.01 | 0.9251 | 0.0050 | 56.2479 | 1.5859 | 1.2445 | 0.0929 | 0.9821 | 0.9324 |
KPCA (kernel = rbf) | 124 | std | RFECV-SVM | 152 | 1 | relu | 0.001 | 0.9166 | 0.0072 | 89.9690 | 16.2288 | 0.5233 | 0.0784 | 0.9978 | 0.9303 | 0.9320
KPCA (kernel = rbf) | 148 | std | RFECV-SVM | 152 | 2 | relu | 0.001 | 0.9202 | 0.0038 | 124.0150 | 49.9485 | 1.2152 | 0.0715 | 0.9953 | 0.9309 |
KPCA (kernel = rbf) | 150 | std | RFECV-SVM | 152 | 3 | relu | 0.001 | 0.9176 | 0.0023 | 85.7280 | 8.2568 | 1.0124 | 0.0355 | 0.9974 | 0.9347 |
FastICA | 32 | std | RFECV-RF | 91 | 1 | relu | 0.001 | 0.8861 | 0.0089 | 839.3872 | 125.4677 | 0.3442 | 0.2158 | 0.9763 | 0.9189 | 0.9161
FastICA | 34 | std | RFECV-SVM | 152 | 2 | relu | 0.001 | 0.8813 | 0.0033 | 459.8168 | 61.6437 | 0.1076 | 0.0085 | 0.9671 | 0.9098 |
FastICA | 54 | std | RFECV-RF | 91 | 3 | relu | 0.001 | 0.8793 | 0.0048 | 576.9616 | 35.2544 | 0.2097 | 0.0730 | 0.9786 | 0.9195 |
LDA | 14 | std | RFECV-SVM | 152 | 1 | tanh | 0.1 | 0.8605 | 0.0022 | 19.3157 | 2.5674 | 0.0115 | 0.0002 | 0.9403 | 0.8739 | 0.8704
LDA | 12 | none | RFECV-LR-l1 | 144 | 2 | relu | 0.1 | 0.8597 | 0.0061 | 16.4290 | 3.3352 | 0.0068 | 0.0010 | 0.9281 | 0.8621 |
LDA | 14 | std | RFECV-LR-l1 | 144 | 3 | tanh | 0.1 | 0.8566 | 0.0047 | 21.7565 | 3.3710 | 0.0096 | 0.0014 | 0.9497 | 0.8753 |
KFDA (kernel = poly) | 12 | std | RFECV-RF | 91 | 1 | logistic | 0.001 | 0.7858 | 0.0100 | 43.3090 | 2.1410 | 2.0858 | 0.0229 | 0.9903 | 0.8695 | 0.8640
KFDA (kernel = poly) | 14 | std | RFECV-RF | 91 | 2 | logistic | 0.001 | 0.7521 | 0.0029 | 75.0035 | 22.8304 | 1.9501 | 0.0102 | 0.9897 | 0.8601 |
KFDA (kernel = poly) | 14 | std | RFECV-RF | 91 | 3 | logistic | 0.001 | 0.7599 | 0.0083 | 69.9022 | 17.5838 | 1.7962 | 0.0207 | 0.9889 | 0.8625 |
KFDA (kernel = rbf) | 54 | std | RFECV-LR-l1 | 144 | 1 | tanh | 0.1 | 0.9071 | 0.0060 | 102.8650 | 10.4852 | 0.3131 | 0.0527 | 0.9999 | 0.9236 | 0.9214
KFDA (kernel = rbf) | 58 | std | RFECV-LR-l1 | 144 | 2 | tanh | 0.1 | 0.9053 | 0.0078 | 99.5602 | 9.6832 | 0.4866 | 0.0434 | 0.9998 | 0.9208 |
KFDA (kernel = rbf) | 62 | std | RFECV-LR-l1 | 144 | 3 | tanh | 0.1 | 0.9061 | 0.0072 | 87.9921 | 7.2003 | 0.5612 | 0.4089 | 0.9998 | 0.9199 |
LLE (nn = 3) | 150 | std | RFECV-SVM | 152 | 1 | relu | 0.001 | 0.7545 | 0.0049 | 149.4420 | 123.9453 | 1.8298 | 0.9244 | 0.8431 | 0.7735 | 0.7765
LLE (nn = 3) | 152 | std | RFECV-SVM | 152 | 2 | relu | 0.001 | 0.7604 | 0.0053 | 178.8375 | 99.3189 | 1.3647 | 0.1900 | 0.8452 | 0.7728 |
LLE (nn = 3) | 150 | std | RFECV-SVM | 152 | 3 | relu | 0.001 | 0.7575 | 0.0016 | 350.9361 | 121.4696 | 1.2645 | 0.1498 | 0.8690 | 0.7831 |
(Mean Acc. is the average test accuracy over the three outer folds and is reported once per feature extractor; α is the MLP's L2 regularization hyperparameter.)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
