Article

Efficient Argan Tree Deforestation Detection Using Sentinel-2 Time Series and Machine Learning

1 Image et Reconnaissance de Formes–Systèmes Intelligents et Communicants Laboratory, Faculty of Sciences, Ibn Zohr University, Agadir 80000, Morocco
2 Institut Géographique National France International, 75012 Paris, France
3 Departamento de Física, Universidad de La Laguna, 38200 San Cristóbal de La Laguna, Spain
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(6), 3231; https://doi.org/10.3390/app15063231
Submission received: 2 February 2025 / Revised: 11 March 2025 / Accepted: 13 March 2025 / Published: 16 March 2025
(This article belongs to the Special Issue Latest Research on Computer Vision and Image Processing)

Abstract

The argan tree (Argania spinosa) is a rare species native to southwestern Morocco, valued for its fruit, which produces argan oil, a highly prized natural product with nutritional, health, and cosmetic benefits. However, increasing deforestation poses a significant threat to its survival. This study monitors changes in an argan forest near Agadir, Morocco, from 2017 to 2023 using Sentinel-2 satellite imagery and advanced image processing algorithms. Various machine learning models were evaluated for argan tree detection, with LightGBM achieving the highest accuracy when trained on a dataset integrating spectral bands, temporal features, and vegetation indices information. The model achieved 100% accuracy on tabular test data and 85% on image-based test data. The generated deforestation maps estimated an approximate forest loss of 2.86% over six years. This study explores methods to enhance detection accuracy, provides valuable statistical data for deforestation mitigation, and highlights the critical role of remote sensing, advanced image processing, and artificial intelligence in environmental monitoring and conservation, particularly in argan forests.

1. Introduction

The argan tree (Argania spinosa (L.) Skeels) is a rare species endemic to southwestern Morocco [1]. Argan trees play a critical role in the environmental, social, and economic landscape. Known for their resilience in arid conditions, the argan trees contribute to biodiversity by supporting a rich network of plant and animal life while also enhancing soil stability [2]. Beyond their ecological significance, they also serve as a source of livelihood and income for local communities through valuable products like argan oil and timber [3]. The argan oil industry significantly contributes to the Moroccan economy, as highlighted in Table 1, which presents the market size of Moroccan and global argan oil in 2019, along with forecasts for 2030 [4,5]. The growing global demand for argan oil across the cosmetics, skincare, and culinary sectors further underscores its increasing economic importance.
The argan forest considered in this study is located in the Souss region, centered around the city of Agadir (Morocco), an area distinguished by its agricultural sector, which is a leading economic sector alongside tourism. This argan forest is one of the largest in the country [6] and includes a section that was declared a Biosphere Reserve by UNESCO in 1998, dedicated to protecting this unique habitat [7]. However, accurately assessing the precise distribution and abundance of argan trees in the region remains challenging [8].
At present, the argan forest is threatened by deforestation driven by several factors [9]: agricultural expansion, urban encroachment [6], and massive logging for firewood and charcoal production, compounded by climate change and drought [8,10]. As a result, the argan forests have lost approximately 50% of their total area over the past century (Table 2), with an estimated annual loss of 600 hectares [11]. This has raised alarm bells regarding the possible extinction of this magnificent tree in the future [12]. It is therefore crucial for resource conservation that significant efforts are made to create accurate maps, which are essential for the monitoring and proper management of land use and land cover (LULC) in this strategic zone, given its role in food security and Morocco’s environmental and economic development [13].
In recent years, remote sensing technologies and satellite imagery have provided the capacity to collect accurate, comprehensive, and wide-ranging data, facilitating studies related to forest mapping and the accurate monitoring of forest habitat change and deforestation [14,15,16,17]. In particular, multispectral and multitemporal data from the European Space Agency’s Sentinel-2 satellites have contributed to the success of this technology in these applications [18,19,20].
Tree detection studies using Sentinel-2 data fall into two main categories: those relying solely on single-date multispectral information and those incorporating both multispectral and multi-temporal data [21]. Two primary methods are used to assess spectral band significance. The first calculates feature importance scores from classifiers, offering a quantitative approach to select optimal bands, though interpretability and classification performance can sometimes pose challenges [22]. The second method involves individual classification experiments with each spectral feature, providing a clearer understanding of their impact [23,24]. Incorporating multi-temporal data is crucial due to the “same spectra but different objects, same objects but different spectra” phenomenon, caused by tree phenological cycles, which can limit detection accuracy when using single-date data [21,25]. The multi-temporal analysis allows for better tracking of these phenological stages, enhancing detection accuracy [26]. However, cloud contamination can interfere with this process [27]. Integrating multi-source data has been suggested, but this approach requires extensive preprocessing and sensor calibration [28,29]. In summary, using multispectral and multi-temporal Sentinel-2 data significantly improves tree detection accuracy, with results strongly dependent on information selection strategies [21,26].
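The first band-selection method described above, ranking spectral features by classifier importance scores, can be sketched as follows. This is a toy illustration with synthetic data; the band subset and labels are stand-ins, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
bands = ["B2", "B3", "B4", "B8", "B11", "B12"]  # illustrative band subset
X = rng.random((200, len(bands)))               # 200 pixels, one column per band
y = (X[:, 3] > 0.5).astype(int)                 # toy labels driven by B8

# Fit a classifier and rank bands by its feature-importance scores.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
ranking = sorted(zip(bands, clf.feature_importances_),
                 key=lambda t: t[1], reverse=True)
print(ranking[0][0])  # most informative band; B8 by construction
```

The second method would instead loop over the bands, training and scoring a separate classifier on each one alone.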
In studies on tree detection and forest mapping using Sentinel-2 data, various classification algorithms have been employed, encompassing both machine learning and deep learning techniques [18], with supervised classification generally preferred over unsupervised approaches [30]. Before undertaking any remote sensing task, it is essential to identify the most appropriate approach for data analysis, especially with Sentinel-2. There are three primary approaches: pixel-based [31], patch-based [32], and object-based [31], which differ primarily in the basic unit of analysis. Pixel-based and patch-based approaches often employ machine learning algorithms for LULC classification, with Random Forest (RF) and Support Vector Machine (SVM) being the most frequently used. These algorithms have shown promising results, often achieving an Overall Accuracy (OA) exceeding 80% [18]. Other studies with similar objectives have utilized other machine learning algorithms, such as Maximum Likelihood Classification (MLC), Artificial Neural Networks (ANN), Decision Trees, k-Nearest Neighbors (KNN), and Bayes models [18]. It is worth noting that pixel-based methods, while commonly used, can be prone to noise. Patch-based or object-based approaches are generally preferable, especially for change detection tasks [33,34]. Object-based methods, in particular, often take advantage of deep learning algorithms, which were first introduced to remote sensing in 2014 [20]. Since then, deep learning has gained significant attention due to its success in this domain [35]. Among the most commonly used deep learning models are Convolutional Neural Networks (CNN), which have demonstrated an OA exceeding 90% in LULC classification and mapping tasks using Sentinel-2 data [17,36]. Overall, deep learning algorithms outperform traditional machine learning algorithms in terms of accuracy due to their superior feature extraction capabilities, non-linear modeling, and high-level semantic segmentation [37,38].
Nevertheless, the efficiency of machine learning models remains relevant, as they typically require less training data compared to deep learning models, making them well-suited for smaller datasets [39]. Finally, regardless of the type of algorithm chosen for a remote sensing task, factors such as data preprocessing and algorithm parameterization can significantly enhance accuracy [18].
Finally, it is important to briefly review the role of remote sensing in detecting changes in the same geographical area over time, a key focus of this study [40]. Change detection methods using multispectral satellite data can be categorized into four main types [41]: Algebra-Based Methods [42], Statistics-Based Methods [43,44], Transformation-Based Methods [42], and Deep-Learning-Based Methods [17,45]. For deforestation mapping, two primary approaches are commonly used. The first involves classifying images for each time period into multiple land-cover labels (e.g., trees, bare soil, urban areas, water) using machine learning classifiers [46] or deep learning algorithms [47,48]. By comparing these classification maps, deforestation maps are generated. However, this approach is prone to error propagation from the classification maps, underscoring the need to improve classification accuracy. The second approach directly compares two images from different dates. Deep neural network models, such as Improved UNet++ [49], are specifically designed for this task. These models use multi-temporal data in an end-to-end manner, analyzing spectral, spatial, and structural features to produce a final change map. This method can also be accomplished using machine learning algorithms [50].
Although numerous studies have explored remote sensing data and developed algorithms for forest mapping, few have specifically focused on argan forests, either for general mapping or deforestation detection, largely due to the limited availability of argan tree-related data [51]. Two studies focused on mapping argan trees near Essaouira, close to Agadir, using the NDVI index derived from Sentinel-2 time series data and the Support Vector Machine (SVM) algorithm, achieving overall accuracies of 89.78% [52] and 92.60% [53], respectively. Another study mapped argan trees using Sentinel-1 time series, Sentinel-2 data, and three machine learning algorithms, together with a Shuttle Radar Topography Mission (SRTM) Digital Elevation Model (DEM) layer, achieving the highest accuracy of 93.25% [54]. Building on this foundation, our team researched mapping argan deforestation using deep learning algorithms and a patch-based approach [20,32,55]. This study aims to further enhance these results.
Recognizing the ecological and economic importance of argan trees and the threats they face, this study aims to map argan forests and calculate the deforestation rate using remote sensing and machine learning tools. Specifically, we create argan tree maps for 2017/2018 and 2022/2023, comparing these maps to generate a map and calculate the deforestation rate over this period. Given the limited availability of argan-specific data, machine learning algorithms were selected for their efficiency, faster data analysis capabilities, and minimal computational requirements. This work seeks to provide specialists with valuable insights and statistics to inform strategies for mitigating argan forest deforestation.

2. Materials and Methods

Figure 1 shows a flow chart summarizing the most important steps of the methodology followed, which include:
  • Data acquisition and preprocessing.
  • Tree detection.
  • Change detection.

2.1. Study Area

The study area was the Admine forest located in the Souss region, near the city of Agadir in southern Morocco (Figure 2). It covers approximately 128 square kilometers. This forest is part of the UNESCO-declared biosphere reserve dedicated to the protection of the argan tree. From a geological perspective, the area is situated in the alluvial basin of the Souss valley, which separates the Sahara Desert from the Anti-Atlas Mountains [55]. The region is characterized by hot summers and mild winters, with a dry and semi-arid climate. In recent years, it has experienced a reduction in rainfall, leading to long-term environmental changes in the area. The predominant natural vegetation in the Souss region consists of savanna landscapes, with the argan tree being the dominant species [20]. The height of argan trees ranges from 2 to 10 m, and the canopy area varies from tree to tree, ranging from less than one square meter for small trees to more than 50 square meters for very large trees [56,57]. The study area also comprises buildings, roads, farms, and greenhouses. It is surrounded on all sides by residential areas belonging to Greater Agadir, with Agadir International Airport located centrally. The area was chosen because it has unfortunately experienced significant deforestation in recent years, which may lead to the disappearance of the argan tree from the region in the future.

2.2. Sentinel-2 Imagery

Sentinel-2 images were used in this study. The mission consists of a constellation of two satellites, Sentinel-2A and -2B, orbiting in a sun-synchronous polar orbit, phased 180 degrees apart from each other. Each Sentinel-2 satellite carries onboard a MultiSpectral Instrument (MSI) with 13 spectral bands in the visible, near-infrared, and short-wave infrared wavelengths and spatial resolutions ranging from 10 m to 60 m. Sentinel-2A and -2B provide images every 10 days at the equator with a single satellite and every 5 days with the two satellites combined [18].
Sentinel-2 offers images at two distinct levels of preprocessing: Level 1C (Top-of-Atmosphere reflectance) and Level 2A (Bottom-of-Atmosphere reflectance). While Level 1C data undergo geometric and radiometric corrections, they lack atmospheric adjustments. This type of data is typically used in studies where atmospheric effects are not a primary concern or can be addressed at a later stage. On the other hand, Level 2A data are comprehensively processed, including geometric, radiometric, and atmospheric aspects. This level is preferred for tasks requiring precise surface reflectance. It is worth noting that Level-1C data consist of 13 bands (Table 3), whereas Level-2A data include only 12 bands. The difference is due to the removal of Band 10, which provides information on cirrus clouds. This band is considered unnecessary for surface reflectance analysis [58]. Therefore, it is removed or excluded from the Level-2A dataset to streamline the product and optimize it for terrestrial applications. All Sentinel-2 bands were resampled to a spatial resolution of 10 m using bilinear interpolation. Sentinel-2 data at both processing levels were used from 1 September 2017 to 31 August 2018 and from 1 September 2022 to 31 August 2023.
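The resampling step can be illustrated with a minimal separable bilinear upsampling of a 20 m band onto the 10 m grid. This is a pure-NumPy sketch on a toy 4×4 tile; in practice, dedicated tools (e.g., SNAP or GDAL) perform this resampling:

```python
import numpy as np

def upsample2x_bilinear(band):
    """Double a 2-D grid with separable linear interpolation (bilinear)."""
    r, c = band.shape
    # Interpolate along columns, then along rows.
    x_new = np.linspace(0, c - 1, 2 * c)
    rows = np.array([np.interp(x_new, np.arange(c), row) for row in band])
    y_new = np.linspace(0, r - 1, 2 * r)
    return np.array([np.interp(y_new, np.arange(r), col) for col in rows.T]).T

band_20m = np.arange(16, dtype=float).reshape(4, 4)  # toy 20 m tile
band_10m = upsample2x_bilinear(band_20m)             # now on the 10 m grid
print(band_10m.shape)  # (8, 8)
```

Corner pixel values are preserved, and intermediate pixels are linear blends of their neighbors.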

2.3. Google Earth Engine

Several cloud computing platforms provide remote sensing data [60,61]: Sentinel Hub [61,62], Open Data Cube [63,64], SEPAL [65,66], JEODPP [67,68], pipsCloud [69,70], OpenEO [71,72], and Google Earth Engine (GEE) [73,74], with GEE being among the most widely used [75].
GEE [73,74] is a powerful online platform widely used for Earth observation and environmental research. It provides an intuitive and user-friendly interface, including a code editor and graphical user tools, making it accessible to researchers across disciplines such as geography, geology, and remote sensing. It hosts an archival database of images from various satellites, such as Sentinel, Landsat, and MODIS, along with geospatial data. This facilitates the creation of diverse maps and offers users the ability to download data, tools for processing, capabilities to sort and filter, mask clouds and their shadows, and data analysis [73,74]. Its popularity is attributed to key features such as being free to use and offering efficient parallel processing capabilities without the need to download data [76]. It enables easy progress using data from different spatial and temporal ranges [77,78], thereby saving a significant amount of time and processing large volumes of data [79].
GEE is widely used for tasks such as forest and deforestation mapping using machine learning algorithms, such as Random Forest [50,80,81]. A portion of this study was conducted using GEE. This included the collection of spectral and time series data from Sentinel-2, the calculation and addition of spectral indices, and the labeling of the data. Finally, the processed data were downloaded in tabular format for training the machine learning models, along with the images to which these models were applied [82].

2.4. Data Acquisition and Pre-Processing

The pre-processing consisted of three basic steps: calculating different spectral indices, creating time series for each life cycle of the argan tree (2017/2018 and 2022/2023), and labeling the data based on ground truth.

2.4.1. Spectral Indices

In the field of remote sensing, spectral indices serve as key tools within multispectral transformation processes [83]. These indices transform satellite sensor reflectance readings into environmentally significant figures. Thanks to the multispectral capabilities of satellite imagery, these transformations provide insights into the current condition of various phenomena. Vegetation indices, in particular, play a crucial role in both the identification and monitoring of vegetation patterns and in calculating key biophysical attributes of plant covers, including biomass, the leaf area index, and the proportion of absorbed photosynthetically active radiation [83]. Therefore, we decided to include various spectral indices available in the Sentinel Hub platform (Table 4) [84]. Throughout the study, tests were carried out to examine the importance of each index.
These indices were selected based on studies aimed at forest mapping [91,92]. Their purpose in this study was to enhance argan tree detection and forest mapping. The Normalized Difference Vegetation Index (NDVI) is widely used to distinguish vegetation from non-vegetation by comparing red and near-infrared reflectance [93]. The Enhanced Vegetation Index (EVI) accounts for background canopy effects and atmospheric disturbances, increasing sensitivity in regions with high biomass [94]. The Atmospherically Resistant Vegetation Index (ARVI) reduces atmospheric scattering effects by incorporating the blue reflectance band [95]. Meanwhile, the Green Normalized Difference Vegetation Index (GNDVI) provides greater sensitivity to chlorophyll content by leveraging green reflectance instead of red [96]. The Normalized Difference Water Index (NDWI) highlights vegetation water content by utilizing green and near-infrared reflectance [97]. The Perpendicular Vegetation Index (PVI) minimizes soil background influence through a geometric approach in spectral space [98]. The Soil-Adjusted Vegetation Index (SAVI) compensates for soil brightness variations by incorporating a correction factor into the NDVI formula [99]. Finally, the Soil Composition Index (SCI) emphasizes spectral variations due to differences in soil constituents and thus helps to isolate soil reflectance from vegetation signals, improving forest/non-forest discrimination in heterogeneous landscapes [100]. While NDVI, EVI, ARVI, GNDVI, and SAVI mainly differ in their sensitivity to soil background and atmospheric effects, NDWI provides additional insights into vegetation and soil water content, while SCI helps assess soil composition. Collectively, these indices enable a more accurate and robust classification of forested areas [101].
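Several of the indices discussed above have simple closed forms and can be computed directly from band reflectances. The sketch below uses the standard published formulas (NDVI, GNDVI, NDWI in its green/NIR form, and SAVI with correction factor L); the reflectance values are toy inputs:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    """Green NDVI: green reflectance instead of red."""
    return (nir - green) / (nir + green)

def ndwi(green, nir):
    """Normalized Difference Water Index (green/NIR form)."""
    return (green - nir) / (green + nir)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index with soil correction factor L."""
    return (1 + L) * (nir - red) / (nir + red + L)

# Toy reflectances for a single vegetated pixel.
nir, red, green = 0.45, 0.05, 0.10
print(round(ndvi(nir, red), 2))  # 0.8
```

High NIR with low red reflectance, typical of healthy vegetation, drives NDVI toward 1.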

2.4.2. Time Series Construction

The second pre-processing step was creating the time series for each life cycle of the argan tree (phenology). The remotely sensed spectral signature of argan trees varies over time due to several factors. A primary factor is the dependency of the Sentinel-2 imagery on sunlight; changes in sunlight intensity result in variations in spectral information. Additionally, alterations in the phenology contribute to spectral changes. To address this challenge, 24 image capture dates were selected for each year, representing various phenological stages in the argan tree’s life cycle (Table 5). The inhabitants of the study area harvest argan fruits in the summer, during the months of July and August. We therefore considered that the cycle starts at the beginning of September and ends at the end of August. The dates were carefully and precisely selected based on specific criteria, such as choosing two images per month and ensuring that each image is ideal for the study, as shown in Table 5. The study years were selected because 2017 marks the first release of Sentinel-2 Level-2A data, while 2023 corresponds to the year in which the data used in this study were collected [102].
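Conceptually, the multi-temporal construction stacks the per-date features (bands plus indices) over the 24 selected dates into one feature vector per pixel. A sketch with illustrative shapes (1000 pixels, 24 dates, 15 features per date; the values are random stand-ins):

```python
import numpy as np

n_pixels, n_dates, n_feats = 1000, 24, 15
cube = np.random.default_rng(1).random((n_pixels, n_dates, n_feats))

# One row per pixel, with all dates' features concatenated.
X = cube.reshape(n_pixels, n_dates * n_feats)
print(X.shape)  # (1000, 360)
```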

2.4.3. Ground Truth Data

Ground truth data were meticulously collected using tools from the GEE platform. To ensure the accuracy required for a comprehensive analysis, argan trees were precisely geolocated and marked (using a GEE tool) through a detailed examination of high-resolution Google Earth imagery. This identification was based on prior knowledge of the study area and the distinctive characteristics of the argan tree, such as crown size, color, and shadow patterns, which allowed us to distinguish them from other tree species present in the images. Data validation was carried out via field visits to the study area, enhancing the quality and credibility of data (Figure 3). Once the argan trees were identified, the data were downloaded using a custom GEE script. Data collection was conducted in late 2023. It is important to note that we did not have argan data for 2017/2018; therefore, we assumed that the trees present in 2022/2023 were also present in 2017/2018, as no reforestation efforts had occurred between these periods.
The ground truth data were divided into two categories: the Argan, representing argan trees, and the Non-Argan, representing other objects such as buildings, farms, soil, etc. Since accurate identification of argan trees is crucial for the success of this study, two types of data were used: tabular data and image data.
The tabular data (data for training and testing) consisted of spectral information for each selected pixel, along with spectral indices. These data were divided into two categories: single-temporal and multi-temporal tabular data. Single-temporal tabular data were used to train machine learning models on data from a single day, whereas multi-temporal tabular data were used to train models on time-series data, with 24 days from each life cycle selected to form the time series. To ensure a balanced representation, an equal number of samples were selected for each category, with 695 pixels chosen for the Argan category and 695 pixels for the Non-Argan. This balanced sample size ensures that both categories receive equal attention during training, preventing bias toward the dominant class. The specific number of pixels was determined by the maximum amount of data that could be collected within the constraints of the available time frame.
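The class balancing described above (695 samples per category) amounts to drawing an equal number of sample indices from each class before training. A sketch with toy labels:

```python
import numpy as np

rng = np.random.default_rng(42)
labels = rng.integers(0, 2, size=5000)  # toy labels: 0 = Non-Argan, 1 = Argan

n_per_class = 695  # as in the study
idx = np.concatenate([
    rng.choice(np.flatnonzero(labels == c), n_per_class, replace=False)
    for c in (0, 1)
])
balanced = labels[idx]
print((balanced == 0).sum(), (balanced == 1).sum())  # 695 695
```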
For the imagery data (data for testing only), the aim was to identify the optimal scenario for detecting and mapping argan trees with high accuracy. A small region was completely labeled to create a reference image (Figure 4), enabling the evaluation of different scenarios. The reference image has dimensions of 65 by 90 pixels and includes 688 pixels representing the Argan category and 5162 pixels representing the Non-Argan category, accurately reflecting the 2023 distribution of argan trees.

2.5. Machine Learning Models

The argan tree is a large species with thorny, horizontally spreading branches, resulting in a wide, sprawling appearance. In Sentinel-2 imagery, which has a resolution of 10 m, each argan tree typically corresponds to one to four pixels (see Figure 5). This pixel representation allows the application of machine learning algorithms to classify pixels into two previously defined categories: “Argan” (containing argan trees) or “Non-Argan” (without argan trees).
Eight machine learning algorithms were selected for classifying the study area and detecting argan trees, with the objective of identifying the most effective models for this task: Support Vector Machine (SVM) [103], Decision Tree (DT) [104], Random Forest (RF) [104], LightGBM [105], XGBoost [106], Gaussian Naive Bayes [107], K-Nearest Neighbors (KNN) [108], and Artificial Neural Network (ANN) [108]. All algorithms were supervised and trained on various dataset scenarios. Each model had tunable parameters to optimize performance. Initially, all models were tested with their default parameter settings. Subsequently, parameter adjustments were made to improve the results. Table 6 highlights the key parameters for each model and the tested values, except for Gaussian Naive Bayes, which is typically used with its default settings and required no modifications. It is worth noting that other parameters were tested, but they were disregarded as they did not provide any performance improvement.
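The tuning procedure (defaults first, then a parameter search) can be sketched with scikit-learn's GridSearchCV. Random Forest stands in here for any of the eight models, and the grid values are illustrative examples, not the study's exact settings from Table 6:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.random((300, 10))                    # toy feature table
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)    # toy binary labels

# Cross-validated search over a small illustrative parameter grid.
grid = {"n_estimators": [100, 300], "max_depth": [None, 10]}
search = GridSearchCV(RandomForestClassifier(random_state=0), grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```

The same pattern applies to the boosting models and the ANN, each with its own parameter grid.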
The SVM, RF, KNN, and ANN algorithms are widely used in tree detection and mapping due to their proven effectiveness in handling spatial and spectral data [18], making them suitable candidates for this study. Additionally, DT, LightGBM, XGBoost, and Gaussian Naive Bayes were selected for their strong classification capabilities, as demonstrated in various remote sensing applications.
Support Vector Machine (SVM) uses the principle of maximizing the margin between classes in a high-dimensional feature space, leading to robust performance, especially in high-dimensional or small-sample scenarios [109]. Decision Tree (DT) constructs a hierarchical structure of if-then rules by recursively partitioning the feature space, offering ease of interpretation due to its rule-based nature [110]. Random Forest (RF) is an ensemble method that aggregates multiple decision trees grown on bootstrap samples, enhancing predictive accuracy and reducing overfitting [111]. LightGBM relies on a gradient boosting framework and employs a leaf-wise tree growth strategy with histogram-based decision rules, enabling faster training on large-scale datasets [112]. Similarly, XGBoost also uses gradient boosting but incorporates second-order gradients and advanced regularization, often achieving high accuracy and efficiency in competition settings [113]. Gaussian Naive Bayes applies Bayes’ theorem under an assumption of feature independence and normal distributions, making it computationally efficient and effective with limited data [114]. K-Nearest Neighbors (KNN) is an instance-based learner that classifies a point based on the majority class among its nearest neighbors, remaining straightforward to implement with minimal explicit training [115]. Finally, artificial neural networks (ANNs) utilize layers of interconnected units or “neurons” that learn complex, often non-linear mappings from inputs to outputs, excelling in tasks involving intricate feature interactions [116].
Differences among these algorithms revolve around their assumptions (e.g., independence in Naive Bayes), interpretability (e.g., Decision Trees and Random Forests), computational efficiency (e.g., LightGBM and XGBoost), and capability to capture non-linear relationships (e.g., ANNs and SVMs), making each algorithm more or less suitable depending on data characteristics and application requirements.
All algorithms were implemented using Python v3.9. For SVM, Decision Tree, Random Forest, Gaussian Naive Bayes, and K-Nearest Neighbors, the corresponding functions from the ‘sklearn’ package were utilized [117,118]. The ‘lightgbm’ and ‘xgboost’ packages were used for the LightGBM and XGBoost algorithms. LightGBM is a gradient boosting framework developed by Microsoft, utilizing tree-based learning algorithms designed for speed and efficiency [112,119]. XGBoost is a separate library providing a highly efficient implementation of gradient boosting [113,120]. For the ANN algorithm, the TensorFlow package was employed [121,122]. TensorFlow is an open-source machine learning framework developed by Google, primarily utilized for deep learning tasks.

2.6. Accuracy Assessment

Evaluating the accuracy of results and comparing them provides insights into the model’s and data’s effectiveness [123]. Evaluation is done by comparing classification results with ground truth data [53]. Table 7 details the data split used for training and testing. All data originate from the study area but were collected from various locations within it, ensuring diversity and offering a more comprehensive representation of the region. In general, a larger portion of the data was used for training to provide a sufficiently large sample size for effectively training the model. The remaining data were used for testing to evaluate model performance. For fair comparisons between models, it was critical to train them on the same training samples and evaluate them using identical test samples. We used the most commonly employed metrics in similar studies (Table 8) [108]: Overall accuracy (OA), precision, and the kappa coefficient. These metrics were key to determining the best-performing model.
The OA is a primary indicator of the likelihood of correct classification, calculated from the sum of the confusion matrix’s diagonal elements (Figure 6). Precision assesses the accuracy of each class individually. The kappa coefficient, a more complex metric, assesses observed accuracy against random chance, considering not just the diagonal but also the off-diagonal elements of the confusion matrix. The “sklearn” package facilitated the evaluation by providing functions for each of the used metrics.
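The three metrics can be computed with scikit-learn as follows (toy predictions, with 1 = Argan and 0 = Non-Argan):

```python
from sklearn.metrics import accuracy_score, precision_score, cohen_kappa_score

y_true = [1, 1, 1, 0, 0, 0, 0, 0]  # toy ground truth
y_pred = [1, 1, 0, 0, 0, 0, 0, 1]  # toy classifier output

oa = accuracy_score(y_true, y_pred)        # overall accuracy: 6/8 = 0.75
prec = precision_score(y_true, y_pred)     # precision for the Argan class
kappa = cohen_kappa_score(y_true, y_pred)  # agreement corrected for chance
print(oa, prec, kappa)
```

Kappa is lower than OA here because part of the observed agreement would be expected by chance given the class proportions.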
During the final testing phase of argan tree detection, the fully labeled reference image was used to evaluate the performance of the most robust models. The best-performing model was determined based on its ability to correctly classify this reference image, achieving the highest number of accurately classified pixels compared to other classifiers.

2.7. Argan Tree Detection Method

After processing the data, the goal was to detect argan trees in the study area for 2017/2018 and 2022/2023. The approach involved testing various scenarios to minimize mapping errors and improve accuracy. This phase relied on tabular data and included the following steps:
  • Model evaluation: machine learning models were tested using original spectral bands and derived spectral indices (single temporal data) to identify the best-performing model for argan tree detection.
  • S2 Product Levels: the suitability of Sentinel-2 Level 1C and Level 2A products for training was compared.
  • Optimal detection time: the best time of year for detecting argan trees was identified to ensure model accuracy for both periods.
  • Impact of spectral information elements: the effect of individual spectral bands and indices on detection accuracy was analyzed by training the best model on each element separately.
  • Optimal spectral information combination: the best combination of spectral bands and indices was determined by testing the following scenarios:
    • 10-m resolution spectral bands.
    • 10- and 20-m resolution spectral bands.
    • All spectral bands.
    • The top 10 most influential spectral features.
    • All spectral bands and indices.
  • Optimal temporal information combination: the impact of temporal data combinations was assessed, including:
    • Single-date data for the detection day.
    • Single-date data from both periods.
    • Multi-temporal data from one year.
    • Multi-temporal data from both periods.
  • Final model training and mapping: The final model was used to classify pixels from the 2017/2018 and 2022/2023 images of the study area into two categories: Argan and Non-Argan. This process resulted in the generation of two argan distribution maps.
After evaluating the tests, the best model was trained on the optimal data to ensure robustness. Two optimal Sentinel-2 images were then selected, one for 2017/2018 and another for 2022/2023. Each pixel in these images was classified as “contains Argan” or “does not contain Argan”, resulting in two detailed maps of argan tree distribution for the study area in each period.

2.8. Change Detection Method

To create a deforestation map of argan forests in the study area between 2017/2018 and 2022/2023, four scenarios were considered:
  • Detected in both 2017 and 2022.
  • Not detected in either year.
  • Detected in 2017 but not in 2022 (deforestation).
  • Detected in 2022 but not in 2017 (regeneration or new plantations).
The deforestation map was generated by comparing each pixel in the 2017/2018 map to its corresponding location in the 2022/2023 map using a patch-based approach. Due to the 10-m resolution of Sentinel-2, individual argan trees could not be identified, necessitating the use of spatial kernels to improve classification accuracy. Five kernel sizes were evaluated to balance detail preservation and noise reduction. Smaller kernels provide finer details but are prone to increased classification errors. Larger kernels smooth classification results, reducing noise but potentially overlooking small-scale deforestation events [124]. For sparse and heterogeneous forests like argan, medium to large kernels are generally preferred to enhance stability while retaining relevant patterns [125]. The tested kernel sizes were:
  • Patch 1 × 1 (Pixel-based): Direct comparison of individual pixels between the two maps.
  • Patch 3 × 3: Each pixel in one map was compared to all pixels within a 3 × 3 kernel centered on the corresponding pixel in the other map.
  • Patch 5 × 5, 7 × 7, 9 × 9: Similar to the 3 × 3 kernel but using larger sizes.
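One way to realize the kernel comparison above is with a maximum filter: a pixel that is argan in one map counts as matched if any pixel inside the k × k window at the same location in the other map is argan. This is a sketch of one plausible reading of the patch comparison, not the authors’ exact implementation; with k = 1 it reduces to the pixel-based case.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def change_map(map_a, map_b, k=3):
    """Patch-based change detection between two binary argan maps.

    A pixel that is argan in map_a counts as still present in map_b if any
    pixel inside the k x k window around the same location in map_b is argan
    (one possible reading of the kernel comparison; k=1 is pixel-based).
    """
    a = map_a.astype(bool)
    b = map_b.astype(bool)
    a_near = maximum_filter(map_a.astype(np.uint8), size=k).astype(bool)
    b_near = maximum_filter(map_b.astype(np.uint8), size=k).astype(bool)

    both = a & b_near       # detected in both periods
    lost = a & ~b_near      # deforestation
    gained = b & ~a_near    # regeneration or new plantations
    return both, lost, gained
```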
After generating the deforestation maps, the deforestation rate of argan forests was calculated for each kernel size (Equation (1)).
rate = Argan_2017 / (Argan_17_22 + Argan_2017 + Argan_2022),
where Argan_17_22 is the number of pixels classified as argan trees in both 2017 and 2022, Argan_2017 is the number of pixels classified as argan trees in 2017 but not in 2022 (deforestation) and Argan_2022 is the number of pixels classified as argan trees in 2022 but not in 2017 (regeneration).
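Under the assumption that the rate is the deforested fraction of all argan pixels observed in either period, Equation (1) can be evaluated directly from the three pixel counts; the counts below are hypothetical.

```python
def deforestation_rate(argan_17_22, argan_2017, argan_2022):
    """Deforested fraction of argan pixels observed in either period.

    argan_17_22: pixels classified as argan in both 2017 and 2022
    argan_2017:  argan in 2017 only (deforestation)
    argan_2022:  argan in 2022 only (regeneration)
    """
    return argan_2017 / (argan_17_22 + argan_2017 + argan_2022)

# e.g. 50 deforested pixels among 1900 + 50 + 50 observed argan pixels
print(f"{deforestation_rate(1900, 50, 50):.2%}")  # 2.50%
```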
The optimal patch size was determined based on:
  • Visual assessment of classification accuracy and deforestation pattern preservation.
  • Comparison with deforestation rates from previous studies and official reports.
These methodological choices were crucial in ensuring a reliable estimation of deforestation, despite the limitations of Sentinel-2 imagery and the absence of ground truth data for 2017/2018.

3. Results

3.1. Statistics and Analysis

3.1.1. Optimizing Models and S2 Data Level to Identify the Best Time for Argan Tree Detection

In the initial phase, all selected machine learning models were trained on Sentinel-2 bands (both Levels 1C and 2A) and remote sensing indices (single-temporal data). Table 9 and Table 10 display the results obtained. The first observation is that OA and kappa rise and fall together, making it sufficient to rely on one of them. The results clearly demonstrate the superiority of DT, RF, LightGBM, and XGBoost over the other models. LightGBM, in particular, showed outstanding results, with an OA of approximately 98.0% for both levels in 2017/2018 and, in 2022/2023, 98.0% for Level 2A and 99.3% for Level 1C. The Gaussian Naive Bayes model yielded the lowest OA. Based on these results, LightGBM will be the primary model used for argan tree detection. Regarding Levels 1C and 2A, the results are very close. Specifically, for LightGBM, the results are equal for 2017/2018, with a slight advantage for Level 1C over Level 2A in 2022/2023. Consequently, Level 1C data will be used in the remaining stages of this study.
Having selected the most suitable machine learning model and data level, the focus now shifts to determining the time that yields the best results. Figure 7 presents the classification outcomes for argan detection, assessed using OA, with the LightGBM model and single-temporal Level 1C data. The results demonstrate notable variability in performance across different dates, highlighting the importance of selecting the optimal time for argan tree detection. For the 2017/2018 period, the best results were achieved in early October, early November, and late February, whereas for 2022/2023, the highest OA was observed in late December. To ensure consistency, the selected detection time for both years had to fall within a period that yielded strong results for each. Based on this criterion, late July was chosen, specifically 24 July 2018 and 18 July 2023.

3.1.2. Impact of Spectral Information Elements

Figure 8 shows the results of argan classification using individual spectral information elements (Sentinel-2 bands and spectral indices) with single-temporal data. The results varied across different days, with the best outcome for each element selected (OA between 0.631 and 0.855). While the high-resolution Sentinel-2 bands (10 m) generally showed greater importance, all elements provided acceptable results (OA > 60%). Notably, despite its lower resolution (60 m), B1 still proved relevant for detection. The lowest performance was observed with spectral band B10, which is absent from Level 2A products; this also explains the minimal differences observed between Level 1C and Level 2A data. These findings suggest that the choice of spectral information is somewhat flexible, as most elements lead to similar detection accuracy, supporting the idea that combining various spectral data may increase the overall model performance.

3.1.3. Optimal Spectral Information Combination

After studying the importance of each spectral information element individually, it is essential to determine the combination of these elements that yields the best results for argan detection. It should be noted that combination 4 consists of the top 10 most influential spectral features for argan detection. Based on the results in Figure 8, these elements are: B1, B2, B3, B4, B6, B8, ARVI, NDVI, PVI, and SAVI. Table 11 shows the results of all combinations, where combination 5 outperforms the others, confirming the importance of all elements of spectral information. Therefore, we will rely on all Sentinel-2 bands in addition to the remote sensing indices.
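For reference, three of the indices named above follow standard formulations that can be computed directly from the resampled bands; PVI is omitted here because it additionally requires soil-line coefficients. The band-to-variable mapping (B2 = blue, B4 = red, B8 = NIR), the SAVI soil factor L = 0.5, and the small epsilon guard are assumptions of this sketch.

```python
import numpy as np

def vegetation_indices(blue, red, nir, L=0.5, eps=1e-9):
    """Standard NDVI, SAVI, and ARVI from reflectance arrays
    (assumed mapping: B2 = blue, B4 = red, B8 = NIR)."""
    ndvi = (nir - red) / (nir + red + eps)
    savi = (nir - red) / (nir + red + L + eps) * (1 + L)
    rb = 2 * red - blue  # atmospherically resistant red term used by ARVI
    arvi = (nir - rb) / (nir + rb + eps)
    return ndvi, savi, arvi

ndvi, savi, arvi = vegetation_indices(
    np.array([0.05]), np.array([0.10]), np.array([0.50]))
print(ndvi, savi, arvi)
```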

3.1.4. Optimal Temporal Information Combination

To select the optimal combination of multi-temporal information, the four groups shown in Table 12 were considered and trained separately with the LightGBM algorithm. The models were validated in a real context using the reference image. Unlike all previous results, OA and Kappa do not rise and fall simultaneously, making it insufficient to rely on only one of them. Since the main objective of these models was to detect argan trees, the Argan Precision metric was selected as the deciding factor in choosing which group to use for the remainder of the study. With this in mind, Group C (Table 12) was selected as it showed the highest performance compared to the other groups. Therefore, we will employ two models: the first, trained with 2017/2018 multi-temporal data, will generate an argan tree map for 2017/2018, and the second, trained with 2022/2023 multi-temporal data, will generate a map for the 2022/2023 period.
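A minimal sketch of how the multi-temporal input for such a model can be assembled, assuming one tabular feature block per acquisition date (the sizes below are hypothetical): each pixel’s per-date feature vectors are concatenated so the classifier sees the full time series.

```python
import numpy as np

# Hypothetical per-date feature tables: each is (n_pixels, n_features),
# e.g. spectral bands + indices for one acquisition date.
rng = np.random.default_rng(1)
dates = [rng.normal(size=(100, 16)) for _ in range(6)]  # 6 dates in one year

# Multi-temporal input: concatenate the dates feature-wise per pixel.
X_multi = np.hstack(dates)
print(X_multi.shape)  # (100, 96)
```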

3.2. Argan Tree Detection Results

Previous experiments (Section 3.1) helped reduce errors and map the argan areas as accurately as possible. Figure 9 shows the resulting maps of the argan zones, one for the year 2017/2018 and another for 2022/2023. Both maps efficiently show the presence of argan trees. The clear differences observed visually confirm that changes have occurred in the study area between the two periods. Figure 10 further validates the quality of the results, showing their strong alignment with ground truth data while also acknowledging the presence of some errors.

3.3. Deforestation Maps of the Argan Forest

Figure 11 illustrates the deforestation map for the 9 × 9 patch. The smaller patches (1 × 1 and 3 × 3) generated deforestation maps with fine details but a significant amount of noise, whereas the larger patches (7 × 7 and 9 × 9) yielded deforestation maps that were more contextually enriched, exhibiting less noise but fewer details. In addition, Table 13 presents the deforestation rate estimated for the argan forests in the study area (Admine Forest), at 2.86% ± 0.07%.

4. Discussion

This study aimed to map the Admine argan forest located in the Souss region of Morocco for the periods 2017/2018 and 2022/2023, calculate the deforestation rate over this six-year period, and provide specialists with accurate insights to inform mitigation strategies for this UNESCO World Biosphere Reserve habitat. Using remote sensing and machine learning tools, we addressed the challenge of limited argan-specific data through efficient algorithms capable of analyzing large datasets with minimal computational requirements. Additionally, we sought to enhance the precision of these tools by conducting extensive experiments to identify the optimal scenarios for mapping and deforestation analysis.
The experiments provided valuable insights into the performance of different machine-learning algorithms, data levels, and spectral information. The Decision Tree, Random Forest, and XGBoost machine learning algorithms consistently performed well, in line with prior studies [18], but LightGBM was selected due to its outstanding results, with OAs greater than 98.0% in all cases. Additionally, the comparison between Sentinel-2 Levels 1C and 2A demonstrated minimal differences, primarily attributed to the negligible contribution of Band 10, the key differentiator between these data levels. These results suggest that both data levels are suitable for this task, offering flexibility in data selection for future studies.
The superior classification results for 2022/2023 compared to 2017/2018 can be attributed to the lack of available ground truth data for 2017/2018. Consequently, the 2022/2023 dataset is inherently more reliable than the 2017/2018 dataset.
Experiments also highlighted the importance of selecting optimal temporal windows for argan tree detection (Figure 7). Variations in classification results were observed depending on the observation date, underscoring the need for a precise temporal analysis when mapping vegetation in dynamic ecosystems [126,127]. An analysis of classification performance throughout the year, as illustrated in Figure 7, reveals some periods with higher accuracy; however, the difference between the average OA for each season is minimal (Table 14), suggesting that seasonal variations have a limited impact on classification performance and argan tree detection accuracy. This can be attributed to the severe seven-year drought affecting Morocco, which has impacted the study area [128], together with decreasing and seasonally irregular rainfall (Figure 12) and the relatively stable temperature throughout the year, which results in limited seasonal fluctuations in the region [129]. On the other hand, the experiments demonstrated that combining spectral and temporal data significantly improves OA (Table 12). Specifically, models trained on time-series data consistently outperformed those trained on single-date data, indicating that temporal information helps the models better learn the characteristics of argan trees, ultimately leading to improved classification accuracy.
The argan tree is distinguished by its medium-density distribution, adaptation to semi-arid regions, and typical growth in areas surrounded by bare soil. Given these conditions, it was initially expected that incorporating additional spectral information, such as spectral indices, could enhance accuracy—provided an appropriate resampling method was applied. In this study, all Sentinel-2 bands were resampled to a 10-m resolution. While resampling higher-resolution bands does not introduce new information to lower-resolution data, the interpolation method used minimized data distortion, preserved spatial patterns, and enabled the effective use of certain indices (e.g., ARVI and SCI), which combine bands of different native resolutions (10 m, 20 m, and 60 m). Standardizing spatial resolution across all input features benefits machine learning models, as they tend to perform better when all variables share the same resolution, reducing potential biases during training and classification. While resampling inherently has limitations, in this study, it proved advantageous by enhancing model consistency, improving classification performance, and enabling a more effective integration of spectral information.
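As a minimal sketch of such resampling (the interpolation order is an assumption, not the exact method used in the study), a 20-m band can be brought onto the 10-m grid with bilinear interpolation:

```python
import numpy as np
from scipy.ndimage import zoom

# Hypothetical 20-m band (e.g. B11) upsampled by a factor of 2 to match the
# 10-m grid; order=1 is bilinear interpolation, which limits distortion
# while preserving spatial patterns.
band_20m = np.arange(9, dtype=float).reshape(3, 3)
band_10m = zoom(band_20m, zoom=2, order=1)
print(band_10m.shape)  # (6, 6)
```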
In this study, OA was assessed using tabular test data and image-based test data, which provide a more realistic representation of real-world complexities. This resulted in a clear gap between the OA achieved with tabular data (100%) and that obtained with image-based data (85%). This discrepancy is primarily due to key classification challenges inherent in image data, including tree edge overlap, which leads to multiple land cover types within a single pixel, the loss of spatial contextual information when data are represented in tabular form, and the impact of mixed pixels, which complicates classification. Nevertheless, achieving perfect accuracy with tabular data does not imply that the task was easy or unreliable; rather, it reflects the quality of data preprocessing and the selection of relevant features. Since the models were trained on tabular data, it is natural for them to perform better on this format than on image-based data. Furthermore, applying the model to an entire scene—characterized by its vastness, class diversity, and factors such as terrain and spatial context—naturally leads to a decrease in accuracy compared to the training phase. This is expected in binary classification, where Argan tree samples were carefully selected to simulate their spectral characteristics.
Pixel-based classification has significant limitations in argan tree detection, leading to errors in deforestation mapping. Factors already discussed, such as the 10-m resolution of Sentinel-2, the presence of mixed pixels, and daily spectral variations, contributed to the classification uncertainties. In an attempt to resolve these problems, a patch-based approach was applied.
Since the study area is located in the plain of Souss argan orchards, which has an average density of 10 trees/ha [6], and since the argan tree may be shrubby or reach up to 10 m, occasionally 20 m [130], we conclude that an argan pixel can be located in a window larger than 3 × 3 pixels (such as a 5-, 7-, or 9-pixel square). In this case, the most appropriate patch size would be 7 × 7 or even 9 × 9 (close to one hectare) to ensure a high probability of argan presence. However, determining the optimal patch size (for deforestation detection) was particularly challenging due to the lack of reference data for 2017/2018, which prevented direct numerical validation. Therefore, the two criteria described in Section 2.8 were adopted to evaluate patch performance.
The visual assessment (first criterion) revealed that 1 × 1 and 3 × 3 were the least effective patch sizes, producing excessive noise and classification errors that made them unreliable. In Figure 13 (Study area—Region 1), which shows a newly constructed road in 2022/2023 that did not exist in 2017/2018, the 1 × 1 and 3 × 3 patches captured this change most clearly. However, in Figure 14 (Study area—Region 2), a zone with no significant changes between 2017/2018 and 2022/2023, the 7 × 7 and 9 × 9 patches best preserved this stability, with 9 × 9 maintaining more relevant spatial details.
The second criterion involved comparing the deforestation rates obtained for each patch size with official reports and previous studies conducted in the same area using Sentinel-2 data and deep learning methodologies, which estimated deforestation rates between 2% and 5% [20,32,131,132]. For instance, a study employing Convolutional Neural Networks (CNN) with 32 × 32 patches estimated a deforestation rate of 2.56% between 2015 and 2020 [20], while another study using U-Net with 16 × 16 and 32 × 32 patches reported a 2.59% deforestation rate between 2015 and 2022 [32]. The 9 × 9 patch aligned most closely with this range, producing results that balanced noise reduction and fine-scale deforestation detection. The 1 × 1 and 3 × 3 patches significantly overestimated deforestation, making them unreliable. Therefore, the 9 × 9 patch size provided the best overall performance, preserving detail while maintaining classification stability (Figure 15). This study highlights the importance of optimizing patch selection strategies to enhance deforestation detection, particularly in regions where ground truth data is limited.
Despite the progress made, several challenges remain. The first arises from the model’s occasional difficulty in distinguishing between argan trees and soil, leading to errors in argan detection. Argan trees, native to the arid and semi-arid regions of southwestern Morocco, have small, oval, leathery leaves spaced to conserve water in harsh conditions. The result is foliage that is less dense than that of trees from temperate or tropical climates, which tend to have larger and denser leaves. In addition, the gnarled, twisted, and widely spreading branches of the argan contribute to this effect. The low canopy density allows sunlight to reach the ground (Figure 2), which makes the spectral signatures of the argan tree and the ground similar and challenging for machine learning models to differentiate. The model’s ability to accurately detect argan trees could be improved by using images with higher spatial resolution than those provided by Sentinel-2, ideally finer than one meter, such as drone or airborne data. Alternatively, experimenting with other models, especially deep learning models, may yield better results.
The second challenge lies in the phenomenon of “same spectra but different objects, same objects but different spectra”, which limits classification accuracy when relying on single-date data [21,25]. This study mitigated this issue by employing multi-temporal data, demonstrating the value of temporal information in improving classification results. While challenges remain, our findings provide a solid foundation for further advancements in the detection and management of argan forests.

5. Conclusions

This study demonstrated the feasibility of using Sentinel-2 imagery and machine learning tools for accurate argan forest mapping and deforestation analysis. Extensive experiments were conducted to identify the best machine-learning algorithms, data configurations, and methods for this task, achieving highly accurate results (OA > 85%). This study underscores the critical role of artificial intelligence and technology in offering precise tools and data for decision-makers. By enabling informed and timely interventions, these tools support effective argan forest conservation efforts and help minimize potential ecological impacts. However, further research is needed to address current limitations, such as spectral confusion and the challenges of single-date classification. Future work will focus on improving data quality and exploring new approaches, such as deep learning models and higher-resolution imagery, to enhance the accuracy of both argan tree and deforestation maps.

Author Contributions

Conceptualization, Y.K. and S.I.; Data curation, Y.K.; Formal analysis, S.I., S.S., A.M. and M.A.; Funding acquisition, S.I.; Investigation, Y.K., S.I., S.S., A.M. and M.A.; Methodology, Y.K., S.I., S.S., M.A. and A.M.; Project administration, S.I.; Resources, S.I.; Software, Y.K. and S.S.; Supervision, S.I. and M.A.; Validation, Y.K.; Writing—original draft, Y.K., S.I. and M.A.; Writing—review and editing, S.I. and M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Higher Education, Scientific Research and Innovation, the Digital Development Agency (DDA) and the CNRST of Morocco (ALKHAWARIZMI/2020/29).

Data Availability Statement

The data presented in this study are available upon request from the first author. The data are not publicly available due to privacy issues.

Acknowledgments

We appreciate the valuable feedback from the four anonymous reviewers, which has greatly improved this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. El Mousadik, A.; Petit, R.J. High Level of Genetic Differentiation for Allelic Richness among Populations of the Argan Tree [Argania Spinosa (L.) Skeels] Endemic to Morocco. Theor. Appl. Genet. 1996, 92, 832–839. [Google Scholar] [CrossRef] [PubMed]
  2. Bouzoubaâ, Z.; El Mousadik, A. Effet de La Température, Du Déficit Hydrique et de La Salinité Sur La Germination de l’Arganier, Argania Spinosa (L.) Skeels. Acta Bot. Gallica 2003, 150, 321–330. [Google Scholar] [CrossRef]
  3. Santoro, A.; Ongoma, V.; Ait el Kadi, M.; Piras, F.; Fiore, B.; Bazzurro, A.; Romano, F.; Meskour, B.; Hssaisoune, M.; Labbaci, A.; et al. Innovation of Argan (Argania Spinosa (L.) Skeels) Products and Byproducts for Sustainable Development of Rural Communities in Morocco. A Systematic Literature Review. Biodivers. Conserv. 2023, 1–29. [Google Scholar] [CrossRef]
  4. Morocco Argan Oil Market Size & Outlook, 2030. Available online: https://www.grandviewresearch.com/horizon/outlook/argan-oil-market/morocco (accessed on 21 February 2025).
  5. Argan Oil Market Size & Outlook, 2030. Available online: https://www.grandviewresearch.com/industry-analysis/argan-oil-market (accessed on 21 February 2025).
  6. Chakhchar, A.; Ben Salah, I.; El Kharrassi, Y.; Filali-Maltouf, A.; El Modafar, C.; Lamaoui, M. Agro-Fruit-Forest Systems Based on Argan Tree in Morocco: A Review of Recent Results. Front. Plant Sci. 2022, 12, 783615. [Google Scholar] [CrossRef]
  7. Afi, C.; Telmoudi, M.; Labbassi, S.; Chabbi, N.; Hallam, J.; Msanda, F.; Ait Aabd, N. Assessing the Impact of Aridity on Argan Trees in Morocco: Implications for Conservation in a Changing Climate. Resources 2024, 13, 135. [Google Scholar] [CrossRef]
  8. le Polain de Waroux, Y.; Lambin, E.F. Monitoring Degradation in Arid and Semi-Arid Forests and Woodlands: The Case of the Argan Woodlands (Morocco). Appl. Geogr. 2012, 32, 777–786. [Google Scholar] [CrossRef]
  9. Laarıbya, S.; Alaouı, A.; Ayan, S.; Benabou, A. Spatial Analysis of the Degraded Forest Areas in Idmine Forest-Morocco Using Geoscience Capabilities. Kastamonu Univ. J. For. Fac. 2021, 21, 1–11. [Google Scholar] [CrossRef]
  10. Karsenty, A.; Pirard, R. Changement climatique: Faut-il récompenser la «déforestation évitée»? Nat. Sci. Sociétés 2007, 15, 357–369. [Google Scholar] [CrossRef]
  11. Krichi, H. Gouvernance Locale et Résilience: Cas de L’arganeraie au Maroc. Master’s Thesis, Université de Sherbrooke, Sherbrooke, QC, Canada, 2017. [Google Scholar]
  12. Zhao, X.; Dupont, L.; Cheddadi, R.; Koelling, M.; Reddad, H.; Groeneveld, J.; Zohra Ain-Lhout, F.; Bouimetarhan, I. Recent Climatic and Anthropogenic Impacts on Endemic Species in Southwestern Morocco. Quat. Sci. Rev. 2019, 221, 105889. [Google Scholar] [CrossRef]
  13. Chaves, M.E.D.; Soares, A.R.; Sanches, I.D.; Fronza, J.G. CBERS Data Cubes for Land Use and Land Cover Mapping in the Brazilian Cerrado Agricultural Belt. Int. J. Remote Sens. 2021, 42, 8398–8432. [Google Scholar] [CrossRef]
  14. Fassnacht, F.E.; Latifi, H.; Stereńczak, K.; Modzelewska, A.; Lefsky, M.; Waser, L.T.; Straub, C.; Ghosh, A. Review of Studies on Tree Species Classification from Remotely Sensed Data. Remote Sens. Environ. 2016, 186, 64–87. [Google Scholar] [CrossRef]
  15. Chaves, M.E.D.; Picoli, M.C.A.; Sanches, I.D. Recent Applications of Landsat 8/OLI and Sentinel-2/MSI for Land Use and Land Cover Mapping: A Systematic Review. Remote Sens. 2020, 12, 3062. [Google Scholar] [CrossRef]
  16. Szostak, M.; Hawryło, P.; Piela, D. Using of Sentinel-2 Images for Automation of the Forest Succession Detection. Eur. J. Remote Sens. 2018, 51, 142–149. [Google Scholar] [CrossRef]
  17. Saidi, S.; Idbraim, S.; Karmoude, Y.; Masse, A.; Arbelo, M. Deep-Learning for Change Detection Using Multi-Modal Fusion of Remote Sensing Images: A Review. Remote Sens. 2024, 16, 3852. [Google Scholar] [CrossRef]
  18. Phiri, D.; Simwanda, M.; Salekin, S.; Nyirenda, V.R.; Murayama, Y.; Ranagalage, M. Sentinel-2 Data for Land Cover/Use Mapping: A Review. Remote Sens. 2020, 12, 2291. [Google Scholar] [CrossRef]
  19. Nzimande, N.; Mutanga, O.; Kiala, Z.; Sibanda, M. Mapping the Spatial Distribution of the Yellowwood Tree (Podocarpus Henkelii) in the Weza-Ngele Forest Using the Newly Launched Sentinel-2 Multispectral Imager Data. S. Afr. Geogr. J. 2021, 103, 204–222. [Google Scholar] [CrossRef]
  20. Idbraim, S.; Mimouni, Z.; Salah, M.B.; Dahbi, M.R. CNN Model for Change Detection of Argania Deforestation from Sentinel-2 Remote Sensing Imagery. In Proceedings of the Innovations in Smart Cities Applications Volume 6; Ben Ahmed, M., Boudhir, A.A., Santos, D., Dionisio, R., Benaya, N., Eds.; Springer International Publishing: Cham, Switzerland, 2023; pp. 716–725. [Google Scholar]
  21. Yi, Z.; Jia, L.; Chen, Q. Crop Classification Using Multi-Temporal Sentinel-2 Data in the Shiyang River Basin of China. Remote Sens. 2020, 12, 4052. [Google Scholar] [CrossRef]
  22. Feng, S.; Zhao, J.; Liu, T.; Zhang, H.; Zhang, Z.; Guo, X. Crop Type Identification and Mapping Using Machine Learning Algorithms and Sentinel-2 Time Series Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 3295–3306. [Google Scholar] [CrossRef]
  23. Immitzer, M.; Neuwirth, M.; Böck, S.; Brenner, H.; Vuolo, F.; Atzberger, C. Optimal Input Features for Tree Species Classification in Central Europe Based on Multi-Temporal Sentinel-2 Data. Remote Sens. 2019, 11, 2599. [Google Scholar] [CrossRef]
  24. Meng, S.; Zhong, Y.; Luo, C.; Hu, X.; Wang, X.; Huang, S. Optimal Temporal Window Selection for Winter Wheat and Rapeseed Mapping with Sentinel-2 Images: A Case Study of Zhongxiang in China. Remote Sens. 2020, 12, 226. [Google Scholar] [CrossRef]
  25. Conese, C.; Maselli, F. Use of Multitemporal Information to Improve Classification Performance of TM Scenes in Complex Terrain. ISPRS J. Photogramm. Remote Sens. 1991, 46, 187–197. [Google Scholar] [CrossRef]
  26. Peña, M.A.; Brenning, A. Assessing Fruit-Tree Crop Classification from Landsat-8 Time Series for the Maipo Valley, Chile. Remote Sens. Environ. 2015, 171, 234–244. [Google Scholar] [CrossRef]
  27. Cai, Y.; Guan, K.; Peng, J.; Wang, S.; Seifert, C.; Wardlow, B.; Li, Z. A High-Performance and in-Season Classification System of Field-Level Crop Types Using Time-Series Landsat Data and a Machine Learning Approach. Remote Sens. Environ. 2018, 210, 35–47. [Google Scholar] [CrossRef]
  28. Heckel, K.; Urban, M.; Schratz, P.; Mahecha, M.D.; Schmullius, C. Predicting Forest Cover in Distinct Ecosystems: The Potential of Multi-Source Sentinel-1 and -2 Data Fusion. Remote Sens. 2020, 12, 302. [Google Scholar] [CrossRef]
  29. Wu, M.; Yang, C.; Song, X.; Hoffmann, W.C.; Huang, W.; Niu, Z.; Wang, C.; Li, W.; Yu, B. Monitoring Cotton Root Rot by Synthetic Sentinel-2 NDVI Time Series Using Improved Spatial and Temporal Data Fusion. Sci. Rep. 2018, 8, 2016. [Google Scholar] [CrossRef]
  30. Miranda, E.; Mutiara, A.B.; Ernastuti; Wibowo, W.C. Classification of Land Cover from Sentinel-2 Imagery Using Supervised Classification Technique (Preliminary Study). In Proceedings of the 2018 International Conference on Information Management and Technology (ICIMTech), Jakarta, Indonesia, 3–5 September 2018; pp. 69–74. [Google Scholar]
  31. Sekertekin, A.; Marangoz, A.M.; Akcin, H. Pixel-based classification analysis of land use land cover using Sentinel-2 and Landsat-8 data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-4-W6, 91–93. [Google Scholar] [CrossRef]
  32. Idbraim, S.; Bouhsine, T.; Dahbi, M.R.; Masse, A.; Arbelo, M. Argania Forest Change Detection from Sentinel-2 Satellite Images Using U-Net Architectures. In Proceedings of the International Conference on Advanced Intelligent Systems for Sustainable Development; Kacprzyk, J., Ezziyyani, M., Balas, V.E., Eds.; Springer Nature Switzerland: Cham, Switzerland, 2023; pp. 174–184. [Google Scholar]
  33. Chen, G.; Hay, G.J.; Carvalho, L.M.T.; Wulder, M.A. Object-Based Change Detection. Int. J. Remote Sens. 2012, 33, 4434–4457. [Google Scholar] [CrossRef]
  34. Chen, Y.; Ming, D.; Lv, X. Superpixel Based Land Cover Classification of VHR Satellite Image Combining Multi-Scale CNN and Scale Parameter Estimation. Earth Sci. Inform. 2019, 12, 341–363. [Google Scholar] [CrossRef]
  35. Hussain, M.; Chen, D.; Cheng, A.; Wei, H.; Stanley, D. Change Detection from Remotely Sensed Images: From Pixel-Based to Object-Based Approaches. ISPRS J. Photogramm. Remote Sens. 2013, 80, 91–106. [Google Scholar] [CrossRef]
  36. Segal-Rozenhaimer, M.; Li, A.; Das, K.; Chirayath, V. Cloud Detection Algorithm for Multi-Modal Satellite Imagery Using Convolutional Neural-Networks (CNN). Remote Sens. Environ. 2020, 237, 111446. [Google Scholar] [CrossRef]
  37. Zhang, W.; Lu, X. The Spectral-Spatial Joint Learning for Change Detection in Multispectral Imagery. Remote Sens. 2019, 11, 240. [Google Scholar] [CrossRef]
  38. Fang, B.; Pan, L.; Kou, R. Dual Learning-Based Siamese Framework for Change Detection Using Bi-Temporal VHR Optical Remote Sensing Images. Remote Sens. 2019, 11, 1292. [Google Scholar] [CrossRef]
  39. Debats, S.R.; Luo, D.; Estes, L.D.; Fuchs, T.J.; Caylor, K.K. A Generalized Computer Vision Approach to Mapping Crop Fields in Heterogeneous Agricultural Landscapes. Remote Sens. Environ. 2016, 179, 210–221. [Google Scholar] [CrossRef]
  40. Cheng, G.; Huang, Y.; Li, X.; Lyu, S.; Xu, Z.; Zhao, H.; Zhao, Q.; Xiang, S. Change Detection Methods for Remote Sensing in the Last Decade: A Comprehensive Review. Remote Sens. 2024, 16, 2355. [Google Scholar] [CrossRef]
  41. Parelius, E.J. A Review of Deep-Learning Methods for Change Detection in Multispectral Remote Sensing Images. Remote Sens. 2023, 15, 2092. [Google Scholar] [CrossRef]
  42. Lu, D.; Mausel, P.; Brondízio, E.; Moran, E. Change Detection Techniques. Int. J. Remote Sens. 2004, 25, 2365–2401. [Google Scholar] [CrossRef]
  43. Liu, S.; Marinelli, D.; Bruzzone, L.; Bovolo, F. A Review of Change Detection in Multitemporal Hyperspectral Images: Current Techniques, Applications, and Challenges. IEEE Geosci. Remote Sens. Mag. 2019, 7, 140–158. [Google Scholar] [CrossRef]
  44. Zhao, J.; Chang, Y.; Yang, J.; Niu, Y.; Lu, Z.; Li, P. A Novel Change Detection Method Based on Statistical Distribution Characteristics Using Multi-Temporal PolSAR Data. Sensors 2020, 20, 1508. [Google Scholar] [CrossRef]
  45. Bhandari, A.K.; Kumar, A.; Singh, G.K. Feature Extraction Using Normalized Difference Vegetation Index (NDVI): A Case Study of Jabalpur City. Procedia Technol. 2012, 6, 612–621. [Google Scholar] [CrossRef]
  46. Xie, G.; Niculescu, S. Mapping and Monitoring of Land Cover/Land Use (LCLU) Changes in the Crozon Peninsula (Brittany, France) from 2007 to 2018 by Machine Learning Algorithms (Support Vector Machine, Random Forest, and Convolutional Neural Network) and by Post-Classification Comparison (PCC). Remote Sens. 2021, 13, 3899. [Google Scholar] [CrossRef]
  47. Ienco, D.; Gaetano, R.; Dupaquier, C.; Maurel, P. Land Cover Classification via Multitemporal Spatial Data by Deep Recurrent Neural Networks. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1685–1689. [Google Scholar] [CrossRef]
  48. Gallwey, J.; Robiati, C.; Coggan, J.; Vogt, D.; Eyre, M. A Sentinel-2 Based Multispectral Convolutional Neural Network for Detecting Artisanal Small-Scale Mining in Ghana: Applying Deep Learning to Shallow Mining. Remote Sens. Environ. 2020, 248, 111970. [Google Scholar] [CrossRef]
  49. Peng, D.; Zhang, Y.; Guan, H. End-to-End Change Detection for High Resolution Satellite Images Using Improved UNet++. Remote Sens. 2019, 11, 1382. [Google Scholar] [CrossRef]
  50. Brovelli, M.A.; Sun, Y.; Yordanov, V. Monitoring Forest Change in the Amazon Using Multi-Temporal Remote Sensing Data and Machine Learning Classification on Google Earth Engine. ISPRS Int. J. Geo-Inf. 2020, 9, 580. [Google Scholar] [CrossRef]
  51. Kharki, A.E.; Mechbouh, J.; Wahbi, M.; Alaoui, O.Y.; Boulaassal, H.; Maatouk, M.; Kharki, O.E. Optimizing SVM for Argan Tree Classification Using Sentinel-2 Data: A Case Study in the Sous-Massa Region, Morocco. Rev. Teledetec. 2025, 65, 22060. [Google Scholar] [CrossRef]
  52. Moumni, A.; Belghazi, T.; Maksoudi, B.; Lahrouni, A. Argan Tree (Argania Spinosa (L.) Skeels) Mapping Based on Multisensor Fusion of Satellite Imagery in Essaouira Province, Morocco. J. Sens. 2021, 2021, e6679914. [Google Scholar] [CrossRef]
  53. Sebbar, B.; Moumni, A.; Lahrouni, A.; Chehbouni, A.; Belghazi, T.; Maksoudi, B. Remotely Sensed Phenology Monitoring and Land-Cover Classification for the Localization of the Endemic Argan Tree in the Southern-West of Morocco. J. Sustain. For. 2022, 41, 1014–1028. [Google Scholar] [CrossRef]
  54. Elmoussaoui, E.; Moumni, A.; Lahrouni, A. Cartography of Moroccan Argan Tree Using Combined Optical and SAR Imagery Integrated with Digital Elevation Model. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, XLVI-4-W5-2021, 211–217. [Google Scholar] [CrossRef]
  55. Elhassouny, A.; Idbraim, S.; Bekkari, A.; Mammass, D.; Ducrot, D. Change Detection by Fusion/Contextual Classification Based on a Hybrid DSmT Model and ICM with Constraints. Int. J. Comput. Appl. 2011, 35, 28–40. [Google Scholar]
  56. Morton, J.F.; Voss, G.L. The Argan Tree (Argania Sideroxylon, Sapotaceae), a Desert Source of Edible Oil. Econ. Bot. 1987, 41, 221–233. [Google Scholar] [CrossRef]
  57. Kirchhoff, M.; Marzolff, I.; Stephan, R.; Seeger, M.; Aït Hssaine, A.; Ries, J.B. Monitoring Dryland Trees with Remote Sensing. Part B: Combining Tree Cover and Plant Architecture Data to Assess Degradation and Recovery of Argania Spinosa Woodlands of South Morocco. Front. Environ. Sci. 2022, 10, 896703. [Google Scholar] [CrossRef]
  58. Gascon, F.; Bouzinac, C.; Thépaut, O.; Jung, M.; Francesconi, B.; Louis, J.; Lonjou, V.; Lafrance, B.; Massera, S.; Gaudel-Vacaresse, A.; et al. Copernicus Sentinel-2A Calibration and Products Validation Status. Remote Sens. 2017, 9, 584. [Google Scholar] [CrossRef]
  59. Sentinel-2 MSI: MultiSpectral Instrument, Level-1C [Deprecated]|Earth Engine Data Catalog. Available online: https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2 (accessed on 14 April 2024).
  60. Francini, S.; McRoberts, R.E.; D’Amico, G.; Coops, N.C.; Hermosilla, T.; White, J.C.; Wulder, M.A.; Marchetti, M.; Mugnozza, G.S.; Chirici, G. An Open Science and Open Data Approach for the Statistically Robust Estimation of Forest Disturbance Areas. Int. J. Appl. Earth Obs. Geoinf. 2022, 106, 102663. [Google Scholar] [CrossRef]
  61. Gomes, V.C.F.; Queiroz, G.R.; Ferreira, K.R. An Overview of Platforms for Big Earth Observation Data Management and Analysis. Remote Sens. 2020, 12, 1253. [Google Scholar] [CrossRef]
  62. Sentinel Hub. Available online: https://www.sentinel-hub.com/about/ (accessed on 19 October 2024).
  63. Dhu, T.; Giuliani, G.; Juárez, J.; Kavvada, A.; Killough, B.; Merodio, P.; Minchin, S.; Ramage, S. National Open Data Cubes and Their Contribution to Country-Level Development Policies and Practices. Data 2019, 4, 144. [Google Scholar] [CrossRef]
  64. Open Data Cube. Available online: https://www.opendatacube.org/overview (accessed on 19 October 2024).
  65. Ghosh, A.; Rambaud, P.; Finegold, Y.; Jonckheere, I.; Martin-Ortega, P.; Jalal, R.; Adebayo, A.D.; Alvarez, A.; Borretti, M.; Caela, J.; et al. Monitoring Sustainable Development Goal Indicator 15.3.1 on Land Degradation Using SEPAL: Examples, Challenges and Prospects. Land 2024, 13, 1027. [Google Scholar] [CrossRef]
  66. Documentation—SEPAL Documentation. Available online: https://docs.sepal.io/en/latest/ (accessed on 19 October 2024).
  67. Soille, P.; Marchetti, P.G. Proceedings of the 2017 Conference on Big Data from Space (BIDS’ 2017): 28th–30th November 2017 Toulouse (France); Publications Office of the European Union: Luxembourg, 2017; ISBN 978-92-79-73527-1. [Google Scholar]
  68. Reference Manual—JEODPP Interactive Library: User Guide 1.0 Documentation. Available online: https://jeodpp.jrc.ec.europa.eu/services/processing/interhelp/docfiles/3_reference.html (accessed on 19 October 2024).
  69. Wang, L.; Ma, Y.; Yan, J.; Chang, V.; Zomaya, A.Y. pipsCloud: High Performance Cloud Computing for Remote Sensing Big Data Management and Processing. Future Gener. Comput. Syst. 2018, 78, 353–368. [Google Scholar] [CrossRef]
  70. Suzhou Zhenjian Information Technology Co., Ltd.—Photogrammetry Software Development and Promotion—Aerial Photogrammetry—Low-Altitude UAV Photogrammetry—Photogrammetry Data Processing. Available online: http://www.pipscloud.net/index.asp (accessed on 29 January 2025).
  71. Pondi, B.; Appel, M.; Pebesma, E. OpenEOcubes: An Open-Source and Lightweight R-Based RESTful Web Service for Analyzing Earth Observation Data Cubes. Earth Sci. Inform. 2024, 17, 1809–1818. [Google Scholar] [CrossRef]
  72. openEO Platform Documentation. Available online: https://docs.openeo.cloud/ (accessed on 19 October 2024).
  73. Google Earth Engine. Available online: https://developers.google.com/earth-engine (accessed on 19 October 2024).
  74. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-Scale Geospatial Analysis for Everyone. Remote Sens. Environ. 2017, 202, 18–27. [Google Scholar] [CrossRef]
  75. Pham-Duc, B.; Nguyen, H.; Phan, H.; Tran-Anh, Q. Trends and Applications of Google Earth Engine in Remote Sensing and Earth Science Research: A Bibliometric Analysis Using Scopus Database. Earth Sci. Inform. 2023, 16, 2355–2371. [Google Scholar] [CrossRef]
  76. Seydi, S.T.; Akhoondzadeh, M.; Amani, M.; Mahdavi, S. Wildfire Damage Assessment over Australia Using Sentinel-2 Imagery and MODIS Land Cover Product within the Google Earth Engine Cloud Platform. Remote Sens. 2021, 13, 220. [Google Scholar] [CrossRef]
  77. Main-Knorn, M.; Pflug, B.; Louis, J.; Debaecker, V.; Müller-Wilm, U.; Gascon, F. Sen2Cor for Sentinel-2. Image Signal Process. Remote Sens. XXIII 2017, 10427, 1042704. [Google Scholar] [CrossRef]
  78. Poortinga, A.; Tenneson, K.; Shapiro, A.; Nquyen, Q.; San Aung, K.; Chishtie, F.; Saah, D. Mapping Plantations in Myanmar by Fusing Landsat-8, Sentinel-2 and Sentinel-1 Data along with Systematic Error Quantification. Remote Sens. 2019, 11, 831. [Google Scholar] [CrossRef]
  79. Nugroho, F.S.; Danoedoro, P.; Arjasakusuma, S.; Candra, D.S.; Bayanuddin, A.A.; Samodra, G. Assessment of Sentinel-1 and Sentinel-2 Data for Landslides Identification Using Google Earth Engine. In Proceedings of the 2021 7th Asia-Pacific Conference on Synthetic Aperture Radar (APSAR), Bali, Indonesia, 1–3 November 2021; pp. 1–6. [Google Scholar]
  80. Celik, N. Change Detection of Urban Areas in Ankara through Google Earth Engine. In Proceedings of the 2018 41st International Conference on Telecommunications and Signal Processing (TSP), Athens, Greece, 4–6 July 2018; pp. 1–5. [Google Scholar]
  81. Feng, S.; Li, W.; Xu, J.; Liang, T.; Ma, X.; Wang, W.; Yu, H. Land Use/Land Cover Mapping Based on GEE for the Monitoring of Changes in Ecosystem Types in the Upper Yellow River Basin over the Tibetan Plateau. Remote Sens. 2022, 14, 5361. [Google Scholar] [CrossRef]
  82. Arpitha, M.; Ahmed, S.A.; Harishnaika, N. Land Use and Land Cover Classification Using Machine Learning Algorithms in Google Earth Engine. Earth Sci. Inform. 2023, 16, 3057–3073. [Google Scholar] [CrossRef]
  83. Les Indices de Végétation. Available online: https://e-cours.univ-paris1.fr/modules/uved/envcal/html/vegetation/indices/index.html (accessed on 14 April 2024).
  84. Sinergise, S.-H. Sentinel-2 RS Indices. Available online: https://custom-scripts.sentinel-hub.com/custom-scripts/sentinel-2/indexdb/ (accessed on 14 April 2024).
  85. Bannari, A.; Morin, D.; Bonn, F.; Huete, A.R. A Review of Vegetation Indices. Remote Sens. Rev. 1995, 13, 95–120. [Google Scholar] [CrossRef]
  86. IDB—Information for Sensor and Index. Available online: https://www.indexdatabase.de/db/si-single.php?sensor_id=96&rsindex_id=4 (accessed on 19 October 2024).
  87. Sentinel-Hub/Indexdb/Id_4.Js. Available online: https://custom-scripts.sentinel-hub.com/custom-scripts/sentinel-2/indexdb/id_4.js (accessed on 19 October 2024).
  88. Indices Gallery—ArcGIS Pro|Documentation. Available online: https://pro.arcgis.com/en/pro-app/latest/help/data/imagery/indices-gallery.htm (accessed on 19 October 2024).
  89. Sentinel-Hub/Indexdb/Id_64.Js. Available online: https://custom-scripts.sentinel-hub.com/custom-scripts/sentinel-2/indexdb/id_64.js (accessed on 19 October 2024).
  90. Sentinel-Hub/Indexdb/Id_87.Js. Available online: https://custom-scripts.sentinel-hub.com/custom-scripts/sentinel-2/indexdb/id_87.js (accessed on 19 October 2024).
  91. Misra, G.; Cawkwell, F.; Wingler, A. Status of Phenological Research Using Sentinel-2 Data: A Review. Remote Sens. 2020, 12, 2760. [Google Scholar] [CrossRef]
  92. Aslan, M.F.; Sabanci, K.; Aslan, B. Artificial Intelligence Techniques in Crop Yield Estimation Based on Sentinel-2 Data: A Comprehensive Survey. Sustainability 2024, 16, 8277. [Google Scholar] [CrossRef]
  93. Rouse, J.W.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring Vegetation Systems in the Great Plains with ERTS. In Third Earth Resources Technology Satellite-1 Symposium. Volume 1: Technical Presentations, Section A; NASA: Washington, DC, USA, 1974. [Google Scholar]
  94. Huete, A.; Didan, K.; Miura, T.; Rodriguez, E.P.; Gao, X.; Ferreira, L.G. Overview of the Radiometric and Biophysical Performance of the MODIS Vegetation Indices. Remote Sens. Environ. 2002, 83, 195–213. [Google Scholar] [CrossRef]
  95. Kaufman, Y.J.; Tanre, D. Atmospherically Resistant Vegetation Index (ARVI) for EOS-MODIS. IEEE Trans. Geosci. Remote Sens. 1992, 30, 261–270. [Google Scholar] [CrossRef]
  96. Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a Green Channel in Remote Sensing of Global Vegetation from EOS-MODIS. Remote Sens. Environ. 1996, 58, 289–298. [Google Scholar] [CrossRef]
  97. Gao, B. NDWI—A Normalized Difference Water Index for Remote Sensing of Vegetation Liquid Water from Space. Remote Sens. Environ. 1996, 58, 257–266. [Google Scholar] [CrossRef]
  98. Richardson, A.J.; Wiegand, C.L. Distinguishing Vegetation from Soil Background Information. Photogramm. Eng. Remote Sens. 1977, 43, 1541–1552. [Google Scholar]
  99. Huete, A.R. A Soil-Adjusted Vegetation Index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  100. Al-Khaier, F. Soil Salinity Detection Using Satellite Remote Sensing. Geo-Inf. Sci. Earth Obs. 2003, 70, 1–70. [Google Scholar]
  101. Mulla, D.J. Twenty Five Years of Remote Sensing in Precision Agriculture: Key Advances and Remaining Knowledge Gaps. Biosyst. Eng. 2013, 114, 358–371. [Google Scholar] [CrossRef]
  102. Harmonized Sentinel-2 MSI: MultiSpectral Instrument, Level-2A|Earth Engine Data Catalog. Available online: https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2_SR_HARMONIZED?hl=fr (accessed on 28 February 2025).
  103. Migas-Mazur, R.; Kycko, M.; Zwijacz-Kozica, T.; Zagajewski, B. Assessment of Sentinel-2 Images, Support Vector Machines and Change Detection Algorithms for Bark Beetle Outbreaks Mapping in the Tatra Mountains. Remote Sens. 2021, 13, 3314. [Google Scholar] [CrossRef]
  104. Zhang, X.; Yu, L.; Zhou, Q.; Wu, D.; Ren, L.; Luo, Y. Detection of Tree Species in Beijing Plain Afforestation Project Using Satellite Sensors and Machine Learning Algorithms. Forests 2023, 14, 1889. [Google Scholar] [CrossRef]
  105. Candido, C.; Blanco, A.C.; Medina, J.; Gubatanga, E.; Santos, A.; Ana, R.S.; Reyes, R.B. Improving the Consistency of Multi-Temporal Land Cover Mapping of Laguna Lake Watershed Using Light Gradient Boosting Machine (LightGBM) Approach, Change Detection Analysis, and Markov Chain. Remote Sens. Appl. Soc. Environ. 2021, 23, 100565. [Google Scholar] [CrossRef]
  106. Alonso, L.; Picos, J.; Bastos, G.; Armesto, J. Detection of Very Small Tree Plantations and Tree-Level Characterization Using Open-Access Remote-Sensing Databases. Remote Sens. 2020, 12, 2276. [Google Scholar] [CrossRef]
  107. Pádua, L.; Antão-Geraldes, A.M.; Sousa, J.J.; Rodrigues, M.Â.; Oliveira, V.; Santos, D.; Miguens, M.F.P.; Castro, J.P. Water Hyacinth (Eichhornia Crassipes) Detection Using Coarse and High Resolution Multispectral Data. Drones 2022, 6, 47. [Google Scholar] [CrossRef]
  108. Vasilakos, C.; Kavroudakis, D.; Georganta, A. Machine Learning Classification Ensemble of Multitemporal Sentinel-2 Images: The Case of a Mixed Mediterranean Ecosystem. Remote Sens. 2020, 12, 2005. [Google Scholar] [CrossRef]
  109. Cristianini, N.; Shawe-Taylor, J. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods; Cambridge University Press: Cambridge, UK, 2000; ISBN 978-0-521-78019-3. [Google Scholar]
  110. Quinlan, J.R. Induction of Decision Trees. Mach. Learn. 1986, 1, 81–106. [Google Scholar] [CrossRef]
  111. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  112. Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.-Y. LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Proceedings of the Advances in Neural Information Processing Systems; Curran Associates, Inc.: New York, NY, USA, 2017; Volume 30. [Google Scholar]
  113. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; Association for Computing Machinery: New York, NY, USA, 2016; pp. 785–794. [Google Scholar]
  114. Domingos, P.; Pazzani, M. On the Optimality of the Simple Bayesian Classifier under Zero-One Loss. Mach. Learn. 1997, 29, 103–130. [Google Scholar] [CrossRef]
  115. Cover, T.; Hart, P. Nearest Neighbor Pattern Classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar] [CrossRef]
  116. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning Representations by Back-Propagating Errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  117. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-Learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  118. Scikit-Learn: Machine Learning in Python—Scikit-Learn 1.5.2 Documentation. Available online: https://scikit-learn.org/stable/ (accessed on 18 October 2024).
  119. Welcome to LightGBM’s Documentation!—LightGBM 4.5.0 Documentation. Available online: https://lightgbm.readthedocs.io/en/stable/ (accessed on 18 October 2024).
  120. Python Package Introduction—Xgboost 2.1.1 Documentation. Available online: https://xgboost.readthedocs.io/en/stable/python/python_intro.html (accessed on 18 October 2024).
  121. Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. TensorFlow: A System for Large-Scale Machine Learning. In Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation; USENIX Association: Berkeley, CA, USA, 2016; pp. 265–283. [Google Scholar]
  122. TensorFlow. Available online: https://www.tensorflow.org/?hl=fr (accessed on 18 October 2024).
  123. Pande, C.B. Land Use/Land Cover and Change Detection Mapping in Rahuri Watershed Area (MS), India Using the Google Earth Engine and Machine Learning Approach. Geocarto Int. 2022, 37, 13860–13880. [Google Scholar] [CrossRef]
  124. Lappicy, T.; Cabral, A.I.R.; Silva, R.G.P.d.; Arguelho, J.S.; Andrade, S.P.B.d.; Pereira, A.K.; Laques, A.-E.; Saito, C.H. LandScriptDeforestMap: An R Package to Evaluate Deforestation in Remote Sensing Images. SoftwareX 2024, 27, 101799. [Google Scholar] [CrossRef]
  125. Mohod, S.; Thakare, R.D.; Bhoyar, D.B.; Khade, S.S.; Fulzele, P. Remote Sensing Application for Analysis of Forest Change Detection. In Proceedings of the 2022 International Conference for Advancement in Technology (ICONAT), Goa, India, 21–22 January 2022; pp. 1–7. [Google Scholar]
  126. Moukrim, S.; Benabou, A.; Lahssini, S.; Aafi, A.; Chkhichekh, A.; Moudden, F.; Bammou, M.B.; Aboudi, A.E.; Laaribya, S. Spatio-Temporal Analysis of North African Forest Cover Dynamics Using Time Series of Vegetation Indices—Case of the Maamora Forest (Morocco). Biosyst. Divers. 2022, 30, 372–379. [Google Scholar] [CrossRef]
  127. Rakhymberdina, M.E.; Daumova, G.K.; Zhussupova, G.K.; Chettykbayev, R.K.; Chepashev, D.V. Mapping Vegetation Types on Different Slopes and Assessing the Dynamics of Their Change over a Long Period of Time. Bull. Serikbayev EKTU 2024, 3, 177–188. [Google Scholar] [CrossRef]
  128. The Prolonged Drought in Morocco|Copernicus. Available online: https://www.copernicus.eu/en/media/image-day-gallery/prolonged-drought-morocco (accessed on 5 March 2025).
  129. Prolonged Drought in Morocco Slashes 2024 Wheat Harvest by Nearly 50%: Innovative Solutions in the Face of Crisis. Available online: https://droughtclp.unccd.int/blog/prolonged-drought-morocco-slashes-2024-wheat-harvest-nearly-50-innovative-solutions-face (accessed on 5 March 2025).
  130. Msanda, F.; Aboudi, A.E.; Peltier, J.-P. Biodiversité et biogéographie de l’arganeraie marocaine. Cah. Agric. 2005, 14, 357–364. [Google Scholar]
  131. Forest Pulse: The Latest on the World’s Forests|World Resources Institute Research. Available online: https://research.wri.org/gfr/latest-analysis-deforestation-trends (accessed on 28 January 2025).
  132. Morocco Deforestation Rates & Statistics|GFW. Available online: https://www.globalforestwatch.org/dashboards/country/MAR?category=undefined (accessed on 28 January 2025).
Figure 1. Flowchart of the methodology followed in this study: (1) data acquisition and preprocessing, (2) argan tree detection, and (3) change detection.
Figure 2. Images of the study area at various zoom levels, highlighted with red boxes, illustrating the argan forest.
Figure 3. The process of labelling pixels.
Figure 4. The reference image for both 2017/2018 and 2022/2023, including annotations on the 2022/2023 image. The green box highlights a zoomed-in area in the high-resolution Google Earth satellite image.
Figure 5. Sentinel-2 pixel size (black grid) on very high-resolution Google Earth imagery.
Figure 6. The form of the confusion matrix.
Figure 7. Overall accuracy of argan tree detection for each acquisition date, using LightGBM and single-temporal Sentinel-2 Level-1C data.
Figure 8. Bar charts illustrating the importance of each spectral information element (Sentinel-2 bands and spectral indices).
Figure 9. Argan map of the study area for 2017/2018 and 2022/2023.
Figure 10. Comparison of Sentinel-2 true-color imagery, annotated ground truth (2022/2023), and predicted argan tree map.
Figure 11. Map of changes and deforestation for the study area (patch size: 9 × 9).
Figure 12. Monthly precipitation (mm) in the study area during the 2017–2023 period [128].
Figure 13. Deforestation maps by patch size in the study area (Region 1).
Figure 14. Deforestation maps by patch size in the study area (Region 2).
Figure 15. Samples of deforestation areas: (a) a subset of the 2017/2018 image, (b) a subset of the 2022/2023 image, (c) a subset of the deforestation map (9 × 9 patch).
Table 1. Argan oil market analysis: global and Morocco size and forecast (2019–2030) [4,5].
 | Moroccan Market Size | Global Market Size
Revenue, 2019 (US$M) | $37.4 | $233.0
Forecast, 2030 (US$M) | $97.0 | $712.5
Table 2. Changes in argan area and density (argan trees per ha): historical data from the early 20th century to 2016 [11].
 | Early 20th Century | 2016
Area (ha) | 1,500,000 | 829,087
Density (trees/ha) | 100 | 30
Table 3. Sentinel-2 Level 1C spectral bands [59].
Band Name | Description | Wavelength (nm) S2A/S2B | Pixel Size (m)
B1 | Aerosols | 443.9/442.3 | 60
B2 | Blue | 496.6/492.1 | 10
B3 | Green | 560/559 | 10
B4 | Red | 664.5/665 | 10
B5 | Red Edge 1 | 703.9/703.8 | 20
B6 | Red Edge 2 | 740.2/739.1 | 20
B7 | Red Edge 3 | 782.5/779.7 | 20
B8 | NIR | 835.1/833 | 10
B8A | Red Edge 4 | 864.8/864 | 20
B9 | Water vapor | 945/943.2 | 60
B10 | Cirrus | 1373.5/1376.9 | 60
B11 | SWIR 1 | 1613.7/1610.4 | 20
B12 | SWIR 2 | 2202.4/2185.7 | 20
Table 4. Spectral indices selected [84].
Abbreviation | Name | Formula
ARVI | Atmospherically Resistant Vegetation Index | (B8A − B4 − y × (B4 − B2)) / (B8A + B4 + y × (B4 − B2))
EVI | Enhanced Vegetation Index | 2.5 × (B8 − B4) / (B8 + 6.0 × B4 − 7.5 × B2 + 1.0)
GNDVI | Green Normalized Difference Vegetation Index | (B8 − B3) / (B8 + B3)
NDVI | Normalized Difference Vegetation Index | (B8 − B4) / (B8 + B4)
NDWI | Normalized Difference Water Index | (B3 − B8) / (B3 + B8)
PVI | Perpendicular Vegetation Index | (B8 − a × B4 − b) / √(a² + 1)
SAVI | Soil Adjusted Vegetation Index | (B8 − B4) × (1 + L) / (B8 + B4 + L)
SCI | Soil Composition Index | (B11 − B8) / (B11 + B8)
y: quotient derived from the atmospheric reflectance components in the blue and red channels (commonly set to 0.069) [85,86,87]. a: slope of the soil line (commonly set to 0.149) [85,88,89]. b: intercept of the soil line (commonly set to 0.735) [85,88,89]. L: soil-adjustment factor related to the amount of green vegetation cover (commonly set to 0.725) [85,88,90].
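Written out in code, the indices above reduce to simple per-pixel band arithmetic. The following pure-Python sketch illustrates a few of them; the function names are ours, reflectances are assumed to be floats in [0, 1], and the constants are the values stated in the table footnote:

```python
import math

# Constants from the table footnote (assumed values): y, a, b, L.
Y, A_SLOPE, B_INTERCEPT, L_FACTOR = 0.069, 0.149, 0.735, 0.725

def ndvi(b8, b4):
    # Normalized Difference Vegetation Index
    return (b8 - b4) / (b8 + b4)

def evi(b8, b4, b2):
    # Enhanced Vegetation Index
    return 2.5 * (b8 - b4) / (b8 + 6.0 * b4 - 7.5 * b2 + 1.0)

def savi(b8, b4, l=L_FACTOR):
    # Soil Adjusted Vegetation Index
    return (b8 - b4) * (1 + l) / (b8 + b4 + l)

def pvi(b8, b4, a=A_SLOPE, b=B_INTERCEPT):
    # Perpendicular Vegetation Index: distance from the soil line
    return (b8 - a * b4 - b) / math.sqrt(a ** 2 + 1)
```

In the study, each index becomes one extra feature column per pixel alongside the raw band values.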
Table 5. Image acquisition dates selected to build the time series for 2017/2018 and 2022/2023; from each month, two images meeting the selection criteria were chosen, one from the first half and one from the second half.
2017/2018 First Half | 2017/2018 Second Half | 2022/2023 First Half | 2022/2023 Second Half
02/09/2017 | 19/09/2017 | 08/09/2022 | 28/09/2022
02/10/2017 | 22/10/2017 | 06/10/2022 | 21/10/2022
01/11/2017 | 21/11/2017 | 07/11/2022 | 22/11/2022
08/12/2017 | 18/12/2017 | 10/12/2022 | 20/12/2022
02/01/2018 | 17/01/2018 | 01/01/2023 | 16/01/2023
01/02/2018 | 19/02/2018 | 03/02/2023 | 18/02/2023
11/03/2018 | 21/03/2018 | 05/03/2023 | 20/03/2023
10/04/2018 | 27/04/2018 | 09/04/2023 | 24/04/2023
10/05/2018 | 15/05/2018 | 09/05/2023 | 19/05/2023
06/06/2018 | 19/06/2018 | 15/06/2023 | 25/06/2023
04/07/2018 | 24/07/2018 | 05/07/2023 | 18/07/2023
08/08/2018 | 18/08/2018 | 02/08/2023 | 17/08/2023
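The half-month selection rule in the caption can be sketched in a few lines. This is a minimal illustration assuming the candidate list has already been filtered for cloud cover; the helper name is ours:

```python
from datetime import date

def pick_monthly_pairs(dates):
    """Keep at most one acquisition per half-month: the earliest date on
    days 1-15 and the earliest on days 16-31 of each (year, month)."""
    chosen = {}
    for d in sorted(dates):
        key = (d.year, d.month, 0 if d.day <= 15 else 1)
        chosen.setdefault(key, d)  # first (earliest) date wins
    return sorted(chosen.values())

# Example: four candidate dates in September, two in October.
acquisitions = [date(2017, 9, 2), date(2017, 9, 7), date(2017, 9, 19),
                date(2017, 9, 27), date(2017, 10, 2), date(2017, 10, 22)]
selected = pick_monthly_pairs(acquisitions)
```

Applied to twelve months, this yields the 24 dates per period shown in the table.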
Table 6. Most significant parameters for each model and the tested values.
Machine Learning Model | Basic Parameters | Tested Values
Support Vector Machine | kernel | rbf, poly, linear, sigmoid
Decision Tree | criterion | gini, entropy
 | splitter | best, random
Random Forest | criterion | gini, entropy, log_loss
LightGBM | boosting_type | gbdt, dart, goss
XGBoost | booster | gbtree, gblinear, dart
K-Nearest Neighbors | n_neighbors | 2, 3, 4, 5, 6, 7, 8, 9, 10
Artificial Neural Network | units (neurons in hidden layer) | 26/24, 18, 13/12, 10
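The tested values above can be expanded into candidate configurations in GridSearchCV fashion. The sketch below only enumerates the grids transcribed from the table (the ANN layer-size options are omitted, and the `expand` helper is ours, not part of any library):

```python
from itertools import product

# Parameter grids from the table; keys follow scikit-learn / LightGBM /
# XGBoost parameter names.
grids = {
    "SVM": {"kernel": ["rbf", "poly", "linear", "sigmoid"]},
    "DecisionTree": {"criterion": ["gini", "entropy"],
                     "splitter": ["best", "random"]},
    "RandomForest": {"criterion": ["gini", "entropy", "log_loss"]},
    "LightGBM": {"boosting_type": ["gbdt", "dart", "goss"]},
    "XGBoost": {"booster": ["gbtree", "gblinear", "dart"]},
    "KNN": {"n_neighbors": list(range(2, 11))},
}

def expand(grid):
    """Enumerate every parameter combination of a grid (Cartesian product)."""
    keys = sorted(grid)
    return [dict(zip(keys, vals)) for vals in product(*(grid[k] for k in keys))]

candidates = {name: expand(g) for name, g in grids.items()}
```

Each candidate dictionary can then be passed as keyword arguments to the corresponding estimator and scored on the validation data.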
Table 7. Tabular Data Split.
Data | Data Split | Categories | Number of Samples
Single-temporal data | Training Data Set (90%) | Argan | 626
 | | Non-Argan | 626
 | Testing Data Set (10%) | Argan | 69
 | | Non-Argan | 69
Multi-temporal data | Training Data Set (80%) | Argan | 15,024
 | | Non-Argan | 15,024
 | Testing Data Set (20%) | Argan | 1656
 | | Non-Argan | 1656
Table 8. Formulas of the metrics [108].
Metric Name | Formula
Overall accuracy | (TP + TN) / (TP + FP + TN + FN)
Precision (Argan) | TP / (TP + FP)
Precision (Non-Argan) | TN / (TN + FN)
Kappa | (p0 − pe) / (1 − pe)
TP (True Positive): correctly identified Argan (an argan tree is present, and the model predicts Argan). TN (True Negative): correctly identified Non-Argan (an argan tree is absent, and the model predicts Non-Argan). FP (False Positive): incorrectly identified as Argan (an argan tree is absent, but the model predicts Argan). FN (False Negative): incorrectly identified as Non-Argan (an argan tree is present, but the model predicts Non-Argan). p0 = (TP + TN) / (TP + TN + FP + FN); pe = [(TP + FN) × (TP + FP) + (FP + TN) × (FN + TN)] / (TP + TN + FP + FN)².
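These metrics follow directly from the four confusion-matrix counts. A minimal sketch with illustrative counts (the function name is ours):

```python
def confusion_metrics(tp, tn, fp, fn):
    """Overall accuracy, per-class precision, and Cohen's kappa from the
    four confusion-matrix counts, following the formulas in the table."""
    total = tp + tn + fp + fn
    oa = (tp + tn) / total                      # overall accuracy (= p0)
    prec_argan = tp / (tp + fp)                 # precision, Argan class
    prec_non = tn / (tn + fn)                   # precision, Non-Argan class
    # pe: expected agreement by chance, from the row/column marginals
    pe = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / total ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, prec_argan, prec_non, kappa

# Illustrative counts, not taken from the study.
oa, pa, pn, k = confusion_metrics(tp=50, tn=35, fp=5, fn=10)
```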
Table 9. Results of the machine learning models for detecting argan trees in 2017/2018.
Algorithm | Data Level | Overall Accuracy | Argan Precision | Non-Argan Precision | Kappa
Support Vector Machine | 1C | 0.964 | 0.96 | 0.97 | 0.928
 | 2A | 0.957 | 0.97 | 0.94 | 0.913
Decision Tree | 1C | 0.978 | 0.96 | 1.00 | 0.957
 | 2A | 0.978 | 0.96 | 1.00 | 0.957
Random Forest | 1C | 0.978 | 0.96 | 1.00 | 0.957
 | 2A | 0.978 | 0.96 | 1.00 | 0.957
LightGBM | 1C | 0.986 | 0.97 | 1.00 | 0.971
 | 2A | 0.986 | 0.97 | 1.00 | 0.971
XGBoost | 1C | 0.978 | 0.96 | 1.00 | 0.957
 | 2A | 0.971 | 0.95 | 1.00 | 0.942
Gaussian Naive Bayes | 1C | 0.877 | 0.91 | 0.85 | 0.754
 | 2A | 0.870 | 0.92 | 0.83 | 0.739
K-Nearest Neighbors | 1C | 0.957 | 0.93 | 0.98 | 0.913
 | 2A | 0.964 | 0.96 | 0.97 | 0.928
Artificial Neural Network | 1C | 0.964 | 0.94 | 0.98 | 0.928
 | 2A | 0.949 | 0.94 | 0.96 | 0.899
Table 10. Results of the machine learning models for detecting Argan trees in 2022/2023.
Algorithm | Data Level | Overall Accuracy | Argan Precision | Non-Argan Precision | Kappa
Support Vector Machine | 1C | 0.964 | 0.97 | 0.96 | 0.928
 | 2A | 0.971 | 0.97 | 0.97 | 0.942
Decision Tree | 1C | 0.986 | 0.97 | 1.00 | 0.971
 | 2A | 0.986 | 0.97 | 1.00 | 0.971
Random Forest | 1C | 0.986 | 0.97 | 1.00 | 0.971
 | 2A | 0.986 | 0.97 | 1.00 | 0.971
LightGBM | 1C | 0.993 | 0.99 | 1.00 | 0.986
 | 2A | 0.986 | 0.97 | 1.00 | 0.971
XGBoost | 1C | 0.986 | 0.97 | 1.00 | 0.971
 | 2A | 0.986 | 0.97 | 1.00 | 0.971
Gaussian Naive Bayes | 1C | 0.884 | 0.92 | 0.85 | 0.768
 | 2A | 0.884 | 0.91 | 0.86 | 0.768
K-Nearest Neighbors | 1C | 0.971 | 0.96 | 0.99 | 0.942
 | 2A | 0.971 | 0.97 | 0.97 | 0.942
Artificial Neural Network | 1C | 0.971 | 0.96 | 0.99 | 0.942
 | 2A | 0.964 | 0.96 | 0.97 | 0.928
Table 11. Performance of multi-spectral data combinations (Test data: tabular).
Data | Life Cycle | Overall Accuracy | Argan Precision | Non-Argan Precision | Kappa
Combination (a) (10-m bands) | 2017/2018 | 0.978 | 0.96 | 1.00 | 0.957
 | 2022/2023 | 0.986 | 0.97 | 1.00 | 0.971
Combination (b) (10- and 20-m bands) | 2017/2018 | 0.978 | 0.96 | 1.00 | 0.957
 | 2022/2023 | 0.986 | 0.97 | 1.00 | 0.971
Combination (c) (All spectral bands) | 2017/2018 | 0.957 | 0.94 | 0.97 | 0.913
 | 2022/2023 | 0.978 | 0.97 | 0.99 | 0.957
Combination (d) (B1, B2, B3, B4, B6, B8, ARVI, NDVI, PVI, and SAVI) | 2017/2018 | 0.971 | 0.95 | 1.00 | 0.942
 | 2022/2023 | 0.971 | 0.96 | 0.99 | 0.942
Combination (e) (All spectral bands and indices) | 2017/2018 | 0.986 | 0.97 | 1.00 | 0.971
 | 2022/2023 | 0.993 | 0.99 | 1.00 | 0.986
Table 12. Performance of multi-temporal data combinations (Test data: reference image).
Group | Data | Overall Accuracy | Argan Precision | Non-Argan Precision | Kappa
a | Single-date (2022) | 0.823 | 0.43 | 0.88 | 0.759
b | Single-date (2017 and 2022) | 0.813 | 0.47 | 0.86 | 0.745
c | Multi-temporal (2022) | 0.848 | 0.65 | 0.87 | 0.739
d | Multi-temporal (2017 and 2022) | 0.842 | 0.55 | 0.88 | 0.752
Table 13. The rate of deforestation in the Admine forest.
Patch | Deforestation Rate
1 × 1 | 32.70%
3 × 3 | 19.47%
5 × 5 | 8.81%
7 × 7 | 4.74%
9 × 9 | 2.86%
Note: The deforestation rate values include an estimated error of ±0.07%.
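A patch-based rate of this kind can be sketched as follows. This is only an illustration of neighbourhood-majority filtering on two binary argan maps; the exact aggregation rule used in the study may differ, and the function name and majority criterion here are our assumptions:

```python
def deforestation_rate(map_t1, map_t2, patch=3):
    """Share of t1 argan pixels (1 = argan, 0 = non-argan) counted as lost
    by t2, keeping a changed pixel only if the majority of pixels in its
    patch x patch neighbourhood (clipped at image borders) also changed.
    With patch=1 this reduces to the raw, unfiltered change rate."""
    h, w = len(map_t1), len(map_t1[0])
    r = patch // 2
    changed = [[map_t1[i][j] == 1 and map_t2[i][j] == 0 for j in range(w)]
               for i in range(h)]
    argan_t1 = sum(v for row in map_t1 for v in row)
    lost = 0
    for i in range(h):
        for j in range(w):
            if not changed[i][j]:
                continue
            window = [changed[a][b]
                      for a in range(max(0, i - r), min(h, i + r + 1))
                      for b in range(max(0, j - r), min(w, j + r + 1))]
            if sum(window) * 2 > len(window):  # strict majority changed
                lost += 1
    return lost / argan_t1

# Toy 4x4 example: a cleared 2x2 block plus one isolated cleared pixel.
m1 = [[1] * 4 for _ in range(4)]
m2 = [[0, 0, 1, 1],
      [0, 0, 1, 1],
      [1, 1, 1, 1],
      [1, 1, 1, 0]]
rate = deforestation_rate(m1, m2, patch=3)  # isolated pixel is filtered out
```

Larger patches suppress isolated pixel changes (sensor noise, misclassification), which is consistent with the rate decreasing from 32.70% at 1 × 1 to 2.86% at 9 × 9.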
Table 14. The average overall accuracy for each season during 2017/2018 and 2022/2023 periods.
Season | Average OA 2017/2018 | Average OA 2022/2023
Autumn (September to November) | 0.960 | 0.976
Winter (December to February) | 0.958 | 0.979
Spring (March to May) | 0.957 | 0.967
Summer (June to August) | 0.950 | 0.975
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Karmoude, Y.; Idbraim, S.; Saidi, S.; Masse, A.; Arbelo, M. Efficient Argan Tree Deforestation Detection Using Sentinel-2 Time Series and Machine Learning. Appl. Sci. 2025, 15, 3231. https://doi.org/10.3390/app15063231
