Article

Weed Species Identification: Acquisition, Feature Analysis, and Evaluation of a Hyperspectral and RGB Dataset with Labeled Data

Inbal Ronay, Ran Nisim Lati and Fadi Kizel
1 Laboratory for Multidimensional Analysis in Remote Sensing (MARS), Department of Mapping and Geoinformation Engineering, Technion-Israel Institute of Technology, Haifa 32000, Israel
2 Department of Plant Pathology and Weed Research, Agricultural Research Organization, Newe Ya’ar Research Center, Ramat-Yishai 30095, Israel
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(15), 2808; https://doi.org/10.3390/rs16152808
Submission received: 8 May 2024 / Revised: 24 July 2024 / Accepted: 27 July 2024 / Published: 31 July 2024
(This article belongs to the Special Issue Remote Sensing Data Sets II)

Abstract
Site-specific weed management employs image data to generate maps through various methodologies that classify pixels corresponding to crop, soil, and weed. Further, many studies have focused on identifying specific weed species using spectral data. Nonetheless, the availability of open-access weed datasets remains limited. Remarkably, despite the extensive research employing hyperspectral imaging data to classify species under varying conditions, to the best of our knowledge, there are no open-access hyperspectral weed datasets. Consequently, accessible spectral weed datasets are primarily RGB or multispectral and mostly lack the temporal aspect, i.e., they contain a single measurement day. This paper introduces an open dataset for training and evaluating machine-learning methods and spectral features to classify weeds based on various biological traits. The dataset comprises 30 hyperspectral images, each containing thousands of pixels with 204 unique visible and near-infrared bands captured in a controlled environment. In addition, each scene includes a corresponding RGB image with a higher spatial resolution. We included three weed species in this dataset, representing different botanical groups and photosynthetic mechanisms. In addition, the dataset contains meticulously sampled labeled data for training and testing. The images form a time series of weed growth through the early growth stages, which are critical for precise herbicide application. We conducted an experimental evaluation to test the performance of a machine-learning approach, a deep-learning approach, and Spectral Mixture Analysis (SMA) in identifying the different weed traits. In addition, we analyzed the importance of features using the random forest algorithm and evaluated the performance of the selected algorithms using different sets of features.

1. Introduction

Weeds within agricultural fields present a substantial challenge, often causing detrimental impacts on crop plants and significantly reducing yield quality and quantity [1]. While herbicides effectively control weeds, their application is associated with environmental pollution and risks to human health. To address this pressing issue, the Site-Specific Weed Management (SSWM) approach offers a pathway toward more sustainable weed control strategies. This approach facilitates the precise application of herbicides based on the specific location of the weeds, their density, and their species composition [2]. SSWM utilizes image data to generate weed maps through various methodologies capable of identifying pixels corresponding to crops, soil, and distinct weed species. However, differentiating between various weed species remains a formidable challenge due to their similarities.
In this regard, the potential of spectral data in classifying weed species has been the subject of many studies [3]. Scenarios such as dense weed populations, overlapping leaves, early growth stages, and coarse spatial resolutions pose challenges in identifying weed species using texture and shape features and can benefit from spectral data. Spectral data can capture plant physiological trait variations, encompassing leaf anatomy and biochemistry. This functionality is exhibited through the Visible Range (VIS), which responds to variations in pigment content and photosynthetic activity; the Near-Infrared Range (NIR), which is sensitive to anatomical leaf traits; and the Shortwave Infrared Range (SWIR), indicative of water, sugars, and protein content within leaves [4,5].
Spectral data can handle scenarios where several species are densely mixed and pixel-wise classification is required. Therefore, exploring methods for analyzing spectral data is highly relevant for SSWM methodologies. Machine-learning techniques, including Convolutional Neural Network (CNN) approaches, have shown promising results in classifying weed species [6]. CNNs allow the development of non-linear and more complex models than other machine-learning methods. As a result, classification can be applied even to simple data such as RGB images.
On the other hand, the large number of features available in spectral data allows the use of simpler machine-learning models that are easier to train and require lower computational demands. However, since weed spectra exhibit variations across distinct growth stages [7] and under varying environmental conditions [8], expansive and diverse training datasets are needed to represent various scenarios. Producing labeled datasets that cover a wide range of imaging conditions is tedious and requires extensive collaboration and data sharing between research groups. Facilitating access to labeled datasets aims to support the development of machine-learning and deep-learning models in agriculture [9,10].
Unfortunately, most public weed datasets comprise only RGB images [6,11,12,13]. Some studies have acquired and published multispectral data for weed-related research. For example, there exists a dataset capturing nine weed species through UAV-mounted multispectral sensors in a sugar beet field [14] and a sugar beet/weed dataset from a controlled field experiment alongside pixel-wise labeled data [15]. However, the availability of open-access weed datasets remains limited. Furthermore, existing accessible spectral weed datasets are primarily multispectral and confined to single-day measurements. Remarkably, despite the extensive research employing hyperspectral imaging data to classify species under varying conditions [3,7,16], to the best of our knowledge, there are no open-access hyperspectral weed datasets. Therefore, recognizing the significance of hyperspectral data in studying unique weed species features, we present a dataset encompassing three common weed species during their early growth stages. In addition, our dataset includes corresponding RGB images of the scenes with a higher spatial resolution and accurately sampled labels.
This paper aims to provide a detailed description of the dataset’s construction and comprehensively evaluate the data. Consequently, we tested the performance of a machine-learning approach, a deep-learning approach, and an unmixing algorithm for identifying traits based on different features. Our results indicated that none of the examined classification approaches held a significant advantage over the others. However, we found that the classification of some species depends significantly on features from the VIS region, while others rely more on features from the NIR. Consequently, classifying all species in the scene required using both spectral regions.

2. Materials and Methods

2.1. Dataset

2.1.1. Scene Construction and Data Acquisition

Our dataset comprises five scenes. Each scene was created by sowing weed seeds within a 34 cm × 54 cm sowing tray divided into cells of 2 cm × 2 cm. The weeds were grown in a greenhouse with daily irrigation. We recorded the five scenes over six days within two weeks following weed sowing, using a hyperspectral camera (Specim IQ, Oulu, Finland).
We acquired the images at 7, 8, 9, 12, 13, and 14 Days After Sowing (DAS), when the weeds progressed from stage 12 to stages 13–14 on the BBCH scale.
Some weed traits, such as the weed botanical group, are essential for making informed decisions about herbicide selection. Consequently, three distinct weed species were carefully chosen for the dataset (Figure 1 and Figure 2b): Amaranthus retroflexus (Ar), Solanum nigrum (Sn), and Setaria adhaerens (Sa). These species were selected because they represent different botanical groups and photosynthesis pathways, as detailed in Figure 2a. This selection facilitates a comprehensive exploration of the spectral resemblances and disparities among the chosen weed species.
The image acquisition process involved maintaining a consistent camera distance of 1.5 m, with the scene illuminated by two halogen light spots (Figure 2c). The hyperspectral images encompass 512 × 512 pixels with a spatial resolution of 0.1 cm and 204 spectral bands within the visible and near-infrared range (400–1000 nm). In addition to the hyperspectral image, the Specim camera concurrently captured an RGB image of the scene with a size of 645 × 645 pixels.

2.1.2. Image Calibration

We placed a barium-sulfate calibration panel within each scene (see Figure 3, tile a). This panel approximates a lossless Lambertian surface; thus, we used it to calibrate the images and convert their values into reflectance. Accordingly, after dark correction, the reflectance factor R is calculated by dividing the pixel’s radiance by the radiance of the white reference target, as follows:
$$R(\lambda) = \frac{L_{\mathrm{pixel}}(\lambda)}{L_{\mathrm{reference}}(\lambda)},$$
where $R(\lambda)$ is the reflectance factor at wavelength $\lambda$, and $L_{\mathrm{pixel}}(\lambda)$ and $L_{\mathrm{reference}}(\lambda)$ are the radiances of a pixel and of the white reference target at wavelength $\lambda$, respectively.
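As an illustration, the dark correction and reflectance conversion amount to a per-band division. The following is a minimal NumPy sketch under our own naming; the released dataset stores the raw scene, dark frame, and white reference as separate cubes, and the function below assumes they have already been loaded as arrays:

```python
import numpy as np

def calibrate_to_reflectance(raw, dark, white):
    """Convert a raw hyperspectral cube to reflectance factors.

    raw:   (rows, cols, bands) radiance cube of the scene
    dark:  dark frame, broadcastable to raw (e.g., averaged to (bands,))
    white: white-reference radiance averaged over the panel pixels, (bands,)
    """
    raw = raw.astype(np.float64)
    signal = raw - dark            # dark correction of the scene
    reference = white - dark       # dark correction of the reference
    # R(lambda) = L_pixel(lambda) / L_reference(lambda), band by band
    return np.clip(signal / reference, 0.0, None)
```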

2.1.3. Data Labeling

The dataset includes labeled data for each image. We first sampled the labels on the RGB images (Figure 3b,c) and then transformed them into the corresponding spectral images (Figure 3d,e). The higher spatial resolution and quality of the RGB images afforded greater labeling accuracy than labeling the hyperspectral images directly.
The process involved manually assigning a corresponding label to each weed pixel within the images. To label soil pixels, we calculated the Excess Green index [20] and assigned a soil label to all non-weed pixels below an optimized threshold. Subsequently, we established a geometric transformation to align the RGB images with their corresponding hyperspectral counterparts, using Oriented FAST and Rotated BRIEF (ORB) features [21]. The pixel labels were then warped with this transformation to fit the hyperspectral image grid (a sketch of this alignment step follows the list below). Finally, we generated three distinct labeled sets, each characterizing specific attributes of weeds and soil:
  • Species labels: featuring designations for Ar, Sn, Sa, and soil.
  • Botanical group labels: including categorizations for monocotyledons (Sa), dicotyledons (Ar, Sn), and soil.
  • Photosynthesis mechanisms labels: distinguishing between C3 weeds (Sn) and C4 weeds (Ar, Sa) and soil.
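The following OpenCV sketch illustrates the label-alignment step described above. The paper specifies ORB features [21] but not the transformation model or the matching strategy, so the RANSAC-estimated homography, the brute-force Hamming matcher, and all names below are our assumptions:

```python
import cv2
import numpy as np

def align_labels_to_hyperspectral(rgb_gray, hsi_gray, rgb_labels):
    """Warp label masks sampled on the RGB image onto the hyperspectral grid."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(rgb_gray, None)
    kp2, des2 = orb.detectAndCompute(hsi_gray, None)

    # Brute-force Hamming matching, keeping the strongest matches only
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robust geometric transformation from RGB to hyperspectral coordinates
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = hsi_gray.shape
    # Nearest-neighbor interpolation keeps the labels categorical
    return cv2.warpPerspective(rgb_labels, H, (w, h), flags=cv2.INTER_NEAREST)
```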
Figure 3 presents a representative RGB image and a hyperspectral composite from the dataset, an example of sampled weed species and soil spectra, and the corresponding labeled data. Table 1 presents the number of labeled pixels available for each class in the dataset.
With the corresponding labels, our dataset is organized to analyze the weeds’ spectral characteristics based on different traits.

2.2. Experimental Analysis and Evaluation

We conducted comprehensive experiments based on key remote sensing applications to test and evaluate the dataset’s significance, advantages, and limitations. Our analysis included exploring different features for classification and assessing the performance of machine-learning and deep-learning approaches for classifying the various traits. Furthermore, we examined the performance of Spectral Mixture Analysis (SMA) for identification at the sub-pixel level.

2.2.1. Feature Selection

Random Forest

We utilized a random forest algorithm to determine the importance of various spectral features for classifying weed traits. The random forest model, consisting of 100 trees, was employed to enhance the robustness and accuracy of the classification process. Each tree in the forest was trained on a bootstrap sample of the data, with a subset of spectral features randomly selected at each split to ensure diversity among the trees. In particular, the algorithm was trained using 5% of the labeled data. Feature importance was then computed based on the mean decrease in impurity (Gini importance) across all trees, providing insights into which spectral features were most influential in distinguishing the different weed groups.
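For reference, the described importance ranking can be reproduced with scikit-learn; `X_train` and `y_train` are hypothetical names for the 5% training split of spectra and trait labels:

```python
from sklearn.ensemble import RandomForestClassifier

# X_train: (n_pixels, 204) reflectance spectra; y_train: trait labels
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)

# Gini importance (mean decrease in impurity) per spectral band
importance = rf.feature_importances_
top_bands = importance.argsort()[::-1][:10]  # the 10 most influential bands
```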

Principal Component Analysis (PCA)

Principal Component Analysis (PCA) was employed to further analyze the spectral features for classifying the different weed groups. PCA is a dimensionality reduction technique that transforms the original spectral data into a new set of uncorrelated variables, known as principal components, which capture the most significant variance in the data. By applying PCA, we reduced the complexity of the dataset while retaining the essential information required for classification.
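A minimal scikit-learn sketch of this reduction, assuming `X_train` and `X_test` hold the 204-band spectra; as reported in Section 3.2, the first ten components capture nearly all of the variance:

```python
from sklearn.decomposition import PCA

pca = PCA(n_components=10)
X_train_pca = pca.fit_transform(X_train)    # fit on training spectra only
X_test_pca = pca.transform(X_test)          # same projection for test data
print(pca.explained_variance_ratio_.sum())  # cumulative explained variance
```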

2.2.2. Supervised Classification

Following feature selection, we compared the classification performance when using the selected features or a subset of the selected features to the classification achieved using the full spectra.

Support Vector Machine (SVM)

We performed an experimental analysis using the hyperspectral dataset and its corresponding labels, assessing the Support Vector Machine (SVM) classifier. SVM is a supervised classification algorithm that finds hyperplanes in the hyperspectral feature space that distinctly separate the classes. Previous work showed SVM’s robustness for weed classification [22]. Accordingly, we used 5% of the labeled data to train the classifier and trained a separate SVM model for each growth stage to obtain optimal classification results. To quantitatively analyze the classification results, we calculated the image’s overall accuracy for each growth stage.
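A sketch of one per-stage SVM run with scikit-learn follows. The paper does not report the kernel or hyperparameters, so the RBF kernel, the `C` value, and the standardization step are assumptions:

```python
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# One model per growth stage, trained on 5% of the labeled pixels
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
svm.fit(X_train, y_train)
oaa = accuracy_score(y_test, svm.predict(X_test))  # overall accuracy
```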

One-Dimensional Convolutional Neural Network

We designed and implemented a One-Dimensional Convolutional Neural Network (1D-CNN) with the following architecture (Figure 4): The network begins with a sequence input layer sized to the number of input channels, followed by a causal convolutional layer with a kernel size of [1,15], 32 filters, and causal padding that keeps the vector length unchanged. The convolutional layer is succeeded by a Rectified Linear Unit (ReLU) activation function to introduce non-linearity and a normalization layer to stabilize and accelerate training. A global average pooling layer reduces the feature map dimensions, thus preventing overfitting and reducing computational complexity. The network concludes with a fully connected layer that maps the extracted features to the desired number of classes and a SoftMax layer that outputs the class probabilities; the class with the highest probability determines the pixel’s label. We used 5% of the labeled data to train the network, 5% for validation, and 90% for testing.
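The Keras sketch below mirrors the described layer sequence; the [1,15] kernel over a single spectrum corresponds to a 1-D kernel of length 15. The original framework, the exact normalization layer, and the training setup are not specified, so batch normalization and the compile settings here are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_1d_cnn(n_bands: int, n_classes: int) -> tf.keras.Model:
    """Causal Conv1D (32 filters, kernel 15) -> ReLU -> normalization ->
    global average pooling -> dense softmax, as in Figure 4."""
    inputs = layers.Input(shape=(n_bands, 1))      # one spectrum per pixel
    x = layers.Conv1D(32, kernel_size=15, padding="causal")(inputs)
    x = layers.ReLU()(x)
    x = layers.BatchNormalization()(x)             # normalization layer
    x = layers.GlobalAveragePooling1D()(x)         # reduces feature map size
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_1d_cnn(n_bands=204, n_classes=4)     # e.g., three species + soil
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```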

2.2.3. Spectral Unmixing

Spectral mixture analysis is a common method used to extract subpixel information from spectral data and can potentially improve attempts to upscale precision agriculture applications [23]. We assessed the Vectorized Projected Gradient Descent Unmixing (VPGDU) algorithm [24]. We extracted the endmembers (EMs) automatically using the N-FINDR and Vertex Component Analysis (VCA) algorithms [25]. Furthermore, we derived a supervised set of EMs for each growth stage. Specifically, we computed the mean spectra from 5% randomly selected pixels within each group for the different traits. We repeated this process eight times to mitigate bias, employing the mean spectra as the set of EMs. The results presented here are from the supervised extraction, which yielded slightly better accuracy. To quantitatively evaluate the Fraction Map (FM), we compared the predicted coverage from the FM to the actual coverage in the labeled data. We initially partitioned each FM into 64 equal cells, each measuring n × n pixels (n = 64), for coverage estimation. Subsequently, we calculated the coverage of each EM within every cell based on the fraction map as follows:
$$\mathrm{Coverage}_{EM_i} = \frac{\sum_{r=1}^{n}\sum_{c=1}^{n} f_i(c,r)}{n^2},$$
where $f_i(c,r)$ is the fraction value of the $i$th EM at the cell’s pixel located at $(c,r)$. We also computed the actual fractions in every cell from the labeled data. Finally, we calculated the Mean Absolute Error (MAE) of coverage estimation for each EM in every growth stage as follows:
$$\mathrm{MAE} = \frac{\sum_{i=1}^{n} \left| f_{gt_i}(c,r) - f_i(c,r) \right|}{n},$$
where $f_{gt_i}(c,r)$ is the actual fraction value of the $i$th EM at the cell’s pixel located at $(c,r)$.
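Under our reading of this procedure (a 512 × 512 fraction map partitioned into 64 cells of 64 × 64 pixels), the cell-wise coverage comparison can be sketched in NumPy as follows; all names are illustrative:

```python
import numpy as np

def coverage_mae(fraction_map, label_map, em_index, n=64):
    """MAE between predicted and actual coverage of one EM over n x n cells.

    fraction_map: (H, W) fractions of the EM from unmixing
    label_map:    (H, W) integer labels, where em_index marks the EM's class
    """
    H, W = fraction_map.shape
    gt = (label_map == em_index).astype(np.float64)
    errors = []
    for r0 in range(0, H - n + 1, n):
        for c0 in range(0, W - n + 1, n):
            pred_cov = fraction_map[r0:r0 + n, c0:c0 + n].sum() / n**2
            true_cov = gt[r0:r0 + n, c0:c0 + n].sum() / n**2
            errors.append(abs(true_cov - pred_cov))
    return float(np.mean(errors))
```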

3. Results

3.1. Classification Based on Full Spectra

First, we performed a qualitative analysis by visually comparing each product to its corresponding labeled data. Figure 5 presents a segment of the classification map generated by each method compared to the labeled data. Notably, all classification methods successfully separated soil and weed pixels. As Figure 5 shows, the different classification approaches accurately classified all weed traits. The quantitative analysis revealed that the classification accuracies when using the full spectra ranged between 0.88 and 0.96 for all traits using SVM and between 0.90 and 0.97 using the 1D-CNN (Table 2).

3.2. Feature Selection and Reduction

Random forest is commonly used in spectral analysis to evaluate the importance of features [26]. Consequently, we trained this model for each growth stage to assess the significance of different spectral bands. As Figure 6 shows, we observed relatively high importance of the VIS bands between 500 and 600 nm and the NIR bands between 700 and 800 nm for all traits. The importance of different bands within those regions varied between growth stages. Consequently, we selected a range of 35 bands in the VIS between 500 and 600 nm and 34 bands in the NIR between 700 and 800 nm. Then, we evaluated the performance of the different classification approaches using each range separately and combined.
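Translating the selected ranges into band indices is straightforward once the band centers are known; a short sketch assuming a `wavelengths` array (in nm) read from the image header and a spectra matrix `X`:

```python
import numpy as np

vis_idx = np.where((wavelengths >= 500) & (wavelengths <= 600))[0]  # ~35 bands
nir_idx = np.where((wavelengths >= 700) & (wavelengths <= 800))[0]  # ~34 bands

X_vis = X[:, vis_idx]                       # VIS features only
X_nir = X[:, nir_idx]                       # NIR features only
X_vis_nir = X[:, np.r_[vis_idx, nir_idx]]   # combined VIS + NIR selection
```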
We also examined the PCA method for feature reduction. PCA is a method based on statistics and algebra, commonly used to reduce the dimensionality of spectral data [27]. We evaluated the performance using the first ten principal components that explain ~100% of the variability of the dataset.

3.3. Classification Based on Selected Features

Figure 7 shows a representative segment from the classification maps generated based on the different sets of features at 14 DAS. There was no significant difference between the SVM approach and the CNN approach; misclassified pixels were mainly found around the same areas in the image. The highest Overall Accuracies (OAA) were achieved using the full spectra, the combined VIS and NIR features, or the PCA features (Table 2).
However, the reduced classification performance when using a single region was not reflected in the OAA due to the relatively high number of soil pixels and the high level of separation that could be achieved between soil and weeds, regardless of the features used. The effect of selected features on the classification performance is reflected in the precision and recall values in Table 3. When we used a single region in the VIS or NIR, the recall and precision values were reduced significantly for the weed classes (Table 3).
Nonetheless, some separation could still be achieved when we used a single selected region. The VIS region allowed separation into two groups, Sa and Ar, in which Sn pixels were mainly classified as Sa. Conversely, the NIR region allowed separation into two groups, Sa and Sn, in which Ar pixels were primarily classified as Sa. This is reflected in Sn’s and Ar’s high precision (100) and low recall (5) for SVM classification in the VIS and NIR, respectively (Table 3). The 1D-CNN classification using the VIS region achieved a relatively similar result; however, precision values were very low for Sn, as fewer than 100 pixels (i.e., 0.001%) were classified in this class. While the 1D-CNN recall and precision values were lower for all weed classes when using a single region, the SVM classifier could still classify some classes correctly. Performance was significantly reduced when using the VIS alone for botanical group classification, with Sa and Sn mainly classified into the same group. Photosynthetic pathway classification was affected to a large extent when using a single region, with most pixels classified as the C4 class.

3.4. Spectral Mixture Analysis

A qualitative evaluation of the fraction maps showed spatial coherence between the fraction maps of the different EMs and the corresponding labeled images (Figure 8). Table 4 presents the MAE values calculated for coverage estimation by spectral unmixing using different sets of features at 14 DAS. In most cases, MAE values increased slightly, by 1–4%, when we did not use the full spectra. However, we could not find a consistent reduction when using specific features across EMs. Table 5 presents the MAE values for all EMs at the different growth stages. We observed increasing MAE values as growth progressed (Table 5).

4. Discussion

This paper evaluated a hyperspectral dataset for the purpose of weed species classification, using methods common in remote sensing and spectral analysis. As demonstrated in our study, this shared dataset offers an opportunity to examine the performance of different methods for weed classification during the critical early growth stages, which are pivotal for effective weed detection and management. The chosen weed species possess diverse anatomical and physiological traits, including the weed botanical group essential for site-specific herbicide selection. The comprehensive labeled data provide a substantial number of pixels for analysis, allowing the examination of each method’s performance using random training sets of various sizes, and the stability of that performance across growth stages for each selected trait.
During the acquisition process, the controlled environment minimized spectral variation that results from natural illumination and physiological differences between the weeds, thus focusing on the ability of the tested algorithms to separate unique species traits.
The weeds in the provided dataset hold different characteristics that were expected to be spectrally differentiable. Differences between botanical groups, such as the higher abundance of airspaces in the dicotyledons’ spongy mesophyll compared with monocotyledons, cause spectral variation between those groups [28]. Weeds of different photosynthetic pathways also differ in their leaf anatomy: C4 weeds are characterized by Kranz anatomy, whose bundle sheath and mesophyll cell arrangements may cause variation in light scattering and absorption [29,30]. Spectral variation between species is expected to result from such traits and unique characteristics. Our analysis confirmed that the hyperspectral data represented these differences well, as both simple linear models and more complex non-linear CNN models achieved satisfactory results for all weed traits.
Our feature analysis revealed that important features mainly exist between 500 and 600 nm and between 700 and 800 nm, i.e., in the VIS and NIR regions. Similar conclusions were reported in previous vegetation research [31]. When we classified the images using only the specified regions and their combination, we observed that combining both regions is essential for classifying all the different traits; nonetheless, the variation of some species from the others could be captured based on a single region (VIS or NIR).
While hyperspectral data are relatively complex, they allow for accurate results using a linear classification approach. We expected to observe some advantage of the 1D-CNN over the SVM classifier when fewer features were used; however, this was not the case, and both methodologies produced very similar results. It is possible that a different structure of a higher-dimensional CNN, capturing spatial characteristics, would perform better with a limited number of spectral features. Consequently, the data can be used to design, train, and evaluate other CNN structures.
Feature reduction is used to reduce the dimensionality of the data and consequently reduce computational complexity in the classification process. Our experiment demonstrated that reducing the dimensionality to ten PCs is sufficient to achieve satisfactory classification results close to those achieved using the entire spectra.
Feature selection can contribute to the optimal choice of filters and sensors for specific tasks. Our analysis utilized a single feature selection method out of the many used in the literature. Different features are required for discriminating different combinations of plant species, and the selection of important features depends on the technique used and the data of interest [16]. In this regard, data sharing is essential for better understanding the variation between datasets and finding general patterns preserved across different datasets, environments, growth stages, and acquisition conditions. Consequently, this dataset can serve to evaluate and compare different feature selection methods.
Acknowledging the significance of SMA methods in analyzing spectral data, we demonstrated that SMA could detect the different trait fractions with an error rate mostly lower than 10%. Our previous work [23] provides a comprehensive analysis of spectral mixture analysis using the proposed dataset, including the simulation of different spectral and spatial resolutions.
As mentioned, we include an RGB image of each scene with a higher spatial resolution and corresponding labeled data to complement the dataset. Although we only used the hyperspectral image for the analysis, the combination of spectral and RGB data suggests an opportunity for users to explore different spectral and spatial features from both hyperspectral and RGB images. This, in turn, implies a chance to study the benefits of feature fusion or other data fusion-based approaches for weed species classification [32,33].
While this dataset is limited to three common weed species, complementary datasets should cover additional environments and species representing various traits. A comprehensive analysis of different datasets will allow a better exploration of the main factors driving spectral variation within and between species.

5. Conclusions

The results of our analysis in this paper demonstrated how research can use the proposed dataset to evaluate the performance of different methodologies in classifying weed species and their unique traits. Notably, a nonlinear classification approach using a 1D-CNN did not provide an advantage in this case over a simpler algorithm such as SVM. On the other hand, feature analysis revealed the importance of spectral bands within the VIS (500–600 nm) and NIR (700–800 nm) regions out of the 204 available bands (400–1000 nm). In this regard, the results showed that using the VIS and NIR features separately still allows distinguishing certain species. For example, the VIS bands allowed the accurate separation of Ar from the other species, while the NIR bands were advantageous for separating Sn from the rest. Finally, we provided a detailed explanation of the construction of the dataset together with suggestions for further uses and discussed the importance of hyperspectral data sharing for SSWM.

6. Data Construction

The dataset includes data from 30 scenes: each of the six measurement days has five scenes captured from different sowing plates. We provide a raw image, a dark frame, a white reference, metadata, and reflectance data for each plate’s measurement. Figure 9 illustrates the data structure. At the root of the dataset are six folders corresponding to the measurement days. Within each day folder, the five plates are organized in individual folders containing a set of files and subfolders as follows (a loading sketch follows the list):
  • A PNG file with an RGB composite image derived from the visible bands in the spectral image.
  • A PNG file with the image acquired by the RGB sensor.
  • The Capture folder contains the raw data, dark frame, and white reference data cubes.
  • The Results folder contains the “.dat” and “.hdr” files of the scene’s reflectance data cube.
  • The Label folder contains two sub-folders: the “RGB” folder contains the original labels sampled on the RGB image and the “Hyperspectral” folder contains the transformed labels for the hyperspectral images. Both folders include three image files, with a label assigned for each pixel according to its species, botanical group, and photosynthetic mechanism.
  • The root folder includes a README file providing information about the numerical label for each class in the Label files.
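To illustrate, a scene’s reflectance cube and its labels can be read with the `spectral` and `imageio` packages. The folder and file names below are placeholders and should be adjusted to the actual names in the release:

```python
import imageio.v3 as iio
from spectral import envi

# Hypothetical paths following the structure in Figure 9
ref = envi.open("day_07/plate_1/Results/REFLECTANCE.hdr",
                "day_07/plate_1/Results/REFLECTANCE.dat")
cube = ref.load()                 # (rows, cols, 204) reflectance cube
wavelengths = ref.bands.centers   # band centers in nm, from the header

labels = iio.imread("day_07/plate_1/Label/Hyperspectral/species.png")
```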

Author Contributions

Conceptualization, I.R. and F.K.; methodology, I.R.; software, I.R.; validation, I.R., F.K. and R.N.L.; formal analysis, I.R.; investigation, I.R.; resources, F.K. and R.N.L.; data curation, I.R.; writing—original draft preparation, I.R.; writing—review and editing, I.R., F.K. and R.N.L.; visualization, I.R.; supervision, F.K. and R.N.L.; project administration, I.R.; funding acquisition, I.R., F.K. and R.N.L. All authors have read and agreed to the published version of the manuscript.

Funding

The Israeli Council for Higher Education (CHE)’s planning and budgeting committee (PBC) partially supported this work.

Data Availability Statement

The original data presented in the study are openly available in the following links: (1) Kizel, Fadi; Ronay, Inbal (2024), “Weed Species Identification: A Hyperspectral and RGB Dataset with Labeled Data”, Mendeley Data, V1, doi: 10.17632/6wm4kzf9y6.1; (2) https://technionmail-my.sharepoint.com/:u:/g/personal/fadikizel_technion_ac_il/EZLap5dBzSdAndNgIILNfPsBYInkRcuwFcLjn97IkKxQeQ?e=qXbLSD (accessed on 26 July 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Pimentel, D.; Zuniga, R.; Morrison, D. Update on the Environmental and Economic Costs Associated with Alien-Invasive Species in the United States. Ecol. Econ. 2005, 52, 273–288. [Google Scholar] [CrossRef]
  2. Lati, R.N.; Rasmussen, J.; Andujar, D.; Dorado, J.; Berge, T.W.; Wellhausen, C.; Pflanz, M.; Nordmeyer, H.; Schirrmann, M.; Eizenberg, H.; et al. Site-specific Weed Management—Constraints and Opportunities for the Weed Research Community: Insights from a Workshop. Weed Res. 2021, 61, 147–153. [Google Scholar] [CrossRef]
  3. Li, Y.; Al-Sarayreh, M.; Irie, K.; Hackell, D.; Bourdot, G.; Reis, M.M.; Ghamkhar, K. Identification of Weeds Based on Hyperspectral Imaging and Machine Learning. Front. Plant Sci. 2021, 11, 611622. [Google Scholar] [CrossRef] [PubMed]
  4. Buitrago, M.F.; Groen, T.A.; Hecker, C.A.; Skidmore, A.K. Spectroscopic Determination of Leaf Traits Using Infrared Spectra. Int. J. Appl. Earth Obs. Geoinf. 2018, 69, 237–250. [Google Scholar] [CrossRef]
  5. Ronay, I.; Ephrath, J.E.; Eizenberg, H.; Blumberg, D.G.; Maman, S. Hyperspectral Reflectance and Indices for Characterizing the Dynamics of Crop–Weed Competition for Water. Remote Sens. 2021, 13, 513. [Google Scholar] [CrossRef]
  6. Hasan, A.S.M.M.; Sohel, F.; Diepeveen, D.; Laga, H.; Jones, M.G.K. A Survey of Deep Learning Techniques for Weed Detection from Images. Comput. Electron. Agric. 2021, 184, 106067. [Google Scholar] [CrossRef]
  7. Basinger, N.T.; Jennings, K.M.; Hestir, E.L.; Monks, D.W.; Jordan, D.L.; Everman, W.J. Phenology Affects Differentiation of Crop and Weed Species Using Hyperspectral Remote Sensing. Weed Technol. 2020, 34, 897–908. [Google Scholar] [CrossRef]
  8. Zhang, Y.; Slaughter, D.C. Hyperspectral Species Mapping for Automatic Weed Control in Tomato under Thermal Environmental Stress. Comput. Electron. Agric. 2011, 77, 95–104. [Google Scholar] [CrossRef]
  9. Persello, C.; Grift, J.; Fan, X.; Paris, C.; Hänsch, R.; Koeva, M.; Nelson, A. AI4SmallFarms: A Dataset for Crop Field Delineation in Southeast Asian Smallholder Farms. IEEE Geosci. Remote Sens. Lett. 2023, 20, 2505705. [Google Scholar] [CrossRef]
  10. Nascimento, E.; Just, J.; Almeida, J.; Almeida, T. Productive Crop Field Detection: A New Dataset and Deep-Learning Benchmark Results. IEEE Geosci. Remote Sens. Lett. 2023, 20, 5002005. [Google Scholar] [CrossRef]
  11. Krestenitis, M.; Raptis, E.K.; Kapoutsis, A.C.; Ioannidis, K.; Kosmatopoulos, E.B.; Vrochidis, S.; Kompatsiaris, I. CoFly-WeedDB: A UAV Image Dataset for Weed Detection and Species Identification. Data Brief. 2022, 45, 108575. [Google Scholar] [CrossRef] [PubMed]
  12. Olsen, A.; Konovalov, D.A.; Philippa, B.; Ridd, P.; Wood, J.C.; Johns, J.; Banks, W.; Girgenti, B.; Kenny, O.; Whinney, J.; et al. DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning. Sci. Rep. 2019, 9, 2058. [Google Scholar] [CrossRef] [PubMed]
  13. Sudars, K.; Jasko, J.; Namatevs, I.; Ozola, L.; Badaukis, N. Dataset of Annotated Food Crops and Weed Images for Robotic Computer Vision Control. Data Brief. 2020, 31, 105833. [Google Scholar] [CrossRef] [PubMed]
  14. Sa, I.; Chen, Z.; Popovic, M.; Khanna, R.; Liebisch, F.; Nieto, J.; Siegwart, R. WeedNet: Dense Semantic Weed Classification Using Multispectral Images and MAV for Smart Farming. IEEE Robot. Autom. Lett. 2018, 3, 588–595. [Google Scholar] [CrossRef]
  15. Sa, I.; Popović, M.; Khanna, R.; Chen, Z.; Lottes, P.; Liebisch, F.; Nieto, J.; Stachniss, C.; Walter, A.; Siegwart, R. WeedMap: A Large-Scale Semantic Weed Mapping Framework Using Aerial Multispectral Imaging and Deep Neural Network for Precision Farming. Remote Sens. 2018, 10, 1423. [Google Scholar] [CrossRef]
  16. Hennessy, A.; Clarke, K.; Lewis, M. Hyperspectral Classification of Plants: A Review of Waveband Selection Generalisability. Remote Sens. 2020, 12, 113. [Google Scholar] [CrossRef]
  17. Gold, S. (n.d). Setaria adhaerens [Photograph]. Wild Flowers. Available online: https://www.wildflowers.co.il/images/merged/1374-l.jpg?Setaria%20adhaerens (accessed on 24 July 2024).
  18. Gold, S. (n.d). Solanum nigrum [Photograph]. Wild Flowers. Available online: https://www.wildflowers.co.il/images/merged/190-l-1.jpg?Solanum%20nigrum (accessed on 24 July 2024).
  19. Livne, E. (n.d). Amaranthus retroflexus [Photograph]. Wild Flowers. Available online: https://www.wildflowers.co.il/images/merged/510-l.jpg?Amaranthus%20retroflexus (accessed on 24 July 2024).
  20. Meyer, G.E.; Neto, J.C. Verification of Color Vegetation Indices for Automated Crop Imaging Applications. Comput. Electron. Agric. 2008, 63, 282–293. [Google Scholar] [CrossRef]
  21. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An Efficient Alternative to SIFT or SURF. In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011. [Google Scholar]
  22. Wang, A.; Zhang, W.; Wei, X. A Review on Weed Detection Using Ground-Based Machine Vision and Image Processing Techniques. Comput. Electron. Agric. 2019, 158, 226–240. [Google Scholar] [CrossRef]
  23. Ronay, I.; Nisim Lati, R.; Kizel, F. Spectral Mixture Analysis for Weed Traits Identification under Varying Resolutions and Growth Stages. Comput. Electron. Agric. 2024, 220, 108859. [Google Scholar] [CrossRef]
  24. Kizel, F.; Shoshany, M.; Netanyahu, N.S.; Even-Tzur, G.; Benediktsson, J.A. A Stepwise Analytical Projected Gradient Descent Search for Hyperspectral Unmixing and Its Code Vectorization. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4925–4943. [Google Scholar] [CrossRef]
  25. Luo, B.; Yang, C.; Chanussot, J.; Zhang, L. Crop Yield Estimation Based on Unsupervised Linear Unmixing of Multidate Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2013, 51, 162–173. [Google Scholar] [CrossRef]
  26. Sapkota, B.; Singh, V.; Cope, D.; Valasek, J.; Bagavathiannan, M. Mapping and Estimating Weeds in Cotton Using Unmanned Aerial Systems-Borne Imagery. AgriEngineering 2020, 2, 350–366. [Google Scholar] [CrossRef]
  27. Machidon, A.L.; Del Frate, F.; Picchiani, M.; Machidon, O.M.; Ogrutan, P.L. Geometrical Approximated Principal Component Analysis for Hyperspectral Image Analysis. Remote Sens. 2020, 12, 1698. [Google Scholar] [CrossRef]
  28. Gausman, H.W. Plant Leaf Optical Properties in Visible and Near-Infrared Light; International Center for Arid and Semiarid Land Studies (ICASALS): Lubbock, TX, USA, 1985. [Google Scholar]
  29. Liu, L.; Cheng, Z. Mapping C3 and C4 Plant Functional Types Using Separated Solar-Induced Chlorophyll Fluorescence from Hyperspectral Data. Int. J. Remote Sens. 2011, 32, 9171–9183. [Google Scholar] [CrossRef]
  30. Adjorlolo, C.; Mutanga, O.; Cho, M.A.; Ismail, R. Spectral Resampling Based on User-Defined Inter-Band Correlation Filter: C3 and C4 Grass Species Classification. Int. J. Appl. Earth Obs. Geoinf. 2013, 21, 535–544. [Google Scholar] [CrossRef]
  31. Chang, G.J.; Oh, Y.; Goldshleger, N.; Shoshany, M. Biomass Estimation of Crops and Natural Shrubs by Combining Red-Edge Ratio with Normalized Difference Vegetation Index. J. Appl. Remote Sens. 2022, 16, 014501. [Google Scholar] [CrossRef]
  32. Kizel, F. Resolution Enhancement of Unsupervised Classification Maps Through Data Fusion of Spectral and Visible Images from Different Sensing Instruments. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021. [Google Scholar]
  33. Kizel, F.; Benediktsson, J.A. Spatially Enhanced Spectral Unmixing Through Data Fusion of Spectral and Visible Images from Different Sensors. Remote Sens. 2020, 12, 1255. [Google Scholar] [CrossRef]
Figure 1. The weed species in the dataset: (a) Setaria adhaerens (Sa) (Sarah Gold, n.d [17]), (b) Solanum nigrum (Sn) (Sarah Gold, n.d [18]), and (c) Amaranthus retroflexus (Ar) (Eli Livne, n.d [19]).
Figure 2. (a) The weed species included in the dataset, their photosynthetic pathways, and their classification into botanical groups. (b) The weed species noted on the RGB composite of the hyperspectral images: Amaranthus retroflexus (Ar), Solanum nigrum (Sn), and Setaria adhaerens (Sa). (c) The experimental setup. (d) The Specim camera.
Figure 3. (a) The spectral signatures from the marked selected pixels in (b). (b,c) Examples of an RGB image and an RGB composite of the hyperspectral image, respectively. (d) The original labels sampled on the RGB image, corresponding to the area in the red box in (b). (e) The labels transformed onto the hyperspectral image, corresponding to the area in the red box in (c).
Figure 4. A diagram of the 1D-CNN used for classification. Each pixel from the hyperspectral cube of m × n pixels with b bands is given as input to the network. After propagation through the network, the output is a probability for each of the c classes. The input size of each layer is noted above the layer in the diagram.
Figure 5. A representative segment of the SVM classification maps at 14 DAS compared to the labeled data for the different weed traits. Black indicates unclassified pixels.
Figure 6. Spectral feature importance as obtained from the random forest classification for (a) species, (b) botanical groups, and (c) photosynthetic pathways.
Figure 7. Visualization of the classification results of the images taken at 14 DAS as predicted by SVM and the 1D-CNN using the full spectra, the VIS and NIR features separated and combined, and the PCA features: (a) species, (b) botanical groups, (c) photosynthetic pathways.
Figure 8. A representative part of the unmixing abundance maps at 14 DAS compared to the labeled data for each EM of the different weed traits.
Figure 9. File tree illustrating the dataset structure.
Table 1. Number of pixels for each class in the dataset.
DAS | Sa | Ar | Sn | Monocots | Dicots | C4 | C3 | Soil
7 | 21,218 | 13,290 | 6321 | 21,218 | 19,611 | 34,508 | 6321 | 372,700
8 | 38,351 | 24,578 | 14,730 | 38,351 | 39,308 | 62,929 | 14,730 | 267,935
9 | 45,008 | 30,893 | 18,686 | 45,008 | 49,579 | 75,901 | 18,686 | 374,651
12 | 67,132 | 44,083 | 31,935 | 67,132 | 76,018 | 111,215 | 31,935 | 386,183
13 | 79,511 | 56,523 | 44,939 | 79,511 | 101,462 | 136,034 | 44,939 | 342,641
14 | 87,074 | 57,529 | 50,612 | 87,074 | 108,141 | 144,603 | 50,612 | 323,484
Table 2. Overall Accuracies (OAA) achieved using the different sets of features with SVM and the 1D-CNN.
Trait | DAS | SVM Full | SVM VIS | SVM NIR | SVM VIS+NIR | SVM PCA | CNN Full | CNN VIS | CNN NIR | CNN VIS+NIR | CNN PCA
Species | 7 | 0.96 | 0.96 | 0.94 | 0.97 | 0.97 | 0.96 | 0.96 | 0.94 | 0.96 | 0.97
Species | 8 | 0.93 | 0.89 | 0.85 | 0.92 | 0.92 | 0.92 | 0.89 | 0.85 | 0.92 | 0.93
Species | 9 | 0.93 | 0.91 | 0.87 | 0.94 | 0.94 | 0.93 | 0.91 | 0.87 | 0.94 | 0.95
Species | 12 | 0.89 | 0.83 | 0.82 | 0.90 | 0.90 | 0.92 | 0.85 | 0.83 | 0.92 | 0.92
Species | 13 | 0.88 | 0.83 | 0.78 | 0.91 | 0.91 | 0.92 | 0.84 | 0.79 | 0.91 | 0.91
Species | 14 | 0.88 | 0.80 | 0.78 | 0.90 | 0.90 | 0.90 | 0.81 | 0.80 | 0.90 | 0.91
Botanical groups | 7 | 0.96 | 0.96 | 0.95 | 0.97 | 0.97 | 0.97 | 0.96 | 0.94 | 0.97 | 0.98
Botanical groups | 8 | 0.94 | 0.89 | 0.87 | 0.93 | 0.93 | 0.93 | 0.90 | 0.87 | 0.93 | 0.94
Botanical groups | 9 | 0.93 | 0.91 | 0.89 | 0.95 | 0.95 | 0.95 | 0.91 | 0.89 | 0.95 | 0.95
Botanical groups | 12 | 0.90 | 0.85 | 0.85 | 0.92 | 0.92 | 0.93 | 0.86 | 0.85 | 0.93 | 0.93
Botanical groups | 13 | 0.89 | 0.85 | 0.84 | 0.92 | 0.92 | 0.92 | 0.85 | 0.84 | 0.92 | 0.92
Botanical groups | 14 | 0.89 | 0.82 | 0.83 | 0.91 | 0.92 | 0.92 | 0.82 | 0.83 | 0.91 | 0.92
Photosynthetic pathway | 7 | 0.96 | 0.96 | 0.97 | 0.97 | 0.97 | 0.97 | 0.96 | 0.97 | 0.97 | 0.97
Photosynthetic pathway | 8 | 0.93 | 0.90 | 0.91 | 0.91 | 0.91 | 0.91 | 0.91 | 0.91 | 0.92 | 0.94
Photosynthetic pathway | 9 | 0.93 | 0.92 | 0.93 | 0.93 | 0.93 | 0.93 | 0.92 | 0.93 | 0.93 | 0.95
Photosynthetic pathway | 12 | 0.89 | 0.87 | 0.89 | 0.90 | 0.89 | 0.92 | 0.87 | 0.89 | 0.92 | 0.93
Photosynthetic pathway | 13 | 0.90 | 0.87 | 0.87 | 0.91 | 0.92 | 0.93 | 0.87 | 0.87 | 0.92 | 0.93
Photosynthetic pathway | 14 | 0.89 | 0.84 | 0.86 | 0.90 | 0.91 | 0.92 | 0.85 | 0.87 | 0.91 | 0.92
Table 3. Precision and recall values achieved using the different sets of features with the SVM and the 1D-CNN.
Trait | Method | Class | Precision: Full | VIS | NIR | VIS+NIR | PCA | Recall: Full | VIS | NIR | VIS+NIR | PCA
Species | CNN | Sa | 85.5 | 54 | 51.8 | 81.8 | 85.5 | 82.7 | 83.4 | 70.8 | 86.4 | 85.1
Species | CNN | Ar | 83.9 | 69.7 | 51.9 | 80.3 | 79 | 71.4 | 65.8 | 21.1 | 76.1 | 81.2
Species | CNN | Sn | 87.8 | 26.7 | 61.4 | 83 | 87.8 | 74.4 | 0 | 48.1 | 73.9 | 72.4
Species | CNN | Soil | 92.5 | 94.1 | 93.4 | 95 | 94.7 | 97.9 | 95.9 | 97.3 | 96.1 | 97
Species | SVM | Sa | 89.3 | 53.4 | 48 | 84 | 83.5 | 79 | 86.1 | 84.2 | 84.9 | 84.7
Species | SVM | Ar | 83.8 | 76 | 100 | 81.3 | 81.4 | 74.9 | 52.9 | 5 | 76.9 | 78.6
Species | SVM | Sn | 91.6 | 100 | 70.6 | 88.5 | 88.5 | 69.2 | 5 | 44 | 69.3 | 69.3
Species | SVM | Soil | 90.6 | 93.2 | 94.6 | 93.7 | 94.1 | 98.6 | 96.7 | 96.9 | 97.5 | 97.5
Botanical groups | CNN | Mono | 86.3 | 60.6 | 62.6 | 83.3 | 86.1 | 83.8 | 62.7 | 33.4 | 84 | 84.1
Botanical groups | CNN | Di | 88.9 | 67.9 | 60.9 | 87.3 | 88.9 | 85.1 | 56.3 | 80.5 | 83.8 | 85.3
Botanical groups | CNN | Soil | 94.7 | 93.1 | 94.7 | 94.9 | 95 | 96.8 | 96.4 | 96.4 | 96 | 96.9
Botanical groups | SVM | Mono | 90.2 | 58.4 | 64.4 | 85.4 | 86.2 | 78.5 | 63.1 | 49.8 | 83.8 | 82.9
Botanical groups | SVM | Di | 91.6 | 66.1 | 68.1 | 89.3 | 89.6 | 81.3 | 57.6 | 73.3 | 83.1 | 84.1
Botanical groups | SVM | Soil | 91.7 | 93.6 | 94 | 94.4 | 94.5 | 98.4 | 96.8 | 97.3 | 97 | 97.4
Photosynthetic pathways | CNN | C4 | 87.2 | 67.4 | 73 | 84.5 | 85.9 | 88.9 | 90.4 | 87.2 | 89.3 | 88.6
Photosynthetic pathways | CNN | C3 | 89.3 | 100 | 67.9 | 85.4 | 85.7 | 74.3 | 0 | 20.6 | 69.6 | 73.8
Photosynthetic pathways | CNN | Soil | 95 | 91.8 | 94.5 | 95.4 | 95.5 | 96.7 | 95.2 | 96.6 | 95.8 | 96.3
Photosynthetic pathways | SVM | C4 | 89.5 | 68.9 | 72.4 | 81.7 | 83 | 84.5 | 89.8 | 90.7 | 89.6 | 89.5
Photosynthetic pathways | SVM | C3 | 92.9 | 100 | 95.6 | 92.4 | 92.8 | 67.6 | 5 | 14.6 | 53.8 | 59.4
Photosynthetic pathways | SVM | Soil | 92.1 | 91.5 | 94.9 | 94.8 | 95 | 98.4 | 95.8 | 96.7 | 96.9 | 97
Table 4. MAE (%) of coverage estimation by unmixing using different features for all EMs at 14 DAS.
Trait | EM | Full Spectra | VIS | NIR | VIS+NIR | PCA
Species | Sa | 10.92 | 8.60 | 9.45 | 7.71 | 10.91
Species | Ar | 6.76 | 5.86 | 6.85 | 6.35 | 10.74
Species | Sn | 7.82 | 8.71 | 5.64 | 4.58 | 6.34
Species | Soil | 4.58 | 4.50 | 6.35 | 8.17 | 6.06
Botanical groups | Mono | 6.75 | 7.83 | 9.83 | 7.46 | 9.43
Botanical groups | Dicot | 6.81 | 11.84 | 9.95 | 7.25 | 8.78
Botanical groups | Soil | 3.96 | 6.15 | 7.71 | 10.43 | 6.02
Photosynthetic pathway | C4 | 12.10 | 17.89 | 9.61 | 11.17 | 10.40
Photosynthetic pathway | C3 | 9.52 | 13.16 | 5.71 | 5.54 | 6.98
Photosynthetic pathway | Soil | 4.52 | 6.32 | 6.29 | 9.45 | 6.08
Table 5. MAE (%) of coverage estimation by unmixing for all EMs at different growth stages.
DASSaArSnSoilMonocotsDicotsSoilC4C3Soil
72.012.303.763.702.092.983.093.494.593.82
85.193.307.134.523.334.594.377.608.384.56
93.902.935.504.123.543.613.666.606.553.73
128.795.577.834.386.576.484.3510.5110.474.67
1312.337.169.237.158.897.696.8114.6410.337.07
1410.926.767.824.586.756.813.9612.109.524.52
