Review

Spectral Intelligence: AI-Driven Hyperspectral Imaging for Agricultural and Ecosystem Applications

1 Department of Plant Pathology, University of Agriculture, Faisalabad 38000, Pakistan
2 Department of Agricultural Science & Engineering, College of Agriculture, Tennessee State University, Nashville, TN 37209, USA
3 Department of Structures and Environmental Engineering, University of Agriculture, Faisalabad 38000, Pakistan
4 Department of Plant Pathology, Institute of Plant Protection, MNS University of Agriculture, Multan 60650, Pakistan
5 Department of Plant Pathology, North Dakota State University, Fargo, ND 58105, USA
6 Horticulture Department, Faculty of Agriculture, Minia University, El-Minia 61517, Egypt
7 Applied Biotechnology Department, University of Technology and Applied Sciences-Sur, Sur 411, Oman
8 Department of Plant Sciences, College of Agricultural and Marine Sciences, Sultan Qaboos University, Al-Khoud, Muscat 123, Oman
9 Faculty of Agriculture, University of Zagreb, Svetošimunska Cesta 25, 10000 Zagreb, Croatia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Agronomy 2024, 14(10), 2260; https://doi.org/10.3390/agronomy14102260
Submission received: 25 August 2024 / Revised: 22 September 2024 / Accepted: 28 September 2024 / Published: 30 September 2024

Abstract:
Ensuring global food security amid mounting challenges, such as population growth, disease infestations, resource limitations, and climate change, is a pressing concern. Anticipated increases in food demand add further complexity to this critical issue. Plant pathogens, responsible for substantial crop losses (up to 41%) in major crops like wheat, rice, maize, soybean, and potato, exacerbate the situation. Timely disease detection is crucial, yet current practices often identify diseases at advanced stages, leading to severe infestations. To address this, remote sensing and hyperspectral imaging (HSI) have emerged as robust and nondestructive techniques, exhibiting promising results in early disease identification. Integrating machine learning algorithms with image data sets enables precise spatial–temporal disease identification, facilitating timely detection, predictive modeling, and effective disease management without compromising fitness or climate adaptability. By harnessing these cutting-edge technologies and data-driven decision-making, growers can optimize input costs while achieving enhanced yields, making significant strides toward global food security in the face of climate change risks. This review discusses foundational concepts of remote sensing, the platforms used for remote sensing data collection, successful applications of the approach, and its future prospects.

1. Introduction

Numerous challenges are creating a daunting forecast for the agricultural sector in the coming years. An increasing population demands higher food inputs into the global system to ensure food security. The importance of this issue is evident from the UN Sustainable Development Goals (SDGs), which include zero hunger and food security as key parts of the sustainability agenda [1]. The agriculture sector is under immense pressure from a wide range of challenges, including resource depletion, environmental factors, population growth, climate change, and plant disease and infestation. Among these, plant disease and infestation result from the combination of abiotic (a favorable climate/weather) and biotic (pathogen inoculum) factors and occur across all kinds of habitats. Whether or not a field faces other challenges (e.g., drought, heat, flooding, or resource shortages), it is almost certain to be affected by some kind of disease infestation. Plant pathogens are thus responsible for severe yield and economic losses every year across the globe [2]. Losses ranging from 20% to 41% have been recorded in crops such as maize, potato, wheat, soybean, and rice due to severe pathogen infestation [3]. Disease incidences are becoming more common and intense as increased human activity promotes the wide-scale distribution of pathogens. Another key factor contributing to higher disease incidence is the limited genetic diversity of commercially grown high-yielding varieties, which results in a poor defense against attacking pathogens [4,5]. Furthermore, heavy fertilizer applications and the absence of crop rotation often reduce the diversity of beneficial soil-inhabiting bacteria, making crop plants more vulnerable to infectious pathogens [6].
Early detection of disease is an important practice for avoiding massive yield losses and is a key component of an integrated disease management strategy. The most common approaches are conventional scouting and specific genetic or biochemical assays for identifying the pathogens associated with a disease. However, each strategy has its own difficulties. The conventional approach relies on visual confirmation of symptoms, which results in slower detection (especially at large scale) and is a time-consuming, laborious process. Assay-based identification, on the other hand, demands greater resource input in terms of the biochemical reagents needed, and new reagents must be developed to confirm pathogen types and detect even minor mutations, which further aggravates the financial burden [7]. The need for an early disease warning system is therefore evident; such a system must ensure the accurate, robust, and automated identification of a disease threat well before its onset.
In this regard, modern sensing technologies, as part of a precision agriculture approach, can enable the effective early detection of diseases. There are two types of such sensing technologies: the first collects data based on optical images and visuals (passive remote sensing); the other collects data via laser-based detection systems (active remote sensing). The passive approach acquires image-based data using two distinct types of sensors or imaging systems: multispectral image sensors, which record a limited number of spectral bands, and hyperspectral imaging systems, which record a much larger number of narrow spectral bands [8,9]. Hyperspectral imaging can currently capture a broad range of accurate image-based data, making it a good choice for developing an early disease warning and evasion system. It can be utilized for the precise, robust, and effective detection of disease patterns in a non-destructive manner.
Disease prediction is achieved by using different sensors to collect data on reflected solar radiation and then assessing those data for disease indicators. In recent years there have been significant advances in hyperspectral data collection sensors. Small commercially available sensors, such as the Micro- and Nano-Hyperspec, HySpex VNIR, and FireflEYE, can be mounted onto low-flying planes or unmanned aerial vehicles (UAVs) to collect imaging data at a relatively small scale [10,11]. Furthermore, many satellite-based hyperspectral sensors are available to provide a larger-scale picture; platforms such as DESIS, CHRIS, PRISMA, GLI, MERIS, and EnMAP are some of the well-known systems [12,13]. This growth in regional- and global-scale sensor technologies has opened a whole new horizon for better monitoring of crops, soil, and natural resources, providing support to the agriculture sector. Various organizations have already put remote sensing data to use in answering agricultural challenges. Companies and startups such as Descartes Labs, Prospera, and aWhere are working on the effective application of remote sensing data in agriculture (Table 1 lists agriculture companies working with remote sensing data) [14].
Hyperspectral imaging (HSI) has emerged as a cutting-edge tool for precision agriculture (PA), as it leverages the natural ability of materials to absorb specific wavelengths of light while reflecting or scattering others. Unlike traditional imaging methods that rely on the visible spectrum (RGB) or multispectral data, HSI offers a more detailed and thorough perspective on crop health, enabling more precise and targeted crop management strategies. Hyperspectral imaging is utilized across various platforms, including satellite, airplane, UAV, and ground-based systems. It is employed to estimate crop biochemical properties and nutrient status, to classify crop types, and to retrieve soil moisture and fertility. The integration of hyperspectral sensors with UAVs has expanded the practical applications of HSI in agriculture, allowing for real-time, in situ analysis. However, the high spectral resolution of HSI generates vast amounts of data that are complex to process, and AI models often require large amounts of labeled training data, which can be expensive and time-consuming to obtain, particularly for domain-specific agricultural data sets. While AI-enhanced hyperspectral imaging holds great potential for transforming agricultural practices by providing detailed insights into crop health and soil conditions, overcoming these challenges is crucial for its broader adoption and effectiveness in sustainable agriculture.
Given the important role of remote sensing in modern agricultural practices, this review focuses on the fundamentals of sensing and its role in modern plant disease detection. The paper is divided into four sections: the first discusses the various kinds of hyperspectral remote sensing platforms; the second briefly discusses the AI algorithms used in remote sensing and precision agriculture, along with the stages of data collection, processing, and interpretation of imagery data; the third covers the application of remote sensing data for disease detection and warning systems in agriculture; and the final section discusses challenges related to the AI-remote sensing approach and its future applications in the agriculture sector on a global scale.

2. Imaging Platforms for Remote Sensing

Several hyperspectral and multispectral sensors are mounted on various types of platforms, i.e., airplanes, satellite systems, UAVs, and close-range or ground sensors, for image acquisition at various resolutions. Among these platforms, airplane-based and satellite-based systems are the most common, whereas UAV/drone-based systems have emerged more recently. With ongoing efforts to meet global food demands, sustainable farming approaches, and climate-smart agriculture initiatives, these imaging technologies are becoming a part of day-to-day agricultural operations.

2.1. Satellite-Based Imaging Systems

Compared to multispectral satellite imagery sensors, like Landsat, SPOT, WorldView, and Sentinel-2, hyperspectral sensors are fewer in number. Available satellite hyperspectral sensors include EO-1 Hyperion, PROBA-CHRIS, and TianGong-1. EO-1 Hyperion has been widely used for agricultural applications, collecting data in the visible, near-infrared, and short-wave infrared ranges with a spectral resolution of 10 nm and a spatial resolution of 30 m [15]. It has supported various agricultural studies, such as crop disease detection, estimation of crop properties, classification of crop types, and investigation of soil features. Data collected from Hyperion have been used to identify different crop types in Colombia and to estimate leaf area index and vegetative properties in a mixed field. PROBA-CHRIS, launched in 2001, is another commonly used hyperspectral sensor, with studies utilizing it for LAI retrieval and crop growth stage identification [16]. Other available hyperspectral satellite platforms include IMS-1 and HICO, which are used for operations ranging from evaluating soil salinity to assessing ocean coastal areas. Some hyperspectral sensors are used less commonly due to limitations in spatial resolution and data availability.
Recent satellite-based hyperspectral sensors include DLR DESIS, HISUI, PRISMA, EnMAP, and SHALOM. Researchers have simulated images from these sensors to study vegetation and soil features. Comparative analyses have shown that hyperspectral sensors outperform multispectral imagery sensors, as they capture data across hundreds of narrow, contiguous spectral bands. This allows for the detection of subtle differences in the reflectance of materials that might not be discernible in a few broad bands [17,18]. However, factors like spatial resolution, temporal resolution, cloud cover, and data quality can limit the widespread use of hyperspectral data in precision farming. Overcoming these limitations will be essential for enhancing the applicability of hyperspectral imaging in agriculture.
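To make the narrow-band advantage concrete, the toy sketch below (all reflectance values are illustrative, not real sensor data) contrasts a single broad band with hypothetical 10 nm bands around the vegetation red edge: averaging over one broad band compresses the spectral shape, while the narrow bands preserve the red-edge slope, a feature commonly associated with vegetation condition.

```python
def band_average(spectrum, lo, hi):
    """Mean reflectance over all wavelengths (nm) in [lo, hi]."""
    vals = [r for wl, r in spectrum.items() if lo <= wl <= hi]
    return sum(vals) / len(vals)

# Hypothetical 10 nm resolution spectra around the red edge (680-740 nm).
healthy = {680: 0.05, 690: 0.06, 700: 0.10, 710: 0.20, 720: 0.35, 730: 0.45, 740: 0.50}
stressed = {680: 0.07, 690: 0.08, 700: 0.09, 710: 0.12, 720: 0.20, 730: 0.30, 740: 0.40}

# A single broad band reduces each spectrum to one number...
broad = band_average(healthy, 680, 740), band_average(stressed, 680, 740)

# ...whereas narrow contiguous bands retain the red-edge slope as a feature.
slope = lambda s: (s[740] - s[700]) / 40.0
```

A real analysis would use measured spectra and established narrow-band indices; the point here is only that shape features like the slope exist in narrow-band data and are lost after broadband averaging.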

2.2. Airplane-Based/Airborne Imagery Systems

Airborne hyperspectral imaging has become a prominent tool for diverse monitoring applications, notably in agriculture and forestry [19]. AVIRIS is one of the earliest airborne hyperspectral sensors. Developed in the 1980s, the sensor captures more than 220 bands spanning the visible to short-wave infrared range [20]. The corresponding imagery data have been utilized for the evaluation of agricultural features (e.g., vegetative properties, soil data, pest identification, and disease detection) and for decision-making. Apart from AVIRIS, other airborne hyperspectral sensors, like CASI, HyMap, and AISA Eagle, are also widely used. CASI images have been utilized for estimating crop chlorophyll content, studying crop cover fraction, and identifying weeds [21]. HyMap imagery has been leveraged to examine crop biophysical and biochemical variables, to detect plant stress signals, and to investigate the spatial patterns of soil organic carbon (SOC). AISA Eagle data have been applied to estimate crop nitrogen content and biomass.
Numerous other airborne hyperspectral sensors have been employed in previous studies, each contributing valuable insights into vegetation characteristics, crop residues, disease detection, soil moisture estimation, winter wheat leaf area index (LAI) estimation, and the relationship between SOC and spectral signals. These hyperspectral images are primarily acquired by airplanes flying at medium to high altitudes, ranging from 1 to 4 km for sensors such as CASI and up to 20 km for AVIRIS. The images generally offer high to medium spatial resolutions, such as 4 m for CASI, 5 m for HyMap, and up to 20 m for AVIRIS [22]. In recent years, researchers have leveraged various hyperspectral sensors to gain insights into different aspects of agriculture, from vegetation properties to soil analysis. Despite these benefits, airborne hyperspectral imaging faces several challenges that limit its overall potential: image acquisition often requires long lead times, and the cost of flight missions can be considerable. Certain applications, particularly those involving species-level or community-level features, necessitate very high spatial resolutions, even reaching sub-meter levels. Additionally, the instability of airplanes as imaging platforms requires gimbals or high-accuracy inertial measurement units (IMUs) to compensate for orientation changes and enable accurate image correction. Harsh weather also poses a serious threat both to the data collected by airborne platforms and to the safety of the platform itself. These limitations have hindered the full realization of airborne hyperspectral imaging's potential in precision agriculture.

2.3. UAV/Drone-Based Imaging Systems

In recent years, unmanned aerial vehicles (UAVs), or drone technology, have gained significant popularity as a platform for remote sensing data acquisition; the advancement in lightweight hyperspectral sensors has prompted researchers to explore mounting these sensors on UAVs to allow for the acquisition of high-spatial-resolution hyperspectral imagery [23,24]. A range of UAV types, including multi-rotors, helicopters, and fixed-wing aircraft, have been employed in recent studies. One of the key advantages of UAVs over manned airplanes and helicopters lies in their capability to acquire high-resolution images at a significantly lower cost, offering greater flexibility in planning flight missions [25]. Over the years, numerous lightweight hyperspectral sensors have been developed, suitable for UAV integration, such as the Headwall Micro- and Nano-Hyperspec VNIR, UHD 185-Firefly, PIKA II sensor, and the HySpex VNIR. These sensors offer more than 100 bands in the visible–near infrared spectral range, while maintaining a compact and lightweight design (weighing 1–2 kg), making them highly deployable on various manned or unmanned remote sensing platforms.
The application of UAV-based hyperspectral imaging entails the consideration of various factors, ranging from sensor setup and data collection to image processing. Scholars have delved into the feasibility of UAV-based hyperspectral imaging in different domains, addressing challenges related to imaging technology, hardware requirements, system settings, calibration, payload capacity, signal-to-noise ratio, and spectral calibration. Additionally, researchers have explored the applications of UAV-based hyperspectral imaging in agriculture and forestry, shedding light on several challenges, such as radiometric noise, UAV georeferencing quality, and signal-to-noise ratio. With the increasing popularity of UAV-based hyperspectral imaging, it becomes crucial to comprehensively review its strengths and limitations, extending the scope beyond agricultural applications. Various types of UAVs have been employed as hyperspectral imaging platforms, with multi-rotors and fixed-wing planes being the most widely used [26,27,28,29]. Achieving high-spatial-resolution hyperspectral imagery with an optimal signal-to-noise ratio often calls for slow flights at low altitudes. In this context, multi-rotors hold a competitive edge over fixed-wing planes due to their ability to fly at low altitudes, vary flight speeds, and perform vertical takeoff and landing maneuvers. Conversely, fixed-wing UAVs require a minimum flight altitude and speed and, in some cases, accessories for takeoff and landing, such as runways, launchers, or parachutes. Hyperspectral imaging systems, comprising a hyperspectral sensor, data processing unit, GPS, and IMU, contribute significantly to the overall weight of the UAV (e.g., 1–3 kg), posing challenges to the payload capacity and battery endurance of the system.
While multi-rotors are typically powered by high-performance batteries such as lithium polymer, resulting in shorter endurance (e.g., less than 20 min), many fixed-wing UAVs run on fuel, providing longer flight times (e.g., 1–10 h) [23]. However, the larger size and weight of fixed-wing planes introduce their own complexities during flight operations. Researchers need to factor in the UAV's SWaP (size, weight, and power), geographical coverage, time aloft, altitude, and other variables when choosing an appropriate platform. Beyond the technical and operational aspects, obtaining flight permission from aviation authorities and maintaining constant visual contact with the UAV during flight missions are mandatory for safety reasons, which presents difficulties in cases of extensive terrain coverage, hilly landscapes, or dense forest areas.

2.4. Ground-Based Sensors

Ground-based hyperspectral imaging is an emerging technology that has gained prominence in recent years. These sensors offer the ability to acquire super-high-spatial-resolution hyperspectral imagery, reaching centimeter or even sub-centimeter resolutions, which enables fine-scale analysis of vegetative features (e.g., crop stress signs, disease, and weed detection) [30,31]. They can be deployed in both indoor and outdoor environments on static as well as moving devices, with either halogen lamps (indoors) or the sun (outdoors) as the light source. The resulting hyperspectral images offer detailed information about a plant's biophysical and biochemical processes and how it responds to environmental stresses and diseases.
Despite its numerous benefits, close-range hyperspectral imaging does face some challenges during image collection and processing. Some of these issues include uninformative variability resulting from the interaction of light with plant structures (illumination effects), the influence of shadows, and the expanding application of the platform to a larger scale [32,33]. Consequently, further research in these areas is warranted to address these challenges and improve the overall effectiveness of close-range hyperspectral imaging.
Various hyperspectral imagery platforms, such as satellite, airplane-based systems, drones, and ground-based setups, have their own pros and cons when applied to precision agriculture. Satellite-based systems excel at capturing images over large areas but often face spatial–temporal resolution issues and limited data availability due to the constraints of the operating sensors and longer revisit times. Airplane-based platforms offer suitable spatial–temporal resolution for field applications, but their high mission costs and scheduling challenges make them unsuitable for repeated monitoring. Drone/UAV-based systems, on the other hand, prove highly effective in acquiring high-spatial-resolution images repeatedly and with great flexibility; however, they are limited to a small coverage area by battery endurance, power constraints, and regional aviation regulations. Ground-based imaging systems, while capable of achieving super-high-spatial-resolution images, are restricted to leaf- or canopy-level investigations and cannot cover larger areas.

3. Artificial Intelligence Models Used in Hyperspectral Image Analysis

With growers facing complex challenges ranging from climate change and cost reduction to food security and profitability, the agriculture sector is undergoing a major innovation cycle. The latest technologies, including web technologies, sensors, robotics, and IoT systems, produce massive visual and climatic data sets that can be utilized for better decision-making in resource management, crop monitoring, and pollution mitigation. The classification of such data involves categorizing data sets into different known classes (Figure 1). Classification is carried out via artificial intelligence models, which fall into the following main categories [34,35,36,37]:

3.1. Supervised Learning

Supervised learning is a type of artificial intelligence and machine learning (AI/ML) approach in which a model is trained on data sets along with their corresponding target labels or outputs [38]. The key aim in supervised learning is to enable the model to learn the mapping or relationship between the input features and the corresponding target labels. The availability of high-quality input data is essential for the high accuracy of AI models in supervised learning [39]. Once trained, the model can be applied to predict the label or output of a new, unseen data set. Support vector machines (SVMs), decision trees, and random forests are some of the most common supervised learning algorithms.

3.1.1. Support Vector Machine (SVM)

The SVM algorithm is a popular choice for classification and regression tasks and can be viewed as analogous to a single-level decision tree with a multivariate split condition. The SVM also has a unique technique, called the kernel trick, that transforms a normal data set into a higher-dimensional feature space, allowing linear algorithms to perform nonlinear computations without explicitly computing the coordinates of the transformed data in that higher-dimensional space [40]. The SVM approach is effective in high-dimensional spaces with a clear margin of separation and is well suited to cases where the number of dimensions exceeds the number of samples. A major drawback of the SVM model is its lower accuracy on noisy data sets, i.e., where several target classes overlap.
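The geometric idea behind the kernel trick can be shown with a toy example (the data and the mapping are illustrative): points that no single threshold can separate in one dimension become linearly separable after a nonlinear mapping. A real SVM never computes the mapping explicitly; the kernel function supplies the inner products in the mapped space directly.

```python
def phi(x):
    """Nonlinear feature map from the 1-D input space into a 2-D feature space."""
    return (x, x * x)

# Class A sits around the origin and class B lies on both sides of it,
# so no single 1-D threshold separates them.
class_a = [-1.0, 0.0, 1.0]
class_b = [-3.0, 3.0]

# In the mapped space, the second coordinate (x^2) alone separates the classes:
# class A has x^2 <= 1, class B has x^2 >= 9, so a linear boundary exists.
def classify(x, threshold=5.0):
    return "A" if phi(x)[1] < threshold else "B"
```

In practice the SVM would learn this boundary from data via the kernel k(x, y) = phi(x)·phi(y), without ever materializing phi(x).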

3.1.2. Decision Tree

The decision tree model is commonly used for classification- and regression-related tasks and has a flowchart-like structure consisting of internal branches, decision nodes, and root nodes. Constructing a decision tree involves partitioning (splitting) the data set on several criteria or characteristics (e.g., Gini impurity, entropy, or information gain) and works for both numeric and categorical features. These models require little data preprocessing and are nonparametric (making no assumptions about the shape of the data distribution) [41]. A major issue with this model is overfitting, where minor alterations in the data set can result in major modifications to the structure of the tree. Consequently, the model has limited application in regression-based, continuous-value prediction tasks.
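The split-selection step described above can be sketched in a few lines (single numeric feature, Gini impurity as the criterion; the data are illustrative):

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(xs, ys):
    """Threshold on one numeric feature minimizing weighted child impurity."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:          # skip splits with an empty child
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best
```

For example, feature values [1, 2, 3, 10, 11, 12] with labels ["h", "h", "h", "d", "d", "d"] yield the threshold 3 with weighted impurity 0.0 (two pure children); a full tree repeats this search recursively on each child.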

3.1.3. Random Forest

Random forest (RF) is another commonly used supervised machine learning algorithm that can handle classification as well as regression problems. The algorithm utilizes a collection of decision tree classifiers and has the potential to identify significant characteristics in a massive data set. It is robust, can be used on large data sets with a reduced risk of overfitting, and provides efficient results [42]. However, RF models demand large storage for the multiple decision trees and can be slow to train, making them a less feasible option than linear methods for data with many sparse attributes.
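The two core ingredients of a random forest, bootstrap sampling and majority voting, can be sketched as follows. For brevity each "tree" is reduced to a one-threshold stump placed at the midpoint of the class means; this is an illustrative stand-in, not full decision tree induction.

```python
import random

def train_stump(sample):
    """Toy stand-in for a decision tree: one threshold at the class midpoint."""
    pos = [x for x, y in sample if y == 1]
    neg = [x for x, y in sample if y == 0]
    if not pos or not neg:                 # degenerate bootstrap sample
        label = 1 if pos else 0
        return lambda x: label
    t = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return lambda x: 1 if x > t else 0

def random_forest(data, n_trees=25, seed=0):
    """Train each stump on a bootstrap sample; predict by majority vote."""
    rng = random.Random(seed)
    stumps = [train_stump([rng.choice(data) for _ in data])
              for _ in range(n_trees)]
    def predict(x):
        votes = [s(x) for s in stumps]
        return max(set(votes), key=votes.count)   # majority vote
    return predict
```

Because each stump sees a different bootstrap sample, individual errors tend to cancel in the vote, which is the mechanism behind the reduced overfitting noted above.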

3.2. Unsupervised Learning

Unsupervised learning algorithms are utilized to identify patterns and structures within data without labelled examples; the system finds clusters based on metrics appropriate to the type of data. The main goal of unsupervised learning is to uncover the underlying structure of the data, find relationships between data points, or group similar data points together, by leveraging statistical techniques, clustering algorithms, dimensionality reduction methods, and other approaches [43]. K-means, Fuzzy C-means, and artificial neural networks (ANNs) are among the most common unsupervised learning models.

3.2.1. K-Means

K-means is a common unsupervised ML model focused on clustering problems; it is widely used for grouping similar data points together based on feature similarity. K-means clustering is an iterative algorithm that partitions the data into K clusters, where K is a predefined number. Each data point is repeatedly assigned to the closest cluster centroid, and the centroids are updated, until the assignments stabilize [44]. The model scales easily to larger data sets and can generalize to clusters of different shapes, but it often struggles with clusters of varying densities and sizes.
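The assign-then-update loop can be written in a few lines for one-dimensional data (the initialization and data are illustrative; production implementations use smarter seeding such as k-means++):

```python
def kmeans(points, k, iters=20):
    """Minimal 1-D K-means: alternate assignment and centroid update steps."""
    centroids = points[:k]                               # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                                 # assignment step:
            nearest = min(range(k), key=lambda j: abs(p - centroids[j]))
            clusters[nearest].append(p)                  # nearest centroid
        centroids = [sum(c) / len(c) if c else centroids[i]   # update step:
                     for i, c in enumerate(clusters)]         # cluster means
    return sorted(centroids)
```

On two well-separated groups, e.g. values near 1.0 and values near 8.0, the centroids converge to the two group means within a few iterations.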

3.2.2. Fuzzy C-Means (FCM)

In the FCM model, each data point is assigned a membership value indicating the degree to which it belongs to each cluster. Unlike traditional clustering algorithms, where a data point belongs to only one cluster, in Fuzzy C-means a data point has membership in multiple clusters [45]. Although there can be multiple memberships, the sum of a point's membership values across all clusters must always equal one. The model requires the user to specify the number of clusters in advance, and its output includes the final cluster centroids and the membership values for each data point.
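The membership computation for a single point can be sketched as follows (fuzzifier m = 2 and one-dimensional distances; the values are illustrative). Note that the returned memberships sum to one, as required above.

```python
def fcm_memberships(point, centroids, m=2.0):
    """Fuzzy C-means membership of one point in each cluster."""
    d = [abs(point - c) for c in centroids]
    if 0.0 in d:                          # point coincides with a centroid
        return [1.0 if di == 0.0 else 0.0 for di in d]
    exp = 2.0 / (m - 1.0)
    # Each membership is the normalized inverse of the relative distances.
    return [1.0 / sum((d[i] / d[j]) ** exp for j in range(len(centroids)))
            for i in range(len(centroids))]
```

For a point at 2.0 with centroids at 1.0 and 8.0, the point belongs mostly, but not exclusively, to the first cluster, which is exactly the soft assignment that distinguishes FCM from K-means.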

3.2.3. Artificial Neural Network (ANN)

The ANN is another key machine learning model that can be utilized for tasks ranging from classification and regression to pattern recognition and decision-making. ANNs are inspired by the structure and function of biological neural networks. In feedforward ANN models, data flows unidirectionally through one or more layers of nodes, from the input layer to the output layer [46]. These models are robust to noise in the training data and can execute multiple tasks on various parts of the data simultaneously. ANNs are versatile: in supervised learning they learn from labeled data to perform tasks such as classification and regression, while in unsupervised learning they can discover hidden patterns in unlabeled data, as in clustering and feature extraction using autoencoders. In an autoencoder, the goal is to minimize the difference between the input and the reconstructed output, which teaches the model to capture the underlying structure of the data and to organize it by the similarity of input patterns without any labels. This flexibility makes ANNs powerful tools in domains ranging from image recognition to anomaly detection. Deep neural networks (DNNs) are a subset of ANNs that consist of multiple layers, often called hidden layers, between the input and output layers; the "deep" aspect refers to the presence of many such layers, allowing the network to capture hierarchical features.
Examples of specific unsupervised ANN models include the self-organizing map (SOM), autoencoders, and deep belief networks. An SOM can be used to cluster the pixels of hyperspectral images based on their spectral signatures without the need for labeled data. The SOM algorithm maps high-dimensional spectral data (with hundreds of wavelengths) to a lower-dimensional grid (usually 2D) while preserving the topological relationships between data points [47]. Autoencoders are commonly used to reduce the dimensionality of data while retaining important spectral features; this reduced representation can be used to identify key patterns, making analysis easier and more efficient [48].
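The unidirectional flow of data described above, from the input layer through a hidden layer to the output, can be sketched as a bare forward pass (the weights are arbitrary illustrative numbers; training, i.e. adjusting these weights against a loss, is omitted):

```python
import math

def sigmoid(z):
    """Standard logistic activation, squashing any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_hidden, w_out):
    """One forward pass: input -> hidden layer -> output layer."""
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for w in w_hidden]
    return [sigmoid(sum(wi * hi for wi, hi in zip(w, hidden))) for w in w_out]
```

An autoencoder uses the same forward machinery with the output layer sized to the input, so that the network is trained to reproduce its own input through a narrow hidden layer; a DNN simply stacks many such hidden layers.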

3.3. Reinforcement Learning

Reinforcement learning (RL) is a type of machine learning in which an agent learns to make decisions by interacting with an environment so as to maximize a reward signal. Unlike supervised learning, where the model learns from labeled data, RL focuses on learning through trial and error [49]. The agent receives feedback on its actions in the form of rewards or penalties and adjusts its behavior to improve performance over time. Model-based methods and deep reinforcement learning are among the most common types of reinforcement learning.

3.3.1. Model-Based Methods

Model-based methods involve building a model of the environment's dynamics and using it to predict future states and rewards. This approach can improve efficiency, because actions are planned with the model rather than learned solely from experience, and the model can be used to compute optimal policies and value functions directly. Techniques such as value iteration and policy iteration fall under dynamic programming [50].
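Value iteration, named above, can be sketched on a deterministic toy MDP (the transitions and rewards are made up for illustration): because the model P and R is known, the optimal state values are computed by planning rather than by trial and error.

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P[s][a]: next state; R[s][a]: reward. Returns optimal state values."""
    V = [0.0] * len(P)
    while True:
        # Bellman optimality backup: take the best action value in each state.
        V_new = [max(R[s][a] + gamma * V[P[s][a]] for a in range(len(P[s])))
                 for s in range(len(P))]
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new
        V = V_new
```

For example, with two states and two actions, P = [[0, 1], [1, 0]] and R = [[0.0, 5.0], [1.0, 0.0]], the optimal policy cycles between the states, and the values solve V0 = 5 + γ·V1 and V1 = γ·V0, i.e. V0 = 5/0.19 for γ = 0.9.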

3.3.2. Deep Reinforcement Learning (DRL)

Deep reinforcement learning combines reinforcement learning with deep neural networks to handle high-dimensional state and action spaces. DRL methods use neural networks to approximate value functions, policies, or both. Proximal policy optimization (PPO), for example, is a policy gradient method that improves the stability and efficiency of training by using a clipped objective function to update policies. PPO strikes a balance between exploring new actions and exploiting known ones, leading to more stable learning [51].
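The clipped objective at the heart of PPO can be written in a few lines. The sketch below evaluates it for a single sample; the ratio and advantage values in the usage examples are illustrative numbers.

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective for one (state, action) sample.

    ratio     = pi_new(a|s) / pi_old(a|s), the probability ratio
    advantage = estimated advantage of the action taken
    PPO takes the minimum of the unclipped and clipped terms, so the policy
    gains nothing from pushing the ratio outside [1 - eps, 1 + eps].
    """
    clipped = max(min(ratio, 1 + eps), 1 - eps)
    return min(ratio * advantage, clipped * advantage)

# Positive advantage: increasing the ratio past 1.2 yields no extra objective.
print(ppo_clip_objective(1.5, advantage=2.0))   # 2.4, not 3.0
# Negative advantage: the clipped term dominates, bounding the update.
print(ppo_clip_objective(0.5, advantage=-1.0))  # -0.8
```

In a full PPO implementation this objective is averaged over a batch and maximized by gradient ascent on the policy network's parameters; only the per-sample arithmetic is shown here.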

4. Generalized Process for Analysis of Hyperspectral Images for Plant Disease Detection

Hyperspectral imagery has emerged as a powerful tool in modern agriculture, offering great insights into crop health, soil conditions, and disease detection. Unlike traditional imaging systems, hyperspectral sensors have the potential to capture data across a vast range of wavelengths, providing detailed spectral signatures for each pixel in an image. This wealth of information allows for the identification of certain changes in vegetation that might otherwise go unnoticed, such as nutrient deficiencies, early disease onset, or water stress. When coupled with artificial intelligence (AI), hyperspectral imagery becomes even more potent, enabling precise and timely decision-making in crop management and monitoring. Through their ability to process and analyze complex, high-dimensional data, AI models help unlock the potential of hyperspectral imagery by identifying patterns and trends that are critical for agricultural success.
The development of AI models for hyperspectral imagery in agriculture involves several key stages, each playing a vital role in enhancing the accuracy and efficiency of predictions. The process begins with image data acquisition, in which high-quality hyperspectral images are collected using drones, satellites, or handheld sensors. This is followed by data pre-processing, including noise reduction and correction of distortions, ensuring that the imagery is ready for analysis. The next stage, feature extraction, involves using AI models such as autoencoders to identify relevant spectral bands and reduce data dimensionality. Finally, models are employed for tasks such as disease detection, clustering of crop types, or anomaly detection. Each stage is crucial, as it ensures the accurate identification of subtle changes in agricultural environments, allowing for early responses and ultimately improving crop yield and sustainability. The integration of AI with hyperspectral imagery thus offers transformative potential for agriculture: by leveraging advanced AI models at each stage of the process, from data acquisition to feature extraction and disease detection, farmers can make informed decisions.
Images obtained from the various hyperspectral sensor platforms are generally in a raw format and are subjected to pre-processing and feature extraction before the data are used for field detection or model training. The generalized process for hyperspectral image analysis is as follows:

4.1. Data Acquisition via Hyperspectral Imagery

Initially, hyperspectral images of the agricultural field are obtained using one of the imaging platforms [52]. Hyperspectral imaging captures images at multiple narrow and contiguous spectral bands, providing detailed spectral information for each pixel. This involves using specialized hyperspectral cameras or sensors mounted on drones, satellites, or ground-based platforms to capture images of the agricultural field. Depending on the imaging system used, the images are acquired by scanning the field or by capturing them from a fixed position.

4.2. Pre-Processing of Imagery Data

The pre-processing phase focuses on enhancing the quality of the acquired images by eliminating noise (Figure 2). The adjustments made in this phase render the data suitable for further analysis and field anomaly detection [11]. Standard pre-processing corrections include image calibration, atmospheric correction, radiometric normalization, and geometric alignment [53]. For satellite- and airplane-based imagery, geometric correction is normally performed by the image provider, whereas radiometric and atmospheric corrections can be carried out following standard remote sensing image analysis procedures. Atmospheric correction compensates for the effect of the atmosphere on the hyperspectral data, correcting the atmospheric scattering and absorption that tend to distort spectral measurements. Several atmospheric absorption features are critical, such as O2 absorption at 760 nm, water absorption at 820, 940, and 1140 nm, and CO2 absorption at 2010 and 2060 nm, underscoring the necessity of atmospheric correction for obtaining high-quality imaging data. Algorithms such as dark object subtraction (DOS) or Atmospheric Correction Now (ACORN) can be used to remove atmospheric influences [54].
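As a sketch of how dark object subtraction works, the snippet below assumes the darkest pixel in each band carries only atmospheric path radiance and subtracts it band-wise; the tiny 2 × 2 scene with three bands is fabricated for illustration.

```python
def dark_object_subtraction(cube):
    """Per-band dark object subtraction (DOS) on a hyperspectral cube.

    `cube` is indexed as cube[row][col][band]. DOS assumes the darkest pixel
    in each band should have ~0 reflectance, so any signal it carries is
    attributed to atmospheric scattering and subtracted from every pixel.
    """
    n_bands = len(cube[0][0])
    # Darkest value observed in each band across the whole scene.
    dark = [min(px[b] for row in cube for px in row) for b in range(n_bands)]
    return [[[px[b] - dark[b] for b in range(n_bands)] for px in row]
            for row in cube]

# 2 x 2 scene with 3 bands (the radiance values are arbitrary).
scene = [
    [[0.12, 0.30, 0.55], [0.10, 0.28, 0.50]],
    [[0.15, 0.33, 0.60], [0.11, 0.25, 0.52]],
]
corrected = dark_object_subtraction(scene)
print(corrected[0][0])  # [0.12-0.10, 0.30-0.25, 0.55-0.50]
```

After correction, the darkest pixel in each band sits at exactly zero, and every other pixel is offset by the same per-band haze estimate.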

4.3. Data Exploration and Feature Extraction

This phase involves analysis of the hyperspectral data to gain insight into its characteristics. Spectral visualization is carried out, displaying the hyperspectral data as spectral plots and color-coded images [55], which allows examination of the spectral signatures of different objects or areas within the field. Spectral libraries or databases are used to compare the observed spectra with known spectra of healthy and diseased plants. Regions of interest (ROIs) are then selected to identify areas within the hyperspectral images that correspond to healthy and diseased plants or other features of interest [56]. This can be done visually or with automated techniques such as clustering, segmentation, or thresholding; common segmentation methods include threshold-based segmentation, k-means clustering, the watershed algorithm, and edge detection. The ROIs serve as training samples for model development and validation.
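A bare-bones version of the k-means clustering mentioned above, applied to per-pixel spectra, might look like the following; the two-band "healthy" and "diseased" spectra are synthetic values invented for illustration.

```python
import random

def kmeans(pixels, k, iters=50, seed=0):
    """Plain k-means on per-pixel spectra (equal-length vectors)."""
    rng = random.Random(seed)
    centroids = rng.sample(pixels, k)
    for _ in range(iters):
        # Assign every pixel to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in pixels:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(centroids[i], p)))
            clusters[j].append(p)
        # Recompute each centroid as the mean of its members.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = [sum(vals) / len(members)
                                for vals in zip(*members)]
    return centroids, clusters

# Synthetic spectra: a "healthy" group and a "diseased" group (2 bands each).
rng = random.Random(7)
healthy = [[0.08 + rng.uniform(-0.02, 0.02), 0.75 + rng.uniform(-0.02, 0.02)]
           for _ in range(40)]
diseased = [[0.25 + rng.uniform(-0.02, 0.02), 0.40 + rng.uniform(-0.02, 0.02)]
            for _ in range(40)]
centroids, clusters = kmeans(healthy + diseased, k=2)
print([len(c) for c in clusters])  # sizes of the two candidate ROI clusters
```

The resulting clusters can then be inspected and labeled by an analyst to seed ROIs for supervised model training.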
To extract relevant features from the hyperspectral data, its dimensionality must be reduced while the discriminative information is retained. Spectral indices such as the normalized difference vegetation index (NDVI) or the normalized difference water index (NDWI) are computed to quantify specific vegetation- or water-related properties. These indices highlight spectral characteristics that are indicative of plant health or disease [57]. Spectral unmixing is carried out to decompose the hyperspectral data into its constituent spectra; linear or nonlinear unmixing algorithms estimate the abundance fractions of different endmembers within each pixel. The resulting abundance maps can reveal the presence of specific materials or diseases.
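Both indices reduce to simple band-ratio arithmetic. The sketch below uses Gao's NIR–SWIR formulation of NDWI (band choices vary by sensor and application), and the reflectance values in the examples are illustrative.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index for one pixel."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def ndwi(nir, swir):
    """Normalized difference water index (Gao's NIR-SWIR formulation)."""
    return (nir - swir) / (nir + swir) if (nir + swir) else 0.0

# Healthy vegetation reflects strongly in the NIR and absorbs red light,
# so its NDVI is high; stressed or sparse vegetation scores lower.
print(round(ndvi(nir=0.50, red=0.08), 3))   # 0.724 (vigorous canopy)
print(round(ndvi(nir=0.30, red=0.20), 3))   # 0.2 (stressed/sparse)
print(round(ndwi(nir=0.50, swir=0.25), 3))  # 0.333 (leaf water signal)
```

Computing such an index for every pixel collapses hundreds of bands into a single interpretable map, which is exactly the dimensionality reduction this stage calls for.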
Exploration and feature extraction is arguably the most significant step in hyperspectral imaging-based classification for disease detection. It involves extracting and forming new feature vectors for disease sensing, combining spectral, spatial, and textural characteristics that are supplied to machine learning (ML) algorithms for final classification.

4.4. Image Classification and Analysis

Plants exhibit unique spectral signatures based on their biological and chemical composition. Several biotic and abiotic factors can alter the way light is absorbed or reflected by plant leaves and tissues. Hyperspectral imaging captures this information across hundreds of narrow spectral bands, creating a detailed spectral curve for each plant or crop. This is followed by using spectral libraries to store the spectral signatures of various crops under different conditions. These libraries include reference spectra for healthy plants, plants under nutrient stress, those infected by diseases, and those exposed to environmental stress (e.g., drought, heat). For example, a spectral library for wheat might include reference spectra for healthy wheat, wheat under drought stress, and wheat infected with rust fungus. Similarly, libraries for major crops like maize, rice, and soybeans are developed for global agricultural monitoring.
Hyperspectral sensors collect spectral data from crops in the field. The spectral signature of each pixel (representing a small part of the crop) is then compared with known reference signatures in the spectral library via several algorithms. This provides information regarding how closely the spectral signature of the observed plant matches that of a healthy reference plant or a known diseased signature. Spectral feature fitting (SFF) is used for fitting specific spectral features (such as absorption bands related to chlorophyll or water content) to known references to identify stress or disease.
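One widely used comparison algorithm of this kind is the spectral angle mapper, which matches spectral shape while ignoring overall brightness differences. The sketch below classifies a pixel against a hypothetical two-entry library; the four-band spectra and labels are invented for illustration.

```python
import math

def spectral_angle(observed, reference):
    """Spectral angle (radians) between an observed and a reference spectrum.

    Smaller angles mean a closer spectral-shape match; the measure is
    insensitive to uniform brightness (illumination) differences.
    """
    dot = sum(a * b for a, b in zip(observed, reference))
    norm = (math.sqrt(sum(a * a for a in observed))
            * math.sqrt(sum(b * b for b in reference)))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def classify(pixel, library):
    """Assign the library label with the smallest spectral angle."""
    return min(library, key=lambda label: spectral_angle(pixel, library[label]))

library = {
    "healthy wheat": [0.05, 0.09, 0.48, 0.52],  # hypothetical 4-band spectra
    "rust-infected": [0.10, 0.16, 0.30, 0.34],
}
# A pixel that is a dimmer (shaded) version of the healthy signature:
print(classify([0.025, 0.045, 0.24, 0.26], library))  # healthy wheat
```

Because the shaded pixel is a scaled copy of the healthy reference, its spectral angle to that entry is zero, so it is matched correctly despite being half as bright.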

4.5. Disease Detection

The newly trained classifier or ML model is applied over the entire available imagery data for disease detection in the field. The model classifies pixels or regions within the hyperspectral images as healthy or diseased based on their spectral characteristics and the learned patterns, generating a disease map that highlights the spatial distribution of diseases within the field. The model can further carry out a region-based classification, grouping neighboring pixels with similar disease labels into common clusters. This helps identify contiguous areas affected by disease, which is useful for spatial analysis and decision-making (Figure 3).
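The region-based grouping step can be sketched as connected-component labeling of the per-pixel disease map; the snippet below uses breadth-first flood fill over a fabricated 3 × 4 boolean mask.

```python
from collections import deque

def disease_regions(mask):
    """Group 4-connected 'diseased' pixels (True) into contiguous regions.

    `mask` is a 2-D per-pixel classification map; returns a list of regions,
    each a list of (row, col) coordinates, found via breadth-first flood fill.
    """
    rows, cols = len(mask), len(mask[0])
    seen, regions = set(), []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                region, queue = [], deque([(r, c)])
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                regions.append(region)
    return regions

# Per-pixel disease map with two separate infected patches.
mask = [
    [True,  True,  False, False],
    [True,  False, False, True ],
    [False, False, True,  True ],
]
print([len(reg) for reg in disease_regions(mask)])  # [3, 3]
```

Region sizes and positions from this step feed directly into spatial analyses such as estimating the infected area or prioritizing scouting routes.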

5. Application of Hyperspectral Image Analysis in Agriculture

Recent advances have led to the integration of several new technologies, including hyperspectral imaging, into agriculture, particularly precision agriculture. Prominent applications include the estimation of biochemical characteristics (e.g., chlorophyll content, water level), nutrient estimation, the study of soil properties, and the monitoring and detection of biotic as well as abiotic stress. Recent studies have highlighted the increasing use of hyperspectral remote sensing in precision agriculture. In this section, we focus on some of these recent studies and classify them by the targeted plant or crop category (field crops, trees, vegetables, and fruits).

5.1. Field Crops and Forest

Hyperspectral image analysis has been used for several types of assessment, including plant health, moisture analysis, and nutrient status. Mahlein et al. [58] used an SVM-based approach for the assessment and detection of Fusarium head blight in wheat. The study combined sensors measuring temperature fluctuations, alterations in photosynthetic activity, and changes in pigmentation in healthy and diseased plants, and yielded promising initial results with more than 78% accuracy. Another study, carried out under controlled conditions, used hyperspectral imaging for the assessment and detection of rust diseases in wheat, drawing on reference spectral data for rust spores [59]. In addition to disease detection, the approach has been employed to estimate the biochemical content of plants. Oppelt and Mauser [60] used airborne imagery data to assess chlorophyll and nitrogen content in a wheat crop, and in a related study, Moharana and Dutta [61] used imagery from the Hyperion satellite for a similar assessment of these two biocompounds in rice. Another crucial component assessed by remote sensing is plant water content, a vital consideration in light of rising heat waves and drought risk. Studies have used HyMap imaging for water content assessment in maize, wheat, and sugar beet crops [62]. Another study, focusing on water content in rice, showed close agreement between field-based data and hyperspectral imaging data, making remote sensing a reliable method for such field assessments [61].
The application of hyperspectral imaging has also revolutionized forest research as researchers can delve into the intricate details of forest ecosystems. This approach enables the identification and classification of various tree species, monitoring of vegetation health, and detection of subtle changes in forest conditions caused by factors such as climate change, diseases, or invasive species. AI algorithms can process vast amounts of spectral data, providing valuable insights into forest biodiversity, carbon sequestration, and overall ecosystem dynamics. Lim et al. [63] used imagery data from Sentinel-2 to perform forest typing on the Korean peninsula, using a model focused on the classification of five main forest tree species, i.e., Korean red pine, Korean pine, needle fir, Japanese larch, and oak. The Random forest (RF) model was first designed and applied on South Korea’s Gwangneung Forest (with 83% accuracy), followed by application on the north side of Goseong-gun Forest. A boreal forest inventory in Finland was carried out using Sentinel-2 imaging and machine learning algorithms (RF and DNN) with an aim of providing valuable insights regarding forest variables for stakeholders [64].
Dang et al. [65] performed an intensive analysis of NOAA-DEM and Sentinel-2 images to study wetland areas in Vietnam and better assess coastal vulnerability and geo-ecosystem management. The developed model (based on ResU-Net) was proposed for updating new wetland types in the southern part of the Tien Yen Estuary and its surrounding islands. Pham et al. [66] applied SVM and RF models for an aboveground biomass assessment in the Red River delta region of Vietnam, collecting and incorporating data from several sources, including ALOS-2, Sentinel-2, and Sentinel-1. Another major study focused on the development of a wildfire alert system for forest regions; the model was based on fuzzy logic and artificial neural networks, collecting data on several factors (i.e., spatial data, soil status, rainfall, etc.) to determine wildfire-prone areas [67]. Table 2 presents several experimental examples of effective AI-remote sensing applications in field crops and forests.
Hyperspectral imaging, coupled with AI-driven analysis, has emerged as a versatile and powerful tool for assessing plant health, disease detection, and estimating biochemical content in both field crops and forest ecosystems. Studies have demonstrated its efficacy in accurately identifying diseases in wheat and rice crops and assessing water content, while also revolutionizing forest research by enabling species classification, monitoring vegetation health, and detecting changes caused by various factors. The integration of remote sensing and machine learning algorithms showcases its potential in providing valuable insights for sustainable agriculture, forest management, wetland assessment, and wildfire monitoring, ushering in a new era of data-driven precision in environmental studies.

5.2. Vegetable Crops

A major proportion of the global population depends heavily on vegetable crops for day-to-day consumption, as many of these crops serve as staples. The potato is considered the most important non-cereal crop in the world owing to its historical and dietary significance. Several diseases tend to reduce its annual yield, making early detection via remote sensing a vital asset in containing infections. Gibson-Poole et al. [87] utilized UAV-based image data to detect the early onset of potato blackleg disease with more than 80% detection accuracy. Late blight is another serious disease of potato and has caused major yield losses throughout history, including catastrophic events such as the Great Irish Famine. Sugiura et al. [88] collected field image data using UAV-based sensors, followed by thresholding and image segmentation, for improved field assessment and detection of early disease onset. The tomato is another major crop susceptible to several types of biological infection (fungal, bacterial, viral, etc.), often causing massive economic losses and food shortages. Abdulridha et al. [89] developed a model based on a multilayer perceptron (MLP), a type of artificial neural network (ANN), with 99% classification accuracy. Hyperspectral images were collected from plants under field and lab conditions, at wavelengths ranging from 380 to 1020 nm, and the study achieved accurate early detection and classification of healthy and diseased tomato plants for target spot (TS) and bacterial spot (BS) diseases. Zhang et al. [90] carried out a comprehensive study on the detection of several important tomato diseases using convolutional neural networks (CNNs), achieving nearly 98% accuracy in detecting disease among infected plants.
Table 3 consists of several experimental examples of effective AI-remote sensing application in vegetable crops.
Mahlein et al. [75] developed a model for disease differentiation in various crops, including three major diseases of sugar beet: Cercospora leaf spot, powdery mildew, and sugar beet rust. Hyperspectral imagery data were collected at different developmental stages and levels of disease severity, followed by feature extraction with the RELIEF-F algorithm. A model was then designed on the extracted features, achieving classification accuracies of 92%, 85%, and 87% for Cercospora leaf spot, powdery mildew, and rust, respectively. Zhao et al. [91] assessed chlorophyll and carotenoid content in healthy and diseased cucumber plants via hyperspectral imaging; the data were then used for comparison and, ultimately, detection of diseased plants. A partial least squares regression approach was used to assess the quantitative relationship between infection severity, the corresponding pigmentation in plants, and the spectrum it produced. The model was then applied pixel-wise to the collected image data to detect angular leaf spot infection in cucumber crops.
Table 3. Application of various AI models on vegetable crops.

| Crop | Research Aim | Research Contribution | AI Model/Algorithm | Reference |
|---|---|---|---|---|
| Potato | Detection of potato virus Y infection | Highlighted the significance of training data for achieving high accuracy in disease detection | Convolutional neural network (CNN) | [92] |
| Potato | Detection of late blight disease | Enabled high-throughput, objective, and precise phenotyping of field resistance to potato late blight | Decision threshold | [88] |
| Potato | Detection of blackleg disease | Showed that visual analysis of UAV-derived aerial imagery is an effective method of detecting the disease | Decision threshold | [87] |
| Potato | Severity assessment of late blight disease | Demonstrated the potential of radiometric readings in the optical domain, acquired at canopy level with sub-decimeter resolution, to detect early stages of potato late blight | Simplex volume maximization | [93] |
| Tomato | Severity assessment of leaf spot and blight disease | Provided early disease detection by using chlorophyll fluorescence and hyperspectral imaging to detect early, presymptomatic plant responses to bacterial infection | Principal component analysis (PCA) | [94] |
| Tomato | Detection of early blight, late blight, and yellow leaf curl disease | Effective model development with a detection accuracy of 97% | CNN | [90] |
| Tomato | Detection of target spot and bacterial spot disease | Successful detection, classification, and differentiation of both diseases with 99% model accuracy | Multilayer perceptron neural network (MLP) | [89] |
| Bell pepper | Detection of bacterial spot disease | Demonstrated the potential of convolutional neural networks for accurate, smartphone-assisted crop disease diagnosis, with up to 99% working accuracy | CNN | [95] |
| Cucumber | Severity assessment of angular leaf spot disease | Offered a precise method for real-time disease assessment using key wavelengths | Partial least squares regression (PLSR) | [91] |
| Onion | Detection of sour skin disease | Short-wave infrared hyperspectral imaging used for disease detection with 87% accuracy, enabling development of a non-destructive imaging system for postharvest onion classification lines | PCA | [96] |

5.3. Fruit Crops

Over the last decade, similar approaches for field assessment, early disease detection, and classification have been developed for fruit crops. Abdulridha et al. [97] used hyperspectral imagery data for detecting citrus canker disease under two different scenarios (lab-based and field-based). For the lab-based disease assessment, imaging data covering wavelengths from 400 to 1000 nm were collected and sorted using the k-nearest neighbor (KNN) method to detect different phases of disease progression in citrus plants (the asymptomatic, early, and late disease stages). For the field-based assessment, sensors were mounted on UAVs and images of tree canopies were collected and assessed using the KNN method, achieving up to 96% accuracy in disease detection. The study also identified the individual features that played a significant role in disease detection in the lab (water index, WI) and under field conditions (modified chlorophyll absorption in reflectance index, MCARI). Kerkech et al. [98] carried out a comprehensive study on the detection of several vine diseases (e.g., powdery mildew, mosaic diseases) using convolutional neural networks (CNNs). The research focused on developing an efficient vine disease detection system based on a three-step experimental procedure: 1. formation of uniform images by combining visible and infrared pictures to provide a broader view of the field; 2. segmentation of the images using a CNN; 3. image tuning using a CNN and a 3D depth map (DM). Table 4 presents several experimental examples of effective AI-remote sensing applications in fruit crops.
Hyperspectral imaging coupled with AI-driven analysis has revolutionized the assessment and early detection of diseases, nutrient content, and water levels in various crops. These studies have demonstrated the effectiveness of different machine learning algorithms, such as SVM, RF, DNN, MLP, and CNN, in accurately identifying and classifying diseases, estimating biochemical content, and monitoring vegetation health. The integration of remote sensing and AI technologies has also transformed forest research, allowing for species classification, biodiversity monitoring, and detection of changes caused by climatic factors. The potential of these approaches extends to providing valuable insights for sustainable agriculture, forest management, wetland assessment, and wildfire monitoring. As we embrace data-driven precision in environmental studies, hyperspectral imaging and AI-driven analysis stand as vital tools for addressing global challenges and paving the way toward a more sustainable future in agriculture and forestry.

6. Conclusions

In conclusion, the integration of hyperspectral imaging and AI-driven analysis has emerged as a transformative and powerful tool for tackling various challenges in agriculture and forest research. These technologies have shown great promise in accurately detecting diseases, estimating biochemical content, and monitoring plant health in field, vegetable, and fruit crops. In forest research, AI-based remote sensing has proven instrumental in species classification, biodiversity monitoring, and detecting changes caused by climate change and other environmental factors. The future of AI-based remote sensing in agriculture and forest research appears highly promising. As AI algorithms continue to advance, they will become even more efficient at processing large volumes of hyperspectral data and extracting valuable insights. The use of UAV- and satellite-based hyperspectral sensors will likely become more widespread, enabling more extensive and cost-effective monitoring of crops and forest ecosystems on regional and global scales. These approaches can also help offset the battery, flight-time, and durability limitations of UAVs, improving growers' access to data and farming solutions. Moreover, as AI models improve, the accuracy of disease detection and classification will further increase, aiding early-warning systems and proactive management strategies for farmers and forest managers.
Hyperspectral imaging (HSI) combined with AI offers significant potential for agricultural applications but also faces several challenges. HSI generates massive amounts of high-dimensional data, posing challenges for storage, transfer, and processing. The equipment involved, from image collection to data processing and model training, has traditionally been expensive, limiting widespread adoption. Low data quality from satellite-based imagery can also degrade performance in agricultural applications. There is a lack of sufficient labeled data sets for training AI models on diverse agricultural scenarios. Achieving real-time analysis for immediate decision-making in the field is another key challenge.
Although these challenges hinder the widespread adoption of the technology in the field, certain steps can be taken to overcome them. Dimensionality reduction methods such as principal component analysis (PCA) can be used to compress data while preserving essential information. Efficient band selection algorithms can identify the most relevant spectral bands for specific applications. Deep learning approaches, particularly convolutional neural networks (CNNs), can better handle high-dimensional HSI data. More compact, affordable HSI sensors can be developed and optimized for agricultural use, and HSI technology can be integrated with drones and other mobile platforms for easier field deployment. Industry-wide standards need to be established for HSI data acquisition, processing, and sharing. Cloud computing can be leveraged for data storage and processing to reduce on-site computational requirements, and the latest edge computing solutions can be adopted to enable real-time processing of HSI data in the field. Together, these measures can support broader adoption of the technology for field applications.
Future developments in AI-driven remote sensing will also benefit from enhanced collaboration between researchers, industries, and governments. Data sharing initiatives and open access platforms will foster a collective effort to create comprehensive and dynamic databases, supporting the development of robust AI models that can be fine-tuned for specific regions and crops. Additionally, ongoing advancements in hardware and sensor technology will likely lead to more compact, efficient, and affordable hyperspectral sensors, making them accessible to a wider range of users. In this perspective, startups are also of great significance and, together with major corporations and government support, can enable efficient problem assessment, solution development, and technology dissemination at the local level. In the context of sustainable agriculture and forest management, AI-driven remote sensing holds the potential to address critical global challenges, such as food security, climate change mitigation, and biodiversity conservation. By leveraging the power of AI to analyze vast amounts of hyperspectral data, researchers and policymakers can make informed decisions to optimize resource utilization, minimize environmental impacts, and improve overall ecosystem health. AI-based remote sensing is poised to revolutionize agriculture and forest research by providing valuable insights, enhancing efficiency, and enabling proactive and targeted management approaches. As these technologies continue to evolve, we can envision a future where precision agriculture and sustainable forestry become the norm, leading us toward a more resilient and sustainable planet. To achieve this vision, continued investment in research, technology, and collaboration is crucial to unlocking the full potential of AI-driven remote sensing in the pursuit of a more sustainable and prosperous future for all.

Author Contributions

F.A.: investigation, conceptualization, software, validation, visualization, writing—original draft. A.R. (Ali Razzaq): data curation, investigation, software, visualization, writing—original draft. W.T.: conceptualization, resources, software, validation, writing—original draft. A.H.: conceptualization, validation, visualization, writing—original draft. A.R. (Abdul Rehman): conceptualization, resources, software, validation, visualization, writing—review and editing. K.R.: formal analysis, investigation, software, validation, writing—review and editing. S.S.: data curation, investigation, validation, writing—review and editing. N.A.R.: data curation, formal analysis, investigation, software, validation, writing—review and editing. H.E.M.Z.: data curation, formal analysis, investigation, validation, writing—review and editing. M.S.S.: conceptualization, investigation, supervision, visualization, writing—review and editing. G.O.: resources, validation, visualization, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This study was partly performed under the European Union program Next GenerationEU.

Data Availability Statement

All data created were used in this article.

Acknowledgments

During the preparation of this work the authors used Canva to make figures. Author Haitham E. M. Zaki thanks and acknowledges the Department of Research and Consultation at the University of Technology and Applied Sciences-Sur, Oman, for their ongoing support and facilities.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bizikova, L.; Jungcurt, S.; McDougal, K.; Tyler, S. How can agricultural interventions enhance contribution to food security and SDG 2.1? Glob. Food Secur. 2020, 26, 100450. [Google Scholar] [CrossRef]
  2. Bandara, A.Y.; Weerasooriya, D.K.; Bradley, C.A.; Allen, T.W.; Esker, P.D. Dissecting the economic impact of soybean diseases in the United States over two decades. PLoS ONE 2020, 15, e0231141. [Google Scholar] [CrossRef]
  3. Savary, S.; Willocquet, L.; Pethybridge, S.J.; Esker, P.; McRoberts, N.; Nelson, A. The global burden of pathogens and pests on major food crops. Nat. Ecol. Evol. 2019, 3, 430–439. [Google Scholar] [CrossRef] [PubMed]
  4. Fisher, M.C.; Henk, D.A.; Briggs, C.J.; Brownstein, J.S.; Madoff, L.C.; McCraw, S.L.; Gurr, S.J. Emerging fungal threats to animal, plant and ecosystem health. Nature 2012, 484, 186–194. [Google Scholar] [CrossRef] [PubMed]
  5. Boyd, L.A.; Ridout, C.; O’Sullivan, D.M.; Leach, J.E.; Leung, H. Plant–pathogen interactions: Disease resistance in modern agriculture. Trends Genet. TIG 2013, 29, 233–240. [Google Scholar] [CrossRef]
  6. Peralta, A.L.; Sun, Y.; McDaniel, M.D.; Lennon, J.T. Crop rotational diversity increases disease suppressive capacity of soil microbiomes. Ecosphere 2018, 9, e02235. [Google Scholar] [CrossRef]
  7. Uehara-Ichiki, T.; Shiba, T.; Matsukura, K.; Ueno, T.; Hirae, M.; Sasaya, T. Detection and diagnosis of rice-infecting viruses. Front. Microbiol. 2013, 4, 289. [Google Scholar] [CrossRef] [PubMed]
  8. Jensen, J.R. Remote Sensing of the Environment: An Earth Resource Perspective, 2nd ed.; Pearson Education: New Delhi, India, 2009. [Google Scholar]
  9. Thenkabail, P.S. Biophysical and yield information for precision farming from near-real-time and historical Landsat TM images. Int. J. Remote Sens. 2003, 24, 2879–2904. [Google Scholar] [CrossRef]
  10. Skauli, T.; Goa, P.E.; Baarstad, I.; Løke, T. (Eds.) A compact combined hyperspectral and polarimetric imager. In Electro-Optical and Infrared Systems: Technology and Applications III; SPIE: Bellingham, WA, USA, 2006. [Google Scholar]
  11. Adão, T.; Hruška, J.; Pádua, L.; Bessa, J.; Peres, E.; Morais, R.; Sousa, J.J. Hyperspectral imaging: A review on UAV-based sensors, data processing and applications for agriculture and forestry. Remote Sens. 2017, 9, 1110. [Google Scholar] [CrossRef]
  12. Marshall, M.; Thenkabail, P. Advantage of hyperspectral EO-1 Hyperion over multispectral IKONOS, GeoEye-1, WorldView-2, Landsat ETM+, and MODIS vegetation indices in crop biomass estimation. ISPRS J. Photogramm. Remote Sens. 2015, 108, 205–218. [Google Scholar] [CrossRef]
  13. Sahoo, R.N.; Ray, S.S.; Manjunath, K.R. Hyperspectral remote sensing of agriculture. Curr. Sci. 2015, 108, 848–859. [Google Scholar]
  14. Malczyk, J.; King, K. (Eds.) Crop Modeling on the Descartes Labs Platform; AGU Fall Meeting Abstracts: Washington, DC, USA, 2018. [Google Scholar]
  15. Miglani, A.; Ray, S.S.; Pandey, R.; Parihar, J.S. Evaluation of EO-1 Hyperion data for agricultural applications. J. Indian Soc. Remote Sens. 2008, 36, 255–266. [Google Scholar] [CrossRef]
  16. Jiménez-Muñoz, J.C.; Sobrino, J.A.; Plaza, A.; Guanter, L.; Moreno, J.; Martinez, P. Comparison between fractional vegetation cover retrievals from vegetation indices and spectral mixture analysis: Case study of PROBA/CHRIS data over an agricultural area. Sensors 2009, 9, 768–793. [Google Scholar] [CrossRef]
  17. Mariotto, I.; Thenkabail, P.S.; Huete, A.; Slonecker, E.T.; Platonov, A. Hyperspectral versus multispectral crop-productivity modeling and type discrimination for the HyspIRI mission. Remote Sens. Environ. 2013, 139, 291–305. [Google Scholar] [CrossRef]
  18. Bostan, S.; Ortak, M.A.; Tuna, C.; Akoguz, A.; Sertel, E.; Ustundag, B.B. Comparison of classification accuracy of co-located hyperspectral & multispectral images for agricultural purposes. In Proceedings of the 2016 5th International Conference on Agro-geoinformatics (Agro-geoinformatics), Tianjin, China, 18–20 July 2016; pp. 1–4. [Google Scholar]
  19. Tong, Q.; Xue, Y.; Zhang, L. Progress in hyperspectral remote sensing science and technology in China over the past three decades. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 7, 70–91. [Google Scholar] [CrossRef]
  20. Qian, S.E. Hyperspectral satellites, evolution, and development history. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7032–7056. [Google Scholar] [CrossRef]
  21. Xu, X.; Fan, L.; Li, Z.; Meng, Y.; Feng, H.; Yang, H.; Xu, B. Estimating leaf nitrogen content in corn based on information fusion of multiple-sensor imagery from UAV. Remote Sens. 2021, 13, 340. [Google Scholar] [CrossRef]
  22. Martínez, L.; Tardà, A.; Palà, V.; Arbiol, R. Atmospheric correction algorithm applied to CASI multi-height hyperspectral imagery. Parameters 2006, 1, 4. [Google Scholar]
  23. Hruska, R.; Mitchell, J.; Anderson, M.; Glenn, N.F. Radiometric and geometric analysis of hyperspectral imagery acquired from an unmanned aerial vehicle. Remote Sens. 2012, 4, 2736–2752. [Google Scholar] [CrossRef]
  24. Capolupo, A.; Kooistra, L.; Berendonk, C.; Boccia, L.; Suomalainen, J. Estimating plant traits of grasslands from UAV-acquired hyperspectral images: A comparison of statistical approaches. ISPRS Int. J. Geo-Inf. 2015, 4, 2792–2820. [Google Scholar] [CrossRef]
  25. Lu, B.; He, Y. Optimal spatial resolution of Unmanned Aerial Vehicle (UAV)-acquired imagery for species classification in a heterogeneous grassland ecosystem. GIScience Remote Sens. 2018, 55, 205–220. [Google Scholar] [CrossRef]
  26. Zarco-Tejada, P.J.; González-Dugo, V.; Berni, J.A.J. Fluorescence, temperature and narrow-band indices acquired from a UAV platform for water stress detection using a micro-hyperspectral imager and a thermal camera. Remote Sens. Environ. 2012, 117, 322–337. [Google Scholar] [CrossRef]
  27. Yue, J.; Feng, H.; Jin, X.; Yuan, H.; Li, Z.; Zhou, C.; Yang, G.; Tian, Q. A comparison of crop parameters estimation using images from UAV-mounted snapshot hyperspectral sensor and high-definition digital camera. Remote Sens. 2018, 10, 1138. [Google Scholar] [CrossRef]
  28. Zhu, W.; Sun, Z.; Huang, Y.; Lai, J.; Li, J.; Zhang, J.; Yang, B.; Li, B.; Li, S.; Zhu, K.; et al. Improving field-scale wheat LAI retrieval based on UAV remote-sensing observations and optimized VI-LUTs. Remote Sens. 2019, 11, 2456. [Google Scholar] [CrossRef]
  29. Zhao, J.; Zhong, Y.; Hu, X.; Wei, L.; Zhang, L. A robust spectral-spatial approach to identifying heterogeneous crops using remote sensing imagery with high spectral and spatial resolutions. Remote Sens. Environ. 2020, 239, 111605. [Google Scholar] [CrossRef]
  30. Malmir, M.; Tahmasbian, I.; Xu, Z.; Farrar, M.B.; Bai, S.H. Prediction of soil macro- and micro-elements in sieved and ground air-dried soils using laboratory-based hyperspectral imaging technique. Geoderma 2019, 340, 70–80. [Google Scholar] [CrossRef]
  31. Van De Vijver, R.; Mertens, K.; Heungens, K.; Somers, B.; Nuyttens, D.; Borra-Serrano, I.; Lootens, P.; Roldán-Ruiz, I.; Vangeyte, J.; Saeys, W. In-field detection of Alternaria solani in potato crops using hyperspectral imaging. Comput. Electron. Agric. 2020, 168, 105106. [Google Scholar] [CrossRef]
  32. Behmann, J.; Mahlein, A.-K.; Paulus, S.; Dupuis, J.; Kuhlmann, H.; Oerke, E.-C.; Plümer, L. Generation and application of hyperspectral 3D plant models: Methods and challenges. Mach. Vis. Appl. 2016, 27, 611–624. [Google Scholar] [CrossRef]
  33. Asaari, M.S.M.; Mishra, P.; Mertens, S.; Dhondt, S.; Inzé, D.; Wuyts, N.; Scheunders, P. Close-range hyperspectral image analysis for the early detection of stress responses in individual plants in a high-throughput phenotyping platform. ISPRS J. Photogramm. Remote Sens. 2018, 138, 121–138. [Google Scholar] [CrossRef]
  34. Kotsiantis, S.B.; Zaharakis, I.; Pintelas, P. Supervised machine learning: A review of classification techniques. Emerg. Artif. Intell. Appl. Comput. Eng. 2007, 160, 3–24. [Google Scholar]
  35. Ayodele, T.O. Types of machine learning algorithms. New Adv. Mach. Learn. 2010, 3, 5-1. [Google Scholar]
  36. Dhanaraj, R.K.; Rajkumar, K.; Hariharan, U. Enterprise IoT modeling: Supervised, unsupervised, and reinforcement learning. Bus. Intell. Enterp. Internet Things 2020, 55–79. [Google Scholar]
  37. Haldorai, A.; Ramu, A.; Suriya, M. Organization internet of things (IoTs): Supervised, unsupervised, and reinforcement learning. In Business Intelligence for Enterprise Internet of Things; Springer: Berlin/Heidelberg, Germany, 2020; pp. 27–53. [Google Scholar]
  38. Nasteski, V. An overview of the supervised machine learning methods. Horizons. b 2017, 4, 56. [Google Scholar] [CrossRef]
  39. Osisanwo, F.Y.; Akinsola JE, T.; Awodele, O.; Hinmikaiye, J.O.; Olakanmi, O.; Akinjobi, J. Supervised machine learning algorithms: Classification and comparison. Int. J. Comput. Trends Technol. (IJCTT) 2017, 48, 128–138. [Google Scholar]
  40. Pisner, D.A.; Schnyer, D.M. Support vector machine. In Machine Learning; Elsevier: Amsterdam, The Netherlands, 2020; pp. 101–121. [Google Scholar]
  41. Elmachtoub, A.N.; Liang, J.C.N.; McNellis, R. (Eds.) Decision trees for decision-making under the predict-then-optimize framework. In International Conference on Machine Learning; PMLR: Vienna, Austria, 2020. [Google Scholar]
  42. Vu, D.Q.; Nguyen, D.D.; Bui QA, T.; Trong, D.K.; Prakash, I.; Pham, B.T. Estimation of California bearing ratio of soils using random forest based machine learning. J. Sci. Transp. Technol. 2021, 1, 48–61. [Google Scholar] [CrossRef]
  43. Yan, J.; Wang, X. Unsupervised and semi-supervised learning: The next frontier in machine learning for plant systems biology. Plant J. 2022, 111, 1527–1538. [Google Scholar] [CrossRef]
  44. Bajal, E.; Katara, V.; Bhatia, M.; Hooda, M. A Review of Clustering Algorithms: Comparison of DBSCAN and K-mean with Oversampling and t-SNE. Recent Patents Eng. 2022, 16, 17–31. [Google Scholar] [CrossRef]
  45. Lohani, Q.D.; Solanki, R.; Muhuri, P.K. A convergence theorem and an experimental study of intuitionistic fuzzy c-mean algorithm over machine learning dataset. Appl. Soft Comput. 2018, 71, 1176–1188. [Google Scholar] [CrossRef]
  46. Walczak, S. Artificial neural networks. In Advanced Methodologies and Technologies in Artificial Intelligence, Computer Simulation, and Human-Computer Interaction; IGI Global: Hershey, PA, USA, 2019; pp. 40–53. [Google Scholar]
  47. Sara, B.; Otman, A. New learning approach for unsupervised neural networks model with application to agriculture field. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 0110548. [Google Scholar] [CrossRef]
  48. Guerri, M.F.; Distante, C.; Spagnolo, P.; Bougourzi, F.; Taleb-Ahmed, A. Deep learning techniques for hyperspectral image analysis in agriculture: A review. ISPRS Open J. Photogramm. Remote Sens. 2024, 12, 100062. [Google Scholar] [CrossRef]
  49. Fayaz, S.A.; Sidiq, S.J.; Zaman, M.; Butt, M.A. Machine learning: An introduction to reinforcement learning. In Machine Learning and Data Science: Fundamentals and Applications; Wiley: Hoboken, NJ, USA, 2022; pp. 1–22. [Google Scholar]
  50. Moerland, T.M.; Broekens, J.; Plaat, A.; Jonker, C.M. Model-based reinforcement learning: A survey. Found. Trends Mach. Learn. 2023, 16, 1–118. [Google Scholar] [CrossRef]
  51. Dong, H.; Dong, H.; Ding, Z.; Zhang, S.; Chang, T. Deep Reinforcement Learning; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  52. Sethy, P.K.; Pandey, C.; Sahu, Y.K.; Behera, S.K. Hyperspectral imagery applications for precision agriculture—A systemic survey. Multimedia Tools Appl. 2022, 81, 3005–3038. [Google Scholar] [CrossRef]
  53. Dwyer, J.L.; Roy, D.P.; Sauer, B.; Jenkerson, C.B.; Zhang, H.K.; Lymburner, L. Analysis ready data: Enabling analysis of the Landsat archive. Remote Sens. 2018, 10, 1363. [Google Scholar] [CrossRef]
  54. Yu, K.; Liu, S.; Zhao, Y. CPBAC: A quick atmospheric correction method using the topographic information. Remote Sens. Environ. 2016, 186, 262–274. [Google Scholar] [CrossRef]
  55. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef]
  56. Xie, C.; Yang, C.; He, Y. Hyperspectral imaging for classification of healthy and gray mold diseased tomato leaves with different infection severities. Comput. Electron. Agric. 2017, 135, 154–162. [Google Scholar] [CrossRef]
  57. Bijeesh, T.V.; Narasimhamurthy, K.N. A comparative study of spectral indices for surface water delineation using Landsat 8 Images. In Proceedings of the International Conference on Data Science and Communication (IconDSC), Bangalore, India, 1–2 March 2019. [Google Scholar]
  58. Mahlein, A.K.; Alisaac, E.; Al Masri, A.; Behmann, J.; Dehne, H.W.; Oerke, E.C. Comparison and combination of thermal, fluorescence, and hyperspectral imaging for monitoring fusarium head blight of wheat on spikelet scale. Sensors 2019, 19, 2281. [Google Scholar] [CrossRef] [PubMed]
  59. Bohnenkamp, D.; Kuska, M.T.; Mahlein, A.; Behmann, J. Hyperspectral signal decomposition and symptom detection of wheat rust disease at the leaf scale using pure fungal spore spectra as reference. Plant Pathol. 2019, 68, 1188–1195. [Google Scholar] [CrossRef]
  60. Oppelt, N.; Mauser, W. Hyperspectral monitoring of physiological parameters of wheat during a vegetation period using AVIS data. Int. J. Remote Sens. 2004, 25, 145–159. [Google Scholar] [CrossRef]
  61. Moharana, S.; Dutta, S. Spatial variability of chlorophyll and nitrogen content of rice from hyperspectral imagery. ISPRS J. Photogramm. Remote Sens. 2016, 122, 17–29. [Google Scholar] [CrossRef]
  62. Richter, K.; Hank, T.; Mauser, W. Preparatory analyses and development of algorithms for agricultural applications in the context of the EnMAP hyperspectral mission. Remote Sens. Agric. Ecosyst. Hydrol. XII 2010, 7824, 55–65. [Google Scholar]
  63. Lim, J.; Kim, K.-M.; Kim, E.-H.; Jin, R. Machine learning for tree species classification using sentinel-2 spectral information, crown texture, and environmental variables. Remote Sens. 2020, 12, 2049. [Google Scholar] [CrossRef]
  64. Astola, H.; Seitsonen, L.; Halme, E.; Molinier, M.; Lönnqvist, A. Deep neural networks with transfer learning for forest variable estimation using sentinel-2 imagery in boreal forest. Remote Sens. 2021, 13, 2392. [Google Scholar] [CrossRef]
  65. Dang, K.B.; Nguyen, M.H.; Nguyen, D.A.; Phan, T.T.H.; Giang, T.L.; Pham, H.H.; Nguyen, T.N.; Van Tran, T.T.; Bui, D.T. Coastal wetland classification with deep u-net convolutional networks and sentinel-2 imagery: A case study at the tien yen estuary of vietnam. Remote Sens. 2020, 12, 3270. [Google Scholar] [CrossRef]
  66. Pham, T.D.; Yokoya, N.; Xia, J.; Ha, N.T.; Le, N.N.; Nguyen, T.T.T.; Dao, T.H.; Vu, T.T.P.; Pham, T.D.; Takeuchi, W. Comparison of machine learning methods for estimating mangrove above-ground biomass using multiple source remote sensing data in the red river delta biosphere reserve, Vietnam. Remote Sens. 2020, 12, 1334. [Google Scholar] [CrossRef]
  67. Razavi-Termeh, S.V.; Sadeghi-Niaraki, A.; Choi, S.-M. Ubiquitous GIS-based forest fire susceptibility mapping using artificial intelligence methods. Remote Sens. 2020, 12, 1689. [Google Scholar] [CrossRef]
  68. Yuan, L.; Huang, Y.; Loraamm, R.W.; Nie, C.; Wang, J.; Zhang, J. Spectral analysis of winter wheat leaves for detection and differentiation of diseases and insects. Field Crop. Res. 2014, 156, 199–207. [Google Scholar] [CrossRef]
  69. Alisaac, E.; Behmann, J.; Kuska, M.T.; Dehne, H.-W.; Mahlein, A.-K. Hyperspectral quantification of wheat resistance to Fusarium head blight: Comparison of two Fusarium species. Eur. J. Plant Pathol. 2018, 152, 869–884. [Google Scholar] [CrossRef]
  70. Zhang, J.C.; Pu, R.L.; Wang, J.H.; Huang, W.J.; Yuan, L.; Luo, J.H. Detecting powdery mildew of winter wheat using leaf level hyperspectral measurements. Comput. Electron. Agric. 2012, 85, 13–23. [Google Scholar] [CrossRef]
  71. Zhang, X.; Han, L.; Dong, Y.; Shi, Y.; Huang, W.; Han, L.; González-Moreno, P.; Ma, H.; Ye, H.; Sobeih, T. A deep learning-based approach for automated yellow rust disease detection from high-resolution hyperspectral UAV images. Remote Sens. 2019, 11, 1554. [Google Scholar] [CrossRef]
  72. Thomas, S.; Behmann, J.; Steier, A.; Kraska, T.; Muller, O.; Rascher, U.; Mahlein, A.-K. Quantitative assessment of disease severity and rating of barley cultivars based on hyperspectral imaging in a non-invasive, automated phenotyping platform. Plant Methods 2018, 14, 1–12. [Google Scholar] [CrossRef] [PubMed]
  73. Kuska, M.; Wahabzada, M.; Leucker, M.; Dehne, H.-W.; Kersting, K.; Oerke, E.-C.; Steiner, U.; Mahlein, A.-K. Hyperspectral phenotyping on the microscopic scale: Towards automated characterization of plant-pathogen interactions. Plant Methods 2015, 11, 1–15. [Google Scholar] [CrossRef] [PubMed]
  74. Näsi, R.; Viljanen, N.; Kaivosoja, J.; Alhonoja, K.; Hakala, T.; Markelin, L.; Honkavaara, E. Estimating biomass and nitrogen amount of barley and grass using UAV and aircraft based spectral and photogrammetric 3D features. Remote Sens. 2018, 10, 1082. [Google Scholar] [CrossRef]
  75. Mahlein, A.-K.; Rumpf, T.; Welke, P.; Dehne, H.-W.; Plümer, L.; Steiner, U.; Oerke, E.-C. Development of spectral indices for detecting and identifying plant diseases. Remote Sens. Environ. 2013, 128, 21–30. [Google Scholar] [CrossRef]
  76. Huang, H.; Deng, J.; Lan, Y.; Yang, A.; Deng, X.; Zhang, L. A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery. PLoS ONE 2018, 13, e0196302. [Google Scholar] [CrossRef] [PubMed]
  77. Sibiya, M.; Sumbwanyambe, M. A computational procedure for the recognition and classification of maize leaf diseases out of healthy leaves using convolutional neural networks. Agriengineering 2019, 1, 119–131. [Google Scholar] [CrossRef]
  78. Wiesner-Hanks, T.; Wu, H.; Stewart, E.; DeChant, C.; Kaczmar, N.; Lipson, H.; Gore, M.A.; Nelson, R.J. Millimeter-level plant disease detection from aerial photographs via deep learning and crowdsourced data. Front. Plant Sci. 2019, 10, 1550. [Google Scholar] [CrossRef] [PubMed]
  79. Wang, T.; Thomasson, J.A.; Yang, C.; Isakeit, T.; Nichols, R.L. Automatic classification of cotton root rot disease based on UAV remote sensing. Remote Sens. 2020, 12, 1310. [Google Scholar] [CrossRef]
  80. Xavier, T.W.F.; Souto, R.N.V.; Statella, T.; Galbieri, R.; Santos, E.S.; Suli, G.S.; Zeilhofer, P. Identification of Ramularia leaf blight cotton disease infection levels by multispectral, multiscale UAV imagery. Drones 2019, 3, 33. [Google Scholar] [CrossRef]
  81. Sandino, J.; Pegg, G.; Gonzalez, F.; Smith, G. Aerial mapping of forests affected by pathogens using UAVs, hyperspectral sensors, and artificial intelligence. Sensors 2018, 18, 944. [Google Scholar] [CrossRef]
  82. Maimaitijiang, M.; Sagan, V.; Sidike, P.; Hartling, S.; Esposito, F.; Fritschi, F.B. Soybean yield prediction from UAV using multimodal data fusion and deep learning. Remote Sens. Environ. 2020, 237, 111599. [Google Scholar] [CrossRef]
  83. Ghosal, S.; Blystone, D.; Singh, A.K.; Ganapathysubramanian, B.; Singh, A.; Sarkar, S. An explainable deep machine vision framework for plant stress phenotyping. Proc. Natl. Acad. Sci. USA 2018, 115, 4613–4618. [Google Scholar] [CrossRef] [PubMed]
  84. Wallelign, S.; Polceanu, M.; Buche, C. Soybean plant disease identification using convolutional neural network. In Proceedings of the Thirty-First International Flairs Conference, Melbourne, FL, USA, 21–23 May 2018. [Google Scholar]
  85. Tetila, E.C.; Machado, B.B.; Menezes, G.K.; Oliveira, A.D.S.; Alvarez, M.; Amorim, W.P.; Belete, N.A.; Da Silva, G.G.; Pistori, H. Automatic recognition of soybean leaf diseases using UAV images and deep convolutional neural networks. IEEE Geosci. Remote Sens. Lett. 2019, 17, 903–907. [Google Scholar] [CrossRef]
  86. Hu, G.; Yin, C.; Wan, M.; Zhang, Y.; Fang, Y. Recognition of diseased Pinus trees in UAV images using deep learning and AdaBoost classifier. Biosyst. Eng. 2020, 194, 138–151. [Google Scholar] [CrossRef]
  87. Gibson-Poole, S.; Humphris, S.; Toth, I.; Hamilton, A. Identification of the onset of disease within a potato crop using a UAV equipped with un-modified and modified commercial off-the-shelf digital cameras. Adv. Anim. Biosci. 2017, 8, 812–816. [Google Scholar] [CrossRef]
  88. Sugiura, R.; Tsuda, S.; Tamiya, S.; Itoh, A.; Nishiwaki, K.; Murakami, N.; Shibuya, Y.; Hirafuji, M.; Nuske, S. Field phenotyping system for the assessment of potato late blight resistance using RGB imagery from an unmanned aerial vehicle. Biosyst. Eng. 2016, 148, 1–10. [Google Scholar] [CrossRef]
  89. Abdulridha, J.; Ampatzidis, Y.; Kakarla, S.C.; Roberts, P. Detection of target spot and bacterial spot diseases in tomato using UAV-based and benchtop-based hyperspectral imaging techniques. Precis. Agric. 2020, 21, 955–978. [Google Scholar] [CrossRef]
  90. Zhang, K.; Wu, Q.; Liu, A.; Meng, X. Can deep learning identify tomato leaf disease? Adv. Multimed. 2018, 2018, 6710865. [Google Scholar] [CrossRef]
  91. Zhao, Y.-R.; Li, X.; Yu, K.-Q.; Cheng, F.; He, Y. Hyperspectral imaging for determining pigment contents in cucumber leaves in response to angular leaf spot disease. Sci. Rep. 2016, 6, 27790. [Google Scholar] [CrossRef]
  92. Sugiura, R.; Tsuda, S.; Tsuji, H.; Murakami, N. Virus-infected plant detection in potato seed production field by UAV imagery. In Proceedings of the ASABE Annual International Meeting, Detroit, MI, USA, 29 July–1 August 2018. [Google Scholar]
  93. Franceschini, M.H.D.; Bartholomeus, H.; van Apeldoorn, D.F.; Suomalainen, J.; Kooistra, L. Feasibility of unmanned aerial vehicle optical imagery for early detection and severity assessment of late blight in potato. Remote Sens. 2019, 11, 224. [Google Scholar] [CrossRef]
  94. Rajendran, D.K.; Park, E.; Nagendran, R.; Hung, N.B.; Cho, B.-K.; Kim, K.-H.; Lee, Y.H. Visual analysis for detection and quantification of Pseudomonas cichorii disease severity in tomato plants. Plant Pathol. J. 2016, 32, 300–310. [Google Scholar] [CrossRef] [PubMed]
  95. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using deep learning for image-based plant disease detection. Front. Plant Sci. 2016, 7, 1419. [Google Scholar] [CrossRef] [PubMed]
  96. Wang, W.; Li, C.; Tollner, E.W.; Gitaitis, R.D.; Rains, G.C. Shortwave infrared hyperspectral imaging for detecting sour skin (Burkholderia cepacia)-infected onions. J. Food Eng. 2012, 109, 38–48. [Google Scholar] [CrossRef]
  97. Abdulridha, J.; Batuman, O.; Ampatzidis, Y. UAV-based remote sensing technique to detect citrus canker disease utilizing hyperspectral imaging and machine learning. Remote Sens. 2019, 11, 1373. [Google Scholar] [CrossRef]
  98. Kerkech, M.; Hafiane, A.; Canals, R.; Ros, F. (Eds.) Vine disease detection by deep learning method combined with 3d depth information. In Image and Signal Processing, Proceedings of the 9th International Conference, ICISP 2020, Marrakesh, Morocco, 4–6 June 2020, Proceedings 9; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  99. Sarkar, S.K.; Das, J.; Ehsani, R.; Kumar, V. Towards autonomous phytopathology: Outcomes and challenges of citrus greening disease detection through close-range remote sensing. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 5143–5148. [Google Scholar]
  100. Garcia-Ruiz, F.; Sankaran, S.; Maja, J.M.; Lee, W.S.; Rasmussen, J.; Ehsani, R. Comparison of two aerial imaging platforms for identification of Huanglongbing-infected citrus trees. Comput. Electron. Agric. 2013, 91, 106–115. [Google Scholar] [CrossRef]
  101. Yadav, P.K.; Burks, T.; Frederick, Q.; Qin, J.; Kim, M.; Ritenour, M.A. Citrus disease detection using convolution neural network generated features and Softmax classifier on hyperspectral image data. Front. Plant Sci. 2022, 13, 1043712. [Google Scholar] [CrossRef]
  102. Kerkech, M.; Hafiane, A.; Canals, R. Deep leaning approach with colorimetric spaces and vegetation indices for vine diseases detection in UAV images. Comput. Electron. Agric. 2018, 155, 237–243. [Google Scholar] [CrossRef]
  103. Apolo-Apolo, O.E.; Pérez-Ruiz, M.; Martínez-Guanter, J.; Valente, J. A cloud-based environment for generating yield estimation maps from apple orchards using UAV imagery and a deep learning technique. Front. Plant Sci. 2020, 11, 1086. [Google Scholar] [CrossRef]
  104. Selvaraj, M.G.; Vergara, A.; Ruiz, H.; Safari, N.; Elayabalan, S.; Ocimati, W.; Blomme, G. AI-powered banana diseases and pest detection. Plant Methods 2019, 15, 1–11. [Google Scholar] [CrossRef]
Figure 1. (A) Satellite-based: uses satellite imagery for large-scale crop monitoring, detecting diseases and pests over vast areas, but at lower resolution than the other methods. (B) Airplane-based: aerial sensors on airplanes offer medium-range, high-resolution imaging, ideal for identifying issues such as water stress or pest damage over extensive fields. (C) Drone-based: drones provide high-resolution, close-range crop monitoring, offering real-time data for precision farming tasks such as targeted spraying and disease detection. (D) Ground-based: ground-level sensors and systems collect detailed data on soil moisture, plant health, and nutrient levels, offering precise, continuous monitoring at the field level.
Figure 2. (A) Supervised learning uses classification or regression techniques to map inputs to known output labels. (B) Unsupervised learning relies on clustering and association techniques to uncover structure in unlabeled data.
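The two paradigms in Figure 2 can be sketched with a toy example: a nearest-centroid classifier stands in for the supervised branch, and a bare-bones k-means for the unsupervised one. Everything here — the two "bands", the class means, and the sample counts — is invented for illustration and is not drawn from any of the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "spectra": two reflectance bands per sample, two classes (values invented).
healthy = rng.normal(loc=[0.45, 0.20], scale=0.02, size=(20, 2))
diseased = rng.normal(loc=[0.30, 0.35], scale=0.02, size=(20, 2))
X = np.vstack([healthy, diseased])
y = np.array([0] * 20 + [1] * 20)  # labels available -> supervised setting

# (A) Supervised: nearest-centroid classifier fit on the labeled data.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(samples):
    d = np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# (B) Unsupervised: plain k-means ignores the labels entirely.
def kmeans(samples, k=2, iters=25):
    centers = samples[rng.choice(len(samples), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # Keep a center in place if its cluster happens to be empty.
        centers = np.array([samples[assign == c].mean(axis=0)
                            if np.any(assign == c) else centers[c]
                            for c in range(k)])
    return assign

preds = classify(X)     # uses the labels (via the fitted centroids)
clusters = kmeans(X)    # recovers the same grouping without labels
```

On well-separated toy data both routes recover the same two groups; the practical difference is that the supervised route names the classes, while the unsupervised route only partitions the pixels.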
Figure 3. Stages involved in hyperspectral imagery analysis, from data collection and pre-processing to disease detection modelling.
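In code, the pre-processing stages of Figure 3 might look like the following NumPy sketch: radiometric calibration against white/dark references, spectral smoothing, and dimensionality reduction before any disease model is fit. The cube dimensions, reference-panel counts, smoothing window, and number of retained components are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cube: 8 x 8 pixels, 50 bands of raw sensor counts (values invented).
raw = rng.uniform(500, 4000, size=(8, 8, 50))
white_ref = np.full(50, 4096.0)   # white reference panel counts per band
dark_ref = np.full(50, 96.0)      # dark current counts per band

# 1) Radiometric calibration: raw counts -> relative reflectance in [0, 1].
refl = (raw - dark_ref) / (white_ref - dark_ref)

# 2) Spectral smoothing: simple 5-band moving average along the band axis.
kernel = np.ones(5) / 5.0
smooth = np.apply_along_axis(
    lambda s: np.convolve(s, kernel, mode="same"), 2, refl)

# 3) Dimensionality reduction: PCA via SVD on the (pixels x bands) matrix.
flat = smooth.reshape(-1, smooth.shape[2])
flat_c = flat - flat.mean(axis=0)
_, _, vt = np.linalg.svd(flat_c, full_matrices=False)
features = flat_c @ vt[:3].T      # keep 3 components per pixel for a classifier
```

The resulting per-pixel feature matrix is what a downstream classifier (SVM, random forest, CNN, etc.) would consume.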
Table 1. Companies working with remote sensing data in the agriculture sector.
| Sr. # * | Company Name | Platform Type | Function | Country of Origin |
|---|---|---|---|---|
| 1 | Farmers Edge | Satellite-based | Fertilizer application recommendation | Canada |
| 2 | CIBO Technologies | Satellite-based | Decision-making based on environmental simulations | USA |
| 3 | Descartes Labs | Satellite-based | Yield forecast | USA |
| 4 | Orbital Insight | Satellite-based | Tracking of macro-level production flows | USA |
| 5 | Sensefly | UAV | Crop monitoring | Switzerland |
| 6 | Mavrx | Plane-based | Analyzing crop growth patterns | USA |
| 7 | Precision Hawk | UAV | Analyzing canopy cover and field uniformity | USA |
| 8 | Parrot | UAV | Crop monitoring | USA |
| 9 | TerrAvion | Plane-based | Aerial imagery and monitoring of agricultural fields | USA |
| 10 | Taranis | Multi-platform | Pest and disease prediction | USA |
| 11 | Slantrange | UAV | Farmland and field data analytics | USA |
| 12 | Prospera | Multi-platform | Disease detection | Israel |
| 13 | aWhere | Multi-platform | Pest and disease risk assessment | USA |
| 14 | Gamaya | Multi-platform | Yield predictions and weed detection | Switzerland |
* Sr. # = serial number (the symbol # denotes "number").
Table 2. Application of various AI models to field crops and forests.
| Crop | Research Aim | Research Contribution | AI Model/Algorithm | Reference |
|---|---|---|---|---|
| Wheat | Detection of Fusarium head blight | SVM model for classifying infected and non-infected wheat spikelets with an accuracy of 89% | Support vector machine (SVM) | [58] |
| Wheat | Detection and differentiation of brown and yellow rust diseases | Novel approach for detecting plant diseases using hyperspectral imaging without requiring pixel-wise labeling at the leaf scale | Least-squares regression (LSR) | [59] |
| Wheat | Detection of powdery mildew disease | Discriminant model for differentiating powdery mildew, yellow rust, and aphid infestation in winter wheat, with an overall accuracy of 75% | Partial least squares regression (PLSR) | [68] |
| Wheat | Differentiation of Fusarium head blight | SVM classification differentiates between healthy and infected spikes with about 76% accuracy | SVM | [69] |
| Wheat | Severity assessment of powdery mildew disease | PLSR outperforms MLR in estimating disease severity | Partial least squares regression (PLSR) and multivariate linear regression (MLR) | [70] |
| Wheat | Detection of yellow rust disease | Combining spectral and spatial information increases the model's disease detection accuracy | Random forest (RF) | [71] |
| Barley | Severity assessment of powdery mildew disease | Disease severity assessment without the need for visual rating, and detection of symptoms at early stages | SVM | [72] |
| Barley | Assessing genotype resistance against powdery mildew | New approach for the hyperspectral assessment and characterization of plant diseases and early processes during pathogenesis | Simplex volume maximization | [73] |
| Barley | Assessment of biomass of barley plants | Integration of spectral and high-spatial-resolution 3D features helps optimize models and increases accuracy | Random forest (RF) | [74] |
| Sugar beet | Detection of Cercospora leaf spot and rust diseases | Development of spectral plant disease indices based on hyperspectral signatures | RELIEF-F | [75] |
| Rice | Weed mapping and recognition | Generation of accurate weed cover maps with 93.5% accuracy | Fully convolutional network (FCN) | [76] |
| Corn | Detection of corn leaf blight and grey leaf spot diseases | Recognition of three maize leaf diseases (northern corn leaf blight, common rust, and gray leaf spot) from healthy leaves | Convolutional neural network (CNN) | [77] |
| Corn | Assessment of biomass of maize plants | Streamlining of deep learning approaches for plant disease detection and for complex plant phenotyping tasks in general | CNN | [78] |
| Cotton | Detection of cotton root rot disease | Accurate disease detection with 89% precision | K-means SVM (KMSVM) | [79] |
| Cotton | Detection of Ramularia blight disease | 79% model accuracy with a relatively low-cost imagery system | Random forest tree (RFT) | [80] |
| Tea | Myrtle rust disease detection | Effective detection and mapping of indicators of poor health in forest and plantation trees, with detection rates of 97% for healthy trees and 94% for affected trees | eXtreme Gradient Boosting (XGBoost) | [81] |
| Soybean | Yield prediction model | Multimodal data fusion using a low-cost UAV within a DNN framework provides relatively accurate and robust crop yield estimation | Deep neural network (DNN) | [82] |
| Soybean | Detection of bacterial blight and sudden death syndrome | Explainable deep machine learning to automate plant stress identification, classification, and quantification | CNN | [83] |
| Soybean | Detection of Septoria leaf blight, brown spot, and downy mildew | Extraction of important features and classification of plant diseases with 99% accuracy | CNN | [84] |
| Soybean | Detection of Asian rust and powdery mildew | Demonstrated a deep learning model implemented in a computer vision system operating in a real field setting under varying lighting, object size, and background | CNN | [85] |
| Forests | Tree typing of Goseong-gun forest on the Korean peninsula | Reveals relationships between the structural and chemical characteristics of each species and their spectral features | Random forest (RF) | [63] |
| Forests | Boreal forest inventory in Finland | Improved structural variable estimation in boreal forests with the proposed image sampling and input feature concept | RF and deep neural network (DNN) | [64] |
| Forests | Updating new wetland forest types in Vietnam | Advantages of integrating deep learning and multi-temporal remote sensing images for monitoring wetland classification | ResNet-CNN | [65] |
| Forests | Forest biomass assessment in the Red River Delta, Vietnam | Multisource optical and synthetic aperture radar (SAR) data combined with the XGBR-GA model can estimate mangrove biomass | Support vector machine (SVM) and RF | [66] |
| Forests | Wildfire alert system for forest regions | Forest fire susceptibility mapping (FFSM) using a ubiquitous GIS with up to 90% accuracy | Fuzzy logic and artificial neural networks (ANN) | [67] |
| Forests | Recognition of diseased Pinus trees | DCNN shows better recognition performance than other approaches (K-means clustering, SVM, AlexNet, etc.) | Deep convolutional neural network (DCNN) | [86] |
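Many of the detection entries in Table 2 ultimately reduce to separating healthy from infected reflectance spectra. A minimal illustration is a normalized-difference index over two wavebands, thresholded to flag symptomatic pixels; the band positions (670 nm and 800 nm), the step-function reflectance curves, and the 0.5 cut-off below are invented for this sketch and are not the indices proposed in [75].

```python
import numpy as np

def normalized_difference(spectrum, bands, b1, b2):
    """Normalized-difference index (r_b1 - r_b2) / (r_b1 + r_b2)."""
    r1 = np.interp(b1, bands, spectrum)
    r2 = np.interp(b2, bands, spectrum)
    return (r1 - r2) / (r1 + r2)

bands = np.arange(400, 1001, 10)              # 400-1000 nm sampling grid
# Crude reflectance curves (invented): green vegetation vs chlorotic tissue.
healthy = np.where(bands < 700, 0.08, 0.50)   # dark in VIS, bright in NIR
diseased = np.where(bands < 700, 0.20, 0.35)  # brighter VIS, dimmer NIR

idx_h = normalized_difference(healthy, bands, 800, 670)
idx_d = normalized_difference(diseased, bands, 800, 670)

threshold = 0.5                               # illustrative cut-off
flag_h = idx_h < threshold                    # healthy spectrum is not flagged
flag_d = idx_d < threshold                    # diseased spectrum is flagged
```

Real spectral disease indices are derived from measured signatures and feature-selection procedures such as RELIEF-F, but the arithmetic per pixel is of this form.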
Table 4. Application of various AI models to fruit crops.
| Crop | Research Aim | Research Contribution | AI Model/Algorithm | Reference |
|---|---|---|---|---|
| Citrus | Detection of citrus canker disease | Successful detection of late-stage canker-infected fruit with 92% classification accuracy | k-nearest neighbor (KNN) | [97] |
|  | Detection of citrus greening/yellow dragon disease | Discriminated between healthy and infected leaves with validation accuracy of up to 93% | Support vector machine (SVM) | [99] |
|  | Detection of citrus greening/yellow dragon disease | Highlights that UAV-based spectral images resulted in higher classification accuracies than aircraft-based images | SVM | [100] |
|  | Detection of citrus canker disease | CNN models were able to classify HSI images of citrus peel under eight different conditions with high accuracy | Convolutional neural network (CNN) | [101] |
| Grape | Vine disease detection | Method development based on segmentation by the convolutional neural network SegNet and a depth map (DM) for precise disease detection from the vine canopy | CNN | [98] |
|  | Esca complex of trunk diseases | Incorporates CNNs and color information to identify infected areas in vineyards with nearly 96% accuracy | CNN | [102] |
| Apple | Detection of cedar apple rust disease | Demonstrates the potential of convolutional neural networks for accurate, smartphone-assisted crop disease diagnosis | CNN | [95] |
|  | Fruit detection and yield estimation | Rapid yield estimation method for apple orchards using UAV imagery and machine learning, detecting and counting apple fruits on individual trees from orthomosaic images | CNN | [103] |
| Banana | Detection of Xanthomonas wilt, Fusarium wilt, and bunchy top disease | Differentiation and identification of banana diseases with accuracy of up to 99% | DCNN | [104] |
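The fruit-crop studies in Table 4 largely cast disease detection as supervised classification of spectral data. As a minimal, self-contained sketch of that workflow (not any cited author's method), the following trains a random forest on synthetic two-class "spectra"; the band count, class means, and noise level are invented for demonstration:

```python
# Illustrative sketch only: a random-forest classifier separating "healthy" from
# "diseased" synthetic spectra. All numbers here are invented for demonstration
# and do not come from any study cited in the tables above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_per_class, n_bands = 200, 50  # hypothetical: 50 spectral bands per pixel

# Simulated reflectance spectra; "diseased" pixels lose reflectance in the
# upper (NIR-like) half of the bands, a common symptom pattern.
healthy = rng.normal(0.5, 0.05, (n_per_class, n_bands))
diseased = rng.normal(0.5, 0.05, (n_per_class, n_bands))
diseased[:, n_bands // 2:] -= 0.15

X = np.vstack([healthy, diseased])
y = np.array([0] * n_per_class + [1] * n_per_class)  # 0 = healthy, 1 = diseased

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Real HSI pipelines add radiometric calibration, band selection, and spatial context (e.g., CNNs over image patches), but the train/validate pattern is the same.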