Review

Boost Precision Agriculture with Unmanned Aerial Vehicle Remote Sensing and Edge Intelligence: A Survey

School of Computer Science, China University of Geosciences, Wuhan 430078, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(21), 4387; https://doi.org/10.3390/rs13214387
Submission received: 28 August 2021 / Revised: 7 October 2021 / Accepted: 25 October 2021 / Published: 30 October 2021
(This article belongs to the Special Issue Internet of Things (IoT) Remote Sensing)

Abstract

In recent years, unmanned aerial vehicles (UAVs) have emerged as a popular and cost-effective technology to capture high spatial and temporal resolution remote sensing (RS) images for a wide range of precision agriculture applications, which can help reduce costs and environmental impacts by providing detailed agricultural information to optimize field practices. Furthermore, deep learning (DL) has been successfully applied as an intelligent tool in agricultural applications such as weed detection and crop pest and disease detection. However, most DL-based methods place high demands on computation, memory and network resources. Cloud computing can increase processing efficiency with high scalability and low cost, but results in high latency and great pressure on the network bandwidth. The emergence of edge intelligence, although still in its early stages, provides a promising solution for artificial intelligence (AI) applications on intelligent edge devices at the edge of the network, close to data sources. These devices have built-in processors enabling onboard analytics or AI (e.g., UAVs and Internet of Things gateways). Therefore, in this paper, a comprehensive survey on the latest developments in precision agriculture with UAV RS and edge intelligence is conducted for the first time. The major insights are as follows: (a) in terms of UAV systems, small or light, fixed-wing or industrial rotor-wing UAVs are widely used in precision agriculture; (b) sensors on UAVs can provide multi-source datasets, but there are only a few public UAV datasets for intelligent precision agriculture, mainly from RGB sensors and a few from multispectral and hyperspectral sensors; (c) DL-based UAV RS methods can be categorized into classification, object detection and segmentation tasks, and convolutional neural networks and recurrent neural networks are the most commonly used network architectures; (d) cloud computing is a common solution to UAV RS data processing, while edge computing brings the computing close to data sources; (e) edge intelligence is the convergence of artificial intelligence and edge computing, in which model compression, especially parameter pruning and quantization, is the most important and widely used technique at present, and typical edge resources include central processing units, graphics processing units and field-programmable gate arrays.

1. Introduction

Agriculture is the foundation of society and national economies, and one of the most important industries in China. Acquiring timely and reliable agricultural information such as crop growth and yields is crucial to the establishment of related policies and plans for food security, poverty reduction and sustainable development. In recent years, precision agriculture (PA) has developed rapidly. According to the International Society of Precision Agriculture (ISPA), PA refers to a management strategy that gathers, processes and analyzes temporal, spatial and individual data in agricultural production and combines them with other information to support management decisions with estimated variability for improved resource use efficiency, productivity, quality, profitability and sustainability of agricultural production [1,2]. It can help to reduce costs and environmental impacts by providing farmers with detailed spatial information to optimize field practices [3,4].
The traditional way to acquire the prerequisite knowledge for PA depends on labor-intensive and subjective investigation, which consumes a large amount of human time and financial resources. Since remote sensing (RS) allows for a high frequency of information gathering without making physical contact and at a low cost [5], it has been widely used as a powerful tool for rapid, accurate and dynamic agriculture applications [6,7]. RS data are mainly collected by three kinds of platforms, i.e., spaceborne, airborne and ground-based [5]. Spaceborne platforms, i.e., satellite RS, can provide large-scale spatial coverage, but suffer from fixed and long revisit periods and cloud occlusion, limiting their application for fine-scale PA [8,9]. Additionally, relatively low spatial and temporal resolution and high equipment costs become critical bottlenecks [10]. Ground-based remote sensors (onboard vehicles, ships, and fixed or movable elevated platforms) are suitable for small-scale monitoring. In comparison, airborne platforms can collect data with high spatial resolution and flexibility in terms of flight configurations such as observation angles and flight routes [7]. An unmanned aerial vehicle (UAV) is a powered aerial vehicle without any human operator, which can fly autonomously or be controlled remotely with various payloads [11]. Due to their advantages in terms of flexible data acquisition and high spatial resolution [12], UAVs are quickly evolving and provide a powerful technical approach for many applications in PA, for example, crop state mapping [13,14], crop yield prediction [15,16], disease detection [17,18] and weed management [19,20], rapidly and nondestructively.
Compared with traditional mechanism-based methods, machine learning (ML) methods have long been applied in a variety of agriculture applications to discover patterns and correlations due to their capability to address linear and non-linear issues from large numbers of inputs [7,21]. For example, Su et al. [22] proposed a support vector machine-based crop model for large-scale simulation of rice production in China, and Everingham et al. [23] utilized a random forest model to predict sugarcane yield with simulated and observed variables as inputs. An ML pipeline typically consists of feature extraction and a classification or regression module for prediction, and its performance heavily relies on handcrafted feature extraction techniques [24,25]. In the past years, with the development of computing and storage capability, deep learning (DL), which is composed of “deep” layers that learn representations of data with multiple levels of abstraction and discover intricate structure in large datasets by using the backpropagation algorithm [26], has improved the state of the art in a variety of tasks, including computer vision, natural language processing and speech recognition. In the RS community, even for typical PA applications with UAV data (e.g., weed detection [27], crop and plant counting [28], and land cover and crop type classification [29]), DL has emerged as an intelligent and robust tool [30].
DL has achieved success with high accuracy for PA; for instance, the DL model in [27] provides much better weed detection results than ML methods in a bean field, with a performance gain greater than 20%, and more PA applications boosted by DL have shown similarly promising superiority. However, the successful implementation of DL comes at the cost of high computational, memory and network requirements at both the training and inference stages [31]. For example, VGG-16, an early classic convolutional neural network (CNN) used for classification, contains around 140 million parameters, consumes over 500 MB of memory and requires about 15 billion floating-point operations (FLOPs) [32]. It is challenging to deploy deep neural network models onboard mobile airborne and spaceborne platforms with limited computation, storage, power and bandwidth resources [33]. To meet the computational requirements of DL, a common way is to utilize cloud computing, where data are moved from the data sources located at the network edge, such as smartphones and internet-of-things (IoT) sensors, to the cloud [31]. However, the cloud-computing mode may put great pressure on network bandwidth and cause significant latency when moving massive data across the wide area network (WAN) [34]. In addition, privacy leakage is a major concern [35]. The emergence of edge computing addresses these issues.
According to the Edge Computing Consortium (ECC), edge computing is a distributed open platform at the network edge, close to the things or data sources, that integrates the capabilities of networks, storage and applications [36]. In this new computing paradigm, data do not need to be sent to a cloud or other remote centralized or distributed systems for further processing. The combination of edge computing and artificial intelligence (AI) yields edge intelligence, the next stage of edge computing, which aims to use AI technology to empower the edge. The edge is a relative concept that refers to any computing, storage and network resource along the path from the data source to the cloud-computing center; the resources on this path can be regarded as a continuous system. Currently, there is no formal international definition of edge intelligence. Most organizations refer to edge intelligence as the paradigm of running AI algorithms locally on an end device, with data created on the device [34]. It enables the deployment of AI algorithms on intelligent edge devices with built-in processors for onboard analytics or AI (e.g., UAVs, sensors and IoT gateways) that are closer to the data sources [34,37]. However, more researchers consider that edge intelligence should not be restricted to running AI models on edge devices or servers. A broader definition divides edge intelligence into AI for edge (intelligence-enabled edge computing) and AI on edge. The former tries to provide optimal solutions to key problems in edge computing with AI technologies, while the latter focuses on carrying out the entire process of building AI models, i.e., model training and inference, on the edge [38]. Zhou et al. further present a six-level definition to fully exploit the available data and resources across end devices, edge nodes and cloud datacenters, thus optimizing the performance of training and inferencing an AI model [34].
There already exist many reviews on agriculture with UAVs [2,8,9,39,40,41] and DL [24,42,43,44]. However, the research and practice of edge intelligence are still at an early stage, and to the best of our knowledge, no existing review covers the advances combining edge intelligence and UAV RS in the PA area. Therefore, in this paper we attempt to provide an in-depth and comprehensive survey on the latest developments in PA with UAV RS and edge intelligence. The main contributions of this paper are as follows:
1. The most relevant DL techniques and their latest implementations in PA are reviewed in detail. Specifically, this paper gives a comprehensive compilation of publicly available UAV-based RS datasets for intelligent agriculture, which attempts to facilitate the validation of DL-based methods for the community.
2. The cloud computing and edge computing paradigms for the UAV RS in PA are discussed in this paper.
3. To the best of our knowledge, the relevant edge intelligence techniques are thoroughly reviewed and analyzed for UAV RS in PA for the first time. In particular, this paper gives a compilation of UAV intelligent edge devices and the latest developments in edge inference with model compression in detail.
The remainder of this paper is structured as follows. Section 2 presents the application of UAV RS technology in PA, including the relevant fundamentals of UAV systems, RS sensors and typical applications in PA. Section 3 presents the DL methods and publicly available datasets used in PA. Section 4 analyzes in detail the edge intelligence for UAV RS in PA, including the cloud and edge computing paradigms, the basic concepts and major components of edge intelligence, network model design and edge resources. Future directions are given in Section 5, and conclusions are drawn in Section 6.

2. UAV Remote Sensing in Precision Agriculture

2.1. UAV Systems and Sensors for Precision Agriculture

UAV systems differ in size, weight, load, power, endurance time, purpose, etc., and there are many kinds of taxonomic approaches. According to the Civil Aviation Administration of China, UAVs mainly serve military and civilian fields, and agriculture belongs to the latter. In terms of operational risk, which mainly considers the size and weight of UAVs and their ability to carry payloads when performing missions, civilian UAVs can be divided into mini, light, small, medium and large UAVs [45]. Their major characteristics are listed in Table 1. In addition, according to their aerodynamic features, UAVs are usually classified into fixed-wing, rotary-wing, flapping-wing and hybrid UAVs, as shown in Table 2 [9,46,47]. For fixed-wing UAVs, the main wing surface that generates lift is fixed relative to the fuselage, and the power device generates the forward force. Rotary-wing UAVs possess power devices and rotor blades that rotate relative to the fuselage to generate lift during flight, and mainly include unmanned helicopters and multi-rotor UAVs, for instance, tricopters, quadcopters, hexacopters and octocopters, which can take off, land and hover vertically. Flapping-wing UAVs obtain lift and power by flapping wings up and down like birds and insects, and are suitable for small, light and mini UAVs. Hybrid-layout UAVs combine the basic layout types and mainly include tilt-rotor UAVs and rotor/fixed-wing UAVs. Figure 1 shows examples of typical UAVs.
In the agricultural RS field considered in this paper, the UAVs used generally weigh less than 116 kg, and most belong to the “small” (≤15 kg) or “light” (≤7 kg) categories and fly lower than 1 km, i.e., at a low altitude of 100 to 1000 m or an ultra-low altitude of 1 to 100 m [9,45]. On the other hand, flapping-wing UAVs and hybrid UAVs are not often used; fixed-wing UAVs and industrial rotor-wing UAVs are currently the mainstream. Specifically, since multi-rotor UAVs are more cost-effective than the other types and are generally more stable than unmanned helicopters during flight, they are the most widely used in the PA field [8].
In addition, UAVs can be equipped with a variety of payloads for different purposes. To capture agricultural information, UAVs used in PA are generally equipped with remote sensors, including RGB, multispectral and hyperspectral imaging sensors, thermal infrared sensors, light detection and ranging (LiDAR), and synthetic aperture radar (SAR) [9,48,49]. Their major characteristics and applications in PA are summarized in Table 3.

2.2. Application of UAV Remote Sensing in Precision Agriculture

The major objectives of PA are to increase crop yields, improve product quality, make efficient use of agrochemical products, save energy and protect the physical environment against pollution [47]. With the advantage of cost-effective, high-resolution imagery [67], UAVs are now commonly used in the PA area, mainly for monitoring [12,68,69,70] and spraying [71,72,73]. For the former, different sensors onboard UAVs capture RS data, which are utilized to identify specific spatial features and time-variant information of crop characteristics; for the latter, UAV systems are used to spray accurate amounts of pesticides and fertilizers, thus mitigating possible diseases and pests and increasing crop yields and product quality [47]. RS provides an effective tool for UAV-based PA monitoring, and the most common related applications are as follows.
  • Weed detection and mapping: As weeds are responsible for a large share of agricultural yield losses, the utilization of herbicides is important for crop growth, but unreasonable use causes a series of environmental problems. To fulfill precision weed management [19,74], UAV RS can help to accurately locate weed areas and analyze weed types, weed density, etc., thus applying herbicides quantitatively at fixed points or applying improved and targeted mechanical soil tillage [27]. Weed detection and mapping tries to find and map the locations of weeds in the obtained UAV RS images, and is generally achieved based on the different spatial distribution [27,75], shape [76], spectral signatures [53,77,78,79], or their combinations [80], of weeds compared with normal crops. Accordingly, the most important sensors used as UAV payloads are mainly RGB sensors [27,76,77,80], multispectral sensors [53,78] and hyperspectral sensors [79].
  • Crop pest and disease detection: Field crops are subject to attack by various pests and diseases at stages from sowing to harvest, which affects the yield and quality of crops and has become one of the main limiting factors of agricultural production in China. As the main part of pest and disease management, early detection of pests and diseases from UAV RS images allows efficient application of pesticides and an early response, reducing production costs and environmental impact. Crop pest and disease detection tries to locate possible pest- or disease-infected areas on leaves from observed UAV RS images, and the detection basis is mainly their spectral difference [81]. To obtain more details of pests and diseases on leaves, UAVs usually fly at low altitudes for observations with high spatial or spectral resolution [82,83,84]. Commonly mounted sensors are RGB sensors [83,85,86,87], multispectral sensors [88], infrared sensors [89] and hyperspectral sensors [82,84].
  • Crop growth monitoring: RS can be used to monitor group and individual characteristics of crop growth, e.g., crop seedling condition, growth status and changes. Crop growth monitoring is fundamental for regulating crop growth, diagnosing crop nutrient deficiencies, analyzing and predicting crop yield, etc., and can provide a decision-making basis for agricultural policy formulation and food trade. Crop growth monitoring builds a multitemporal crop model to allow for comparison of different phenological stages [90], and the UAV provides a good platform for obtaining crop information [91]. Crop growth is generally quantified by several indices, such as the leaf area index (LAI), leaf dry weight and leaf nitrogen accumulation, for which multiple spectral bands are usually needed. Since this is a relatively comprehensive task, the sensors onboard UAVs are usually multispectral/hyperspectral [91,92], or RGB combined with infrared [93] or LiDAR [63].
  • Crop yield estimation: Accurate yield estimates are essential for predicting the volume of stock needed and organizing harvesting operations. RS information can be used as input variables or parameters to directly or indirectly reflect the influencing factors in the process of crop growth and yield formation, alone or in combination with other information, for crop yield estimation. It tries to estimate the crop yield by observing the morphological characteristics of crops in a non-destructive way [16]. Similar to crop growth monitoring, crop yield estimation also relies on multiple spectral bands for better and richer information. Therefore, UAVs are usually equipped with multimodal sensors, for example, hyperspectral/multispectral [15,16,94,95,96,97] and thermal infrared [95] sensors, in combination with RGB [15,16,94,95,96,97] or SAR [66].
  • Crop type classification: Crop type maps are one of the most essential inputs for agricultural tasks such as crop yield estimation, and accurate crop type identification is important for subsidy control, biodiversity monitoring, soil protection, sustainable application of fertilizer, etc. There exist practices to explore the discrimination of different crop types from RS images in complex and heterogeneous environments. The crop type classification task tries to map different crop types based on the information captured by RS data, and is similar to land cover/land use classification [98]. According to the demands of different tasks, it can be implemented at different spatial scales. For larger-scale classification, SAR sensors are used [64,99,100], while at smaller scales, RGB images from UAVs can be utilized [101], alone or fused with SAR data [102].

3. Deep Learning in Precision Agriculture with UAV Remote Sensing

3.1. Deep Learning Methods in Precision Agriculture

DL is a subset of artificial neural network (ANN) methods in machine learning. DL consists of several connected layers of neurons with activations like ANNs, but with more hidden layers and deeper and more complex combinations, which is responsible for learning better patterns than a common ANN. The concept of DL was proposed in 2006 by Hinton et al. [103], who solved key issues in deep ANN training. With the advance of the computational capacity of computer hardware and the availability of large amounts of labeled samples, the massive training and inference of DL have become possible and efficient, which makes DL outperform traditional ML methods in a variety of applications. In the last decade, DL methods have gained increasingly more attention and have become the de facto mainstream of ML. More fundamental details of DL models, such as activation functions, loss functions, optimizers and basic structures, can be found in [26].
According to the data type processed and the type of network architecture, different types of DL models have been designed; representative ones are the CNN, the recurrent neural network (RNN) and the generative adversarial network (GAN). These three types of network architectures are the most widely used in agriculture applications, especially for UAV RS. CNNs are designed to deal with grid-like data, such as images, and are therefore very suitable for supervised image processing and computer vision applications. A CNN is usually composed of three distinct hierarchical structures: convolutional layers, pooling layers and fully connected layers. Typical CNN architectures are AlexNet [104], GoogLeNet [105] and ResNet [106]. RNNs, as supervised models, have also been applied to deal with time-series data via modeling time-related features. One typical RNN architecture is long short-term memory (LSTM), whose basic unit can remember information over arbitrary time intervals. Another network architecture that has been popular and successful in recent years is the GAN [107]. A GAN model consists of two sub-networks, a generative network and a discriminative network. Its main idea comes from the zero-sum game, in which the generative network tries to generate samples as vivid as possible and the discriminative network tries to discriminate the fake samples from the real ones. GANs have also been applied to image-to-image translation [108,109] and sample augmentation [110,111] in the field of RS.
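To make the CNN structure described above concrete, the following is a minimal sketch of such a network written with PyTorch (an assumed framework; the class name, channel sizes and number of classes are illustrative only), showing convolutional, pooling and fully connected layers in sequence.

```python
import torch
import torch.nn as nn

class SimpleCropCNN(nn.Module):
    """Minimal CNN with the three layer types named above:
    convolutional, pooling and fully connected layers."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolutional layer
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                               # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)       # fully connected layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# e.g., a batch of two 256x256 RGB UAV image patches
logits = SimpleCropCNN(num_classes=5)(torch.randn(2, 3, 256, 256))
print(logits.shape)  # (2, 5)
```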
In UAV RS scenarios, most applications utilize images captured by cameras as their data inputs, i.e., they are computer vision related tasks. In this way, UAV RS tasks in PA that use DL methods (mainly CNNs) can be divided into three typical and principal computer vision tasks: classification, detection and segmentation [112].
  • Classification tries to predict the presence/absence of at least one object of a particular object class in the image, and DL algorithms are required to provide a real-valued confidence of the object’s presence. Classification methods are mainly used to recognize crop diseases [86,113,114], weed types [27,115,116], or crop types [117,118].
  • Detection tasks try to predict the bounding box of each object of a particular object class in the image with an associated confidence, i.e., answer the question “where are the instances in the image, if any?” This means that the extracted object information is relatively more precise (see the sketch at the end of this subsection). Typical applications are finding crops with pests [119] or other diseases [120], locating weeds in the images [121,122], counting crops for yield estimation [123,124,125] or disaster evaluation [126], etc.
  • Segmentation is a task that predicts the object label (for semantic segmentation) or instance label (for instance segmentation) of each pixel in the test image, which can be viewed as a more precise classification for each pixel. It can not only locate objects, but also obtain their pixels at a finer-grained level. Therefore, segmentation methods are usually used to accurately locate features of interest in images. Semantic segmentation can help locate crop leaf diseases [127,128], generate weed maps [76,78,129], or assess crop growth [130,131] and yields [132], while instance segmentation can detect crop and weed plants [133,134] or conduct crop seed phenotyping [135] at a finer level.
Overall, the three principal kinds of computer vision techniques have played a crucial role in UAV-based RS for PA and support various typical applications as mentioned in Section 2.2, mainly including crop pest and disease detection, weed detection and mapping, crop growth monitoring and yield estimation, crop type classification etc. Table 4 shows a compilation of typical examples in PA using DL methods.
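As a concrete illustration of the detection task in the list above, the following hedged sketch runs a COCO-pretrained Faster R-CNN from torchvision (an assumption: torchvision ≥ 0.13 is available and pretrained weights can be downloaded); in practice such a detector would be fine-tuned on UAV imagery with crop or weed bounding-box annotations.

```python
import torch
import torchvision

# Off-the-shelf detector used only to illustrate the output format of the
# detection task: bounding boxes, class labels and confidence scores.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 512, 512)          # stand-in for one RGB UAV image patch in [0, 1]
with torch.no_grad():
    prediction = model([image])[0]       # dict with 'boxes', 'labels', 'scores'
print(prediction["boxes"].shape, prediction["labels"].shape, prediction["scores"].shape)
```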

3.2. Dataset for Intelligent Precision Agriculture

The sensors integrated with a UAV depend on the purpose, size, weight, power consumption, etc. A number of reviews discuss the sensors on UAVs in the PA field, and to the best of our knowledge, only Zhang et al. [158] listed some datasets of agricultural dense scenes, but whether these datasets are publicly available is not indicated. Hence, in this paper we give a compilation of publicly available UAV datasets with labels for PA applications, together with descriptions including the platform, data type, major applications and links, in Table 5, which attempts to facilitate the development, testing and comparison of relevant DL methods. We also summarize the available labeled datasets for related tasks from sensors onboard satellites, which may be used to obtain pre-trained models.

4. Edge Intelligence for UAV RS in Precision Agriculture

4.1. Cloud and Edge Computing for UAVs

4.1.1. Cloud Computing Paradigm for UAVs

Cloud computing is a computing paradigm that provides end-users with infrastructure, platforms, software and other on-demand shared services by integrating large-scale and scalable computing, storage, data, applications and other distributed computing resources over an Internet connection [172,173]. The main characteristic of cloud computing is the change in the way resources are used. End-users normally run applications on their end devices while the core services and processing are performed on cloud servers. At the same time, end-users do not need to master the corresponding technology or operational skills for device maintenance, but only focus on the required services. This improves service quality while reducing operation and maintenance costs. The key services that cloud computing offers include infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS) [174]. The cloud-computing paradigm provides the following major advantages:
  • The number of servers in the cloud is huge, providing users with powerful computing and storage resources for massive UAV RS data processing.
  • Cloud computing supports users to obtain services at any location from various terminals such as a laptop or a phone through virtualization.
  • Cloud computing is a distributed computing architecture in which issues such as single-point failures are inevitable, but fault-tolerant mechanisms such as replica strategies ensure high reliability for various processing and analysis services.
  • Cloud centers can dynamically allocate or release resources according to the needs of specific users, and can meet the dynamic scale growth requirements of applications and users, benefiting from the scalability of cloud computing.
Several studies have constructed cloud-based systems for UAV RS. Jeong et al. [175] proposed a UAV control system that performs real-time image processing in a cloud system and controls a UAV according to the computed results, wherein the UAV contains only the minimal essential control functions and shares data with the cloud server via WiFi. In [176], a cloud-based environment for generating yield estimation maps of apple orchards from UAV images is presented. The DL model for apple detection is trained and verified offline in the Google Colab cloud service, with the aid of GIS tools. Similarly, Ampatzidis et al. [143] developed a cloud- and AI-based application, Agroview, to accurately and rapidly process, analyze and visualize data collected from UAVs. Agroview uses Amazon Web Services (AWS), which provides highly reliable and scalable infrastructure for deploying cloud-based applications, with a main application control machine as the user interface, one instance for central processing unit (CPU) intensive usage such as data stitching, and one instance for graphics processing unit (GPU) intensive usage such as the tree detection algorithm.
Current cloud-based applications generally follow the pipeline shown in Figure 2a. This pattern of cloud computing suffers from the following disadvantages [177]: (a) with the growing quantity of data generated at the edge, the speed of data transport through the network is becoming the bottleneck of the cloud-computing paradigm; (b) the number of sensors at the edge of the network is increasing dramatically and the data produced will be enormous, making conventional cloud computing not efficient enough to handle all these data; (c) in the cloud-computing paradigm, end devices at the edge usually act only as data consumers, and the change from data consumer to data producer/consumer requires placing more functions at the edge.

4.1.2. Edge Computing Paradigm for UAVs

Edge computing addresses the above-mentioned disadvantages by bringing computing and storage resources to the edge of the network, close to mobile devices or sensors [177]. In the edge computing paradigm, the edge can perform computation offloading, data storage, caching and processing, as well as distribute requests and deliver services from the cloud to end-users. In recent years, edge computing has attracted tremendous attention for its low latency, mobility, proximity to end-users and location awareness compared with the cloud computing paradigm [137,172,178], as shown in Figure 2b. Compared with cloud computing, edge computing has the following characteristics:
  • With the rapid development of the IoT, devices around the world generate massive data, but only a small portion is critical and most is temporary data that does not require long-term storage. Processing this large amount of temporary data at the edge of the network reduces the pressure on network bandwidth and data centers.
  • Although cloud computing can provide services for mobile devices to make up for their lack of computing, storage and power resources, the network transmission speed is limited by the development of communication technology, and there are issues such as unstable links and routing in complex network environments. These factors can cause high latency, excessive jitter and slow data transmission, thus reducing the responsiveness of cloud services. Edge computing provides services near users, which can enhance the responsiveness of services.
  • Edge computing provides infrastructures for the storage and use of critical data and improves data security.
For UAV-based RS applications in PA, edge computing is ideal for online tasks that require the above promising features. There are two ways to avoid transferring massive data to the cloud: the first is to implement a UAV onboard real-time processing platform and deliver only the key information to the network, and the other is to deploy a local ground station for UAV information processing.
As the computing and storage resources are quite limited, optimization of the computing workflow and algorithms is therefore required. For example, Li et al. [120] developed an airborne edge computing system for pine wilt disease detection in coniferous forests. The images captured by an onboard camera are passed directly to the edge computing module, in which a lightweight YOLO model is implemented due to the limited processing and storage resources. Similarly, Deng et al. [137] used a lightweight segmentation model for real-time weed mapping on an NVIDIA Jetson TX board. Camargo et al. [115] specifically optimized a ResNet model and changed it from 32-bit to 16-bit precision to reduce computation on an NVIDIA Jetson AGX Xavier embedded system. Many researchers [179,180] have also exploited the acceleration of traditional image processing algorithms for various RS data types on UAV platforms.

4.2. Edge Intelligence: Convergence of AI and Edge Computing

Artificial intelligence methods are computation and storage intensive. Although they can achieve excellent performance in most applications, they place high demands on computing and storage resources, making real-world deployment challenging. The emergence of edge computing addresses these key problems for artificial intelligence applications on edge devices. The combination of edge computing and artificial intelligence yields edge intelligence [38]. Currently, there is no formal international definition of edge intelligence. Edge intelligence is commonly regarded as the paradigm of running AI algorithms locally on an intelligent edge device [34]. Researchers have also tried to give a broader definition, which mainly includes AI for edge and AI on edge. The former solves problems in edge computing with AI, while the latter is the common definition [34] and the focus of this paper. In edge intelligence applications, edge devices can reduce the amount of data transferred to the central cloud and greatly save bandwidth resources. Meanwhile, running DL models on edge devices has lower computing consumption and higher performance, and can avoid possible privacy risks.
In the scope of this review, the combination of intelligent UAV RS and edge computing results in more effective PA applications. To obtain better performance, DL models tend to be designed deeper and more complex, which inevitably causes delays. Limited by processing and storage resources, these complex DL models can hardly be applied directly on UAVs. Much work needs to be done before deployment on resource-limited UAV edge platforms for efficient PA applications. According to existing work, the major components of edge intelligence include: (a) edge caching, a distributed data system near end users that collects and stores the data produced by edge devices and surrounding environments, as well as data received from the Internet; (b) edge training, a distributed learning procedure that learns the optimal models with the training set cached at the edge; (c) edge inference, which infers the testing instance on edge devices and servers using a trained model or algorithm; and (d) edge offloading, a distributed computing paradigm that offers computing services for edge caching, edge training and edge inference [181]. For UAV RS in PA, existing studies mainly focus on edge training and edge inference, especially inference onboard UAVs. On the other hand, for an edge intelligence system and industrial ecosystem, algorithms and computing resources are the key elements. As a result, we discuss the relevant developments from the perspectives of model design and edge resources for edge intelligence in PA with UAV RS in Section 4.3 and Section 4.4.

4.3. Lightweight Network Model Design

To obtain higher classification, detection or segmentation accuracy, deep CNN models are designed with deeper, wider and more complex architectures, which inevitably leads to computation-intensive and storage-intensive algorithms on computing devices. When it comes to edge devices, especially UAVs, their limited computing, storage, power and bandwidth resources can hardly meet the requirements of intelligent applications.
Research has found that the structure of deep neural networks is redundant. Based on this property, the compression of deep neural networks can greatly ease the burden of inference and accelerate computing to accommodate usage on UAV platforms. In recent years, researchers have made great efforts to compress and accelerate deep neural network models from the aspects of algorithm optimization, hardware implementation and co-design [182]. The work in [183] by Han et al. is widely considered to be the first to systematically carry out deep model compression. Its main contributions include pruning network connections to keep only the more important ones, quantizing model parameters to reduce the model volume and improve efficiency, and further compressing the model through Huffman coding. Model compression is conducive to reducing computing, memory and power consumption, which makes models easier to deploy on UAV systems.
The mainstream deep model compression methods can be divided into the following categories: (1) lightweight convolution design, (2) parameter pruning, (3) low-rank factorization, (4) parameter quantization, and (5) knowledge distillation. Each compresses the model from a different aspect, and they are often used in combination. Table 6 shows a compilation of lightweight inference applications onboard UAVs for PA. As shown in Table 6, research on the edge inference of UAV RS in the PA field is still at the starting stage, and there are only a few attempts at general model quantization and pruning methods. Hence, we describe the categories and the corresponding developments in detail below, taking into consideration edge inference onboard UAVs in domains beyond agriculture.

4.3.1. Lightweight Convolution Design

Lightweight convolution design refers to the compact design of convolutional filters. The convolutional filter is used for translation-invariant feature extraction and makes up the majority of CNN operations. Therefore, lightweight convolution design has been a hot research direction in DL.
It tries to replace the original heavy convolutional filters with compact ones. Specifically, it transforms a convolutional filter with a large size into several smaller-sized ones and concatenates their results to achieve equivalent convolution results, as smaller-sized filters compute much faster. Typical designs are SqueezeNet [187], MobileNet [188] and ShuffleNet [189]. SqueezeNet designs a fire module composed of a squeeze layer with 1×1 filters to reduce the input channels and an expanding layer with a mix of 1×1 and 3×3 filters. MobileNet adopts the idea of depthwise separable convolutions to reduce the volume of parameters and computations, in which depthwise convolutions are used for feature extraction and pointwise convolutions are deployed to build features via linear combinations of input channels. ShuffleNet uses group convolution and channel shuffle to reduce parameters, and can obtain results similar to those of the original convolutions.
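As an illustration of the MobileNet-style depthwise separable convolution described above, the following PyTorch sketch (the class name and channel sizes are illustrative assumptions) contrasts its parameter count with that of a standard 3×3 convolution.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """MobileNet-style block: a depthwise convolution extracts features per
    channel, then a 1x1 pointwise convolution linearly combines the channels,
    replacing one heavy standard convolution."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Parameter count vs. a standard 3x3 convolution with the same channel sizes
standard = nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)
separable = DepthwiseSeparableConv(64, 128)
print(sum(p.numel() for p in standard.parameters()),    # 73728
      sum(p.numel() for p in separable.parameters()))   # ~9000
```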
In [190], a DL fire recognition algorithm is proposed for embedded intelligent forest fire monitoring using UAVs. It is based on the lightweight MobileNet V3 to reduce the complexity of the conventional YOLOv4 network architecture; regarding the model parameters, a decline of 63.91%, from 63.94 million to 23.08 million, is obtained. Egli et al. [184] designed a computationally lightweight CNN with a sequential model design of four consecutive convolution/pooling layers for tree species classification using high-resolution RGB images from automated UAV observations, which outperforms several different architectures on the available dataset. Similarly, in order to achieve real-time performance on UAVs, Hua et al. [191] designed a lightweight E-Mobile Net as the feature extraction backbone network for real-time tracking.

4.3.2. Parameter Pruning

In deep models, not all parameters contribute to the discriminative performance; thus, many of them can be removed from the network with minimal effect on the accuracy of the trained models. Based on this principle, parameter pruning tries to prune the redundant, non-informative parameters from convolutional layers and fully connected layers to reduce computational operations and memory consumption.
There are several ways of pruning at different granularities. Unimportant weight connections can be pruned out with a certain threshold [192]. Similarly, individual redundant neurons, along with their input and output connections, can be pruned [193]. Furthermore, filters composed of neurons can also be removed according to their importance, as indicated by the L1 or L2 norm [194]. At the coarsest granularity, layers that are least informative can also be pruned, as shown in [185]. Connection-level and neuron-level pruning introduce unstructured sparse connections in the network, which affects computational efficiency. In contrast, filter-level and layer-level pruning do not interfere with the normal forward computation, and can therefore both compress the model and accelerate inference. It is worth noting that pruning is usually accompanied by model fine-tuning.
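The two pruning granularities discussed above can be sketched with PyTorch's built-in pruning utilities (an assumption; the cited works use their own implementations, and the layer sizes and pruning amounts here are illustrative).

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(32, 64, kernel_size=3, padding=1)

# Connection-level (unstructured) pruning: zero the 30% smallest-magnitude weights.
prune.l1_unstructured(conv, name="weight", amount=0.3)

# Filter-level (structured) pruning: remove the 25% of output filters with the
# smallest L1 norm (dim 0), which keeps the remaining forward computation dense.
prune.ln_structured(conv, name="weight", amount=0.25, n=1, dim=0)

# Make the pruning permanent; the model would then be fine-tuned to recover accuracy.
prune.remove(conv, "weight")
print(float((conv.weight == 0).float().mean()))  # fraction of zeroed weights
```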
Wang et al. [190] eliminated redundant channels through channel-level sparsity-induced regularization and achieved significant drops in model parameter number and inference time of over 95% and 75%, respectively, with comparable accuracy, thus making the model suitable for real-time fire monitoring on UAV platforms. The work in [185] adopts two ways of pruning: one-shot pruning to achieve the desired compression ratio in a single step, and iterative pruning to gradually remove connections until the targeted compression ratio is obtained. The model is retrained to readapt the parameters after pruning iterations to recover the accuracy drop. Aiming at secure edge computing for agricultural object detection applications, Fan et al. [195] use layer pruning and filter pruning together to achieve a smaller structure and maximize real-time performance.

4.3.3. Low-Rank Factorization

Low-rank factorization tries to factorize a large weight matrix or tensor into several matrices or tensors of smaller dimensions. It can be applied to both convolutional and fully connected layers. Factorizing convolutional filters makes the inference process faster, while applying factorization to the denser fully connected layers removes redundancy and reduces storage requirements.
Lebedev et al. [196] explore the low-rank factorization of deep networks through tensor decomposition and discriminative fine-tuning. Based on CP decomposition, they decompose the original convolutional layer into a sequence of four layers with smaller filters, thus reducing the computations. Similarly, well-known factorization methods such as Tucker decomposition [197] and singular value decomposition [198,199] are also widely applied with low-rank constraints in the model training process to reduce the number of parameters and speed up the network.
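A minimal sketch of the singular value decomposition case, assuming a fully connected layer and a manually chosen rank (both illustrative), replaces one large weight matrix W with two smaller factors so that W is approximated by U_r S_r V_r^T.

```python
import torch
import torch.nn as nn

def factorize_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Replace one fully connected layer with two smaller ones via truncated SVD."""
    W = layer.weight.data                              # shape (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data = torch.diag(S[:rank]) @ Vh[:rank, :]   # (rank, in_features)
    second.weight.data = U[:, :rank]                           # (out_features, rank)
    if layer.bias is not None:
        second.bias.data = layer.bias.data.clone()
    return nn.Sequential(first, second)

fc = nn.Linear(1024, 512)
compressed = factorize_linear(fc, rank=64)
print(sum(p.numel() for p in fc.parameters()),          # 524,800 parameters
      sum(p.numel() for p in compressed.parameters()))  # 98,816 parameters
```

In practice the rank is chosen to balance the accuracy drop against the compression ratio, and the factorized model is fine-tuned afterwards.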
To meet the severe constraints of typical embedded systems in grape leaf disease detection applications, a low-rank CNN architecture, LR-Net, based on tensor decomposition is developed in [200] for both convolutional and fully connected layers, and obvious performance gains are obtained compared with other lightweight network architectures.

4.3.4. Parameter Quantization

The intention of parameter quantization is to reduce the volume of the trained model during storage and transmission. Generally, weights in deep models are stored as 32-bit floating-point numbers. Reducing the number of bits leads to a reduction in both operations and model size.
In recent years, low-bit quantization has become popular for deep model compression and acceleration. There are two types of quantization: one is parameter sharing for the trained model, and the other is low-bit representation for model training. Parameter sharing designs a function that maps various weight parameters to the same value. In [201], a new network architecture, HashedNets, is designed, in which the weight parameters are randomly mapped to hash buckets through a hash function and all parameters in the same bucket share the same weight value. The work in [202] develops an approximation that quantizes gradients to 8 bits for GPU cluster parallelism. Further, [203] proposes an incremental network quantization method that losslessly quantizes parameters to as few as 5 bits. The challenging binary neural network [137] is also an active research topic.
To develop a lightweight network architecture for weed mapping tasks onboard UAVs, [137] conducted optimization and precision calibration during the inference process, reducing the precision from 32-bit to 16-bit. Similarly, Camargo et al. [115] shifted their ResNet-18 model from 32-bit to 16-bit and observed a corresponding reduction in inference time on an NVIDIA Jetson AGX Xavier. To execute deep models efficiently on embedded platforms, Blekos et al. [186] quantized the trained U-Net model to 8-bit integers with acceptable losses.
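A minimal sketch of the two post-training precision reductions mentioned above, assuming standard PyTorch utilities (the cited UAV deployments rely on vendor toolchains such as TensorRT for calibration; the model and layer sizes here are illustrative).

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# 32-bit -> 16-bit: cast the weights to half precision; FP16 inference is
# typically run on a GPU or an embedded accelerator that supports it.
model_fp16 = model.half()

# 32-bit -> 8-bit: dynamic quantization stores the Linear weights as int8.
model_fp32 = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
model_int8 = torch.quantization.quantize_dynamic(
    model_fp32, {nn.Linear}, dtype=torch.qint8)
out = model_int8(torch.randn(1, 512))   # runs on CPU with int8 weight matrices
print(out.shape)
```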

4.3.5. Knowledge Distillation

The main objective of knowledge distillation is to train a student network from a teacher network while maintaining its generalization capability [204]. The student network is lighter, i.e., it has a smaller model size and less computation, but achieves the same or comparable performance as the larger network.
Great efforts have been made to improve the supervision of the student network through different kinds of transferred knowledge. Romero et al. [205] proposed the FitNets model, which teaches the student network to imitate hints from both the middle layers and the output layer of the teacher network. Instead of hard labels, the work in [206] utilizes soft labels as the representation from the teacher network. Kim et al. [207] proposed a paraphrasing-based knowledge transfer method that uses convolution operations to paraphrase the teacher model’s knowledge and translate it to a student model. From the perspective of teacher networks, student networks can also learn knowledge from multiple teachers [208].
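The soft-label idea above can be sketched as a distillation loss in PyTorch (a generic Hinton-style formulation, not the loss of any specific cited work; the temperature, weighting and class count are illustrative assumptions).

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.7):
    """The student matches the teacher's softened class probabilities
    (soft labels) while also fitting the hard ground-truth labels."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    soft_loss = F.kl_div(F.log_softmax(student_logits / temperature, dim=1),
                         soft_targets, reduction="batchmean") * temperature ** 2
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Toy usage: a batch of 8 samples over 5 crop/weed classes
student_logits = torch.randn(8, 5, requires_grad=True)
teacher_logits = torch.randn(8, 5)            # produced by the frozen teacher network
labels = torch.randint(0, 5, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```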
In the field of UAV-based deep model inference, knowledge distillation is a promising direction. In [209], a YOLO + MobileNet model acts as the teacher network, while the pruned model functions as the student network, and a knowledge distillation algorithm is used to improve the detection accuracy of the pruned model. Qiu et al. [210] propose to distill knowledge into a lighter distilled network through soft labels from the trained teacher network MobileNet. Similar applications using knowledge distillation for model compression can be found in [211,212].

4.4. Edge Resources for UAV RS

The key idea of edge computing is that computing should be closer to the data sources and users. It can avoid massive data transfers to the cloud and process data near the places where things and people produce or consume data, thus reducing the latency, the pressure on network bandwidth and the demand for computing and storage resources. The edge is a concept relative to the network core; it refers to any computing, storage and network resource along the path from the data source to the cloud-computing center, and the resources on this path can be regarded as a continuous system. Generally, the resources at the edge mainly include user terminals such as mobile phones and personal computers, infrastructure such as WiFi access points, cellular network base stations and routers, and embedded devices such as cameras and set-top boxes. These numerous resources around users are independent of each other and are called edge nodes. In this paper, we focus on the scope of AI on edge within edge intelligence, i.e., running AI models on intelligent edge devices. Such devices have built-in processors with onboard analytics or AI capabilities and mainly include sensors, UAVs, autonomous cars, etc. Rather than uploading data to a cloud for processing and storage, intelligent edge devices offer the ability to process certain amounts of data directly, thereby reducing latency, bandwidth requirements, cost, privacy threats, etc.
For the scenario of edge computing for UAV RS in PA, applications can be deployed on UAV intelligent edge devices with embedded computing platforms or on edge servers. In this paper we mainly discuss the former. To accelerate the processing of complex DL models, a few types of onboard hardware accelerators are currently included in UAV solutions.
The following lists popular examples, divided into general-purpose CPU-based solutions, GPU solutions and field-programmable gate array (FPGA) solutions. A few studies also use microcontroller units (MCUs) [213] and vision processing units (VPUs) [214] for UAV image recognition and monitoring.
  • General-purpose CPU-based solutions: Multi-core CPUs are latency-oriented architectures, which have more computational power per core but fewer cores, and are more suitable for task-level parallelism [215,216]. As a general-purpose software-programmable platform, the Raspberry Pi has been widely adopted as a ready-to-use solution for a variety of UAV applications due to its low weight, small size and low power consumption.
  • GPU solutions: GPUs have been designed as throughput-oriented architectures; they have less powerful cores than CPUs but hundreds or thousands of them, together with significantly larger memory bandwidth, which makes GPUs suitable for data-level parallelism [215]. In recent years, embedded GPUs, especially NVIDIA’s Jetson boards, which stand out among other manufacturers’ products, have been widely used to provide more flexible solutions than FPGAs.
  • FPGA solutions: The advent of FPGA-based embedded platforms allows combining the high-level management capabilities of processors with the flexible operations of programmable hardware [217]. With the advantages of (a) relatively smaller size and weight compared with clusters, multi-core and many-core processors, (b) significantly lower power consumption compared with GPUs, and (c) the ability to be reprogrammed during flight, unlike application-specific integrated circuits (ASICs), FPGA-based platforms such as the Xilinx Zynq-7000 family provide plenty of solutions for real-time processing onboard UAVs [218].
Table 7 gives a compilation of edge computing platforms onboard UAVs for typical RS applications with specific platform vendor, model, configurations and applications.

5. Future Directions

Despite the great progress of DL and UAV RS techniques in the PA field, the research and practice of edge intelligence, especially in PA, are still at an early stage. In addition to the common challenges in PA, UAV RS and edge intelligence, here we list a few specific issues that need to be addressed within the scope of this paper.
  • Lightweight intelligent model design in PA for edge inference. As mentioned in Section 4, most DL-based models for UAV RS data processing and analytics in PA are highly resource intensive. Hardware with powerful computing capability is important to support the training and inference of these large AI models. Currently, there are only a few studies applying common parameter pruning and quantization methods in PA with UAV RS. The metrics of size and efficiency can be further improved by considering the data and algorithm characteristics and exploiting other sophisticated model compression techniques, such as knowledge distillation and combinations of multiple compression methods [183]. In addition, instead of using existing AI models, the neural architecture search (NAS) technique [243] can be utilized to derive models tailored to the hardware resource constraints on performance metrics, e.g., latency and energy efficiency, considering the underlying edge devices [34].
  • Transfer learning and incremental learning for intelligent PA models on UAV edge devices. The performance of many DL models heavily relies on the quantity and quality of datasets. However, it is difficult or expensive to collect a large amount of labeled data. Therefore, edge devices can exploit transfer learning to learn a competitive model, in which a model pre-trained on a large-scale dataset is further fine-tuned on the domain-specific data [244,245] (see the sketch after this list). Secondly, edge devices such as UAVs may collect data with different distributions, or even data belonging to an unknown class compared with the original training data, during flight. The model on the edge devices can be updated by incremental learning to give better prediction performance [246].
  • Collaboration of RS cloud centers, ground control stations and UAV edge devices. To bridge the gap between the low computing and storage capabilities of edge devices and the high resource requirements of DL training, collaborative computing between the end, the edge and the cloud is a possible solution, and it has become the trend for edge intelligence architectures and application scenarios. A good cloud-edge-end collaboration architecture should take into account the characteristics of heterogeneous devices, asynchronous communication and diverse computing and storage resources, thus achieving collaborative model training and inference [247]. In the conventional mode, model training is often performed in the cloud, and the trained model is deployed on edge devices. This mode is simple, but cannot fully utilize resources. In the case of edge intelligence for UAV RS in PA, decentralized edge devices and data centers can cooperate with each other to train or improve a model using federated learning [248].
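A hedged sketch of the transfer-learning direction referenced in the list above: an ImageNet pre-trained ResNet-18 backbone (assuming torchvision ≥ 0.13 and downloadable weights) is frozen and only a new classification head is fine-tuned on a small UAV crop dataset; the class count and batch are illustrative stand-ins.

```python
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False                       # freeze the pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 4)     # new head for 4 crop/weed classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
images = torch.randn(8, 3, 224, 224)              # stand-in batch of UAV image patches
labels = torch.randint(0, 4, (8,))
loss = criterion(model(images), labels)           # only the new head receives gradients
loss.backward()
optimizer.step()
```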

6. Conclusions

This paper gives a systematic and comprehensive overview of the latest developments in PA promoted by UAV RS and edge intelligence techniques. We first introduce the application of UAV RS in PA, including the fundamentals of various types of UAV systems and sensors and typical applications, to give a preliminary picture. The latest developments in DL methods and public datasets in PA with UAV RS are then presented. Subsequently, we give a thorough analysis of the development of edge intelligence in PA with UAV RS, including the cloud computing and edge computing paradigms, the basic concepts and major components of edge intelligence (i.e., edge caching, edge training, edge inference and edge offloading), and the developments from the perspectives of network model design and edge resources. Finally, we present several issues that need to be further addressed.
Through this survey, we provide preliminary insights into how PA benefits from UAV RS together with edge intelligence. In recent years, small and light, fixed-wing or industrial rotor-wing UAV systems have been widely adopted in PA. Due to their ease of use, high flexibility, high resolution and reduced cloud interference when flying at low altitudes, UAV RS has become a powerful means of monitoring agricultural conditions. In addition, the integration of DL techniques in PA with UAV RS has reached higher accuracies than traditional analysis methods. These PA applications have been transformed into computer vision tasks, including classification, object detection and segmentation, and CNNs and RNNs are the most widely adopted network architectures. There are also a few publicly available UAV datasets for intelligent PA, mainly from RGB sensors and very few from multispectral and hyperspectral sensors; these datasets can facilitate the validation and comparison of DL methods. However, deep models generally bring higher computing, memory and network requirements. Cloud computing is a common solution to increase efficiency with high scalability and low cost, but at the price of high latency and pressure on the network bandwidth. The emergence of edge computing brings the computing to the edge of the network, close to the data sources. AI and edge computing together yield edge intelligence, providing a promising solution for efficient intelligent UAV RS applications. In terms of hardware, typical computing solutions include CPUs, GPUs and FPGAs. From the algorithmic perspective, lightweight model design derived from model compression techniques, especially model pruning and quantization, is one of the most significant and widely used techniques. PA supported by advanced UAV RS and edge intelligence techniques offers the capability to increase productivity and efficiency while reducing costs.
Research and practice on edge intelligence, especially in PA with UAV RS, are still at an early stage. In the future, in addition to the general challenges of PA, UAV RS and edge intelligence, several issues within the scope of this paper remain to be addressed. These directions include designing and implementing lightweight models for PA with UAV RS on edge devices, realizing transfer learning and incremental learning for intelligent PA models on UAV edge devices, and achieving efficient collaboration of RS cloud centers, ground control stations and UAV edge devices.

Author Contributions

Conceptualization, J.L., J.X. and J.Y.; literature investigation and analysis, J.L., Y.J. and J.X.; writing—original draft preparation, J.L., J.X., R.L. and J.Y.; writing—review and editing, J.L., J.Y. and L.W.; visualization, J.L., Y.J. and J.X.; supervision, J.L. and L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China under Grant No. 41901376 and No. 42172333, and in part by the Fundamental Research Funds for the Central Universities, China University of Geosciences (Wuhan).

Data Availability Statement

Data sharing is not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. ISPA. Precision Ag Definition. Available online: https://www.ispag.org/about/definition (accessed on 17 October 2021).
  2. Messina, G.; Modica, G. Applications of UAV Thermal Imagery in Precision Agriculture: State of the Art and Future Research Outlook. Remote Sens. 2020, 12, 1491. [Google Scholar] [CrossRef]
  3. Schimmelpfennig, D. Farm Profits and Adoption of Precision Agriculture; U.S. Department of Agriculture, Economic Research Service: Washington, DC, USA, 2016.
  4. Maes, W.H.; Steppe, K. Perspectives for Remote Sensing with Unmanned Aerial Vehicles in Precision Agriculture. Trends Plant Sci. 2019, 24, 152–164. [Google Scholar] [CrossRef]
  5. Lillesand, T.; Kiefer, R.W.; Chipman, J. Remote Sensing and Image Interpretation; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  6. Mulla, D.J. Twenty five years of remote sensing in precision agriculture: Key advances and remaining knowledge gaps. Biosyst. Eng. 2013, 114, 358–371. [Google Scholar] [CrossRef]
  7. Eskandari, R.; Mahdianpari, M.; Mohammadimanesh, F.; Salehi, B.; Brisco, B.; Homayouni, S. Meta-Analysis of Unmanned Aerial Vehicle (UAV) Imagery for Agro-Environmental Monitoring Using Machine Learning and Statistical Models. Remote Sens. 2020, 12, 3511. [Google Scholar] [CrossRef]
  8. Tsouros, D.C.; Bibi, S.; Sarigiannidis, P.G. A Review on UAV-Based Applications for Precision Agriculture. Information 2019, 10, 349. [Google Scholar] [CrossRef] [Green Version]
  9. Zhang, H.; Wang, L.; Tian, T.; Yin, J. A Review of Unmanned Aerial Vehicle Low-Altitude Remote Sensing (UAV-LARS) Use in Agricultural Monitoring in China. Remote Sens. 2021, 13, 1221. [Google Scholar] [CrossRef]
  10. Jang, G.; Kim, J.; Yu, J.-K.; Kim, H.-J.; Kim, Y.; Kim, D.-W.; Kim, K.-H.; Lee, C.W.; Chung, Y.S. Review: Cost-Effective Unmanned Aerial Vehicle (UAV) Platform for Field Plant Breeding Application. Remote Sens. 2020, 12, 998. [Google Scholar] [CrossRef] [Green Version]
  11. US Department of Defense. Unmanned Aerial Vehicle. Available online: https://www.thefreedictionary.com/Unmanned+Aerial+Vehicle (accessed on 19 October 2021).
  12. Deng, L.; Mao, Z.; Li, X.; Hu, Z.; Duan, F.; Yan, Y. UAV-based multispectral remote sensing for precision agriculture: A comparison between different cameras. ISPRS J. Photogramm. Remote Sens. 2018, 146, 124–136. [Google Scholar] [CrossRef]
  13. Christiansen, M.P.; Laursen, M.S.; Jørgensen, R.N.; Skovsen, S.; Gislum, R. Designing and Testing a UAV Mapping System for Agricultural Field Surveying. Sensors 2017, 17, 2703. [Google Scholar] [CrossRef] [Green Version]
  14. Popescu, D.; Stoican, F.; Stamatescu, G.; Ichim, L.; Dragana, C. Advanced UAV–WSN System for Intelligent Monitoring in Precision Agriculture. Sensors 2020, 20, 817. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Zhou, X.; Zheng, H.; Xu, X.; He, J.; Ge, X.; Yao, X.; Cheng, T.; Zhu, Y.; Cao, W.; Tian, Y. Predicting grain yield in rice using multi-temporal vegetation indices from UAV-based multispectral and digital imagery. ISPRS J. Photogramm. Remote Sens. 2017, 130, 246–255. [Google Scholar] [CrossRef]
  16. Yang, Q.; Shi, L.; Han, J.; Zha, Y.; Zhu, P. Deep convolutional neural networks for rice grain yield estimation at the ripening stage using UAV-based remotely sensed images. Field Crops Res. 2019, 235, 142–153. [Google Scholar] [CrossRef]
  17. Su, J.; Liu, C.; Coombes, M.; Hu, X.; Wang, C.; Xu, X.; Li, Q.; Guo, L.; Chen, W.-H. Wheat yellow rust monitoring by learning from multispectral UAV aerial imagery. Comput. Electron. Agric. 2018, 155, 157–166. [Google Scholar] [CrossRef]
  18. Guo, A.; Huang, W.; Dong, Y.; Ye, H.; Ma, H.; Liu, B.; Wu, W.; Ren, Y.; Ruan, C.; Geng, Y. Wheat Yellow Rust Detection Using UAV-Based Hyperspectral Technology. Remote Sens. 2021, 13, 123. [Google Scholar] [CrossRef]
  19. Bajwa, A.; Mahajan, G.; Chauhan, B. Nonconventional Weed Management Strategies for Modern Agriculture. Weed Sci. 2015, 63, 723–747. [Google Scholar] [CrossRef]
  20. Huang, Y.; Reddy, K.N.; Fletcher, R.S.; Pennington, D. UAV Low-Altitude Remote Sensing for Precision Weed Management. Weed Technol. 2018, 32, 2–6. [Google Scholar] [CrossRef]
  21. Van Klompenburg, T.; Kassahun, A.; Catal, C. Crop yield prediction using machine learning: A systematic literature review. Comput. Electron. Agric. 2020, 177, 105709. [Google Scholar] [CrossRef]
  22. Su, Y.-X.; Xu, H.; Yan, L.-J. Support vector machine-based open crop model (SBOCM): Case of rice production in China. Saudi J. Biol. Sci. 2017, 24, 537–547. [Google Scholar] [CrossRef]
  23. Everingham, Y.; Sexton, J.; Skocaj, D.; Inman-Bamber, G. Accurate prediction of sugarcane yield using a random forest algorithm. Agron. Sustain. Dev. 2016, 36, 27. [Google Scholar] [CrossRef] [Green Version]
  24. Chandra, A.L.; Desai, S.V.; Guo, W.; Balasubramanian, V.N. Computer vision with deep learning for plant phenotyping in agriculture: A survey. arXiv Prepr. 2020, arXiv:2006.11391. [Google Scholar]
  25. Zhou, L.; Zhang, C.; Liu, F.; Qiu, Z.; He, Y. Application of Deep Learning in Food: A Review. Compr. Rev. Food Sci. Food Saf. 2019, 18, 1793–1811. [Google Scholar] [CrossRef] [Green Version]
  26. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  27. Bah, M.D.; Hafiane, A.; Canals, R. Deep Learning with Unsupervised Data Labeling for Weed Detection in Line Crops in UAV Images. Remote Sens. 2018, 10, 1690. [Google Scholar] [CrossRef] [Green Version]
  28. Kitano, B.T.; Mendes, C.C.T.; Geus, A.R.; Oliveira, H.C.; Souza, J.R. Corn plant counting using deep learning and UAV images. IEEE Geosci. Remote. Sens. Lett. 2019, 1–5. [Google Scholar] [CrossRef]
  29. Nowakowski, A.; Mrziglod, J.; Spiller, D.; Bonifacio, R.; Ferrari, I.; Mathieu, P.P.; Garcia-Herranz, M.; Kim, D.-H. Crop type mapping by using transfer learning. Int. J. Appl. Earth Obs. Geoinf. 2021, 98, 102313. [Google Scholar] [CrossRef]
  30. Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177. [Google Scholar] [CrossRef]
  31. Chen, J.; Ran, X. Deep Learning with Edge Computing: A Review. Proc. IEEE 2019, 107, 1655–1674. [Google Scholar] [CrossRef]
  32. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  33. Liu, J.; Liu, R.; Ren, K.; Li, X.; Xiang, J.; Qiu, S. High-Performance Object Detection for Optical Remote Sensing Images with Lightweight Convolutional Neural Networks. In Proceedings of the 2020 IEEE 22nd International Conference on High Performance Computing and Communications; IEEE 18th International Conference on Smart City; IEEE 6th International Conference on Data Science and Systems (HPCC/SmartCity/DSS), Yanuca Island, Cuvu, Fiji, 14–16 December 2020; pp. 585–592. [Google Scholar]
  34. Zhou, Z.; Chen, X.; Li, E.; Zeng, L.; Luo, K.; Zhang, J. Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing. Proc. IEEE 2019, 107, 1738–1762. [Google Scholar] [CrossRef] [Green Version]
  35. Pu, Q.; Ananthanarayanan, G.; Bodik, P.; Kandula, S.; Akella, A.; Bahl, P.; Stoica, I. Low latency geo-distributed data analytics. ACM SIGCOMM Comp. Com. Rev. 2015, 45, 421–434. [Google Scholar] [CrossRef] [Green Version]
  36. Sittón-Candanedo, I.; Alonso, R.S.; Rodríguez-González, S.; Coria, J.A.G.; De La Prieta, F. Edge Computing Architectures in Industry 4.0: A General Survey and Comparison. International Workshop on Soft Computing Models in Industrial and Environmental Applications; Springer: Berlin/Heidelberg, Germany, 2019; pp. 121–131. [Google Scholar]
  37. Plastiras, G.; Terzi, M.; Kyrkou, C.; Theocharides, T. Edge intelligence: Challenges and opportunities of near-sensor machine learning applications. In Proceedings of the 2018 IEEE 29th International Conference on Application-Specific Systems, Architectures and Processors (ASAP), Milano, Italy, 10–12 July 2018; pp. 1–7. [Google Scholar]
  38. Deng, S.; Zhao, H.; Fang, W.; Yin, J.; Dustdar, S.; Zomaya, A.Y. Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence. IEEE Internet Things J. 2020, 7, 7457–7469. [Google Scholar] [CrossRef] [Green Version]
  39. Boursianis, A.D.; Papadopoulou, M.S.; Diamantoulakis, P.; Liopa-Tsakalidi, A.; Barouchas, P.; Salahas, G.; Karagiannidis, G.; Wan, S.; Goudos, S.K. Internet of Things (IoT) and Agricultural Unmanned Aerial Vehicles (UAVs) in smart farming: A comprehensive review. Internet Things 2020, 100187, in press. [Google Scholar] [CrossRef]
  40. Kim, J.; Kim, S.; Ju, C.; Son, H.I. Unmanned Aerial Vehicles in Agriculture: A Review of Perspective of Platform, Control, and Applications. IEEE Access 2019, 7, 105100–105115. [Google Scholar] [CrossRef]
  41. Mogili, U.R.; Deepak, B.B.V.L. Review on Application of Drone Systems in Precision Agriculture. Procedia Comput. Sci. 2018, 133, 502–509. [Google Scholar] [CrossRef]
  42. Kamilaris, A.; Prenafeta-Boldú, F.X. A review of the use of convolutional neural networks in agriculture. J. Agric. Sci. 2018, 156, 312–322. [Google Scholar] [CrossRef] [Green Version]
  43. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef] [Green Version]
  44. Santos, L.; Santos, F.N.; Oliveira, P.M.; Shinde, P. Deep Learning Applications in Agriculture: A Short Review. Iberian Robotics Conference; Springer: Berlin/Heidelberg, Germany, 2019; pp. 139–151. [Google Scholar]
  45. Civil Aviation Administration of China. Interim Regulations on Flight Management of Unmanned Aerial Vehicles. 2018; Volume 2021. Available online: http://www.caac.gov.cn/HDJL/YJZJ/201801/t20180126_48853.html (accessed on 17 October 2021).
  46. Park, M.; Lee, S.; Lee, S. Dynamic topology reconstruction protocol for uav swarm networking. Symmetry 2020, 12, 1111. [Google Scholar] [CrossRef]
  47. Radoglou-Grammatikis, P.; Sarigiannidis, P.; Lagkas, T.; Moscholios, I. A compilation of UAV applications for precision agriculture. Comput. Netw. 2020, 172, 107148. [Google Scholar] [CrossRef]
  48. Hayat, S.; Yanmaz, E.; Muzaffar, R. Survey on Unmanned Aerial Vehicle Networks for Civil Applications: A Communications Viewpoint. IEEE Commun. Surv. Tutor. 2016, 18, 2624–2661. [Google Scholar] [CrossRef]
  49. Xie, C.; Yang, C. A review on plant high-throughput phenotyping traits using UAV-based sensors. Comput. Electron. Agric. 2020, 178, 105731. [Google Scholar] [CrossRef]
  50. Delavarpour, N.; Koparan, C.; Nowatzki, J.; Bajwa, S.; Sun, X. A Technical Study on UAV Characteristics for Precision Agriculture Applications and Associated Practical Challenges. Remote Sens. 2021, 13, 1204. [Google Scholar] [CrossRef]
  51. Tsouros, D.C.; Triantafyllou, A.; Bibi, S.; Sarigannidis, P.G. Data acquisition and analysis methods in UAV-based applications for Precision Agriculture. In Proceedings of the 2019 15th International Conference on Distributed Computing in Sensor Systems (DCOSS), Santorini Island, Greece, 29–31 May 2019; pp. 377–384. [Google Scholar]
  52. Tahir, M.N.; Lan, Y.; Zhang, Y.; Wang, Y.; Nawaz, F.; Shah, M.A.A.; Gulzar, A.; Qureshi, W.S.; Naqvi, S.M.; Naqvi, S.Z.A. Real time estimation of leaf area index and groundnut yield using multispectral UAV. Int. J. Precis. Agric. Aviat. 2020, 3. [Google Scholar]
  53. Stroppiana, D.; Villa, P.; Sona, G.; Ronchetti, G.; Candiani, G.; Pepe, M.; Busetto, L.; Migliazzi, M.; Boschetti, M. Early season weed mapping in rice crops using multi-spectral UAV data. Int. J. Remote Sens. 2018, 39, 5432–5452. [Google Scholar] [CrossRef]
  54. Wang, H.; Mortensen, A.K.; Mao, P.; Boelt, B.; Gislum, R. Estimating the nitrogen nutrition index in grass seed crops using a UAV-mounted multispectral camera. Int. J. Remote Sens. 2019, 40, 2467–2482. [Google Scholar] [CrossRef]
  55. Ishida, T.; Kurihara, J.; Viray, F.A.; Namuco, S.B.; Paringit, E.C.; Perez, G.J.; Takahashi, Y.; Marciano, J.J., Jr. A novel approach for vegetation classification using UAV-based hyperspectral imaging. Comput. Electron. Agric. 2018, 144, 80–85. [Google Scholar] [CrossRef]
  56. Ge, X.; Wang, J.; Ding, J.; Cao, X.; Zhang, Z.; Liu, J.; Li, X. Combining UAV-based hyperspectral imagery and machine learning algorithms for soil moisture content monitoring. PeerJ 2019, 7, e6926. [Google Scholar] [CrossRef] [PubMed]
  57. Zhao, X.; Yang, G.; Liu, J.; Zhang, X.; Xu, B.; Wang, Y.; Zhao, C.; Gai, J. Estimation of soybean breeding yield based on optimization of spatial scale of UAV hyperspectral image. Trans. Chin. Soc. Agric. Eng. 2017, 33, 110–116. [Google Scholar]
  58. Prakash, A. Thermal remote sensing: Concepts, issues and applications. Int. Arch. Photogramm. Remote Sens. 2000, 33, 239–243. [Google Scholar]
  59. Weng, Q. Thermal infrared remote sensing for urban climate and environmental studies: Methods, applications, and trends. ISPRS J. Photogramm. Remote Sens. 2009, 64, 335–344. [Google Scholar] [CrossRef]
  60. Khanal, S.; Fulton, J.; Shearer, S. An overview of current and potential applications of thermal remote sensing in precision agriculture. Comput. Electron. Agric. 2017, 139, 22–32. [Google Scholar] [CrossRef]
  61. Dong, P.; Chen, Q. LiDAR Remote Sensing and Applications; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  62. Zhou, L.; Gu, X.; Cheng, S.; Yang, G.; Shu, M.; Sun, Q. Analysis of plant height changes of lodged maize using UAV-LiDAR data. Agriculture 2020, 10, 146. [Google Scholar] [CrossRef]
  63. Shendryk, Y.; Sofonia, J.; Garrard, R.; Rist, Y.; Skocaj, D.; Thorburn, P. Fine-scale prediction of biomass and leaf nitrogen content in sugarcane using UAV LiDAR and multispectral imaging. Int. J. Appl. Earth Obs. Geoinf. 2020, 92, 102177. [Google Scholar] [CrossRef]
  64. Ndikumana, E.; Minh, D.H.T.; Baghdadi, N.; Courault, D.; Hossard, L. Deep Recurrent Neural Network for Agricultural Classification using multitemporal SAR Sentinel-1 for Camargue, France. Remote Sens. 2018, 10, 1217. [Google Scholar] [CrossRef] [Green Version]
  65. Lyalin, K.S.; Biryuk, A.A.; Sheremet, A.Y.; Tsvetkov, V.K.; Prikhodko, D.V. UAV synthetic aperture radar system for control of vegetation and soil moisture. In Proceedings of the 2018 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus), St. Petersburg and Moscow, Russia, 29 January–1 February 2018; pp. 1673–1675. [Google Scholar]
  66. Liu, C.-A.; Chen, Z.-X.; Shao, Y.; Chen, J.-S.; Hasi, T.; Pan, H.-Z. Research advances of SAR remote sensing for agriculture applications: A review. J. Integr. Agric. 2019, 18, 506–525. [Google Scholar] [CrossRef] [Green Version]
  67. Pádua, L.; Vanko, J.; Hruška, J.; Adão, T.; Sousa, J.J.; Peres, E.; Morais, R. UAS, sensors, and data processing in agroforestry: A review towards practical applications. Int. J. Remote Sens. 2017, 38, 2349–2391. [Google Scholar] [CrossRef]
  68. Allred, B.; Eash, N.; Freeland, R.; Martinez, L.; Wishart, D. Effective and efficient agricultural drainage pipe mapping with UAS thermal infrared imagery: A case study. Agric. Water Manag. 2018, 197, 132–137. [Google Scholar] [CrossRef]
  69. Santesteban, L.G.; Di Gennaro, S.F.; Herrero-Langreo, A.; Miranda, C.; Royo, J.; Matese, A. High-resolution UAV-based thermal imaging to estimate the instantaneous and seasonal variability of plant water status within a vineyard. Agric. Water Manag. 2017, 183, 49–59. [Google Scholar] [CrossRef]
  70. Xue, J.; Su, B. Significant Remote Sensing Vegetation Indices: A Review of Developments and Applications. J. Sensors 2017, 2017, 1–17. [Google Scholar] [CrossRef] [Green Version]
  71. Dai, B.; He, Y.; Gu, F.; Yang, L.; Han, J.; Xu, W. A vision-based autonomous aerial spray system for precision agriculture. In Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO), Macau, China, 5–8 December 2017; pp. 507–513. [Google Scholar]
  72. Faiçal, B.S.; Freitas, H.; Gomes, P.H.; Mano, L.; Pessin, G.; de Carvalho, A.; Krishnamachari, B.; Ueyama, J. An adaptive approach for UAV-based pesticide spraying in dynamic environments. Comput. Electron. Agric. 2017, 138, 210–223. [Google Scholar] [CrossRef]
  73. Faiçal, B.S.; Pessin, G.; Filho, G.P.R.; Carvalho, A.C.P.L.F.; Gomes, P.H.; Ueyama, J. Fine-Tuning of UAV Control Rules for Spraying Pesticides on Crop Fields: An Approach for Dynamic Environments. Int. J. Artif. Intell. Tools 2016, 25, 1660003. [Google Scholar] [CrossRef] [Green Version]
  74. Esposito, M.; Crimaldi, M.; Cirillo, V.; Sarghini, F.; Maggio, A. Drone and sensor technology for sustainable weed management: A review. Chem. Biol. Technol. Agric. 2021, 8, 18. [Google Scholar] [CrossRef]
  75. Bah, M.D.; Dericquebourg, E.; Hafiane, A.; Canals, R. Deep Learning based Classification System for Identifying Weeds using High-Resolution UAV Imagery. Science and Information Conference; Springer: Berlin/Heidelberg, Germany, 2018; pp. 176–187. [Google Scholar]
  76. Huang, H.; Deng, J.; Lan, Y.; Yang, A.; Deng, X.; Zhang, L. A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery. PLoS ONE 2018, 13, e0196302. [Google Scholar] [CrossRef] [Green Version]
  77. Olsen, A.; Konovalov, D.A.; Philippa, B.; Ridd, P.; Wood, J.C.; Johns, J.; Banks, W.; Girgenti, B.; Kenny, O.; Whinney, J.; et al. DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning. Sci. Rep. UK 2019, 9, 1–12. [Google Scholar] [CrossRef]
  78. Sa, I.; Popović, M.; Khanna, R.; Chen, Z.; Lottes, P.; Liebisch, F.; Nieto, J.; Stachniss, C.; Walter, A.; Siegwart, R. WeedMap: A Large-Scale Semantic Weed Mapping Framework Using Aerial Multispectral Imaging and Deep Neural Network for Precision Farming. Remote Sens. 2018, 10, 1423. [Google Scholar] [CrossRef] [Green Version]
  79. Scherrer, B.; Sheppard, J.; Jha, P.; Shaw, J.A. Hyperspectral imaging and neural networks to classify herbicide-resistant weeds. J. Appl. Remote Sens. 2019, 13, 044516. [Google Scholar] [CrossRef]
  80. Huang, H.; Lan, Y.; Yang, A.; Zhang, Y.; Wen, S.; Deng, J. Deep learning versus Object-based Image Analysis (OBIA) in weed mapping of UAV imagery. Int. J. Remote Sens. 2020, 41, 3446–3479. [Google Scholar] [CrossRef]
  81. Hasan, R.I.; Yusuf, S.M.; Alzubaidi, L. Review of the State of the Art of Deep Learning for Plant Diseases: A Broad Analysis and Discussion. Plants 2020, 9, 1302. [Google Scholar] [CrossRef] [PubMed]
  82. Abdulridha, J.; Batuman, O.; Ampatzidis, Y. UAV-Based Remote Sensing Technique to Detect Citrus Canker Disease Utilizing Hyperspectral Imaging and Machine Learning. Remote Sens. 2019, 11, 1373. [Google Scholar] [CrossRef] [Green Version]
  83. Tetila, E.C.; Machado, B.B.; Astolfi, G.; Belete, N.A.D.S.; Amorim, W.P.; Roel, A.R.; Pistori, H. Detection and classification of soybean pests using deep learning with UAV images. Comput. Electron. Agric. 2020, 179, 105836. [Google Scholar] [CrossRef]
  84. Zhang, X.; Han, L.; Dong, Y.; Shi, Y.; Huang, W.; Han, L.; González-Moreno, P.; Ma, H.; Ye, H.; Sobeih, T. A Deep Learning-Based Approach for Automated Yellow Rust Disease Detection from High-Resolution Hyperspectral UAV Images. Remote Sens. 2019, 11, 1554. [Google Scholar] [CrossRef] [Green Version]
  85. Hu, G.; Yin, C.; Wan, M.; Zhang, Y.; Fang, Y. Recognition of diseased Pinus trees in UAV images using deep learning and AdaBoost classifier. Biosyst. Eng. 2020, 194, 138–151. [Google Scholar] [CrossRef]
  86. Tetila, E.C.; Machado, B.B.; Menezes, G.K.; Oliveira, A.D.S.; Alvarez, M.; Amorim, W.P.; Belete, N.A.D.S.; Da Silva, G.G.; Pistori, H. Automatic Recognition of Soybean Leaf Diseases Using UAV Images and Deep Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2019, 17, 903–907. [Google Scholar] [CrossRef]
  87. Wiesner-Hanks, T.; Wu, H.; Stewart, E.; DeChant, C.; Kaczmar, N.; Lipson, H.; Gore, M.A.; Nelson, R.J. Millimeter-Level Plant Disease Detection from Aerial Photographs via Deep Learning and Crowdsourced Data. Front. Plant Sci. 2019, 10, 1550. [Google Scholar] [CrossRef] [Green Version]
  88. Albetis, J.; Jacquin, A.; Goulard, M.; Poilvé, H.; Rousseau, J.; Clenet, H.; Dedieu, G.; Duthoit, S. On the Potentiality of UAV Multispectral Imagery to Detect Flavescence dorée and Grapevine Trunk Diseases. Remote Sens. 2018, 11, 23. [Google Scholar] [CrossRef] [Green Version]
  89. Kerkech, M.; Hafiane, A.; Canals, R. Vine disease detection in UAV multispectral images using optimized image registration and deep learning segmentation approach. Comput. Electron. Agric. 2020, 174, 105446. [Google Scholar] [CrossRef]
  90. Bendig, J.; Willkomm, M.; Tilly, N.; Gnyp, M.L.; Bennertz, S.; Qiang, C.; Miao, Y.; Lenz-Wiedemann, V.I.S.; Bareth, G. Very high resolution crop surface models (CSMs) from UAV-based stereo images for rice growth monitoring In Northeast China. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 40, 45–50. [Google Scholar] [CrossRef] [Green Version]
  91. Ni, J.; Yao, L.; Zhang, J.; Cao, W.; Zhu, Y.; Tai, X. Development of an Unmanned Aerial Vehicle-Borne Crop-Growth Monitoring System. Sensors 2017, 17, 502. [Google Scholar] [CrossRef] [Green Version]
  92. Fu, Z.; Jiang, J.; Gao, Y.; Krienke, B.; Wang, M.; Zhong, K.; Cao, Q.; Tian, Y.; Zhu, Y.; Cao, W.; et al. Wheat Growth Monitoring and Yield Estimation based on Multi-Rotor Unmanned Aerial Vehicle. Remote Sens. 2020, 12, 508. [Google Scholar] [CrossRef] [Green Version]
  93. Zhao, J.; Zhang, X.; Gao, C.; Qiu, X.; Tian, Y.; Zhu, Y.; Cao, W. Rapid Mosaicking of Unmanned Aerial Vehicle (UAV) Images for Crop Growth Monitoring Using the SIFT Algorithm. Remote Sens. 2019, 11, 1226. [Google Scholar] [CrossRef] [Green Version]
  94. Li, B.; Xu, X.; Zhang, L.; Han, J.; Bian, C.; Li, G.; Liu, J.; Jin, L. Above-ground biomass estimation and yield prediction in potato by using UAV-based RGB and hyperspectral imaging. ISPRS J. Photogramm. Remote Sens. 2020, 162, 161–172. [Google Scholar] [CrossRef]
  95. Maimaitijiang, M.; Sagan, V.; Sidike, P.; Hartling, S.; Esposito, F.; Fritschi, F.B. Soybean yield prediction from UAV using multimodal data fusion and deep learning. Remote Sens. Environ. 2020, 237, 111599. [Google Scholar] [CrossRef]
  96. Nebiker, S.; Lack, N.; Abächerli, M.; Läderach, S. Light-weight multispectral UAV sensors and their capabilities for predicting grain yield and detecting plant diseases. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41. [Google Scholar]
  97. Stroppiana, D.; Migliazzi, M.; Chiarabini, V.; Crema, A.; Musanti, M.; Franchino, C.; Villa, P. Rice yield estimation using multispectral data from UAV: A preliminary experiment in northern Italy. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 4467–4664. [Google Scholar]
  98. Kussul, N.; Lavreniuk, M.; Skakun, S.; Shelestov, A. Deep Learning Classification of Land Cover and Crop Types Using Remote Sensing Data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 778–782. [Google Scholar] [CrossRef]
  99. Teimouri, N.; Dyrmann, M.; Jørgensen, R.N. A Novel Spatio-Temporal FCN-LSTM Network for Recognizing Various Crop Types Using Multi-Temporal Radar Images. Remote Sens. 2019, 11, 990. [Google Scholar] [CrossRef] [Green Version]
  100. Wang, S.; Di Tommaso, S.; Faulkner, J.; Friedel, T.; Kennepohl, A.; Strey, R.; Lobell, D. Mapping Crop Types in Southeast India with Smartphone Crowdsourcing and Deep Learning. Remote Sens. 2020, 12, 2957. [Google Scholar] [CrossRef]
  101. Rebetez, J.; Satizábal, H.F.; Mota, M.; Noll, D.; Büchi, L.; Wendling, M.; Cannelle, B.; Perez-Uribe, A.; Burgos, S. Augmenting a Convolutional Neural Network with Local Histograms-A Case Study in Crop Classification from High-Resolution UAV Imagery; ESANN: Bruges, Belgium, 2016. [Google Scholar]
  102. Zhao, L.; Shi, Y.; Liu, B.; Hovis, C.; Duan, Y.; Shi, Z. Finer Classification of Crops by Fusing UAV Images and Sentinel-2A Data. Remote Sens. 2019, 11, 3012. [Google Scholar] [CrossRef] [Green Version]
  103. Hinton, G.E.; Salakhutdinov, R.R. Reducing the Dimensionality of Data with Neural Networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef] [Green Version]
  104. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
  105. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  106. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  107. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27. [Google Scholar]
  108. Reyes, M.F.; Auer, S.; Merkle, N.M.; Henry, C.; Schmitt, M. SAR-to-Optical Image Translation Based on Conditional Generative Adversarial Networks - Optimization, Opportunities and Limits. Remote Sens. 2019, 11, 2067. [Google Scholar] [CrossRef] [Green Version]
  109. Wang, X.; Yan, H.; Huo, C.; Yu, J.; Pan, C. Enhancing Pix2Pix for Remote Sensing Image Classification. In Proceedings of the International Conference on Pattern Recognition, Beijing, China, 20–24 August 2018; pp. 2332–2336. [Google Scholar]
  110. Lv, N.; Ma, H.; Chen, C.; Pei, Q.; Zhou, Y.; Xiao, F.; Li, J. Remote Sensing Data Augmentation Through Adversarial Training. Int. Geosci. Remote Sens. Symp. 2020, 2511–2514. [Google Scholar]
  111. Ren, C.X.; Ziemann, A.; Theiler, J.; Durieux, A.M.S. Deep snow: Synthesizing remote sensing imagery with generative adversarial nets. In Proceedings of the 2020 Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, Online only. 19 May 2020; pp. 196–205. [Google Scholar] [CrossRef]
  112. Everingham, M.; Eslami, S.M.A.; Van Gool, L.; Williams, C.K.I.; Winn, J.; Zisserman, A. The Pascal Visual Object Classes Challenge: A Retrospective. Int. J. Comput. Vis. 2015, 111, 98–136. [Google Scholar] [CrossRef]
  113. Ha, J.G.; Moon, H.; Kwak, J.T.; Hassan, S.I.; Dang, M.; Lee, O.N.; Park, H.Y. Deep convolutional neural network for classifying Fusarium wilt of radish from unmanned aerial vehicles. J. Appl. Remote Sens. 2017, 11. [Google Scholar] [CrossRef]
  114. Huang, H.; Deng, J.; Lan, Y.; Yang, A.; Zhang, L.; Wen, S.; Zhang, H.; Zhang, Y.; Deng, Y. Detection of Helminthosporium Leaf Blotch Disease Based on UAV Imagery. Appl. Sci. 2019, 9, 558. [Google Scholar] [CrossRef] [Green Version]
  115. De Camargo, T.; Schirrmann, M.; Landwehr, N.; Dammer, K.-H.; Pflanz, M. Optimized Deep Learning Model as a Basis for Fast UAV Mapping of Weed Species in Winter Wheat Crops. Remote Sens. 2021, 13, 1704. [Google Scholar] [CrossRef]
  116. Ukaegbu, U.; Tartibu, L.; Okwu, M.; Olayode, I. Development of a Light-Weight Unmanned Aerial Vehicle for Precision Agriculture. Sensors 2021, 21, 4417. [Google Scholar] [CrossRef]
  117. Onishi, M.; Ise, T. Automatic classification of trees using a UAV onboard camera and deep learning. arXiv Prepr. 2018, arXiv:1804.10390. [Google Scholar]
  118. Zhao, J.; Zhong, Y.; Hu, X.; Wei, L.; Zhang, L. A robust spectral-spatial approach to identifying heterogeneous crops using remote sensing imagery with high spectral and spatial resolutions. Remote Sens. Environ. 2020, 239, 111605. [Google Scholar] [CrossRef]
  119. Chen, C.-J.; Huang, Y.-Y.; Li, Y.-S.; Chen, Y.-C.; Chang, C.-Y.; Huang, Y.-M. Identification of Fruit Tree Pests with Deep Learning on Embedded Drone to Achieve Accurate Pesticide Spraying. IEEE Access 2021, 9, 21986–21997. [Google Scholar] [CrossRef]
  120. Li, F.; Liu, Z.; Shen, W.; Wang, Y.; Wang, Y.; Ge, C.; Sun, F.; Lan, P. A Remote Sensing and Airborne Edge-Computing Based Detection System for Pine Wilt Disease. IEEE Access 2021, 9, 66346–66360. [Google Scholar] [CrossRef]
  121. Valente, J.; Doldersum, M.; Roers, C.; Kooistra, L. Detecting rumex obtusifolius weed plants in grasslands from UAV RGB imagery using deep learning. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 4, 179–185. [Google Scholar] [CrossRef] [Green Version]
  122. Veeranampalayam Sivakumar, A.N.; Li, J.; Scott, S.; Psota, E.; Jhala, A.J.; Luck, J.D.; Shi, Y. Comparison of object detection and patch-based classification deep learning models on mid-to late-season weed detection in UAV imagery. Remote Sens. 2020, 12, 2136. [Google Scholar] [CrossRef]
  123. Apolo-Apolo, O.; Martínez-Guanter, J.; Egea, G.; Raja, P.; Pérez-Ruiz, M. Deep learning techniques for estimation of the yield and size of citrus fruits using a UAV. Eur. J. Agron. 2020, 115, 126030. [Google Scholar] [CrossRef]
  124. Chen, Y.; Lee, W.S.; Gan, H.; Peres, N.; Fraisse, C.; Zhang, Y.; He, Y. Strawberry Yield Prediction Based on a Deep Neural Network Using High-Resolution Aerial Orthoimages. Remote Sens. 2019, 11, 1584. [Google Scholar] [CrossRef] [Green Version]
  125. Csillik, O.; Cherbini, J.; Johnson, R.; Lyons, A.; Kelly, M. Identification of Citrus Trees from Unmanned Aerial Vehicle Imagery Using Convolutional Neural Networks. Drones 2018, 2, 39. [Google Scholar] [CrossRef] [Green Version]
  126. Zhang, Z.; Flores, P.; Igathinathane, C.; Naik, D.L.; Kiran, R.; Ransom, J.K. Wheat Lodging Detection from UAS Imagery Using Machine Learning Algorithms. Remote Sens. 2020, 12, 1838. [Google Scholar] [CrossRef]
  127. Stewart, E.L.; Wiesner-Hanks, T.; Kaczmar, N.; DeChant, C.; Wu, H.; Lipson, H.; Nelson, R.J.; Gore, M.A. Quantitative Phenotyping of Northern Leaf Blight in UAV Images Using Deep Learning. Remote Sens. 2019, 11, 2209. [Google Scholar] [CrossRef] [Green Version]
  128. Kerkech, M.; Hafiane, A.; Canals, R. VddNet: Vine Disease Detection Network Based on Multispectral Images and Depth Map. Remote Sens. 2020, 12, 3305. [Google Scholar] [CrossRef]
  129. Zou, K.; Chen, X.; Zhang, F.; Zhou, H.; Zhang, C. A Field Weed Density Evaluation Method Based on UAV Imaging and Modified U-Net. Remote Sens. 2021, 13, 310. [Google Scholar] [CrossRef]
  130. Osco, L.P.; Nogueira, K.; Ramos, A.P.M.; Pinheiro, M.M.F.; Furuya, D.E.G.; Gonçalves, W.N.; Jorge, L.A.D.C.; Junior, J.M.; dos Santos, J.A. Semantic segmentation of citrus-orchard using deep neural networks and multispectral UAV-based imagery. Precis. Agric. 2021, 22, 1–18. [Google Scholar] [CrossRef]
  131. Zhang, J.; Xie, T.; Yang, C.; Song, H.; Jiang, Z.; Zhou, G.; Zhang, D.; Feng, H.; Xie, J. Segmenting Purple Rapeseed Leaves in the Field from UAV RGB Imagery Using Deep Learning as an Auxiliary Means for Nitrogen Stress Detection. Remote Sens. 2020, 12, 1403. [Google Scholar] [CrossRef]
  132. Xu, W.; Yang, W.; Chen, S.; Wu, C.; Chen, P.; Lan, Y. Establishing a model to predict the single boll weight of cotton in northern Xinjiang by using high resolution UAV remote sensing data. Comput. Electron. Agric. 2020, 179, 105762. [Google Scholar] [CrossRef]
  133. Champ, J.; Mora-Fallas, A.; Goëau, H.; Mata-Montero, E.; Bonnet, P.; Joly, A. Instance segmentation for the fine detection of crop and weed plants by precision agricultural robots. Appl. Plant Sci. 2020, 8, e11373. [Google Scholar] [CrossRef] [PubMed]
  134. Mora-Fallas, A.; Goëau, H.; Joly, A.; Bonnet, P.; Mata-Montero, E. Instance segmentation for automated weeds and crops detection in farmlands. 2020. [Google Scholar]
  135. Toda, Y.; Okura, F.; Ito, J.; Okada, S.; Kinoshita, T.; Tsuji, H.; Saisho, D. Training instance segmentation neural network with synthetic datasets for crop seed phenotyping. Commun. Biol. 2020, 3, 173. [Google Scholar] [CrossRef] [Green Version]
  136. Khan, S.; Tufail, M.; Khan, M.T.; Khan, Z.A.; Iqbal, J.; Wasim, A. Real-time recognition of spraying area for UAV sprayers using a deep learning approach. PLoS ONE 2021, 16, e0249436. [Google Scholar] [CrossRef]
  137. Deng, J.; Zhong, Z.; Huang, H.; Lan, Y.; Han, Y.; Zhang, Y. Lightweight Semantic Segmentation Network for Real-Time Weed Mapping Using Unmanned Aerial Vehicles. Appl. Sci. 2020, 10, 7132. [Google Scholar] [CrossRef]
  138. Liu, C.; Li, H.; Su, A.; Chen, S.; Li, W. Identification and Grading of Maize Drought on RGB Images of UAV Based on Improved U-Net. IEEE Geosci. Remote Sens. Lett. 2020, 18, 198–202. [Google Scholar] [CrossRef]
  139. Tri, N.C.; Duong, H.N.; Van Hoai, T.; Van Hoa, T.; Nguyen, V.H.; Toan, N.T.; Snasel, V. A novel approach based on deep learning techniques and UAVs to yield assessment of paddy fields. In Proceedings of the 2017 9th International Conference on Knowledge and Systems Engineering (KSE), Hue, Vietnam, 19–21 October 2017; pp. 257–262. [Google Scholar]
  140. Osco, L.P.; Arruda, M.D.S.D.; Gonçalves, D.N.; Dias, A.; Batistoti, J.; de Souza, M.; Gomes, F.D.G.; Ramos, A.P.M.; Jorge, L.A.D.C.; Liesenberg, V.; et al. A CNN approach to simultaneously count plants and detect plantation-rows from UAV imagery. ISPRS J. Photogramm. Remote Sens. 2021, 174, 1–17. [Google Scholar] [CrossRef]
  141. Osco, L.P.; Arruda, M.D.S.D.; Junior, J.M.; da Silva, N.B.; Ramos, A.P.M.; Moryia, A.S.; Imai, N.N.; Pereira, D.R.; Creste, J.E.; Matsubara, E.; et al. A convolutional neural network approach for counting and geolocating citrus-trees in UAV multispectral imagery. ISPRS J. Photogramm. Remote Sens. 2020, 160, 97–106. [Google Scholar] [CrossRef]
  142. Zheng, J.; Fu, H.; Li, W.; Wu, W.; Yu, L.; Yuan, S.; Tao, W.Y.W.; Pang, T.K.; Kanniah, K.D. Growing status observation for oil palm trees using Unmanned Aerial Vehicle (UAV) images. ISPRS J. Photogramm. Remote Sens. 2021, 173, 95–121. [Google Scholar] [CrossRef]
  143. Ampatzidis, Y.; Partel, V.; Costa, L. Agroview: Cloud-based application to process, analyze and visualize UAV-collected data for precision agriculture applications utilizing artificial intelligence. Comput. Electron. Agric. 2020, 174, 105457. [Google Scholar] [CrossRef]
  144. Pang, Y.; Shi, Y.; Gao, S.; Jiang, F.; Veeranampalayam-Sivakumar, A.-N.; Thompson, L.; Luck, J.; Liu, C. Improved crop row detection with deep neural network for early-season maize stand count in UAV imagery. Comput. Electron. Agric. 2020, 178, 105766. [Google Scholar] [CrossRef]
  145. Wu, J.; Yang, G.; Yang, X.; Xu, B.; Han, L.; Zhu, Y. Automatic Counting of in situ Rice Seedlings from UAV Images Based on a Deep Fully Convolutional Neural Network. Remote Sens. 2019, 11, 691. [Google Scholar] [CrossRef] [Green Version]
  146. Yang, M.-D.; Tseng, H.-H.; Hsu, Y.-C.; Yang, C.-Y.; Lai, M.-H.; Wu, D.-H. A UAV Open Dataset of Rice Paddies for Deep Learning Practice. Remote Sens. 2021, 13, 1358. [Google Scholar] [CrossRef]
  147. Zhao, W.; Yamada, W.; Li, T.; Digman, M.; Runge, T. Augmenting Crop Detection for Precision Agriculture with Deep Visual Transfer Learning—A Case Study of Bale Detection. Remote Sens. 2020, 13, 23. [Google Scholar] [CrossRef]
  148. Ampatzidis, Y.; Partel, V. UAV-Based High Throughput Phenotyping in Citrus Utilizing Multispectral Imaging and Artificial Intelligence. Remote Sens. 2019, 11, 410. [Google Scholar] [CrossRef] [Green Version]
  149. Aeberli, A.; Johansen, K.; Robson, A.; Lamb, D.; Phinn, S. Detection of Banana Plants Using Multi-Temporal Multispectral UAV Imagery. Remote Sens. 2021, 13, 2123. [Google Scholar] [CrossRef]
  150. Fan, Z.; Lu, J.; Gong, M.; Xie, H.; Goodman, E.D. Automatic Tobacco Plant Detection in UAV Images via Deep Neural Networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 876–887. [Google Scholar] [CrossRef]
  151. Zan, X.; Zhang, X.; Xing, Z.; Liu, W.; Zhang, X.; Su, W.; Liu, Z.; Zhao, Y.; Li, S. Automatic Detection of Maize Tassels from UAV Images by Combining Random Forest Classifier and VGG16. Remote Sens. 2020, 12, 3049. [Google Scholar] [CrossRef]
  152. Liu, Y.; Cen, C.; Che, Y.; Ke, R.; Ma, Y.; Ma, Y. Detection of Maize Tassels from UAV RGB Imagery with Faster R-CNN. Remote Sens. 2020, 12, 338. [Google Scholar] [CrossRef] [Green Version]
  153. Yuan, W.; Choi, D. UAV-Based Heating Requirement Determination for Frost Management in Apple Orchard. Remote Sens. 2021, 13, 273. [Google Scholar] [CrossRef]
  154. Dyson, J.; Mancini, A.; Frontoni, E.; Zingaretti, P. Deep Learning for Soil and Crop Segmentation from Remotely Sensed Data. Remote Sens. 2019, 11, 1859. [Google Scholar] [CrossRef] [Green Version]
  155. Feng, Q.; Yang, J.; Liu, Y.; Ou, C.; Zhu, D.; Niu, B.; Liu, J.; Li, B. Multi-Temporal Unmanned Aerial Vehicle Remote Sensing for Vegetable Mapping Using an Attention-Based Recurrent Convolutional Neural Network. Remote Sens. 2020, 12, 1668. [Google Scholar] [CrossRef]
  156. Der Yang, M.; Tseng, H.H.; Hsu, Y.C.; Tseng, W.C. Real-time Crop Classification Using Edge Computing and Deep Learning. In Proceedings of the 2020 IEEE 17th Annual Consumer Communications & Networking Conference, Las Vegas, NV, USA, 10–13 January 2020; pp. 1–4. [Google Scholar]
  157. Yang, M.-D.; Boubin, J.G.; Tsai, H.P.; Tseng, H.-H.; Hsu, Y.-C.; Stewart, C.C. Adaptive autonomous UAV scouting for rice lodging assessment using edge computing with deep learning EDANet. Comput. Electron. Agric. 2020, 179, 105817. [Google Scholar] [CrossRef]
  158. Zhang, Q.; Liu, Y.; Gong, C.; Chen, Y.; Yu, H. Applications of Deep Learning for Dense Scenes Analysis in Agriculture: A Review. Sensors 2020, 20, 1520. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  159. Zhong, Y.; Hu, X.; Luo, C.; Wang, X.; Zhao, J.; Zhang, L. WHU-Hi: UAV-borne hyperspectral with high spatial resolution (H2) benchmark datasets and classifier for precise crop identification based on deep convolutional neural network with CRF. Remote Sens. Environ. 2020, 250, 112012. [Google Scholar] [CrossRef]
  160. Wiesner-Hanks, T.; Stewart, E.L.; Kaczmar, N.; DeChant, C.; Wu, H.; Nelson, R.J.; Lipson, H.; Gore, M.A. Image set for deep learning: Field images of maize annotated with disease symptoms. BMC Res. Notes 2018, 11, 440. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  161. Daudt, R.C.; Le Saux, B.; Boulch, A.; Gousseau, Y. Multitask learning for large-scale semantic change detection. Comput. Vis. Image Underst. 2019, 187, 102783. [Google Scholar] [CrossRef] [Green Version]
  162. Zhang, Y. CSIF. figshare. Dataset. 2018. [Google Scholar]
  163. Oldoni, L.V.; Sanches, I.D.; Picoli, M.C.A.; Covre, R.M.; Fronza, J.G. LEM+ dataset: For agricultural remote sensing applications. Data Brief 2020, 33, 106553. [Google Scholar] [CrossRef] [PubMed]
  164. Ferreira, A.; Felipussi, S.C.; Pires, R.; Avila, S.; Santos, G.; Lambert, J.; Huang, J.; Rocha, A. Eyes in the Skies: A Data-Driven Fusion Approach to Identifying Drug Crops from Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 4773–4786. [Google Scholar] [CrossRef]
  165. Rußwurm, M.; Pelletier, C.; Zollner, M.; Lefèvre, S.; Körner, M. BreizhCrops: A time series dataset for crop type mapping. arXiv Prepr. 2019, arXiv:1905.11893. [Google Scholar] [CrossRef]
  166. Rustowicz, R.; Cheong, R.; Wang, L.; Ermon, S.; Burke, M.; Lobell, D. Semantic Segmentation of Crop Type in Ghana Dataset. Available online: https://doi.org/10.34911/rdnt.ry138p (accessed on 17 October 2021). [CrossRef]
  167. Rustowicz, R.; Cheong, R.; Wang, L.; Ermon, S.; Burke, M.; Lobell, D. Semantic Segmentation of Crop Type in South Sudan Dataset. Available online: https://doi.org/10.34911/rdnt.v6kx6n (accessed on 17 October 2021). [CrossRef]
  168. Torre, M.; Remeseiro, B.; Radeva, P.; Martinez, F. DeepNEM: Deep Network Energy-Minimization for Agricultural Field Segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 726–737. [Google Scholar] [CrossRef]
  169. United States Geological Survey. EarthExplorer. Available online: https://earthexplorer.usgs.gov/ (accessed on 17 October 2021).
  170. European Space Agency. Copernicus Open Access Hub. Available online: https://scihub.copernicus.eu/dhus/#/home (accessed on 17 October 2021).
  171. Weikmann, G.; Paris, C.; Bruzzone, L. TimeSen2Crop: A Million Labeled Samples Dataset of Sentinel 2 Image Time Series for Crop-Type Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 4699–4708. [Google Scholar] [CrossRef]
  172. Khan, W.Z.; Ahmed, E.; Hakak, S.; Yaqoob, I.; Ahmed, A. Edge computing: A survey. Future Gener. Comput. Syst. 2019, 97, 219–235. [Google Scholar] [CrossRef]
  173. Liu, J.; Xue, Y.; Ren, K.; Song, J.; Windmill, C.; Merritt, P. High-Performance Time-Series Quantitative Retrieval from Satellite Images on a GPU Cluster. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2810–2821. [Google Scholar] [CrossRef] [Green Version]
  174. Hakak, S.; A Latif, S.; Amin, G. A Review on Mobile Cloud Computing and Issues in it. Int. J. Comput. Appl. 2013, 75, 1–4. [Google Scholar] [CrossRef]
  175. Jeong, H.-J.; Choi, J.D.; Ha, Y.-G. Vision Based Displacement Detection for Stabilized UAV Control on Cloud Server. Mob. Inf. Syst. 2016, 2016, 1–11. [Google Scholar] [CrossRef]
  176. Apolo-Apolo, O.E.; Pérez-Ruiz, M.; Guanter, J.M.; Valente, J. A Cloud-Based Environment for Generating Yield Estimation Maps from Apple Orchards Using UAV Imagery and a Deep Learning Technique. Front. Plant Sci. 2020, 11, 1086. [Google Scholar] [CrossRef]
  177. Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge Computing: Vision and Challenges. IEEE Internet Things J. 2016, 3, 637–646. [Google Scholar] [CrossRef]
  178. Ahmed, E.; Akhunzada, A.; Whaiduzzaman, M.; Gani, A.; Ab Hamid, S.H.; Buyya, R. Network-centric performance analysis of runtime application migration in mobile cloud computing. Simul. Model. Pr. Theory 2015, 50, 42–56. [Google Scholar] [CrossRef]
  179. Horstrand, P.; Guerra, R.; Rodriguez, A.; Diaz, M.; Lopez, S.; Lopez, J.F. A UAV Platform Based on a Hyperspectral Sensor for Image Capturing and On-Board Processing. IEEE Access 2019, 7, 66919–66938. [Google Scholar] [CrossRef]
  180. Da Silva, J.F.; Brito, A.V.; De Lima, J.A.G.; De Moura, H.N. An embedded system for aerial image processing from unmanned aerial vehicles. In Proceedings of the 2015 Brazilian Symposium on Computing Systems Engineering (SBESC), Foz do Iguacu, Brazil, 3–6 November 2015; pp. 154–157. [Google Scholar]
  181. Xu, D.; Li, T.; Li, Y.; Su, X.; Tarkoma, S.; Jiang, T.; Crowcroft, J.; Hui, P. Edge Intelligence: Architectures, Challenges, and Applications. arXiv Prepr. 2020, arXiv:2003.12172. [Google Scholar]
  182. Sze, V.; Chen, Y.-H.; Yang, T.-J.; Emer, J.S. Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Proc. IEEE 2017, 105, 2295–2329. [Google Scholar] [CrossRef] [Green Version]
  183. Han, S.; Mao, H.; Dally, W.J. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. In Proceedings of the International Conference on Learning Representations, San Juan, Puerto Rico, 2–4 May 2016. [Google Scholar]
  184. Egli, S.; Höpke, M. CNN-Based Tree Species Classification Using High Resolution RGB Image Data from Automated UAV Observations. Remote Sens. 2020, 12, 3892. [Google Scholar] [CrossRef]
  185. Fountsop, A.N.; Fendji, J.L.E.K.; Atemkeng, M. Deep Learning Models Compression for Agricultural Plants. Appl. Sci. 2020, 10, 6866. [Google Scholar] [CrossRef]
  186. Blekos, K.; Nousias, S.; Lalos, A.S. Efficient automated U-Net based tree crown delineation using UAV multi-spectral imagery on embedded devices. In Proceedings of the 2020 IEEE 18th International Conference on Industrial Informatics (INDIN), Warwick, UK, 20–23 July 2020; Volume 1, pp. 541–546. [Google Scholar]
  187. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv Prepr. 2016, arXiv:1602.07360. [Google Scholar]
  188. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
  189. Ma, N.; Zhang, X.; Zheng, H.; Sun, J. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. In Proceedings of the European Conference on Computer Vision, Salt Lake City, UT, USA, 18–23 June 2018; pp. 122–138. [Google Scholar]
  190. Wang, S.; Zhao, J.; Ta, N.; Zhao, X.; Xiao, M.; Wei, H. A real-time deep learning forest fire monitoring algorithm based on an improved Pruned + KD model. J. Real-Time Image Process. 2021, 1–11. [Google Scholar] [CrossRef]
  191. Hua, X.; Wang, X.; Rui, T.; Shao, F.; Wang, D. Light-weight UAV object tracking network based on strategy gradient and attention mechanism. Knowledge-Based Syst. 2021, 224, 107071. [Google Scholar] [CrossRef]
  192. Han, S.; Pool, J.; Tran, J.; Dally, W.J. Learning both weights and connections for efficient neural networks. Neural Inf. Process. Syst. 2015, 28, 1135–1143. [Google Scholar]
  193. Srinivas, S.; Babu, R.V. Data-free Parameter Pruning for Deep Neural Networks. In Proceedings of the British Machine Vision Conference 2015 (BMVC), Swansea, UK, 7–10 September 2015. [Google Scholar] [CrossRef] [Green Version]
  194. Li, H.; Kadav, A.; Durdanovic, I.; Samet, H.; Graf, H.P. Pruning Filters for Efficient ConvNets. In Proceedings of the International Conference on Learning Representations, San Juan, Puerto Rico, 2–4 May 2016. [Google Scholar]
  195. Fan, W.; Xu, Z.; Liu, H.; Zongwei, Z. Machine Learning Agricultural Application Based on the Secure Edge Computing Platform. In Proceedings of the International Conference on Machine Learning, Online. 13–18 July 2020; pp. 206–220. [Google Scholar]
  196. Lebedev, V.; Ganin, Y.; Rakhuba, M.; Oseledets, I.; Lempitsky, V. Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  197. Kim, Y.; Park, E.; Yoo, S.; Choi, T.; Yang, L.; Shin, D. Compression of deep convolutional neural networks for fast and low power mobile applications. arXiv Prepr. 2015, arXiv:1511.06530. [Google Scholar]
  198. Jaderberg, M.; Vedaldi, A.; Zisserman, A. Speeding up Convolutional Neural Networks with Low Rank Expansions. In Proceedings of the British Machine Vision Conference, Nottingham, UK, 1–5 September 2014. [Google Scholar]
  199. Zhang, X.; Zou, J.; He, K.; Sun, J. Accelerating Very Deep Convolutional Networks for Classification and Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 1943–1955. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  200. Falaschetti, L.; Manoni, L.; Rivera, R.C.F.; Pau, D.; Romanazzi, G.; Silvestroni, O.; Tomaselli, V.; Turchetti, C. A Low-Cost, Low-Power and Real-Time Image Detector for Grape Leaf Esca Disease Based on a Compressed CNN. IEEE J. Emerg. Sel. Top. Circuits Syst. 2021, 11, 468–481. [Google Scholar] [CrossRef]
  201. Chen, W.; Wilson, J.; Tyree, S.; Weinberger, K.; Chen, Y. Compressing Neural Networks with the Hashing Trick. Int. Conf. Mach. Learn. 2015, 3, 2285–2294. [Google Scholar]
  202. Dettmers, T. 8-Bit Approximations for Parallelism in Deep Learning. In Proceedings of the International Conference on Learning Representations, San Juan, Puerto Rico, 2–4 May 2016. [Google Scholar]
  203. Zhou, A.; Yao, A.; Guo, Y.; Xu, L.; Chen, Y. Incremental Network Quantization: Towards Lossless CNNs with Low-precision Weights. In Proceedings of the International Conference on Learning Representations, Toulon, France, 24–26 April 2017. [Google Scholar]
  204. Choudhary, T.; Mishra, V.; Goswami, A.; Sarangapani, J. A comprehensive survey on model compression and acceleration. Artif. Intell. Rev. 2020, 53, 5113–5155. [Google Scholar] [CrossRef]
  205. Romero, A.; Ballas, N.; Kahou, S.E.; Chassang, A.; Gatta, C.; Bengio, Y. FitNets: Hints for Thin Deep Nets. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  206. Korattikara, A.; Rathod, V.; Murphy, K.; Welling, M. Bayesian dark knowledge. Neural Inf. Process. Syst. 2015, 28, 3438–3446. [Google Scholar]
  207. Kim, J.; Park, S.; Kwak, N. Paraphrasing Complex Network: Network Compression via Factor Transfer. Neural Inf. Process. Syst. 2018, 31, 2760–2769. [Google Scholar]
  208. Gou, J.; Yu, B.; Maybank, S.J.; Tao, D. Knowledge Distillation: A Survey. Int. J. Comput. Vis. 2021, 129, 1789–1819. [Google Scholar] [CrossRef]
  209. La Rosa, L.E.C.; Oliveira, D.A.B.; Zortea, M.; Gemignani, B.H.; Feitosa, R.Q. Learning Geometric Features for Improving the Automatic Detection of Citrus Plantation Rows in UAV Images. IEEE Geosci. Remote Sens. Lett. 2020, 1–5. [Google Scholar] [CrossRef]
  210. Qiu, W.; Ye, J.; Hu, L.; Yang, J.; Li, Q.; Mo, J.; Yi, W. Distilled-MobileNet Model of Convolutional Neural Network Simplified Structure for Plant Disease Recognition. Smart Agric. 2021, 3, 109. [Google Scholar]
  211. Ding, M.; Li, N.; Song, Z.; Zhang, R.; Zhang, X.; Zhou, H. A Lightweight Action Recognition Method for Unmanned-Aerial-Vehicle Video. In Proceedings of the 2020 IEEE 3rd International Conference on Electronics and Communication Engineering (ICECE), Xi’an, China, 14–16 December 2020; pp. 181–185. [Google Scholar]
  212. Dong, J.; Ota, K.; Dong, M. Real-Time Survivor Detection in UAV Thermal Imagery Based on Deep Learning. In Proceedings of the 2020 16th International Conference on Mobility, Sensing and Networking (MSN), Tokyo, Japan, 17–19 December 2020; pp. 352–359. [Google Scholar]
  213. Sherstjuk, V.; Zharikova, M.; Sokol, I. Forest fire monitoring system based on UAV team, remote sensing, and image processing. In Proceedings of the 2018 IEEE Second International Conference on Data Stream Mining & Processing (DSMP), Lviv, Ukraine, 21–25 August 2018; pp. 590–594. [Google Scholar]
  214. Sandino, J.; Vanegas, F.; Maire, F.; Caccetta, P.; Sanderson, C.; Gonzalez, F. UAV Framework for Autonomous Onboard Navigation and People/Object Detection in Cluttered Indoor Environments. Remote Sens. 2020, 12, 3386. [Google Scholar] [CrossRef]
  215. Jaiswal, D.; Kumar, P. Real-time implementation of moving object detection in UAV videos using GPUs. J. Real-Time Image Process. 2020, 17, 1301–1317. [Google Scholar] [CrossRef]
  216. Saifullah, A.; Agrawal, K.; Lu, C.; Gill, C. Multi-Core Real-Time Scheduling for Generalized Parallel Task Models. Real-Time Syst. 2013, 49, 404–435. [Google Scholar] [CrossRef]
  217. Madroñal, D.; Palumbo, F.; Capotondi, A.; Marongiu, A. Unmanned Vehicles in Smart Farming: A Survey and a Glance at Future Horizons. In Proceedings of the 2021 Drone Systems Engineering (DroneSE) and Rapid Simulation and Performance Evaluation: Methods and Tools Proceedings (RAPIDO’21), Budapest, Hungary, 18–20 January 2021; pp. 1–8. [Google Scholar]
  218. Li, W.; He, C.; Fu, H.; Zheng, J.; Dong, R.; Xia, M.; Yu, L.; Luk, W. A Real-Time Tree Crown Detection Approach for Large-Scale Remote Sensing Images on FPGAs. Remote Sens. 2019, 11, 1025. [Google Scholar] [CrossRef] [Green Version]
  219. Ma, Y.; Li, Q.; Chu, L.; Zhou, Y.; Xu, C. Real-Time Detection and Spatial Localization of Insulators for UAV Inspection Based on Binocular Stereo Vision. Remote Sens. 2021, 13, 230. [Google Scholar] [CrossRef]
  220. Rodríguez-Canosa, G.R.; Thomas, S.; Del Cerro, J.; Barrientos, A.; MacDonald, B. A Real-Time Method to Detect and Track Moving Objects (DATMO) from Unmanned Aerial Vehicles (UAVs) Using a Single Camera. Remote Sens. 2012, 4, 1090–1111. [Google Scholar] [CrossRef] [Green Version]
  221. Opromolla, R.; Fasano, G.; Accardo, D. A Vision-Based Approach to UAV Detection and Tracking in Cooperative Applications. Sensors 2018, 18, 3391. [Google Scholar] [CrossRef] [Green Version]
  222. Li, B.; Zhu, Y.; Wang, Z.; Li, C.; Peng, Z.-R.; Ge, L. Use of Multi-Rotor Unmanned Aerial Vehicles for Radioactive Source Search. Remote Sens. 2018, 10, 728. [Google Scholar] [CrossRef] [Green Version]
  223. Rebouças, R.A.; Da Cruz Eller, Q.; Habermann, M.; Shiguemori, E.H. Embedded system for visual odometry and localization of moving objects in images acquired by unmanned aerial vehicles. In Proceedings of the 2013 III Brazilian Symposium on Computing Systems Engineering, Rio De Janeiro, Brazil, 4–8 December 2013; pp. 35–40. [Google Scholar]
  224. Kizar, S.N.; Satyanarayana, G. Object detection and location estimation using SVS for UAVs. In Proceedings of the International Conference on Automatic Control and Dynamic Optimization Techniques, Pune, India, 9–10 September 2016; pp. 920–924. [Google Scholar]
  225. Abughalieh, K.M.; Sababha, B.H.; Rawashdeh, N.A. A video-based object detection and tracking system for weight sensitive UAVs. Multimedia Tools Appl. 2019, 78, 9149–9167. [Google Scholar] [CrossRef]
  226. Choi, H.; Geeves, M.; Alsalam, B.; Gonzalez, F. Open source computer-vision based guidance system for UAVs on-board decision making. In Proceedings of the 2016 IEEE aerospace conference, Big Sky, MO, USA, 5–12 March 2016; pp. 1–5. [Google Scholar]
  227. De Oliveira, D.C.; Wehrmeister, M.A. Using Deep Learning and Low-Cost RGB and Thermal Cameras to Detect Pedestrians in Aerial Images Captured by Multirotor UAV. Sensors 2018, 18, 2244. [Google Scholar] [CrossRef] [Green Version]
  228. Kersnovski, T.; Gonzalez, F.; Morton, K. A UAV system for autonomous target detection and gas sensing. In Proceedings of the 2017 IEEE aerospace conference, Big Sky, MO, USA, 4–11 March 2017; pp. 1–12. [Google Scholar]
  229. Basso, M.; Stocchero, D.; Henriques, R.V.B.; Vian, A.L.; Bredemeier, C.; Konzen, A.A.; De Freitas, E.P. Proposal for an Embedded System Architecture Using a GNDVI Algorithm to Support UAV-Based Agrochemical Spraying. Sensors 2019, 19, 5397. [Google Scholar] [CrossRef] [Green Version]
  230. Daryanavard, H.; Harifi, A. Implementing face detection system on uav using raspberry pi platform. In Proceedings of the Iranian Conference on Electrical Engineering, Mashhad, Iran, 8–10 May 2018; pp. 1720–1723. [Google Scholar]
  231. Safadinho, D.; Ramos, J.; Ribeiro, R.; Filipe, V.; Barroso, J.; Pereira, A. UAV Landing Using Computer Vision Techniques for Human Detection. Sensors 2020, 20, 613. [Google Scholar] [CrossRef] [Green Version]
  232. Natesan, S.; Armenakis, C.; Benari, G.; Lee, R. Use of UAV-Borne Spectrometer for Land Cover Classification. Drones 2018, 2, 16. [Google Scholar] [CrossRef] [Green Version]
  233. Benhadhria, S.; Mansouri, M.; Benkhlifa, A.; Gharbi, I.; Jlili, N. VAGADRONE: Intelligent and Fully Automatic Drone Based on Raspberry Pi and Android. Appl. Sci. 2021, 11, 3153. [Google Scholar] [CrossRef]
  234. Ayoub, N.; Schneider-Kamp, P. Real-Time On-Board Deep Learning Fault Detection for Autonomous UAV Inspections. Electronics 2021, 10, 1091. [Google Scholar] [CrossRef]
  235. Xu, L.; Luo, H. Towards autonomous tracking and landing on moving target. In Proceedings of the 2016 IEEE International Conference on Real-time Computing and Robotics (RCAR), Angkor Wat, Cambodia, 6–9 June 2016; pp. 620–628. [Google Scholar]
  236. Genc, H.; Zu, Y.; Chin, T.-W.; Halpern, M.; Reddi, V.J. Flying IoT: Toward Low-Power Vision in the Sky. IEEE Micro 2017, 37, 40–51. [Google Scholar] [CrossRef]
  237. Meng, L.; Peng, Z.; Zhou, J.; Zhang, J.; Lu, Z.; Baumann, A.; Du, Y. Real-Time Detection of Ground Objects Based on Unmanned Aerial Vehicle Remote Sensing with Deep Learning: Application in Excavator Detection for Pipeline Safety. Remote Sens. 2020, 12, 182. [Google Scholar] [CrossRef] [Green Version]
  238. Tijtgat, N.; Van Ranst, W.; Goedeme, T.; Volckaert, B.; De Turck, F. Embedded real-time object detection for a UAV warning system. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 2110–2118. [Google Scholar]
  239. Melián, J.; Jiménez, A.; Díaz, M.; Morales, A.; Horstrand, P.; Guerra, R.; López, S.; López, J. Real-Time Hyperspectral Data Transmission for UAV-Based Acquisition Platforms. Remote Sens. 2021, 13, 850. [Google Scholar] [CrossRef]
  240. Balamuralidhar, N.; Tilon, S.; Nex, F. MultEYE: Monitoring System for Real-Time Vehicle Detection, Tracking and Speed Estimation from UAV Imagery on Edge-Computing Platforms. Remote Sens. 2021, 13, 573. [Google Scholar] [CrossRef]
  241. Lammie, C.; Olsen, A.; Carrick, T.; Azghadi, M.R. Low-Power and High-Speed Deep FPGA Inference Engines for Weed Classification at the Edge. IEEE Access 2019, 7, 51171–51184. [Google Scholar] [CrossRef]
  242. Caba, J.; Díaz, M.; Barba, J.; Guerra, R.; López, J. FPGA-Based On-Board Hyperspectral Imaging Compression: Benchmarking Performance and Energy Efficiency against GPU Implementations. Remote Sens. 2020, 12, 3741. [Google Scholar] [CrossRef]
  243. Zoph, B.; Le, Q.V. Neural architecture search with reinforcement learning. arXiv Prepr. 2016, arXiv:1611.01578. [Google Scholar]
  244. Liu, D.; Kong, H.; Luo, X.; Liu, W.; Subramaniam, R. Bringing AI to Edge: From Deep Learning’s Perspective. arXiv Prepr. 2020, arXiv:2011.14808. [Google Scholar]
  245. Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359. [Google Scholar] [CrossRef]
  246. Xiao, T.; Zhang, J.; Yang, K.; Peng, Y.; Zhang, Z. Error-driven incremental learning in deep convolutional neural network for large-scale image classification. In Proceedings of the 22nd ACM international conference on Multimedia, Orlando, FL, USA, 3–7 November 2014; pp. 177–186. [Google Scholar]
  247. Kai, C.; Zhou, H.; Yi, Y.; Huang, W. Collaborative Cloud-Edge-End Task Offloading in Mobile-Edge Computing Networks With Limited Communication Capability. IEEE Trans. Cogn. Commun. Netw. 2020, 7, 624–634. [Google Scholar] [CrossRef]
  248. Bonawitz, K.; Eichner, H.; Grieskamp, W.; Huba, D.; Ingerman, A.; Ivanov, V.; Kiddon, C.; Konečný, J.; Mazzocchi, S.; McMahan, H.B. Towards federated learning at scale: System design. arXiv Prepr. 2019, arXiv:1902.01046. [Google Scholar]
Figure 1. Examples of typical UAVs: (a) A fixed-wing UAV “AgEagle RX60” from AgEagle (https://ageagle.com/agriculture/# (accessed on 17 October 2021)); (b) An unmanned helicopter “Shuixing No.1” from Hanhe (http://www.hanhe-aviation.com/products2.html (accessed on 17 October 2021)); (c) A flapping-wing UAV “AVES Series” from the Drone Bird Company (https://www.thedronebird.com/aves/ (accessed on 17 October 2021)); (d) A hybrid UAV “Linglong” from Northwestern Polytechnical University (https://wurenji.nwpu.edu.cn/cpyf/cpjj1/xzjyfj_ll_.htm (accessed on 17 October 2021)).
Figure 2. Cloud computing paradigm and edge computing paradigm for UAV RS. (a) Cloud computing paradigm; (b) Edge computing paradigm.
Table 1. Categories and characteristics of UAVs according to operational risks [45].
Category | Major Metrics
mini UAV | empty weight < 0.25 kg, flight altitude ≤ 50 m, max speed ≤ 40 km/h
light UAV | empty weight ≤ 4 kg, max take-off weight ≤ 7 kg, max speed ≤ 100 km/h
small UAV | empty weight ≤ 15 kg, or max take-off weight ≤ 25 kg
medium UAV | empty weight > 15 kg, 25 kg < max take-off weight ≤ 150 kg
large UAV | max take-off weight > 150 kg
Table 2. Categories and characteristics of UAVs according to aerodynamic features [9,46,47].
Category | Advantages | Drawbacks
fixed wing | long range and endurance, large load, fast flight speed | high requirements for take-off and landing, poor mobility, no hovering capability
rotary wing (unmanned helicopter) | long range, large load, hovering capability, low requirement for lifting and landing | slow speed, difficulty in operating, high maintenance cost
rotary wing (multi-rotor UAVs) | small in size, flexible, hovering capability, almost no requirement for lifting and landing | short range and endurance, slow speed, small load
flapping-wing | flexible, small in size | slow speed, single drive mode
hybrid | flexibility in vertical take-off and landing, fast speed, long range | complex structure, high maintenance cost
Table 3. The major characteristics and applications of sensors mounted on UAVs in PA.
Sensors | Major Characteristics | Typical Applications
RGB imaging | obtains images in the visible spectrum, with the advantages of high resolution, light weight, low cost and ease of use | crop recognition, plant defect and greenness monitoring [8,49,50,51]
Multispectral imager | provides centimeter-level, high-spatial-resolution RS data with multiple bands from the visible to the near infrared | leaf area index (LAI) estimation [52], crop disease and weed monitoring and mapping [18,53], nutrient deficiency diagnosis [54]
Hyperspectral imager | provides a large number of continuous narrow wavebands covering the ultraviolet to longwave infrared spectrum | crop species classification with similar spectral features [55], soil moisture content monitoring [56], crop yield estimation [57]
Thermal sensors | use the emitted radiation in the thermal infrared range of the electromagnetic spectrum [58], and provide measurements of energy fluxes and temperatures of the earth’s surface [59] | monitoring of water stress, crop diseases and plant phenotyping, estimation of crop yield [2], support of decision making for irrigation scheduling and harvesting operations [60]
LiDAR | uses a laser as the radiation source, generally working at wavelengths from the infrared to the ultraviolet, with the advantages of narrow beams, wide speed-measurement ranges, and strong resistance to electromagnetic and clutter interference [61] | detecting individual crops [13], measuring canopy structure and height [62], predicting biomass and leaf nitrogen content [63]
SAR | provides high-resolution, multi-polarization, multi-frequency images in all weather, day and night | crop identification and land cover mapping [64], crop and cropland parameter extraction such as soil salt and moisture [65], crop yield estimation [66]
Table 4. DL-based methods for typical UAV RS applications in PA.
Area | Task | Specific Application | Type | Model | Reference
Crop pest and disease detection | Classification | Soybean leaf diseases recognition | CNN | Inception, VGG19, Xception, ResNet-50 | [86]
Crop pest and disease detection | Classification | Classifying fusarium wilt of radish | CNN | VGG-A | [113]
Crop pest and disease detection | Classification | Detection of helminthosporium leaf blotch disease | CNN | Customized CNN | [114]
Crop pest and disease detection | Object detection | Detection of pine wilt disease | CNN | YOLOv4 | [120]
Crop pest and disease detection | Object detection | Identification of fruit tree pests | CNN | YOLOv3-tiny | [119]
Crop pest and disease detection | Object detection | Recognition of spraying area | CNN | Customized CNN | [136]
Crop pest and disease detection | Semantic segmentation | Quantitative phenotyping of northern leaf blight | CNN | Mask R-CNN | [127]
Crop pest and disease detection | Semantic segmentation | Vine disease detection | CNN | VddNet | [128]
Crop pest and disease detection | Semantic segmentation | Field weed density evaluation | CNN | Modified U-Net | [129]
Weed detection and mapping | Classification | Mapping of weed species in winter wheat crops | CNN | Modified ResNet-18 | [115]
Weed detection and mapping | Classification | Weed detection in line crops | CNN | ResNet-18 | [27]
Weed detection and mapping | Classification | Mid- to late-season weed detection | CNN | MobileNetV2 | [122]
Weed detection and mapping | Classification | Weed classification | CNN | ResNet-50 | [116]
Weed detection and mapping | Classification | Detecting Rumex obtusifolius weed plants | CNN | AlexNet | [121]
Weed detection and mapping | Object detection | Mid- to late-season weed detection | CNN | SSD, Faster R-CNN | [122]
Weed detection and mapping | Semantic segmentation | Large-scale semantic weed mapping | CNN | Modified SegNet | [78]
Weed detection and mapping | Semantic segmentation | Weed mapping | CNN | FCN | [76]
Weed detection and mapping | Semantic segmentation | Real-time weed mapping | CNN | FCN | [137]
Weed detection and mapping | Semantic segmentation | Identification and grading of maize drought | CNN | Modified U-Net | [138]
Crop growth monitoring and crop yield estimation | Classification | Yield assessment of paddy fields | CNN | Inception | [139]
Crop growth monitoring and crop yield estimation | Classification | Rice grain yield estimation at the ripening stage | CNN | AlexNet | [16]
Crop growth monitoring and crop yield estimation | Classification | Identification of citrus trees | CNN | Customized CNN | [134]
Crop growth monitoring and crop yield estimation | Classification | Counting plants and detecting plantation rows | CNN | VGG19 | [140]
Crop growth monitoring and crop yield estimation | Classification | Counting and geolocation of citrus trees | CNN | VGG16 | [141]
Crop growth monitoring and crop yield estimation | Object detection | Strawberry yield prediction | CNN | Faster R-CNN | [124]
Crop growth monitoring and crop yield estimation | Object detection | Yield estimation of citrus fruits | CNN, RNN | Faster R-CNN | [123]
Crop growth monitoring and crop yield estimation | Object detection | Growing status observation for oil palm trees | CNN | Faster R-CNN | [142]
Crop growth monitoring and crop yield estimation | Object detection | Plant identification and counting | CNN | Faster R-CNN, YOLO | [143]
Crop growth monitoring and crop yield estimation | Object detection | Crop detection for early-season maize stand count | CNN | Mask Scoring R-CNN | [144]
Crop growth monitoring and crop yield estimation | Semantic segmentation | Predicting single boll weight of cotton | CNN | FCN | [132]
Crop growth monitoring and crop yield estimation | Semantic segmentation | Segmenting purple rapeseed leaves for nitrogen stress detection | CNN | U-Net | [131]
Crop growth monitoring and crop yield estimation | Semantic segmentation | Counting of in situ rice seedlings | CNN | Modified VGG16 + customized segmentation network | [145]
Crop type classification | Classification | Automatic classification of trees | CNN | GoogLeNet | [117]
Crop type classification | Classification | Identifying heterogeneous crops | CRF | SCRF | [118]
Crop type classification | Classification | Rice seedling detection | CNN | VGG16 | [146]
Crop type classification | Object detection | Augmenting bale detection | CNN, GAN | CycleGAN, Modified YOLOv3 | [147]
Crop type classification | Object detection | Phenotyping in citrus | CNN | YOLOv3 | [148]
Crop type classification | Object detection | Detection of banana plants | CNN | Customized CNN | [149]
Crop type classification | Object detection | Automatic tobacco plant detection | CNN | Customized CNN | [150]
Crop type classification | Object detection | Detection of maize tassels | CNN | Tassel region proposals based on morphological processing + VGG16 | [151]
Crop type classification | Object detection | Detection of maize tassels | CNN | Faster R-CNN | [152]
Crop type classification | Object detection | Frost management in apple orchard | CNN | YOLOv4 | [153]
Crop type classification | Semantic segmentation | Soil and crop segmentation | CNN | Customized CNN | [154]
Crop type classification | Semantic segmentation | Vegetable mapping | RNN | Attention-based RNN | [155]
Crop type classification | Semantic segmentation | Crop classification | CNN | SegNet | [156]
Crop type classification | Semantic segmentation | Semantic segmentation of citrus orchard | CNN | FCN, U-Net, SegNet, DDCN, DeepLabV3+ | [130]
Crop type classification | Semantic segmentation | UAV scouting for rice lodging assessment | CNN | EDANet | [157]
Table 5. A compilation of publicly available datasets with labels for PA applications.
Source | Dataset | Platform | Data Types | Applications | Dataset Links
UAV | WHU-Hi [159] | DJI Matrice 600 Pro & Leica Aibot X6 V1 | hyperspectral | Accurate crop classification and hyperspectral image classification | http://rsidea.whu.edu.cn/resource_WHUHi_sharing.htm (accessed on 17 October 2021)
UAV | RiceSeedlingDataset [146] | DJI Phantom 4 Pro & DJI Zenmuse X7 | RGB | Rice object detection, rice seedling classification | https://github.com/aipal-nchu/RiceSeedlingDataset (accessed on 17 October 2021)
UAV | Purple rapeseed leaves dataset [131] | Matrice 600 | RGB | Segmentation of purple rapeseed leaves | https://figshare.com/s/e7471d81a1e35d5ab0d1 (accessed on 17 October 2021)
UAV | Stewart_NLBimages_2019 [127] | DJI Matrice 600 | sRGB | Northern Corn Leaf Blight detection | https://datacommons.cyverse.org/browse/iplant/home/shared/GoreLab/dataFromPubs/Stewart_NLBimages_2019 (accessed on 17 October 2021)
UAV | Field images of maize annotated with disease symptoms [160] | DJI Matrice 600 | sRGB | Corn disease detection | https://osf.io/p67rz/ (accessed on 17 October 2021)
UAV | RSC [145] | DJI S1000 | RGB | Rice counting | https://github.com/JintaoWU/RiceSeedingCounting (accessed on 17 October 2021)
UAV | weedMap [78] | DJI Inspire 2 | multispectral | Weed mapping | https://github.com/viariasv/weedMap/tree/86bf44d3ecde5470f662ff53693fadc542354343 (accessed on 17 October 2021)
UAV | oilPalmUav [142] | Skywalker X8 | RGB | Oil palm growth status monitoring | https://github.com/rs-dl/MOPAD (accessed on 17 October 2021)
Spaceborne or airborne platform | HRSCD [161] | airplane | RGB | Land cover mapping, land cover change monitoring | https://ieee-dataport.org/open-access/hrscd-high-resolution-semantic-change-detection-dataset (accessed on 17 October 2021)
Spaceborne or airborne platform | CSIF [162] | TERRA & AQUA | multispectral | Calculation of chlorophyll fluorescence parameters | https://figshare.com/articles/dataset/CSIF/6387494 (accessed on 17 October 2021)
Spaceborne or airborne platform | LEM+ dataset [163] | Sentinel-2 | multispectral | Crop classification | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7701344/ (accessed on 17 October 2021)
Spaceborne or airborne platform | EYES IN THE SKIES [164] | IKONOS | multispectral | Drug crop identification | https://ieee-dataport.org/documents/eyes-skies-data-driven-fusion-approach-identifying-drug-crops-remote-sensing-images (accessed on 17 October 2021)
Spaceborne or airborne platform | BreizhCrops [165] | Sentinel-2 | multispectral | Crop classification | https://github.com/dl4sits/BreizhCrops (accessed on 17 October 2021)
Spaceborne or airborne platform | Semantic Segmentation of Crop Type in Ghana [166] | Sentinel-2 & PlanetScope | multispectral & SAR & RGB | Semantic segmentation for crop classification | http://registry.mlhub.earth/10.34911/rdnt.ry138p/ (accessed on 17 October 2021)
Spaceborne or airborne platform | Semantic Segmentation of Crop Type in South Sudan [167] | Sentinel-2 & PlanetScope | multispectral & SAR & RGB | Semantic segmentation for crop classification | http://registry.mlhub.earth/10.34911/rdnt.v6kx6n/ (accessed on 17 October 2021)
Spaceborne or airborne platform | AgriculturalField-Seg [168] | airplane | RGB | Cropland segmentation | https://www.aic.uniovi.es/bremeseiro/agriculturalfield-seg/ (accessed on 17 October 2021)
Spaceborne or airborne platform | EarthExplorer [169] | Sentinel-2 & Landsat-8, etc. | RGB & multispectral & LiDAR | Central pivot irrigation system identification, paddy field segmentation, crop classification | https://earthexplorer.usgs.gov/ (accessed on 17 October 2021)
Spaceborne or airborne platform | Copernicus Open Access Hub [170] | Sentinel-1 & Sentinel-2 & Sentinel-3 | SAR & multispectral | Nitrate and sediment concentration estimation, paddy field segmentation, early crop classification, etc. | https://scihub.copernicus.eu/dhus/#/home (accessed on 17 October 2021)
Spaceborne or airborne platform | TimeSen2Crop [171] | Sentinel-2 | multispectral | Crop classification | https://rslab.disi.unitn.it/timesen2crop/ (accessed on 17 October 2021)
Table 6. A compilation of lightweight inference applications onboard UAVs for PA.
Applications | Task | Model Compression Techniques | Performance | Benchmark Platforms | References
Tree species classification | Classification | Design of a compact CNN model architecture | Average classification accuracy of 92%, with model size reduced from 25 million trainable parameters to 2,982,484 | 12 Intel Xeon CPU E5-1650 v4 units, each at 3.60 GHz, and four GeForce GTX TITAN X graphics cards | [184]
Classification of weed and crop plants | Classification | Parameter quantization of ResNet-18 from FP32 to FP16, plus optimization to avoid redundant computations in overlapping image patches | Overall accuracy of 94% on the test data; the prediction pipeline reached 2.2 frames per second (FPS), up from 1.3 FPS | NVIDIA Jetson AGX Xavier | [115]
Plant seedling classification | Classification | Model pruning and quantization applied to LeNet-5, VGG16 and AlexNet | Compressed model size by a factor of 38 and reduced the FLOPs of VGG16 by a factor of 99 without considerable loss of accuracy | unknown | [185]
Tree crown delineation | Semantic segmentation | 8-bit quantization performed on an already-trained float TensorFlow model and applied during TensorFlow Lite conversion, enabling execution of the trained U-Net | The quantized model is 0.1 times the size of the original model; the most efficient inference takes 28 ms with the quantized TPU model executed on the Coral Edge TPU | Google Coral Edge TPU Board | [186]
Weed detection | Semantic segmentation | Parameter quantization, applying FP32 for training and FP16 for inference | Accuracy of 80.9% on the testing samples and inference speed of 4.5 FPS on an NVIDIA Jetson TX2 module | NVIDIA Jetson TX2 | [137]
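To make the quantization workflows summarized in Table 6 more concrete, the following minimal sketch illustrates post-training FP16 and full 8-bit quantization with the TensorFlow Lite converter, similar in spirit to the approaches reported in [115,137,186]. It is not the code released by those authors; the model directory “unet_savedmodel”, the calibration patches and the output file names are hypothetical placeholders.

```python
# Illustrative sketch only: post-training quantization with TensorFlow Lite,
# in the spirit of the FP16 and 8-bit workflows summarized in Table 6.
# "unet_savedmodel/" and the calibration patches are hypothetical placeholders.
import numpy as np
import tensorflow as tf


def fp16_quantize(saved_model_dir: str) -> bytes:
    """Convert a trained model to a TFLite flatbuffer with FP16 weights."""
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_types = [tf.float16]
    return converter.convert()


def int8_quantize(saved_model_dir: str, calibration_patches: np.ndarray) -> bytes:
    """Full integer (8-bit) quantization using a small representative dataset."""
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    def representative_data_gen():
        # Yield a few input patches so activation ranges can be calibrated.
        for patch in calibration_patches[:100]:
            yield [patch[np.newaxis, ...].astype(np.float32)]

    converter.representative_dataset = representative_data_gen
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    return converter.convert()


if __name__ == "__main__":
    patches = np.random.rand(100, 256, 256, 3)  # stand-in for real UAV image patches
    with open("unet_fp16.tflite", "wb") as f:
        f.write(fp16_quantize("unet_savedmodel"))
    with open("unet_int8.tflite", "wb") as f:
        f.write(int8_quantize("unet_savedmodel", patches))
```

FP16 quantization roughly halves model size with minimal accuracy impact, whereas full integer quantization additionally requires a representative dataset for calibration but enables deployment on integer-only accelerators such as the Edge TPU listed in Table 6.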
Table 7. A compilation of computing platforms onboard UAVs for typical RS applications.
Major Computing Component | Vendor | Model | Specification | Applications | Reference
CPU | AICSHTER | ARK-1100 | CPU: Intel Celeron J1900; Memory: 8 GB LPDDR3; Thermal design power: 10 W; Weight: 2.0 kg | Detection and spatial localization of insulators | [219]
CPU | Intel | Edison | CPU: Intel Atom dual-core 500 MHz processor; Memory: 1 GB DDR3; Storage: 4 GB; Weight: 16 g | Identification of faulty areas in the plantation | [180]
CPU | Intel | Atom processor board | CPU: 2× ARM7 microprocessor; Memory: 1 GB RAM; Weight: 90 g | Object tracking | [220]
CPU | Hardkernel | Odroid XU4 | CPU: Samsung Exynos 5422 Cortex-A15 2 GHz and Cortex-A7 octa-core processor; GPU: Mali-T628 MP6; Memory: 2 GB LPDDR3 RAM | Cooperative UAV tracking | [221]
CPU | Texas Instruments | BeagleBone Black | CPU: ARM Cortex-A8; Memory: 512 MB DDR3; Storage: 4 GB 8-bit eMMC | Sensor data fusion | [222]
CPU | Raspberry Pi Foundation | Raspberry Pi Model B | CPU: ARM1176JZF-S; GPU: Broadcom VideoCore IV @ 250 MHz; Memory: 512 MB | Moving object detection and location | [223]
CPU | Raspberry Pi Foundation | Raspberry Pi Model B+ | CPU: ARM1176JZF-S; GPU: Broadcom VideoCore IV @ 250 MHz; Memory: 512 MB | Object detection and range measurement | [224]
CPU | Raspberry Pi Foundation | Raspberry Pi 2 Model B | CPU: 4× Cortex-A7 @ 900 MHz; GPU: Broadcom VideoCore IV @ 250 MHz; Memory: 1 GB | Object detection, air quality data collection and transmission, pedestrian detection, object detection and tracking | [225,226,227,228]
CPU | Raspberry Pi Foundation | Raspberry Pi 3 Model B | CPU: 4× Cortex-A53 @ 1.2 GHz; GPU: Broadcom VideoCore IV @ 250 MHz; Memory: 1 GB | Soybean weed detection, agrochemical spraying, face detection, human detection, land use classification | [116,229,230,231,232]
CPU | Raspberry Pi Foundation | Raspberry Pi 3 Model B+ | CPU: 4× Cortex-A53 @ 1.4 GHz; GPU: Broadcom VideoCore IV @ 400 MHz/300 MHz; Memory: 1 GB | Face recognition and object detection | [233]
CPU | Raspberry Pi Foundation | Raspberry Pi 4 Model B | CPU: 4× Cortex-A72 @ 1.5 GHz; GPU: Broadcom VideoCore VI @ 500 MHz; Memory: 4 GB | Fault location for transmission lines | [234]
GPU | NVIDIA | Tegra K1 | CPU: quad-core 4-Plus-1 ARM; GPU: low-power NVIDIA Kepler-based GeForce graphics processor; Memory: 2 GB DDR3L RAM; Storage: 16 GB eMMC 4.51; Max power: about 15 W; Weight: less than 200 g | Object tracking and automatic landing | [235]
GPU | NVIDIA | Jetson TK1 | CPU: NVIDIA 4-Plus-1 quad-core ARM Cortex-A15; GPU: NVIDIA Kepler GK20 with 192 SM3.2 CUDA cores; Memory: 2 GB; Storage: 16 GB eMMC 4.51 | Vegetation index estimation | [179]
GPU | NVIDIA | Jetson TX1 | CPU: quad-core ARM Cortex-A57 MPCore processor; GPU: NVIDIA Maxwell GPU with 256 NVIDIA CUDA cores; Memory: 4 GB LPDDR4; Storage: 16 GB eMMC 5.1 | Object detection | [236]
GPU | NVIDIA | Jetson TX2 | CPU: dual-core NVIDIA Denver 2 64-bit CPU and quad-core Arm Cortex-A57 MPCore processor; GPU: 256-core NVIDIA Pascal GPU; Memory: 8 GB 128-bit LPDDR4; Storage: 32 GB eMMC 5.1 | Coarse-grained detection of pine wood nematode disease, object detection, pipeline safety excavator inspection, weed detection, pest detection | [119,120,137,237,238]
GPU | NVIDIA | Jetson Nano | CPU: quad-core Arm Cortex-A57 MPCore processor; GPU: 128-core NVIDIA Maxwell GPU; Memory: 4 GB 64-bit LPDDR4; Storage: 16 GB eMMC 5.1 | Real-time compression of hyperspectral data | [239]
GPU | NVIDIA | Jetson Xavier NX | CPU: 6-core NVIDIA Carmel Arm v8.2 64-bit CPU; GPU: 384-core NVIDIA Volta GPU with 48 Tensor Cores; Memory: 8 GB 128-bit LPDDR4x; Storage: 16 GB eMMC 5.1 | Real-time vehicle detection and speed monitoring, real-time compression of hyperspectral data | [239,240]
GPU | NVIDIA | Jetson AGX Xavier | CPU: 8-core NVIDIA Carmel Arm v8.2 64-bit CPU; GPU: 512-core NVIDIA Volta GPU with 64 Tensor Cores; Memory: 32 GB 256-bit LPDDR4x; Storage: 32 GB eMMC 5.1 | Weed and crop classification | [115]
FPGA | Terasic | Altera DE2i-150 | CPU: Intel Atom N2600; FPGA: Altera Cyclone IV FPGA; Memory: 2 GB DDR3; Storage: 64 GB SSD | Identification of faulty areas in the plantation | [180]
FPGA | Maxeler | MAX4 acceleration card | FPGA: Intel Altera Stratix-V FPGA; Memory: 48 GB DDR3 onboard memory | Tree crown detection | [218]
FPGA | Intel | DE1-SoC | CPU: dual-core ARM Cortex-A9 processor; FPGA: Cyclone V SoC 5CSEMA5F31C6; Memory: 4450 Kbits embedded memory | Weed classification | [241]
FPGA | Xilinx | Zynq-7000 | CPU: single-core ARM Cortex-A9 processor; FPGA: Artix-7-based programmable logic | UAV hyperspectral data compression | [242]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
