In-Field Detection and Monitoring Technology in Precision Agriculture

A special issue of Agronomy (ISSN 2073-4395). This special issue belongs to the section "Precision and Digital Agriculture".

Deadline for manuscript submissions: 30 November 2024 | Viewed by 5686

Special Issue Editors

Lingnan Guangdong Laboratory of Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, Shenzhen 518120, China
Interests: image processing in agriculture
College of Mechanical and Electronic Engineering, Northwest A&F University, Yangling 712100, China
Interests: intelligent agricultural equipment; precision agriculture
College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
Interests: intelligent sensor; agricultural Internet of Things

Special Issue Information

Dear Colleagues,

Precision Agriculture (PA), also referred to as Precision Farming or Smart Agriculture, is a relatively recent farming management concept based on the use of information technology, with the aim of obtaining higher production efficiency, sustainable profitability and better-quality products while minimizing environmental impacts. Smarter field inspection and monitoring is one of the key technologies driving the rapid development of precision agriculture. High-throughput data are obtained through advanced technologies such as sensing of agricultural environment, soil and water quality parameters; multi-scale high-definition, multispectral and hyperspectral imaging from UAV and satellite remote sensing; LiDAR; acoustic sensing; and high-speed networks. New-generation machine learning models are then used to improve the robustness of prediction algorithms to complex agricultural environments, spatial and temporal variability, and other factors, providing more accurate real-time data for agricultural management.

This Special Issue intends to cover the state of the art and recent progress in different aspects related to in-field detection and monitoring technology in a wide range of agricultural fields (crops, grassland, fruit trees, water, agricultural products, etc.). All types of manuscripts (original research and reviews) providing new insights into the in-field detection and monitoring technology of agriculture are welcome. Articles may include, but are not limited to, the following topics:

  • Detection and monitoring of within-field and on-farm variability;
  • Proximal and remote sensing of soils, crops, weeds, plant diseases and insect pests;
  • Vegetation parameter sensing and management;
  • Crop models and decision support systems in PA;
  • Agricultural sensors, robotics and engineering;
  • Plant, disease and pest detection based on high-definition, multispectral and hyperspectral imagery and sound waves;
  • Precision plant protection;
  • Water quality detection and monitoring;
  • Wireless sensor networks, Internet of Things, big data and database in PA.

Dr. Xi Qiao
Dr. Shuo Zhang
Dr. Cong Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agronomy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent agriculture
  • image processing
  • spectral analysis
  • quantitative inversion
  • prediction
  • early warning
  • machine learning
  • feature fusion

Published Papers (6 papers)


Research

23 pages, 11878 KiB  
Article
Remote Sensing Extraction of Crown Planar Area and Plant Number of Papayas Using UAV Images with Very High Spatial Resolution
by Shuangshuang Lai, Hailin Ming, Qiuyan Huang, Zhihao Qin, Lian Duan, Fei Cheng and Guangping Han
Agronomy 2024, 14(3), 636; https://doi.org/10.3390/agronomy14030636 - 21 Mar 2024
Viewed by 583
Abstract
The efficient management of commercial orchards strongly requires accurate information on plant growing status for the implementation of necessary farming activities such as irrigation, fertilization, and pest control. Crown planar area and plant number are two very important parameters directly related to fruit growth conditions and the final productivity of an orchard. In this study, to propose a novel and effective method for extracting the crown planar area and number of mature and young papayas from visible-light images obtained with a DJI Phantom 4 RTK, we compared different vegetation indices (NGRDI, RGBVI, and VDVI), filter types (high- and low-pass filters), and filter convolution kernel sizes (3–51 pixels). Then, Otsu's method was used to segment the crown planar area of the papayas, and the mean–standard deviation threshold (MSDT) method was used to identify the number of plants. Finally, the extraction accuracy of the crown planar area and number of mature and young papayas was validated. The results show that VDVI had the highest capability to separate the papayas from other ground objects. The best filter convolution kernel size was 23 pixels for the low-pass filter extraction of crown planar areas in mature and young plants. For plant number identification, segmentation could be set to the threshold with the highest F-score, i.e., the deviation coefficient n = 0 for single young papaya plants, n = 1 for single mature ones, and n = 1.4 for crown-connecting mature ones. Verification indicated that the average accuracy was 93.71% for crown planar area extraction in both young and mature papaya orchards and 95.54% for extracting the number of papaya plants. This set of methods can provide a reference for information extraction regarding papaya and other fruit trees with a similar crown morphology.
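As a rough illustration of the index-plus-threshold pipeline described in the abstract, the sketch below computes VDVI from an RGB orthomosaic, applies low-pass (Gaussian) smoothing, and segments crown pixels with Otsu's method. The file name, smoothing scale, and libraries are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch: VDVI computation and Otsu segmentation of a UAV RGB image.
# The input file name and the smoothing scale are hypothetical.
import numpy as np
from skimage import io
from skimage.filters import gaussian, threshold_otsu

rgb = io.imread("papaya_orchard_rgb.tif").astype(np.float64)  # hypothetical orthomosaic
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# Visible-band Difference Vegetation Index: VDVI = (2G - R - B) / (2G + R + B)
vdvi = (2 * g - r - b) / np.clip(2 * g + r + b, 1e-6, None)

# Low-pass smoothing before thresholding; the kernel scale plays the role of the
# convolution kernel sizes compared in the paper.
vdvi_smooth = gaussian(vdvi, sigma=5)

# Otsu's method separates vegetation (crown pixels) from background.
crown_mask = vdvi_smooth > threshold_otsu(vdvi_smooth)
crown_planar_area_px = int(crown_mask.sum())  # convert to m^2 using the image GSD
```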

19 pages, 8467 KiB  
Article
IPMCNet: A Lightweight Algorithm for Invasive Plant Multiclassification
by Ying Chen, Xi Qiao, Feng Qin, Hongtao Huang, Bo Liu, Zaiyuan Li, Conghui Liu, Quan Wang, Fanghao Wan, Wanqiang Qian and Yiqi Huang
Agronomy 2024, 14(2), 333; https://doi.org/10.3390/agronomy14020333 - 6 Feb 2024
Viewed by 701
Abstract
Invasive plant species pose significant threats to biodiversity and ecosystems. Real-time identification of invasive plants is a crucial prerequisite for early and timely prevention. While deep learning has shown promising results in plant recognition, deep learning models often involve a large number of parameters and require large amounts of training data. Unfortunately, the available data for various invasive plant species are often limited. To address this challenge, this study proposes a lightweight deep learning model called IPMCNet for the identification of multiple invasive plant species. IPMCNet attains high recognition accuracy even with limited data and exhibits strong generalizability. At the same time, by employing depth-wise separable convolutional kernels, splitting channels, and eliminating the fully connected layer, the model's parameter count is kept lower than that of some existing lightweight models. Additionally, the study explores the impact of different loss functions and of inserting various attention modules on the model's accuracy. The experimental results reveal that, compared with eight other existing neural network models, IPMCNet achieves the highest classification accuracy, 94.52%. Furthermore, the findings suggest that focal loss is the most effective loss function. The performance of the six attention modules is suboptimal, and their insertion leads to a decrease in model accuracy.
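As background to the parameter-saving design mentioned above, the following is a generic PyTorch sketch of a depthwise separable convolution block. It illustrates the building block only and is not the published IPMCNet architecture; the channel sizes are arbitrary examples.

```python
# Generic depthwise separable convolution block: a depthwise 3x3 convolution followed
# by a pointwise 1x1 convolution, which cuts parameters relative to a standard 3x3 conv.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1,
                                   groups=in_ch, bias=False)  # one 3x3 filter per channel
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv(64, 128)
print(sum(p.numel() for p in block.parameters()))  # far fewer than a full 3x3 Conv2d(64, 128)
```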

17 pages, 3397 KiB  
Article
Development and Optimization of a Chamber System Applied to Maize Net Ecosystem Carbon Exchange Measurements
by Chenxin Pan, Junguo Hu, Hanghang Cai, Junjie Jiang, Kechen Gu, Chao Zhu and Guodong Mao
Agronomy 2024, 14(1), 68; https://doi.org/10.3390/agronomy14010068 - 27 Dec 2023
Viewed by 726
Abstract
Net ecosystem carbon exchange (NEE) in agricultural land represents a significant source of global greenhouse gas (GHG) emissions. While there are various tools for measuring NEE in agricultural fields, the chamber method remains the sole option at the plot scale. In this research, we evaluated the NEE of maize plants at the nodulation stage using the flow-through chamber method. Many existing flow-through chamber systems introduce air directly into the chamber, where it collides with the plants and creates turbulence; this turbulence can extend the time required to achieve a steady state. We modified the traditional flow-through chamber design to minimize turbulence in the measurement zone. Our modifications were validated by modeling the chamber and maize plants and by conducting fluid simulation experiments. In comparative field measurements between the two chamber designs, the improved system notably shortened the time required to reach the steady state, increased the measurement frequency, and reduced the influence of changing environmental factors on the readings. Enhancing the measurement frequency is crucial for ensuring long-term accuracy. By reducing turbulence in the chamber, we anticipate improvements in the precision of NEE measurements in agricultural research, which could significantly contribute to an accurate assessment of the global carbon cycle.
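For orientation, the snippet below sketches the standard steady-state flux calculation for a generic flow-through chamber (molar air flow times the CO2 concentration difference across the chamber, normalized by the enclosed area). The numbers, function name, and default conditions are illustrative assumptions, not the authors' system.

```python
# Minimal sketch of a steady-state NEE calculation for a generic flow-through (open)
# chamber; variable names and example values are illustrative assumptions.

R = 8.314  # universal gas constant, J mol^-1 K^-1

def chamber_nee(co2_in_ppm: float, co2_out_ppm: float, flow_l_min: float,
                area_m2: float, pressure_pa: float = 101325.0,
                temp_k: float = 298.15) -> float:
    """Return NEE in micromol CO2 m^-2 s^-1 (negative values indicate net uptake)."""
    # Convert the volumetric air flow to a molar flow rate via the ideal gas law.
    flow_m3_s = flow_l_min / 1000.0 / 60.0
    molar_flow = pressure_pa * flow_m3_s / (R * temp_k)  # mol air s^-1
    delta_co2 = co2_out_ppm - co2_in_ppm                 # micromol CO2 per mol air
    return molar_flow * delta_co2 / area_m2

# Example: air leaving the chamber depleted by 15 ppm implies net CO2 uptake.
print(chamber_nee(co2_in_ppm=420.0, co2_out_ppm=405.0, flow_l_min=30.0, area_m2=0.25))
```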

27 pages, 13352 KiB  
Article
Clustering and Segmentation of Adhesive Pests in Apple Orchards Based on GMM-DC
by Yunfei Wang, Shuangxi Liu, Zhuo Ren, Bo Ma, Junlin Mu, Linlin Sun, Hongjian Zhang and Jinxing Wang
Agronomy 2023, 13(11), 2806; https://doi.org/10.3390/agronomy13112806 - 13 Nov 2023
Viewed by 710
Abstract
The segmentation of individual pests is a prerequisite for pest feature extraction and identification. To address the issue of pest adhesion in the apple orchard pest identification process, this research proposes a pest adhesion image segmentation method based on a Gaussian Mixture Model with Density and Curvature Weighting (GMM-DC). First, in the HSV color space, an image was desaturated by adjusting the hue and inverting, to mitigate threshold crossing points. Subsequently, threshold segmentation and contour selection were used to separate the image background. Next, a shape factor was introduced to determine the regions and numbers of adhering pests, thereby determining the number of model clusters. Then, point cloud reconstruction was performed based on the color and spatial distribution features of the pests. To construct the GMM-DC segmentation model, spatial density (SD) and spatial curvature (SC) information functions were designed and embedded in the GMM. Finally, experimental analysis was conducted on the collected apple orchard pest images. The results showed that GMM-DC achieved an average accurate segmentation rate of 95.75%, an average over-segmentation rate of 2.83%, and an average under-segmentation rate of 1.42%, significantly outperforming traditional image segmentation methods. In addition, the original and improved Mask R-CNN models were used as recognition models, with mean Average Precision as the evaluation metric, and recognition experiments were conducted on pest images with and without the proposed segmentation method. The mean Average Precision for pest images segmented with the proposed method was 92.43% and 96.75% for the two recognition models, improvements of 13.01% and 12.18% in average recognition accuracy, respectively. The experimental results demonstrate that this method provides a theoretical and methodological foundation for accurate pest identification in orchards.
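To make the clustering step concrete, the sketch below runs a plain Gaussian mixture model on synthetic pest-point features with a crude local-density column appended. It only indicates where density (and, analogously, curvature) cues could enter and does not reproduce the paper's GMM-DC weighting; all data and parameters are placeholders.

```python
# Illustrative sketch: Gaussian-mixture clustering of pest points into k clusters with a
# simple local-density feature appended (not the paper's GMM-DC weighting scheme).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
points = rng.normal(size=(500, 5))  # stand-in for (x, y, H, S, V) features of pest pixels

# Crude local-density estimate: inverse mean distance to the 10 nearest neighbours in (x, y).
nn = NearestNeighbors(n_neighbors=10).fit(points[:, :2])
dist, _ = nn.kneighbors(points[:, :2])
density = 1.0 / (dist.mean(axis=1) + 1e-9)

features = np.column_stack([points, density])

k = 3  # number of adhering pests, e.g. obtained from a shape-factor analysis
labels = GaussianMixture(n_components=k, random_state=0).fit_predict(features)
print(np.bincount(labels))  # points assigned to each pest cluster
```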

25 pages, 8130 KiB  
Article
Wheat Lodging Direction Detection for Combine Harvesters Based on Improved K-Means and Bag of Visual Words
by Qian Zhang, Qingshan Chen, Lizhang Xu, Xiangqian Xu and Zhenwei Liang
Agronomy 2023, 13(9), 2227; https://doi.org/10.3390/agronomy13092227 - 25 Aug 2023
Viewed by 706
Abstract
For the inconsistent lodging of wheat with dense growth and overlapping organs, it is difficult to detect lodging direction accurately and quickly using vehicle-mounted vision on harvesters. Therefore, in this paper, the k-means algorithm is improved by designing a validity evaluation function, selecting initial clustering centers by distance, constructing a multidimensional feature vector, and simplifying calculations using the triangle inequality. An adaptive image grid division method based on perspective mapping and inverse perspective mapping with a corrected basic equation is proposed for constructing a dataset of wheat lodging directions. The improved k-means algorithm and the direction dataset are used to construct a bag of visual words. Based on the scale-invariant feature transform, pyramid word frequencies, a histogram intersection kernel, and a support vector machine, wheat lodging directions were detected within each grid cell. The proposed method was verified through experiments with images acquired on an intelligent combine harvester. Compared with single-level word frequencies using the existing and the improved k-means, the mean accuracy of wheat lodging direction detection by pyramid word frequencies with the improved k-means increased by 6.71% and 1.11%, respectively. The average detection time using the proposed method was 1.16 s. The proposed method can accurately and rapidly detect wheat lodging direction for combine harvesters and thus support closed-loop control of intelligent harvesting operations.
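As a generic stand-in for the classification stage described above, the sketch below trains a support vector machine with a histogram-intersection kernel on toy bag-of-visual-words histograms. The data, vocabulary size, and class labels are synthetic placeholders, not the paper's dataset or pyramid scheme.

```python
# Sketch of an SVM with a histogram-intersection kernel over bag-of-visual-words
# histograms; training data and labels are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC

def histogram_intersection(X, Y):
    """Gram matrix K[i, j] = sum_k min(X[i, k], Y[j, k])."""
    return np.minimum(X[:, None, :], Y[None, :, :]).sum(axis=2)

rng = np.random.default_rng(0)
# Toy BoVW term-frequency histograms (each row sums to 1) with hypothetical labels.
X_train = rng.dirichlet(np.ones(200), size=120)
y_train = rng.integers(0, 4, size=120)          # e.g. 4 lodging-direction classes
X_test = rng.dirichlet(np.ones(200), size=10)

clf = SVC(kernel=histogram_intersection).fit(X_train, y_train)
print(clf.predict(X_test))                      # predicted direction class per grid cell
```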

18 pages, 7238 KiB  
Article
Research on Insect Pest Identification in Rice Canopy Based on GA-Mask R-CNN
by Sitao Liu, Shenghui Fu, Anrui Hu, Pan Ma, Xianliang Hu, Xinyu Tian, Hongjian Zhang and Shuangxi Liu
Agronomy 2023, 13(8), 2155; https://doi.org/10.3390/agronomy13082155 - 17 Aug 2023
Cited by 2 | Viewed by 1322
Abstract
To address the difficult image acquisition and low recognition accuracy of two rice canopy pests, the rice stem borer and the rice leaf roller, we constructed a GA-Mask R-CNN (Generative Adversarial-based Mask Region Convolutional Neural Network) intelligent recognition model for these pests and combined it with field monitoring equipment. Firstly, based on the biological habits of rice canopy pests, a variety of collection methods were used to obtain images of rice stem borer and rice leaf roller, and the images were segmented with different segmentation algorithms to extract single-pest samples. Secondly, a pest generator based on a generative adversarial network strategy improved the sensitivity of the classification network to pest information and generated pest images under realistic field conditions, and the sample dataset for deep learning was obtained through multi-way augmentation. Then, the recognition accuracy of the model was improved by adding a channel attention (ECA) module to Mask R-CNN and improving the connections of the residual blocks in the ResNet101 backbone. Finally, the GA-Mask R-CNN model was tested on a multi-source dataset, achieving an average precision (AP) of 92.71%, a recall (R) of 89.28% and a balanced F1 score of 90.96%. The average precision, recall, and F1 score improved by 7.07%, 7.65%, and 8.83%, respectively, compared with the original Mask R-CNN. The results show that GA-Mask R-CNN outperforms Mask R-CNN, Faster R-CNN, SSD, YOLOv5, and other network models on all performance indexes and can provide technical support for the remote intelligent monitoring of rice pests.
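For readers unfamiliar with the Efficient Channel Attention (ECA) idea mentioned above, the following is a minimal PyTorch sketch of a generic ECA block (global average pooling, a 1D convolution across channels, and a sigmoid gate). It illustrates the general mechanism only, not the authors' exact module or its placement inside Mask R-CNN; the feature-map sizes are arbitrary examples.

```python
# Minimal sketch of a generic Efficient Channel Attention (ECA) block.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling per channel
        self.conv = nn.Conv1d(1, 1, kernel_size=kernel_size,
                              padding=kernel_size // 2, bias=False)  # local cross-channel interaction
        self.gate = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> channel descriptor -> per-channel attention weights
        y = self.pool(x).squeeze(-1).transpose(-1, -2)    # (B, 1, C)
        y = self.conv(y).transpose(-1, -2).unsqueeze(-1)  # (B, C, 1, 1)
        return x * self.gate(y)

feat = torch.randn(2, 256, 32, 32)  # e.g. a backbone feature map
print(ECA()(feat).shape)            # torch.Size([2, 256, 32, 32])
```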
