Search Results (58)

Search Parameters:
Keywords = automatic pest monitoring

12 pages, 1035 KB  
Article
Towards Smart Pest Management in Olives: ANN-Based Detection of Olive Moth (Prays oleae Bernard, 1788)
by Tomislav Kos, Anđelo Zdrilić, Dana Čirjak, Marko Zorica, Šimun Kolega and Ivana Pajač Živković
AgriEngineering 2025, 7(7), 200; https://doi.org/10.3390/agriengineering7070200 - 20 Jun 2025
Viewed by 706
Abstract
Prays oleae Bernard, 1788, or the olive moth, is a significant pest in Croatian olive groves. This study aims to develop a functional model based on an artificial neural network to detect olive moths in real time. This study was conducted in two different orchards in Zadar County, Croatia, in the periods from April to September 2022 and from May to July 2023. Moth samples were collected by placing traps with adhesive pads in these orchards. Photos of the pads were taken every week and were later annotated and used to develop the dataset for the artificial neural network. This study primarily focused on the average precision parameter to evaluate the model’s detection capabilities. The average AP value for all classes was 0.48, while the average AP value for the Olive_trap_moth class, which detected adult P. oleae, was 0.59. The model showed the best results at an IoU threshold of 50%, achieving an AP50 value of 0.75. The AP75 value, at an IoU threshold of 75%, was 0.56. The mean average precision (mAP) was 0.48. This model is a promising tool for P. oleae detection; however, further research is advised.
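
The AP50 and AP75 figures above hinge on the IoU threshold used to decide whether a predicted box counts as a correct detection. A minimal sketch of that decision rule, with boxes and thresholds chosen purely for illustration (not taken from the study):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted moth box counts as a true positive only if its IoU with a
# ground-truth box reaches the threshold (0.50 for AP50, 0.75 for AP75).
pred, truth = (10, 10, 60, 60), (20, 15, 70, 65)
for thr in (0.50, 0.75):
    print(f"IoU = {iou(pred, truth):.2f}, correct at threshold {thr:.2f}: {iou(pred, truth) >= thr}")
```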

23 pages, 3562 KB  
Article
An Unmanned Aerial Vehicle-Based Image Information Acquisition Technique for the Middle and Lower Sections of Rice Plants and a Predictive Algorithm Model for Pest and Disease Detection
by Xiaoyan Guo, Yuanzhen Ou, Konghong Deng, Xiaolong Fan, Ruitao Gao and Zhiyan Zhou
Agriculture 2025, 15(7), 790; https://doi.org/10.3390/agriculture15070790 - 7 Apr 2025
Viewed by 614
Abstract
To address the technical bottleneck of monitoring stalk, pest, and weed damage in the middle and lower parts of rice plants, this paper proposes a UAV-based image information acquisition method and a disease prediction algorithm model, providing an efficient, low-cost solution for the accurate early monitoring of rice diseases and helping to raise the scientific and intelligent level of agricultural disease prevention and control. First, a UAV image acquisition system was designed and equipped with an automatic telescopic rod, a 360° automatic turntable, and high-definition image sensors to achieve multi-angle, high-precision data acquisition in the middle and lower regions of rice plants. At the same time, a path planning algorithm and an ant colony algorithm were introduced to design the UAV flight layout path and improve the coverage and stability of image acquisition. For image information processing, this paper proposes a multi-dimensional data fusion scheme that combines RGB, infrared, and hyperspectral data to achieve deep fusion of information across bands. For disease prediction, the YOLOv8 target detection algorithm and a lightweight Transformer network are adopted to improve detection performance on small targets. The experimental results showed that the average accuracy of the YOLOv8 model (mAP@0.5) in detecting rice curl disease was 90.13%, far higher than that of traditional methods such as Faster R-CNN and SSD. In addition, a self-collected dataset of 1496 disease images was used to verify that the system shows good stability and practicability in field environments.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
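
The mAP@0.5 result reported above comes from a YOLOv8 detector. A minimal inference sketch using the ultralytics package is shown below; the weights file and image name are hypothetical placeholders, not assets from the paper.

```python
from ultralytics import YOLO  # pip install ultralytics

# Hypothetical weights fine-tuned on rice disease/pest images; not the authors' model.
model = YOLO("rice_disease_yolov8.pt")

# Detect on a single UAV frame and list boxes above a confidence threshold.
results = model.predict("uav_frame.jpg", conf=0.25)
for box in results[0].boxes:
    label = results[0].names[int(box.cls)]
    print(label, float(box.conf), box.xyxy.squeeze().tolist())
```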

29 pages, 8325 KB  
Article
Insights into Mosquito Behavior: Employing Visual Technology to Analyze Flight Trajectories and Patterns
by Ning Zhao, Lifeng Wang and Ke Wang
Electronics 2025, 14(7), 1333; https://doi.org/10.3390/electronics14071333 - 27 Mar 2025
Cited by 1 | Viewed by 652
Abstract
Mosquitoes, as vectors of numerous serious infectious diseases, require rigorous behavior monitoring for effective disease prevention and control. Simultaneously, precise surveillance of flying insect behavior is also crucial in agricultural pest management. This study proposes a three-dimensional trajectory reconstruction method for mosquito behavior analysis based on video data. By employing multiple synchronized cameras to capture mosquito flight images, using background subtraction to extract moving targets, applying Kalman filtering to predict target states, and integrating the Hungarian algorithm for multi-target data association, the system can automatically reconstruct three-dimensional mosquito flight trajectories. Experimental results demonstrate that this approach achieves high-precision flight path reconstruction, with a detection accuracy exceeding 95%, an F1-score of 0.93, and fast processing speeds that enable real-time tracking. The mean error of three-dimensional trajectory reconstruction is only 10 ± 4 mm, offering significant improvements in detection accuracy, tracking robustness, and real-time performance over traditional two-dimensional methods. These findings provide technological support for optimizing vector control strategies and enhancing precision pest control and can be further extended to ecological monitoring and agricultural pest management, thus bearing substantial significance for both public health and agriculture.
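
The Kalman-prediction plus Hungarian-assignment step described above reduces to solving a cost-minimising matching between predicted track positions and new detections at each frame. A minimal sketch with made-up coordinates and an illustrative distance gate, using SciPy's linear_sum_assignment:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Kalman-predicted positions of existing tracks and freshly detected mosquito
# positions (x, y, z in mm); values are illustrative only.
predicted = np.array([[100.0, 50.0, 30.0], [200.0, 80.0, 45.0]])
detections = np.array([[203.0, 79.0, 44.0], [98.0, 52.0, 31.0], [400.0, 10.0, 5.0]])

# Cost matrix = Euclidean distance between every prediction/detection pair.
cost = np.linalg.norm(predicted[:, None, :] - detections[None, :, :], axis=2)
track_idx, det_idx = linear_sum_assignment(cost)

for t, d in zip(track_idx, det_idx):
    if cost[t, d] < 20.0:  # gate: reject implausibly distant matches
        print(f"track {t} <- detection {d} (distance {cost[t, d]:.1f} mm)")
```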

37 pages, 3785 KB  
Review
Key Intelligent Pesticide Prescription Spraying Technologies for the Control of Pests, Diseases, and Weeds: A Review
by Kaiqiang Ye, Gang Hu, Zijie Tong, Youlin Xu and Jiaqiang Zheng
Agriculture 2025, 15(1), 81; https://doi.org/10.3390/agriculture15010081 - 1 Jan 2025
Cited by 5 | Viewed by 3989
Abstract
In modern agriculture, plant protection is the key to ensuring crop health and improving yields. Intelligent pesticide prescription spraying (IPPS) technologies monitor, diagnose, and make scientific decisions about pests, diseases, and weeds; formulate personalized and precision control plans; and prevent and control pests through the use of intelligent equipment. This study discusses key IPPS technologies from four perspectives: target information acquisition, information processing, pesticide prescription spraying, and implementation and control. In the target information acquisition section, target identification technologies based on images, remote sensing, acoustic waves, and electronic noses are introduced. In the information processing section, information processing methods such as information pre-processing, feature extraction, pest and disease identification, bioinformatics analysis, and time series data are addressed. In the pesticide prescription spraying section, the impact of pesticide selection, dose calculation, spraying time, and method on the resulting effect and the formulation of prescription pesticide spraying in a certain area are explored. In the implementation and control section, vehicle automatic control technology, precision spraying technology, and droplet characteristic control technology and their applications are studied. In addition, this study discusses the future development prospects of IPPS technologies, including multifunctional target information acquisition systems, decision-support systems based on generative AI, and the development of precision intelligent sprayers. The advancement of these technologies will enhance agricultural productivity in a more efficient, environmentally sustainable manner.
(This article belongs to the Section Agricultural Technology)

21 pages, 6489 KB  
Article
Peach Leaf Shrinkage Disease Recognition Algorithm Based on Attention Spatial Pyramid Pooling Enhanced with Local Attention Network
by Caihong Zhang, Pingchuan Zhang, Yanjun Hu, Zeze Ma, Xiaona Ding, Ying Yang and Shan Li
Electronics 2024, 13(24), 4973; https://doi.org/10.3390/electronics13244973 - 17 Dec 2024
Viewed by 883
Abstract
To address the many challenges in recognizing peach leaf shrinkage disease, such as the wide variation in diseased-leaf object sizes, complex background interference, and inflexible adjustment of the training learning rate, we propose a peach leaf shrinkage disease recognition algorithm based on an attention generalized efficient layer aggregation network. First, the rectified linear unit activation function is used to improve the stability and performance of the model in low-precision computing environments and to mitigate partial gradient vanishing. Second, an integrated squeeze-and-excitation attention mechanism adaptively focuses on the key pest and disease areas in the image, significantly enhancing the model's ability to recognize pest and disease characteristics. Finally, fast pyramid pooling enhanced with a local attention network enables deep fusion of cross-layer features, improving the model's ability to identify complex features while optimizing operational efficiency. Experimental results on the peach leaf shrinkage disease recognition dataset show that the proposed algorithm achieves a significant performance improvement over the original YOLOv8 algorithm: mF1, mPrecision, mRecall, and mAP increased by 0.1075, 0.0723, 0.1224, and 0.1184, respectively, providing strong technical support for the intelligent and automatic monitoring of peach pests and diseases.
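
The squeeze-and-excitation attention integrated here re-weights feature channels with a learned gate. A minimal PyTorch sketch of such a block; the channel count and reduction ratio are illustrative, not taken from the paper:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global-pool each channel, then gate it."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)        # per-channel statistics
        w = self.excite(w).view(b, c, 1, 1)   # per-channel gates in (0, 1)
        return x * w                          # emphasise informative channels

feats = torch.randn(2, 64, 40, 40)            # dummy feature map
print(SEBlock(64)(feats).shape)               # torch.Size([2, 64, 40, 40])
```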

22 pages, 5317 KB  
Article
An Attention-Based Spatial-Spectral Joint Network for Maize Hyperspectral Images Disease Detection
by Jindai Liu, Fengshuang Liu and Jun Fu
Agriculture 2024, 14(11), 1951; https://doi.org/10.3390/agriculture14111951 - 31 Oct 2024
Cited by 1 | Viewed by 1053
Abstract
Maize is susceptible to pests and diseases, and maize production would decline significantly without precise early detection. Hyperspectral imaging is well suited for the precise detection of diseases due to its ability to capture the internal chemical characteristics of vegetation. However, the abundance of redundant information in hyperspectral data poses challenges for extracting significant features. To overcome these problems, in this study we propose an attention-based spatial-spectral joint network model for hyperspectral detection of pest-infected maize. The model contains 3D and 2D convolutional layers that extract features from both the spatial and spectral domains to improve the identification capability for hyperspectral images. Moreover, the model embeds an attention mechanism that improves feature representation by focusing on important spatial- and spectral-wise information, enhancing the model's feature extraction ability. Experimental results demonstrate the effectiveness of the proposed model across different field scenarios, achieving overall accuracies (OAs) of 99.24% and 97.4% on close-up and middle-shot hyperspectral images, respectively. Even with limited training data, the proposed model performs better than other models, achieving OAs of 98.29% and 92.18%. These results demonstrate the validity of the proposed model and its efficiency for pest-infected maize detection. The proposed model has the potential to be deployed on mobile devices such as field robots to monitor and detect infected maize automatically.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
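
A rough sketch of the 3D-then-2D convolution pattern the abstract describes for joint spectral-spatial feature extraction from a hyperspectral patch; the layer sizes, band count, and patch size are placeholder assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class SpectralSpatialNet(nn.Module):
    """3D convs mix neighbouring spectral bands, 2D convs refine spatial features."""
    def __init__(self, bands: int = 30, classes: int = 2):
        super().__init__()
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(inplace=True),
        )
        self.conv2d = nn.Sequential(
            nn.Conv2d(8 * bands, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, classes)

    def forward(self, cube: torch.Tensor) -> torch.Tensor:
        # cube: (batch, 1, bands, height, width) hyperspectral patch
        x = self.conv3d(cube)
        b, c, d, h, w = x.shape
        x = x.view(b, c * d, h, w)        # fold spectral axis into channels
        x = self.conv2d(x).flatten(1)
        return self.head(x)

patch = torch.randn(4, 1, 30, 17, 17)     # dummy healthy/infested patches
print(SpectralSpatialNet()(patch).shape)  # torch.Size([4, 2])
```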

14 pages, 7209 KB  
Article
Detection and Early Warning of Duponchelia fovealis Zeller (Lepidoptera: Crambidae) Using an Automatic Monitoring System
by Edgar Rodríguez-Vázquez, Agustín Hernández-Juárez, Audberto Reyes-Rosas, Carlos Patricio Illescas-Riquelme and Francisco Marcelo Lara-Viveros
AgriEngineering 2024, 6(4), 3785-3798; https://doi.org/10.3390/agriengineering6040216 - 18 Oct 2024
Viewed by 1657
Abstract
In traditional pest monitoring, specimens are manually inspected, identified, and counted. These techniques can lead to poor data quality and hinder effective pest management decisions due to operational and economic limitations. This study aimed to develop an automatic detection and early warning system using the European Pepper Moth, Duponchelia fovealis (Lepidoptera: Crambidae), as a study model. A prototype water trap equipped with an infrared digital camera controlled using a microprocessor served as the attraction and capture device. Images captured by the system in the laboratory were processed to detect objects. Subsequently, these objects were labeled, and size and shape features were extracted. A machine learning model was then trained to identify the number of insects present in the trap. The model achieved 99% accuracy in identifying target insects during validation with 30% of the data. Finally, the prototype with the trained model was deployed in the field for result confirmation.
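
The described pipeline (detect objects on the trap image, extract size and shape features, then classify) can be sketched roughly as below with OpenCV and scikit-learn; the threshold, features, and classifier are illustrative assumptions rather than the authors' exact implementation:

```python
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def blob_features(gray: np.ndarray) -> np.ndarray:
    """Segment dark objects on a light background and return size/shape descriptors."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    rows = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < 20:                                    # ignore specks of noise
            continue
        perimeter = cv2.arcLength(c, True)
        x, y, w, h = cv2.boundingRect(c)
        circularity = 4 * np.pi * area / (perimeter ** 2 + 1e-9)
        rows.append([area, perimeter, circularity, w / float(h)])
    return np.array(rows)

# Synthetic stand-in for a trap photo: white background with two dark blobs.
img = np.full((200, 200), 255, np.uint8)
cv2.circle(img, (60, 60), 15, 0, -1)
cv2.ellipse(img, (140, 120), (25, 10), 30, 0, 360, 0, -1)
features = blob_features(img)
print(features.shape)                                    # (2, 4)

# A classifier like this would then be trained on labelled features
# (1 = target insect, 0 = other object); the labels here are placeholders.
clf = RandomForestClassifier(n_estimators=100).fit(features, [1, 0])
print(clf.predict(features))
```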

21 pages, 21825 KB  
Article
A Time-Frequency Domain Mixed Attention-Based Approach for Classifying Wood-Boring Insect Feeding Vibration Signals Using a Deep Learning Model
by Weizheng Jiang, Zhibo Chen and Haiyan Zhang
Insects 2024, 15(4), 282; https://doi.org/10.3390/insects15040282 - 16 Apr 2024
Cited by 3 | Viewed by 2463
Abstract
Wood borers, such as the emerald ash borer and Holcocerus insularis Staudinger, pose a significant threat to forest ecosystems, causing damage to trees and impacting biodiversity. This paper proposes a neural network for detecting and classifying wood borers based on their feeding vibration signals. We utilize piezoelectric ceramic sensors to collect drilling vibration signals and introduce a novel convolutional neural network (CNN) architecture named the Residual Mixed Domain Attention Module Network (RMAMNet). The RMAMNet employs both channel-domain attention and time-domain attention mechanisms to enhance the network’s capability to learn meaningful features. The proposed system outperforms established networks, such as ResNet and VGG, achieving a recognition accuracy of 95.34% and an F1 score of 0.95. Our findings demonstrate that RMAMNet significantly improves the accuracy of wood borer classification, indicating its potential for effective pest monitoring and classification tasks. This study provides a new perspective and technical support for the automatic detection, classification, and early warning of wood-boring pests in forestry.
(This article belongs to the Special Issue Monitoring and Management of Invasive Insect Pests)
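
The mixed channel-domain and time-domain attention described for RMAMNet can be illustrated on a 1D vibration feature map as below; this is a generic sketch of the two attention types, not the published architecture:

```python
import torch
import torch.nn as nn

class MixedAttention1D(nn.Module):
    """Channel attention (which filters matter) + temporal attention (which moments matter)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),
            nn.Conv1d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.temporal_gate = nn.Sequential(
            nn.Conv1d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)   # reweight feature channels
        x = x * self.temporal_gate(x)  # reweight time steps
        return x

feeding_features = torch.randn(2, 32, 1024)          # (batch, channels, time) dummy features
print(MixedAttention1D(32)(feeding_features).shape)  # torch.Size([2, 32, 1024])
```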

24 pages, 5640 KB  
Article
Sustainable Coffee Leaf Diagnosis: A Deep Knowledgeable Meta-Learning Approach
by Abdullah Ali Salamai and Waleed Tawfiq Al-Nami
Sustainability 2023, 15(24), 16791; https://doi.org/10.3390/su152416791 - 13 Dec 2023
Cited by 8 | Viewed by 1774
Abstract
Multi-task visual recognition plays a pivotal role in addressing the composite challenges encountered during the monitoring of crop health, pest infestations, and disease outbreaks in precision agriculture. Machine learning approaches have been revolutionizing the diagnosis of plant disease in recent years; however, they require a large amount of training data and suffer from limited generalizability for unseen data. This work introduces a novel knowledgeable meta-learning framework for the few-shot multi-task diagnosis of biotic stress in coffee leaves. A mixed vision transformer (MVT) learner is presented to generate mixed contextual attention maps from discriminatory latent representations between support and query images to give more emphasis to the biotic stress lesions in coffee leaves. Then, a knowledge distillation strategy is introduced to avoid catastrophic forgetting during inner-loop training. An adaptive meta-training rule is designed to automatically update the parameters of the meta-learner according to the current task. The competitive results from exhaustive experiments on public datasets demonstrate the superior performance of our approach over traditional methods. This not only enhances the accuracy and efficiency of coffee leaf disease diagnosis but also contributes to reducing the environmental footprint through optimizing resource utilization and minimizing the need for chemical treatments, hence aligning with broader sustainability goals in agriculture.
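
A knowledge distillation strategy of the kind used to counter forgetting during inner-loop training typically blends hard-label cross-entropy with a temperature-softened KL term against a frozen teacher. A minimal sketch with illustrative temperature and weighting (not the paper's settings):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with soft-label KL against the teacher."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1 - alpha) * soft

student = torch.randn(8, 5)              # logits over 5 biotic-stress classes (dummy)
teacher = torch.randn(8, 5)              # frozen learner from the previous task (dummy)
labels = torch.randint(0, 5, (8,))
print(distillation_loss(student, teacher, labels))
```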

19 pages, 9708 KB  
Article
Enhancement for Greenhouse Sustainability Using Tomato Disease Image Classification System Based on Intelligent Complex Controller
by Taehyun Kim, Hansol Park, Jeonghyun Baek, Manjung Kim, Donghyeok Im, Hyoseong Park, Dongil Shin and Dongkyoo Shin
Sustainability 2023, 15(23), 16220; https://doi.org/10.3390/su152316220 - 22 Nov 2023
Cited by 2 | Viewed by 2012
Abstract
Monitoring the occurrence of plant diseases and pests such as fungi, viruses, nematodes, and insects in crops and collecting environmental information such as temperature, humidity, and light levels is crucial for sustainable greenhouse management. It is essential to control the environment through measures like adjusting vents, using shade nets, and employing screen controls to achieve optimal growing conditions, ensuring the sustainability of the greenhouse. In this paper, an artificial intelligence-based integrated environmental control system was developed to enhance the sustainability of the greenhouse. The system automatically acquires images of crop diseases and augments the disease image information according to environmental data, utilizing deep-learning models for classification and feedback. Specifically, the data are augmented by measuring scattered light within the greenhouse, compensating for potential losses in the images due to variations in light intensity. This augmentation addresses recognition issues stemming from data imbalances. The data are classified using the Faster R-CNN model, and the accuracy results are then compared. This comparison enables feedback for accurate image loss correction based on reflectance, ultimately improving recognition rates. The experimental results demonstrated 94% accuracy in classifying diseases, showcasing a high level of accuracy in real greenhouse conditions and indicating the potential utility of employing optimal pest control strategies for greenhouse management. In contrast to most existing research, which focuses on utilizing extensive training and resources to enhance networks and optimize loss functions, this study demonstrated the model's performance gains by analyzing video preprocessing and data augmented with environmental information. This directs attention towards quality improvement using available information rather than reliance on massive data collection and training. Such an approach allows the optimal pest control timing and methods for different types of plant diseases and pests to be obtained, even in underdeveloped greenhouse environments, without the assistance of greenhouse experts and using minimal resources. The implementation of such a system will reduce the labor required for greenhouse management, decrease pesticide usage, and improve productivity.
(This article belongs to the Special Issue Intelligent Agricultural Technologies and Corresponding Equipment)

21 pages, 8439 KB  
Article
A New Remote Sensing Service Mode for Agricultural Production and Management Based on Satellite–Air–Ground Spatiotemporal Monitoring
by Wenjie Li, Wen Dong, Xin Zhang and Jinzhong Zhang
Agriculture 2023, 13(11), 2063; https://doi.org/10.3390/agriculture13112063 - 27 Oct 2023
Cited by 6 | Viewed by 3281
Abstract
Remote sensing, the Internet, the Internet of Things (IoT), artificial intelligence, and other technologies have become the core elements of modern agriculture and smart farming. Agricultural production and management modes guided by data and services have become a cutting-edge carrier of agricultural information monitoring, which promotes the transformation of the intelligent computing of remote sensing big data and agricultural intensive management from theory to practical applications. In this paper, the main research objective is to construct a new high-frequency agricultural production monitoring and intensive sharing service and management mode, based on the three dimensions of space, time, and attributes, that includes crop recognition, growth monitoring, yield estimation, crop disease or pest monitoring, variable-rate prescription, agricultural machinery operation, and other automatic agricultural intelligent computing applications. The platforms supported by this mode include a data management and agricultural information production subsystem, an agricultural monitoring and macro-management subsystem (province and county scales), and two mobile terminal applications (APPs). Taking Shandong as the study area of the application case, the technical framework of the system and its mobile terminals were systematically elaborated at the province and county levels, which represented macro-management and precise control of agricultural production, respectively. The automatic intelligent computing mode of satellite–air–ground spatiotemporal collaboration that we proposed fully couples data obtained from satellites, unmanned aerial vehicles (UAVs), and IoT technologies, which can provide the accurate and timely monitoring of agricultural conditions and real-time guidance for agricultural machinery scheduling throughout the entire process of agricultural cultivation, planting, management, and harvest; the area accuracy of all obtained agricultural information products is above 90%. This paper demonstrates the necessity of customizable product and service research in agricultural intelligent computing, and the proposed practical mode can provide support for governments to participate in agricultural macro-management and decision making, which is of great significance for smart farming development and food security.
(This article belongs to the Special Issue Agricultural Automation in Smart Farming)

17 pages, 9687 KB  
Article
An Approach for Plant Leaf Image Segmentation Based on YOLOV8 and the Improved DEEPLABV3+
by Tingting Yang, Suyin Zhou, Aijun Xu, Junhua Ye and Jianxin Yin
Plants 2023, 12(19), 3438; https://doi.org/10.3390/plants12193438 - 29 Sep 2023
Cited by 36 | Viewed by 8574
Abstract
Accurate plant leaf image segmentation provides an effective basis for automatic leaf area estimation, species identification, and plant disease and pest monitoring. In this paper, based on our previous publicly available leaf dataset, an approach that fuses YOLOv8 and improved DeepLabv3+ is proposed for precise image segmentation of individual leaves. First, the YOLOv8-based leaf object detection algorithm was introduced to reduce the interference of backgrounds on the second-stage leaf segmentation task. Then, an improved DeepLabv3+ leaf segmentation method was proposed to more efficiently capture bar leaves and slender petioles. Densely connected atrous spatial pyramid pooling (DenseASPP) was used to replace the ASPP module, and the strip pooling (SP) strategy was simultaneously inserted, which enabled the backbone network to effectively capture long-distance dependencies. The experimental results show that our proposed method, which combines YOLOv8 and the improved DeepLabv3+, achieves a 90.8% mean intersection over union (mIoU) for leaf segmentation on our public leaf dataset. When compared with the fully convolutional neural network (FCN), lite-reduced atrous spatial pyramid pooling (LR-ASPP), pyramid scene parsing network (PSPnet), U-Net, DeepLabv3, and DeepLabv3+, the proposed method improves the mIoU of leaves by 8.2, 8.4, 3.7, 4.6, 4.4, and 2.5 percentage points, respectively. Experimental results show that the performance of our method is significantly improved compared with the classical segmentation methods. The proposed method can thus effectively support the development of smart agroforestry.
(This article belongs to the Section Plant Modeling)
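
Strip pooling, inserted here to help the backbone capture long, thin structures such as slender petioles, pools along entire rows and columns rather than square windows. A minimal PyTorch sketch (channel count and feature-map size illustrative):

```python
import torch
import torch.nn as nn

class StripPooling(nn.Module):
    """Pool along full rows and full columns to capture long-range, strip-like context."""
    def __init__(self, channels: int):
        super().__init__()
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # keep height, collapse width
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # collapse height, keep width
        self.conv_h = nn.Conv2d(channels, channels, kernel_size=(3, 1), padding=(1, 0))
        self.conv_w = nn.Conv2d(channels, channels, kernel_size=(1, 3), padding=(0, 1))
        self.fuse = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.conv_h(self.pool_h(x)).expand_as(x)    # broadcast column statistics
        w = self.conv_w(self.pool_w(x)).expand_as(x)    # broadcast row statistics
        return x * self.fuse(h + w)                     # gate features with strip context

feats = torch.randn(1, 64, 128, 128)
print(StripPooling(64)(feats).shape)                    # torch.Size([1, 64, 128, 128])
```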

22 pages, 13762 KB  
Article
Removing Human Bottlenecks in Bird Classification Using Camera Trap Images and Deep Learning
by Carl Chalmers, Paul Fergus, Serge Wich, Steven N. Longmore, Naomi Davies Walsh, Philip A. Stephens, Chris Sutherland, Naomi Matthews, Jens Mudde and Amira Nuseibeh
Remote Sens. 2023, 15(10), 2638; https://doi.org/10.3390/rs15102638 - 18 May 2023
Cited by 20 | Viewed by 3981
Abstract
Birds are important indicators for monitoring both biodiversity and habitat health; they also play a crucial role in ecosystem management. Declines in bird populations can result in reduced ecosystem services, including seed dispersal, pollination and pest control. Accurate and long-term monitoring of birds to identify species of concern while measuring the success of conservation interventions is essential for ecologists. However, monitoring is time-consuming, costly and often difficult to manage over long durations and at meaningfully large spatial scales. Technologies such as camera traps, acoustic monitors and drones provide methods for non-invasive monitoring. There are two main problems with using camera traps for monitoring: (a) cameras generate many images, making it difficult to process and analyse the data in a timely manner; and (b) the high proportion of false positives hinders the processing and analysis for reporting. In this paper, we outline an approach for overcoming these issues by utilising deep learning for real-time classification of bird species and automated removal of false positives in camera trap data. Images are classified in real time using a Faster-RCNN architecture. Images are transmitted from 3/4G cameras and processed using graphics processing units (GPUs) to provide conservationists with key detection metrics, thereby removing the requirement for manual observations. Our models achieved an average sensitivity of 88.79%, a specificity of 98.16% and an accuracy of 96.71%. This demonstrates the effectiveness of using deep learning for automatic bird monitoring.
(This article belongs to the Special Issue Remote Sensing Applications to Ecology: Opportunities and Challenges)
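
The real-time classification with automated false-positive removal rests on confidence-thresholded Faster R-CNN detections. A minimal inference sketch using torchvision's generic COCO-pretrained detector (not the authors' fine-tuned bird model), with a hypothetical image path:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Generic COCO-pretrained Faster R-CNN; the authors' model is fine-tuned on bird species.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect(path: str, score_threshold: float = 0.8):
    """Run detection and keep only confident boxes, discarding likely false positives."""
    image = to_tensor(Image.open(path).convert("RGB"))
    with torch.no_grad():
        out = model([image])[0]
    keep = out["scores"] >= score_threshold
    return out["boxes"][keep], out["labels"][keep], out["scores"][keep]

# boxes, labels, scores = detect("camera_trap_frame.jpg")  # hypothetical file name
```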

20 pages, 4137 KB  
Article
EfficientDet-4 Deep Neural Network-Based Remote Monitoring of Codling Moth Population for Early Damage Detection in Apple Orchard
by Dana Čirjak, Ivan Aleksi, Darija Lemic and Ivana Pajač Živković
Agriculture 2023, 13(5), 961; https://doi.org/10.3390/agriculture13050961 - 26 Apr 2023
Cited by 11 | Viewed by 3592
Abstract
Deep neural networks (DNNs) have recently been applied in many areas of agriculture, including pest monitoring. The codling moth is the most damaging apple pest, and the currently available methods for its monitoring are outdated and time-consuming. Therefore, the aim of this study was to develop an automatic monitoring system for the codling moth based on DNNs. The system consists of a smart trap and an analytical model. The smart trap enables data processing on-site and does not send the whole image to the user but only the detection results. Therefore, it does not consume much energy and is suitable for rural areas. For model development, a dataset of 430 sticky pad photos of codling moth was collected in three apple orchards. The photos were labelled, resulting in 8142 annotations of codling moths, 5458 of other insects, and 8177 of other objects. The results were statistically evaluated using the confusion matrix, and the developed model showed an accuracy of over 99% in detecting codling moths. This developed system contributes to automatic pest monitoring and sustainable apple production.
(This article belongs to the Special Issue Hardware and Software Support for Insect Pest Management)
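
The over-99% accuracy figure is read from a confusion matrix over the three annotation classes (codling moths, other insects, other objects). A minimal sketch of that computation with illustrative counts, not the study's actual matrix:

```python
import numpy as np

# Rows = true class, columns = predicted class; counts are illustrative only.
classes = ["codling moth", "other insect", "other object"]
cm = np.array([
    [8100,   25,   17],
    [  30, 5400,   28],
    [  12,   20, 8145],
])

accuracy = np.trace(cm) / cm.sum()            # correct predictions / all predictions
per_class_recall = np.diag(cm) / cm.sum(axis=1)
print(f"overall accuracy: {accuracy:.4f}")
for name, r in zip(classes, per_class_recall):
    print(f"recall for {name}: {r:.4f}")
```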

17 pages, 2468 KB  
Article
Automatic Detection of Moths (Lepidoptera) with a Funnel Trap Prototype
by Norbert Flórián, Júlia Katalin Jósvai, Zsolt Tóth, Veronika Gergócs, László Sipőcz, Miklós Tóth and Miklós Dombos
Insects 2023, 14(4), 381; https://doi.org/10.3390/insects14040381 - 13 Apr 2023
Cited by 9 | Viewed by 4172
Abstract
Monitoring insect populations is essential to optimise pest control with the correct protection timing and the avoidance of unnecessary insecticide use. Modern real-time monitoring practices use automatic insect traps, which are expected to be able to estimate the population sizes of pest animals with high species specificity. There are many solutions to overcome this challenge; however, only limited data are available on their accuracy under field conditions. This study presents an opto-electronic device prototype (ZooLog VARL) developed by us. A pilot field study evaluated the precision and accuracy of the data filtering using an artificial neural network (ANN) and the detection accuracy of the new probes. The prototype comprises a funnel trap, a sensor ring, and a data communication system. The main modification of the trap was a blow-off device that prevented the escape of flying insects from the funnel. These new prototypes were tested in the field during the summer and autumn of 2018, detecting the daily and monthly flight of six moth species (Agrotis segetum, Autographa gamma, Helicoverpa armigera, Cameraria ohridella, Grapholita funebrana, Grapholita molesta). The accuracy of the ANN was always higher than 60%; for species with larger body sizes, it reached 90%. The detection accuracy ranged from 84% to 92% on average. These probes detected the real-time catches of the moth species, so weekly and daily patterns of moth flight activity could be compared and displayed for the different species. This device solved the problem of multiple counting and achieved a high detection accuracy for the target species. ZooLog VARL probes provide real-time, time-series datasets for each monitored pest species. Further evaluation of the catching efficiency of the probes is needed. However, the prototype allows us to follow and model pest dynamics and may enable more precise forecasts of population outbreaks.
(This article belongs to the Section Insect Pest and Vector Management)
