The Applications of Deep Learning in Smart Agriculture

A special issue of Agronomy (ISSN 2073-4395). This special issue belongs to the section "Precision and Digital Agriculture".

Deadline for manuscript submissions: 31 July 2024 | Viewed by 11519

Special Issue Editors


Guest Editor
Department of Natural Resources and Agricultural Engineering, Agricultural University of Athens, 75 Iera Odos St., 11855 Athens, Greece
Interests: deep learning; computer vision; natural language processing; multimodal learning; self-supervised learning; domain adaptation

Guest Editor
Interdisciplinary Centre for Data and AI, School of Natural and Computing Sciences, University of Aberdeen, Aberdeen AB24 3FX, UK
Interests: deep learning; multimodal learning; capsule neural networks; self-supervised learning; domain adaptation; privacy-preserving technologies; efficient deep learning systems

Special Issue Information

Dear Colleagues,

Due to substantial population growth, ensuring the global availability of high-quality food without harming natural ecosystems has become a pressing concern, and agriculture is a critical field in this respect, with significant economic and environmental impact. Advancing toward smart agriculture has therefore become an unavoidable step. This means that emerging technologies should be integrated into important agricultural tasks (e.g., phenotyping, disease detection, yield prediction, harvesting, and spraying).

Many of these emerging technologies are related to deep learning, a field of artificial intelligence in which the relevant features of a decision or prediction problem are extracted automatically. The relationship between agriculture and deep learning has become rather promising in recent years; in particular, positive results have been reported with deep-learning-based techniques such as transfer learning, domain adaptation/generalization, transformer-based architectures, generative adversarial networks, knowledge distillation, and neural architecture search. These techniques, which directly improve the methods currently used in precision agriculture, could boost the value of many types of data: from images and videos to the texts found in regulatory documents, as well as tabular data containing vegetation indices across the growing season.

Thus, this Special Issue aims to provide a venue for papers at the intersection of the agricultural domain and deep-learning-based techniques.

Dr. Borja Espejo-García
Dr. Spyros Fountas
Dr. Georgios Leontidis
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agronomy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • smart farming
  • deep learning
  • precision agriculture
  • computer vision
  • natural language processing
  • machine learning
  • sensors
  • multi-modal information
  • farm machinery

Published Papers (9 papers)


Research

18 pages, 29742 KiB  
Article
Rice Counting and Localization in Unmanned Aerial Vehicle Imagery Using Enhanced Feature Fusion
by Mingwei Yao, Wei Li, Li Chen, Haojie Zou, Rui Zhang, Zijie Qiu, Sha Yang and Yue Shen
Agronomy 2024, 14(4), 868; https://doi.org/10.3390/agronomy14040868 - 21 Apr 2024
Viewed by 434
Abstract
In rice cultivation and breeding, obtaining accurate information on the quantity and spatial distribution of rice plants is crucial. However, traditional field sampling methods can only provide rough estimates of the plant count and fail to capture precise plant locations. To address these problems, this paper proposes P2PNet-EFF for the counting and localization of rice plants. Firstly, through the introduction of the enhanced feature fusion (EFF), the model improves its ability to integrate deep semantic information while preserving shallow spatial details. This allows the model to holistically analyze the morphology of plants rather than focusing solely on their central points, substantially reducing errors caused by leaf overlap. Secondly, by integrating efficient multi-scale attention (EMA) into the backbone, the model enhances its feature extraction capabilities and suppresses interference from similar backgrounds. Finally, to evaluate the effectiveness of the P2PNet-EFF method, we introduce the URCAL dataset for rice counting and localization, gathered using a UAV. This dataset consists of 365 high-resolution images and 173,352 point annotations. Experimental results on URCAL demonstrate that the proposed method achieves a 34.87% reduction in MAE and a 28.19% reduction in RMSE compared to the original P2PNet while increasing R2 by 3.03%. Furthermore, we conducted extensive experiments on three frequently used plant counting datasets. The results demonstrate the excellent performance of the proposed method. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
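The counting metrics reported above (MAE, RMSE, R2) are computed from per-image predicted and ground-truth plant counts. A minimal illustrative sketch (not the authors' code; the counts below are placeholder values):

```python
import numpy as np

def counting_metrics(pred_counts, true_counts):
    """MAE, RMSE and R^2 over per-image plant counts."""
    pred = np.asarray(pred_counts, dtype=float)
    true = np.asarray(true_counts, dtype=float)
    mae = np.mean(np.abs(pred - true))
    rmse = np.sqrt(np.mean((pred - true) ** 2))
    ss_res = np.sum((true - pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((true - true.mean()) ** 2)   # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, r2

# Hypothetical per-image counts for three images
mae, rmse, r2 = counting_metrics([102, 95, 110], [100, 98, 108])
```

Lower MAE/RMSE and higher R2 indicate better counting performance, which is the direction of the improvements reported for P2PNet-EFF.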

18 pages, 4951 KiB  
Article
Evaluation of the Potential of Using Machine Learning and the Savitzky–Golay Filter to Estimate the Daily Soil Temperature in Gully Regions of the Chinese Loess Plateau
by Wei Deng, Dengfeng Liu, Fengnian Guo, Lianpeng Zhang, Lan Ma, Qiang Huang, Qiang Li, Guanghui Ming and Xianmeng Meng
Agronomy 2024, 14(4), 703; https://doi.org/10.3390/agronomy14040703 - 28 Mar 2024
Viewed by 711
Abstract
Soil temperature directly affects the germination of seeds and the growth of crops. In order to predict soil temperature accurately, this study used RF and MLP models to simulate shallow soil temperature; the shallow soil temperature with the best simulation performance was then used to predict the deep soil temperature. The models were forced by combinations of environmental factors, including daily air temperature (Tair), water vapor pressure (Pw), net radiation (Rn), and soil moisture (VWC), observed in the Hejiashan watershed on the Loess Plateau in China. The results showed that the accuracy of the proposed model for predicting deep soil temperature is higher than that of using environmental factors directly. In the testing data, the MAE ranged from 1.158 to 1.610 °C, the RMSE from 1.449 to 2.088 °C, the R2 from 0.665 to 0.928, and the KGE from 0.708 to 0.885 at different depths. The study not only provides a critical reference for predicting soil temperature but also helps practitioners better carry out agricultural production activities. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
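The general pipeline described above, smoothing a driver variable with a Savitzky–Golay filter before feeding it to a machine learning regressor, can be sketched as follows. This is an illustrative sketch on synthetic data, not the paper's code or the Hejiashan observations:

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
days = np.arange(365)

# Synthetic daily air temperature with noise (placeholder for observed Tair)
tair = 15 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, days.size)
# Savitzky-Golay smoothing: 31-day window, cubic polynomial
tair_smooth = savgol_filter(tair, window_length=31, polyorder=3)

# Synthetic shallow soil temperature lagging the air temperature (illustrative only)
tsoil = 14 + 9 * np.sin(2 * np.pi * (days - 10) / 365) + rng.normal(0, 1, days.size)

# Random forest regressor mapping the smoothed driver (plus day index) to soil temperature
X = np.column_stack([tair_smooth, days])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:300], tsoil[:300])
pred = model.predict(X[300:])
```

The smoothing step removes day-to-day noise so the regressor learns the seasonal relationship rather than the measurement jitter.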

16 pages, 5704 KiB  
Article
Evaluating Time-Series Prediction of Temperature, Relative Humidity, and CO2 in the Greenhouse with Transformer-Based and RNN-Based Models
by Ju Yeon Ahn, Yoel Kim, Hyeonji Park, Soo Hyun Park and Hyun Kwon Suh
Agronomy 2024, 14(3), 417; https://doi.org/10.3390/agronomy14030417 - 21 Feb 2024
Viewed by 597
Abstract
In greenhouses, plant growth is directly influenced by internal environmental conditions, and therefore requires continuous management and proper environmental control. Inadequate environmental conditions make plants vulnerable to pests and diseases, lower yields, and cause impaired growth and development. Previous studies have explored the combination of greenhouse actuator control history with internal and external environmental data to enhance prediction accuracy, using deep learning-based models such as RNNs and LSTMs. In recent years, transformer-based models and RNN-based models have shown good performance in various domains. However, their applications for time-series forecasting in a greenhouse environment remain unexplored. Therefore, the objective of this study was to evaluate the prediction performance of temperature, relative humidity (RH), and CO2 concentration in a greenhouse after 1 and 3 h, using a transformer-based model (Autoformer), variants of two RNN models (LSTM and SegRNN), and a simple linear model (DLinear). The performance of these four models was compared to assess whether the latest state-of-the-art (SOTA) models, Autoformer and SegRNN, are as effective as DLinear and LSTM in predicting greenhouse environments. The analysis was based on four external climate data samples, three internal data samples, and six actuator data samples. Overall, DLinear and SegRNN consistently outperformed Autoformer and LSTM. Both DLinear and SegRNN performed well in general, but were not as strong in predicting CO2 concentration. SegRNN outperformed DLinear in CO2 predictions, while showing similar performance in temperature and RH prediction. The results of this study do not provide a definitive conclusion that transformer-based models, such as Autoformer, are inferior to linear-based models like DLinear or certain RNN-based models like SegRNN in predicting time series for greenhouse environments. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
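DLinear, one of the baselines evaluated above, decomposes a series into a moving-average trend plus a remainder and fits one linear map per component from a lookback window to the forecast horizon. A self-contained numpy approximation on a synthetic greenhouse temperature trace (illustrative only; the real model is trained with gradient descent on multivariate data):

```python
import numpy as np

def moving_average(x, k=25):
    """Trend component via an edge-padded moving average (DLinear-style decomposition)."""
    pad_l, pad_r = k // 2, k - 1 - k // 2
    xp = np.pad(x, (pad_l, pad_r), mode="edge")
    return np.convolve(xp, np.ones(k) / k, mode="valid")

def make_windows(x, lookback, horizon):
    """Pairs of (lookback window, next-horizon values)."""
    n = len(x) - lookback - horizon + 1
    X = np.stack([x[i:i + lookback] for i in range(n)])
    Y = np.stack([x[i + lookback:i + lookback + horizon] for i in range(n)])
    return X, Y

def dlinear_forecast(series, lookback=48, horizon=3):
    """Fit one linear map per component (trend + remainder), forecast from the last window."""
    trend = moving_average(series)
    forecast = np.zeros(horizon)
    for comp in (trend, series - trend):
        X, Y = make_windows(comp, lookback, horizon)
        W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # least-squares "linear layer"
        forecast += X[-1] @ W
    return forecast

# Synthetic hourly greenhouse temperature with a 24 h cycle (not the paper's data)
t = np.arange(500)
temp = 22 + 4 * np.sin(2 * np.pi * t / 24)
pred = dlinear_forecast(temp)  # 3-step-ahead forecast, as in the 3 h setting above
```

The simplicity of this per-component linear mapping is precisely why DLinear is such a strong baseline against heavier transformer architectures.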

22 pages, 214283 KiB  
Article
ESG-YOLO: A Method for Detecting Male Tassels and Assessing Density of Maize in the Field
by Wendi Wu, Jianhua Zhang, Guomin Zhou, Yuhang Zhang, Jian Wang and Lin Hu
Agronomy 2024, 14(2), 241; https://doi.org/10.3390/agronomy14020241 - 24 Jan 2024
Viewed by 1003
Abstract
The intelligent acquisition of phenotypic information on male tassels is critical for maize growth and yield assessment. To realize accurate detection and density assessment of maize male tassels in complex field environments, this study used a UAV to collect images of maize male tassels under different environmental factors in the experimental field and then constructed the ESG-YOLO detection model based on YOLOv7 by replacing the original SiLU activation function with GELU and by adding a dual ECA attention mechanism and an SPD-Conv module. When identifying and detecting male tassels, the model reached a mean average precision (mAP) of 93.1%, 2.3 percentage points higher than the YOLOv7 baseline. The model excels at detecting small targets in low-resolution images, allowing maize male tassel density to be obtained quickly and intuitively from automatic identification surveys. It provides an effective method for high-precision, high-efficiency identification of maize male tassel phenotypes in the field, with application value for assessing maize growth potential, yield, and density. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)

23 pages, 48270 KiB  
Article
Improved YOLOv7-Tiny Complex Environment Citrus Detection Based on Lightweighting
by Bo Gu, Changji Wen, Xuanzhi Liu, Yingjian Hou, Yuanhui Hu and Hengqiang Su
Agronomy 2023, 13(11), 2667; https://doi.org/10.3390/agronomy13112667 - 24 Oct 2023
Cited by 4 | Viewed by 1742
Abstract
In complex citrus orchard environments, light changes, branch shading, and fruit overlapping impact citrus detection accuracy. This paper proposes YOLO-DCA, a citrus detection model for complex environments based on the YOLOv7-tiny model. We used depthwise separable convolution (DWConv) to replace the ordinary convolution in ELAN, reducing the number of model parameters; we embedded coordinate attention (CA) into the convolution to form coordinate attention convolution (CAConv), which replaces the ordinary convolution of the neck network; and we replaced the original detection head with a dynamic detection head. We trained and evaluated the model on a homemade citrus dataset. The model size is 4.5 MB, the number of parameters is 2.1 M, the mAP is 96.98%, and the detection time for a single image is 5.9 ms, which compares favorably with similar models. In application tests, it detects citrus well in occlusion, light-change, and motion-change scenes. The model offers high detection accuracy, a small footprint, easy deployment, and strong robustness, which can help citrus-picking robots and improve their level of intelligence. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)

12 pages, 9590 KiB  
Article
Real-Time Joint-Stem Prediction for Agricultural Robots in Grasslands Using Multi-Task Learning
by Jiahao Li, Ronja Güldenring and Lazaros Nalpantidis
Agronomy 2023, 13(9), 2365; https://doi.org/10.3390/agronomy13092365 - 12 Sep 2023
Cited by 1 | Viewed by 1109
Abstract
Autonomous weeding robots need to accurately detect the joint stem of grassland weeds in order to control those weeds in an effective and energy-efficient manner. In this work, keypoints on joint stems and bounding boxes around weeds in grasslands are detected jointly using multi-task learning. We compare a two-stage, heatmap-based architecture to a single-stage, regression-based architecture—both based on the popular YOLOv5 object detector. Our results show that introducing joint-stem detection as a second task boosts the individual weed detection performance in both architectures. Furthermore, the single-stage architecture clearly outperforms its competitors with an OKS of 56.3 in joint-stem detection while also achieving real-time performance of 12.2 FPS on Nvidia Jetson NX, suitable for agricultural robots. Finally, we make the newly created joint-stem ground-truth annotations publicly available for the relevant research community. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
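The OKS metric cited above scores keypoint predictions by a distance-weighted similarity normalized by object scale. A minimal single-object sketch following the COCO-style definition (simplified; the per-keypoint constant `k` and the visibility handling are assumptions, not the authors' exact implementation):

```python
import numpy as np

def oks(pred_kpts, gt_kpts, area, k=0.1):
    """Object Keypoint Similarity for one object.

    pred_kpts, gt_kpts: (N, 2) arrays of (x, y) keypoints;
    area: object scale (e.g., bounding-box area);
    k: per-keypoint falloff constant (dataset-specific in COCO).
    """
    d2 = np.sum((np.asarray(pred_kpts, float) - np.asarray(gt_kpts, float)) ** 2, axis=1)
    # Each keypoint contributes exp(-d^2 / (2 * s^2 * k^2)), with s^2 = area
    return float(np.mean(np.exp(-d2 / (2 * area * k ** 2))))

gt = np.array([[10.0, 20.0], [30.0, 40.0]])
perfect = oks(gt, gt, area=100.0)        # exact match -> 1.0
shifted = oks(gt + 1.0, gt, area=100.0)  # small offset -> below 1.0
```

A higher OKS means keypoints land closer to the annotated joint stems relative to the weed's size.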

20 pages, 4363 KiB  
Article
Deep-Learning-Based Rice Disease and Insect Pest Detection on a Mobile Phone
by Jizhong Deng, Chang Yang, Kanghua Huang, Luocheng Lei, Jiahang Ye, Wen Zeng, Jianling Zhang, Yubin Lan and Yali Zhang
Agronomy 2023, 13(8), 2139; https://doi.org/10.3390/agronomy13082139 - 15 Aug 2023
Cited by 2 | Viewed by 2013
Abstract
The realization that mobile phones can detect rice diseases and insect pests not only solves the problems of low efficiency and poor accuracy in manual detection and reporting, but also helps farmers detect and control them in the field in a timely fashion, thereby ensuring the quality of rice grains. This study examined two improved detection models, Improved YOLOv5s and Improved YOLOv7-tiny, for detecting six high-frequency diseases and insect pests, both built on lightweight object detection networks. Improved YOLOv5s incorporated the Ghost module to reduce computation and optimize the model structure, while Improved YOLOv7-tiny incorporated the Convolutional Block Attention Module (CBAM) and the SIoU loss to improve model learning ability and accuracy. First, we evaluated and analyzed the detection accuracy and operational efficiency of the models. We then deployed the two proposed methods to a mobile phone and designed an application to further verify their practicality for detecting rice diseases and insect pests. The results showed that Improved YOLOv5s achieved the highest F1-score of 0.931, 0.961 in mean average precision (mAP@0.5), and 0.648 in mAP@0.5:0.9. It also reduced network parameters, model size, and floating point operations (FLOPs) by 47.5%, 45.7%, and 48.7%, respectively, and increased model inference speed by 38.6% compared with the original YOLOv5s. Improved YOLOv7-tiny outperformed the original YOLOv7-tiny in detection accuracy, second only to Improved YOLOv5s. Probability heat maps of the detection results showed that Improved YOLOv5s performed better on large target areas of rice diseases and insect pests, while Improved YOLOv7-tiny was more accurate on small target areas. On the mobile phone platform, the precision and recall of Improved YOLOv5s under FP16 precision were 0.925 and 0.939, and the inference speed was 374 ms/frame, which was superior to Improved YOLOv7-tiny. Both improved models realized accurate identification of rice diseases and insect pests, and the mobile phone application built on them provides a reference for fast and efficient field diagnosis. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
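The F1-score cited above is the harmonic mean of precision and recall; for instance, the FP16 mobile-platform precision and recall reported in the abstract combine as follows (a generic formula, not the authors' evaluation code):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# FP16 mobile-platform figures from the abstract above
f1_mobile = f1_score(0.925, 0.939)  # roughly 0.93
```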

19 pages, 10404 KiB  
Article
Improved U-Net for Growth Stage Recognition of In-Field Maize
by Tianyu Wan, Yuan Rao, Xiu Jin, Fengyi Wang, Tong Zhang, Yali Shu and Shaowen Li
Agronomy 2023, 13(6), 1523; https://doi.org/10.3390/agronomy13061523 - 31 May 2023
Cited by 3 | Viewed by 1383
Abstract
Precise recognition of maize growth stages in the field is one of the critical steps in conducting precision irrigation and crop growth evaluation. However, due to the ever-changing environmental factors and maize growth characteristics, traditional recognition methods usually suffer from limitations in recognizing different growth stages. For the purpose of tackling these issues, this study proposed an improved U-net by first using a cascade convolution-based network as the encoder with a strategy for backbone network replacement to optimize feature extraction and reuse. Secondly, three attention mechanism modules have been introduced to upgrade the decoder part of the original U-net, which highlighted critical regions and extracted more discriminative features of maize. Subsequently, a dilation path of the improved U-net was constructed by integrating dilated convolution layers using a multi-scale feature fusion approach to preserve the detailed spatial information of in-field maize. Finally, the improved U-net has been applied to recognize different growth stages of maize in the field. The results clearly demonstrated the superior ability of the improved U-net to precisely segment and recognize maize growth stage from in-field images. Specifically, the semantic segmentation network achieved a mean intersection over union (mIoU) of 94.51% and a mean pixel accuracy (mPA) of 96.93% in recognizing the maize growth stage with only 39.08 MB of parameters. In conclusion, the good trade-offs made in terms of accuracy and parameter number demonstrated that this study could lay a good foundation for implementing accurate maize growth stage recognition and long-term automatic growth monitoring. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
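The segmentation metrics reported above, mean intersection over union (mIoU) and mean pixel accuracy (mPA), follow from the class confusion matrix of predicted versus ground-truth label maps. A minimal sketch (not the authors' code; the tiny label maps are placeholders):

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """mIoU and mean pixel accuracy from flat integer label maps."""
    pred = np.asarray(pred).ravel()
    gt = np.asarray(gt).ravel()
    # Confusion matrix: rows = ground truth, columns = prediction
    cm = np.bincount(num_classes * gt + pred,
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(cm).astype(float)
    iou = tp / (cm.sum(axis=0) + cm.sum(axis=1) - tp)  # per-class IoU
    pa = tp / cm.sum(axis=1)                           # per-class pixel accuracy
    return np.nanmean(iou), np.nanmean(pa)

# One pixel of class 1 mislabeled as class 0
miou, mpa = segmentation_metrics([0, 1, 0, 2], [0, 1, 1, 2], num_classes=3)
```

Classes absent from the ground truth produce NaN entries, which `nanmean` excludes from the averages.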

13 pages, 2627 KiB  
Article
Advancing Agricultural Predictions: A Deep Learning Approach to Estimating Bulb Weight Using Neural Prophet Model
by Wonseong Kim and Byung Min Soon
Agronomy 2023, 13(5), 1362; https://doi.org/10.3390/agronomy13051362 - 12 May 2023
Cited by 1 | Viewed by 1291
Abstract
A deep learning methodology was utilized to predict the bulb weights of garlic and onions in the Jeolla Province of Korea. The Korea Rural Economic Institute (KREI) operates the Outlook & Agricultural Statistics Information System (OASIS) platform, which provides actual measurements of garlic and onions. We trained the Neural Prophet (NP) lagged time-series model on these data. The NP model effectively handles lagged variables and their covariates by inserting a hidden layer. Our results indicate that the NP model predicted bulb weights with around 5% mean absolute error: gaps of 3.3 g and 4.7 g against average weights of 63.7 g and 129.9 g for garlic and onions, respectively. This experimental research was based on only three years of measurement data, so the gap between observed and predicted data can be reduced by accumulating more measurements in the future. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
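The core idea of a lagged neural time-series model, regressing the next value on a window of previous values through a hidden layer, can be sketched with a generic MLP on synthetic growth data. This is an illustrative stand-in, not the NeuralProphet library or the OASIS measurements:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(y, n_lags=3):
    """Build (lagged-window, next-value) training pairs from a series."""
    X = np.stack([y[i:i + n_lags] for i in range(len(y) - n_lags)])
    t = y[n_lags:]
    return X, t

# Synthetic cumulative bulb-growth measurements (illustrative; not the OASIS data)
rng = np.random.default_rng(1)
growth = np.cumsum(rng.uniform(2.0, 6.0, 40))

X, t = make_lagged(growth)
# One hidden layer over the lagged inputs, loosely mirroring the NP architecture
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:-5], t[:-5])
pred = model.predict(X[-5:])
```

As the abstract notes, with only a few seasons of measurements the fit is data-limited; more observations shrink the gap between predicted and observed weights.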
