Application of Deep Learning in Precise Analysis of Agricultural Crops

A special issue of Agronomy (ISSN 2073-4395). This special issue belongs to the section "Precision and Digital Agriculture".

Deadline for manuscript submissions: closed (20 August 2022) | Viewed by 8963

Special Issue Editors

Department of Computing and Mathematics, Manchester Metropolitan University, Manchester M1 5GD, UK
Interests: potential distribution; suitability; deep learning; remote sensing; hyperspectral
Institute of Aerospace Information Innovation, Chinese Academy of Sciences, 9 Dengzhuang South Road, Haidian District, Beijing 100094, China
Interests: quantitative remote sensing; plant diseases and pests

Special Issue Information

Dear Colleagues,

In the last few years, the exploitation of Big Data and Artificial Intelligence has led to significant advances in many applications, including smart agriculture. Deep learning, one of the most important techniques in AI, has attracted the interest of researchers around the world. Although substantial progress has been made in smart agriculture, many challenges remain open due to the complexity of field conditions and limited data resources.

This Special Issue aims to bring together the deep learning and agriculture communities. In it, we aim to exchange knowledge on any aspect of the application of deep learning to the precise analysis of agricultural crops, thus facilitating the adoption of these techniques and improving crop production in the field. This is an open call for papers soliciting original contributions that present recent findings in theory, methodology, and applications across smart agriculture, especially for crops and orchards.

Dr. Yue Shi
Dr. Huichun Ye
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agronomy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • computer vision (RGB images, spectral images, drone vision, remote sensing images)
  • precision agriculture
  • plant phenotyping
  • crop disease recognition
  • deep convolutional neural networks
  • spatiotemporal features
  • decision model
  • multimodal fusion
  • intelligent agriculture equipment

Published Papers (3 papers)

Research

21 pages, 23526 KiB  
Article
Automatic Estimation of Apple Orchard Blooming Levels Using the Improved YOLOv5
by Zhaoying Chen, Rui Su, Yuliang Wang, Guofang Chen, Zhiqiao Wang, Peijun Yin and Jinxing Wang
Agronomy 2022, 12(10), 2483; https://doi.org/10.3390/agronomy12102483 - 12 Oct 2022
Cited by 16 | Viewed by 2723
Abstract
The estimation of orchard blooming levels and the determination of peak blooming dates are very important because they determine the timing of orchard flower thinning and are essential for apple yield and quality. In this paper, we propose a method for global-level and block-level estimation of orchard blooming levels. The method consists of a deep learning-based apple flower detector, a blooming level estimator, and a peak blooming day estimator. The YOLOv5s model is used as the apple flower detector and is improved by adding a coordinate attention layer and a small-object detection layer and by replacing the model neck with a bidirectional feature pyramid network (BiFPN) structure, improving detection performance across different growth stages. The robustness of the apple flower detector under different lighting conditions and its generalization across years were tested using apple flower data collected in 2021–2022. The trained apple flower detector achieved a mean average precision of 77.5%. The blooming level estimator estimates the orchard blooming level from the proportion of flowers detected at different growth stages. Statistical results show that the blooming level estimator follows the trend of orchard blooming levels. The peak blooming day estimator successfully located the peak blooming time and provided information for flower thinning timing decisions. The method described in this paper provides orchardists with accurate information on apple flower growth status and is highly automated.
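
The proportion-based estimation described in this abstract lends itself to a compact illustration. The following is a hypothetical Python sketch of a blooming level estimator and peak-day finder; the stage labels and the dictionary-based interface are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the proportion-based blooming level estimator and
# peak-day finder described above. The stage labels and the dict-based
# interface are illustrative assumptions, not the authors' implementation.
from collections import Counter

def blooming_level(stage_labels):
    """Fraction of detected flowers that are fully open.

    stage_labels: growth-stage labels (e.g., "bud", "half_open",
    "fully_open") emitted by the flower detector for one image or block.
    """
    counts = Counter(stage_labels)
    total = sum(counts.values())
    return counts["fully_open"] / total if total else 0.0

def peak_blooming_day(daily_detections):
    """Return the date whose detections yield the highest blooming level.

    daily_detections: dict mapping a date string to that day's stage labels.
    """
    return max(daily_detections, key=lambda d: blooming_level(daily_detections[d]))

# Example: a three-day survey where April 14 has the largest share of
# fully open flowers and is therefore reported as the peak blooming day.
survey = {
    "2022-04-12": ["bud", "bud", "half_open", "fully_open"],
    "2022-04-14": ["half_open", "fully_open", "fully_open", "fully_open"],
    "2022-04-16": ["fully_open", "withering", "withering", "withering"],
}
print(peak_blooming_day(survey))  # -> 2022-04-14
```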

20 pages, 4404 KiB  
Article
Predicting Plant Growth and Development Using Time-Series Images
by Chunying Wang, Weiting Pan, Xubin Song, Haixia Yu, Junke Zhu, Ping Liu and Xiang Li
Agronomy 2022, 12(9), 2213; https://doi.org/10.3390/agronomy12092213 - 16 Sep 2022
Cited by 4 | Viewed by 3220
Abstract
Early prediction of plant growth and development is important for intelligent breeding, yet accurate prediction and simulation of plant phenotypes is difficult. In this work, a plant growth and development prediction model based on spatiotemporal long short-term memory (ST-LSTM) and memory in memory (MIM) networks was proposed to predict image sequences of future growth and development, including plant organs such as ears. A novel wheat growth and development dataset was also compiled. The model's performance was evaluated by computing the structural similarity index measure (SSIM), mean square error (MSE), and peak signal-to-noise ratio (PSNR) between predicted and real plant images. Moreover, the optimal number of time steps and the optimal interval between steps were determined for the proposed model on the wheat growth and development dataset. Under the optimal setting, the SSIM values surpassed 84% for all time steps, the MSE values were below 68 for all time steps with a mean of 46.11, and the mean PSNR was 30.67. When the number of prediction steps was set to eight, the model performed best on the public Panicoid Phenomap-1 dataset: the SSIM values surpassed 78% for all time steps, the MSE values were below 118 for all time steps with a mean of 77.78, and the mean PSNR was 29.03. The results showed a high degree of similarity between the predicted and real images of plant growth and development and verified the validity, reliability, and feasibility of the proposed model. The study shows the potential to provide the plant phenotyping community with an efficient tool for high-throughput phenotyping and prediction of future plant growth.
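
The evaluation protocol in this abstract (SSIM, MSE, and PSNR between predicted and real frames) can be sketched in a few lines. The snippet below is a minimal illustration assuming 8-bit RGB images stored as NumPy arrays and scikit-image 0.19 or later (for the channel_axis argument); it is not the authors' evaluation code.

```python
# Minimal sketch of the evaluation protocol above: SSIM, MSE, and PSNR
# between a predicted frame and the real frame. Assumes 8-bit RGB images
# as NumPy arrays; this is an illustration, not the authors' code.
import numpy as np
from skimage.metrics import structural_similarity

def evaluate_frame(pred: np.ndarray, real: np.ndarray) -> dict:
    pred64 = pred.astype(np.float64)
    real64 = real.astype(np.float64)
    mse = float(np.mean((pred64 - real64) ** 2))
    # PSNR in dB for 8-bit images; infinite when the frames are identical.
    psnr = 10.0 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
    ssim = structural_similarity(pred, real, data_range=255, channel_axis=-1)
    return {"MSE": mse, "PSNR": psnr, "SSIM": ssim}

# Usage: metrics = evaluate_frame(predicted_rgb, ground_truth_rgb)
```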

16 pages, 8418 KiB  
Article
Xiaomila Green Pepper Target Detection Method under Complex Environment Based on Improved YOLOv5s
by Fenghua Wang, Zhexing Sun, Yu Chen, Hao Zheng and Jin Jiang
Agronomy 2022, 12(6), 1477; https://doi.org/10.3390/agronomy12061477 - 20 Jun 2022
Cited by 12 | Viewed by 2549
Abstract
Real-time detection of fruit targets is a key technology for the Xiaomila green pepper (Capsicum frutescens L.) picking robot. The complex conditions of orchards make accurate detection difficult, and most existing deep learning detection algorithms cannot effectively detect Xiaomila green pepper fruits occluded by leaves, branches, and other fruits in natural scenes. In this paper, Red, Green, Blue (RGB) images of Xiaomila green pepper at the green and mature stages were collected under natural light conditions to build the dataset, and an improved YOLOv5s model (YOLOv5s-CFL) is proposed to improve the efficiency and adaptability of picking robots in the natural environment. First, the convolutional layer in the Cross Stage Partial (CSP) module is replaced with GhostConv, improving detection speed through a lightweight structure; detection accuracy is enhanced by adding a Coordinate Attention (CA) layer and replacing the Path Aggregation Network (PANet) in the neck with a Bidirectional Feature Pyramid Network (BiFPN). In the experiments, the YOLOv5s-CFL model was used to detect Xiaomila, and the detection results were analyzed and compared with those of the original YOLOv5s, YOLOv4-tiny, and YOLOv3-tiny models. With these improvements, the Mean Average Precision (mAP) of YOLOv5s-CFL is 1.1%, 6.8%, and 8.9% higher than that of the original YOLOv5s, YOLOv4-tiny, and YOLOv3-tiny, respectively. Compared with the original YOLOv5s model, the model size is reduced from 14.4 MB to 13.8 MB, and the computational cost is reduced from 15.8 to 13.9 GFLOPs. The experimental results indicate that the lightweight model improves detection accuracy, offers good real-time performance, and has promising application prospects in the field of picking robots.
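
The GhostConv substitution mentioned in this abstract follows the general GhostNet recipe: a standard convolution produces half the output channels, and a cheap depthwise convolution generates the remaining "ghost" features. A minimal PyTorch sketch of this idea is given below; kernel sizes and the SiLU activation are assumptions, not the authors' exact module.

```python
# Minimal PyTorch sketch of a GhostConv block in the spirit of GhostNet:
# a standard convolution produces half the output channels, and a cheap
# depthwise convolution generates the remaining "ghost" features.
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    def __init__(self, c_in: int, c_out: int, k: int = 1, s: int = 1):
        super().__init__()
        c_half = c_out // 2
        # Primary convolution: half the output channels at full cost.
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_half, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_half),
            nn.SiLU(),
        )
        # Cheap operation: depthwise 5x5 conv over the primary features.
        self.cheap = nn.Sequential(
            nn.Conv2d(c_half, c_half, 5, 1, 2, groups=c_half, bias=False),
            nn.BatchNorm2d(c_half),
            nn.SiLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

# Usage: GhostConv(64, 128)(torch.randn(1, 64, 80, 80)).shape -> (1, 128, 80, 80)
```

The design saves computation because the depthwise branch costs far less than a full convolution over the same number of channels, which is consistent with the model-size and GFLOPs reductions reported in the abstract.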
