Correction published on 11 May 2022, see Remote Sens. 2022, 14(10), 2313.
Article

Cotton Stand Counting from Unmanned Aerial System Imagery Using MobileNet and CenterNet Deep Learning Models

1 Department of Plant and Soil Science, Texas Tech University, Lubbock, TX 79409, USA
2 Department of Soil and Crop Sciences, Texas A&M AgriLife Research, Lubbock, TX 79403, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(14), 2822; https://doi.org/10.3390/rs13142822
Submission received: 1 June 2021 / Revised: 8 July 2021 / Accepted: 15 July 2021 / Published: 18 July 2021 / Corrected: 11 May 2022
(This article belongs to the Special Issue Deep Learning Methods for Crop Monitoring and Crop Yield Prediction)

Abstract

An accurate stand count is a prerequisite to determining the emergence rate, assessing seedling vigor, and facilitating site-specific management for optimal crop production. Traditional manual counting methods for stand assessment are labor intensive and time consuming for large-scale breeding programs or production field operations. This study aimed to apply two deep learning models, MobileNet and CenterNet, to detect and count cotton plants at the seedling stage in unmanned aerial system (UAS) images. The models were trained with two datasets containing 400 and 900 images with variations in plant size and soil background brightness, and their performance was assessed with two testing datasets of different dimensions: testing dataset 1 with 300 by 400 pixel images and testing dataset 2 with 250 by 1200 pixel images. Model validation showed a mean average precision (mAP) of 79% and an average recall (AR) of 73% for the CenterNet model, and 86% and 72%, respectively, for the MobileNet model with 900 training images. For both models, the accuracy of cotton plant detection and counting was higher with testing dataset 1. Overall, the CenterNet model trained with 900 images performed best for cotton plant detection and counting. The results also indicated that more training images are required when applying object detection models to images whose dimensions differ from those of the training datasets. With testing dataset 1, the CenterNet model trained on 900 images achieved a mean absolute percentage error (MAPE) of 0.07%, a coefficient of determination (R²) of 0.98, and a root mean squared error (RMSE) of 0.37 for cotton plant counting. Both MobileNet and CenterNet models have the potential to detect and count cotton plants accurately and in a timely manner from high-resolution UAS images at the seedling stage.
This study provides valuable information for selecting the right deep learning tools and the appropriate number of training images for object detection projects in agricultural applications.
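The counting accuracy above is reported with three standard regression metrics: MAPE, R², and RMSE. As a minimal sketch of how these are computed from per-image observed and predicted plant counts (not the authors' code; the counts below are hypothetical, for illustration only):

```python
import numpy as np

def count_metrics(observed, predicted):
    """Return (MAPE in %, R^2, RMSE) for observed vs. predicted counts."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    # Mean absolute percentage error, relative to the observed counts
    mape = np.mean(np.abs((observed - predicted) / observed)) * 100.0
    # Coefficient of determination: 1 - residual SS / total SS
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    # Root mean squared error
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return mape, r2, rmse

# Hypothetical per-image stand counts
obs = [52, 48, 60, 55, 47]
pred = [51, 48, 59, 56, 47]
mape, r2, rmse = count_metrics(obs, pred)
print(f"MAPE={mape:.2f}%  R2={r2:.3f}  RMSE={rmse:.2f}")
```

For counting tasks, MAPE is scale-free (useful when plot sizes vary), while RMSE stays in units of plants per image.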
Keywords: cotton stand count; unmanned aerial systems; deep learning; remote sensing; MobileNet; CenterNet; Python; Tensorflow
Share and Cite

MDPI and ACS Style

Lin, Z.; Guo, W. Cotton Stand Counting from Unmanned Aerial System Imagery Using MobileNet and CenterNet Deep Learning Models. Remote Sens. 2021, 13, 2822. https://doi.org/10.3390/rs13142822

