Open Access Article
Convolutional Neural Network-Based Remote Sensing Images Segmentation Method for Extracting Winter Wheat Spatial Distribution
by Chengming Zhang 1,2,*, Shuai Gao 3,*, Xiaoxia Yang 1,2, Feng Li 4, Maorui Yue 5, Yingjuan Han 6, Hui Zhao 6, Ya’nan Zhang 1 and Keqi Fan 1
1 College of Information Science and Engineering, Shandong Agricultural University, 61 Daizong Road, Taian 271000, China
2 Shandong Technology and Engineering Center for Digital Agriculture, 61 Daizong Road, Taian 271000, China
3 Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, 9 Dengzhuangnan Road, Beijing 100094, China
4 Shandong Climate Center, Mountain Road, Jinan 250001, China
5 Taian Agriculture Bureau, Naihe Road, Taian 271000, China
6 Key Laboratory for Meteorological Disaster Monitoring and Early Warning and Risk Management of Characteristic Agriculture in Arid Regions, CMA, 71 Xinchangxi Road, Yinchuan 750002, China
Abstract
When extracting the winter wheat spatial distribution from Gaofen-2 (GF-2) remote sensing images with a convolutional neural network (CNN), accurate identification of edge pixels is key to improving the accuracy of the result. In this paper, a CNN-based approach for accurately extracting the winter wheat spatial distribution is proposed. A hybrid structure convolutional neural network (HSCNN) was first constructed, consisting of two independent sub-networks of different depths: the deeper sub-network extracts pixels in the interior of winter wheat fields, whereas the shallower sub-network extracts pixels at the field edges. The model was trained by classification-based learning and then applied to image segmentation to obtain the winter wheat distribution. Experiments were performed on 39 GF-2 images of Shandong Province captured during 2017–2018, with SegNet and DeepLab as comparison models. The average accuracies of SegNet, DeepLab, and HSCNN were 0.765, 0.853, and 0.912, respectively. HSCNN matched DeepLab and outperformed SegNet in identifying interior pixels, and it identified edge pixels significantly better than both comparison models, demonstrating its superiority for extracting the winter wheat spatial distribution.
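To make the two-branch idea concrete, the sketch below shows one possible reading of the abstract's description: two independent sub-networks of different depths applied to the same GF-2 patch, one deeper branch aimed at interior pixels and one shallower branch aimed at edge pixels. This is a minimal illustration in PyTorch, not the authors' HSCNN; the layer counts, channel widths, class count, and the averaging used to combine the two branches are all assumptions, since the abstract does not specify them.

```python
# Minimal sketch of a hybrid two-branch segmentation network, assuming a
# 4-band GF-2 input. All architectural details here are illustrative
# assumptions, not the published HSCNN configuration.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """3x3 convolution + batch norm + ReLU, the basic unit of both branches."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class HybridCNNSketch(nn.Module):
    """Two independent sub-networks of different depths.

    The deeper branch targets interior pixels of winter wheat fields; the
    shallower branch targets edge pixels. Each branch emits per-pixel class
    scores, averaged here as a placeholder fusion rule.
    """

    def __init__(self, in_channels=4, num_classes=2):
        super().__init__()
        # Deeper sub-network: more convolutional stages, larger receptive field.
        self.deep_branch = nn.Sequential(
            conv_block(in_channels, 32),
            conv_block(32, 64),
            conv_block(64, 64),
            conv_block(64, 128),
            nn.Conv2d(128, num_classes, kernel_size=1),
        )
        # Shallower sub-network: fewer stages, preserving fine edge detail.
        self.shallow_branch = nn.Sequential(
            conv_block(in_channels, 32),
            conv_block(32, 64),
            nn.Conv2d(64, num_classes, kernel_size=1),
        )

    def forward(self, x):
        deep_scores = self.deep_branch(x)        # interior-oriented predictions
        shallow_scores = self.shallow_branch(x)  # edge-oriented predictions
        return (deep_scores + shallow_scores) / 2


if __name__ == "__main__":
    patch = torch.randn(1, 4, 256, 256)          # one 4-band 256x256 patch
    logits = HybridCNNSketch()(patch)
    print(logits.shape)                          # torch.Size([1, 2, 256, 256])
```

In practice, one would train such a model with a per-pixel classification loss (e.g., cross-entropy) against labeled wheat/non-wheat masks and take the arg-max over the class scores to produce the final distribution map; how HSCNN weights or selects between its two sub-networks at edge versus interior pixels is described in the full paper rather than the abstract.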