**4. Conclusions**

In this paper, we used Faster R-CNN to train a detection model on an open-access dataset, BUAA-FFPP60. After training and testing the model, we used it to detect chimneys in three high-resolution remote sensing images from Google Maps covering Tangshan city. The recall rates for working chimneys, non-working chimneys, working condensing towers, and non-working condensing towers are 77.27%, 76.62%, 100%, and 100%, respectively, but the corresponding precisions are only 40.47%, 40.48%, 21.73%, and 8.3%. To increase the precision of detection, two spatial analysis methods, DTM filtering and the main direction test, are introduced to remove false positive targets. The results show that more than 95% of the false chimneys can be removed, raising the final detection precisions to 94.44%, 93.65%, 83.3%, and 80%, respectively. These spatial analysis methods may also remove correctly detected chimneys; in our experiment, however, only three non-working chimneys were mistakenly removed. Therefore, DTM filtering and the main direction test are effective methods for removing false chimneys from Faster R-CNN detection results. Although the two spatial analysis methods are effective and robust at removing false positives, they do not reduce false negatives. To reduce false negatives and thereby increase the recall rate, we use an image enhancement method and a low Faster R-CNN detection threshold. We suggest that further studies focus on additional ways to reduce false negatives, such as introducing more pre-processing, constructing new neural network architectures, and improving the completeness of the training dataset.
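The interplay between the recall and precision figures above can be illustrated with the standard definitions, precision = TP/(TP+FP) and recall = TP/(TP+FN). The sketch below uses hypothetical detection counts chosen only to reproduce the reported working-chimney rates (77.27% recall, 40.47% precision before filtering, 94.44% precision after); the actual per-image counts are not restated in this section.

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Compute precision and recall from true positive (tp),
    false positive (fp), and false negative (fn) counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts before spatial filtering:
# 17 working chimneys detected out of 22 present, plus 25 false positives.
p_before, r_before = precision_recall(tp=17, fp=25, fn=5)   # ~0.4047, ~0.7727

# After DTM filtering and the main direction test remove most
# false positives (here, 24 of the 25), precision rises sharply
# while recall is unchanged, since only false positives are removed.
p_after, r_after = precision_recall(tp=17, fp=1, fn=5)      # ~0.9444, ~0.7727
```

This also makes concrete why the spatial analysis methods cannot improve recall: they only delete detections, so the false-negative count can never decrease.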

**Author Contributions:** Conceptualization, C.H. and Y.D.; methodology, F.Y. and Y.D.; software, G.L.; validation, G.L., Y.D., and F.Y.; formal analysis, C.H.; investigation, L.B.; resources, G.L. and F.Y.; data curation, G.L. and C.H.; writing—original draft preparation, G.L. and C.H.; writing—review and editing, Y.D.; visualization, G.L.; supervision, C.H.; project administration, C.H.; funding acquisition, C.H. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the National Key Research and Development Program of China, grant number 2018YFC0213600; the Strategic Priority Research Program of the Chinese Academy of Sciences, grant number XDA19030101; and the National Natural Science Foundation of China, grant number 41590853.

**Conflicts of Interest:** The authors declare no conflict of interest.
