Article
Peer-Review Record

Mean Inflection Point Distance: Artificial Intelligence Mapping Accuracy Evaluation Index—An Experimental Case Study of Building Extraction

Remote Sens. 2023, 15(7), 1848; https://doi.org/10.3390/rs15071848
by Ding Yu 1, Aihua Li 1, Jinrui Li 2, Yan Xu 3 and Yinping Long 4,*
Submission received: 21 February 2023 / Revised: 29 March 2023 / Accepted: 29 March 2023 / Published: 30 March 2023
(This article belongs to the Topic Geocomputation and Artificial Intelligence for Mapping)
(This article belongs to the Section AI Remote Sensing)

Round 1

Reviewer 1 Report

The authors proposed an inflection point matching algorithm for vector polygons and designed the mean inflection point distance (MPD) as a new segmentation evaluation method for artificial intelligence mapping. The experimental results show that MPD is more sensitive to boundary accuracy than IoU and Boundary IoU and can obtain an accurate error value. However, the novelty is limited and the presentation is unsatisfactory. Some questions are listed below.

 

1. MPD is not generalizable, because the experiments were conducted only to validate the accuracy of building extraction. Perhaps more experiments should be conducted to verify the advantages of MPD. Moreover, the discussion of results did not highlight the effects of using MPD.

 

2. The flowchart (Fig. 5) is not well organized. I suggest redrawing it as a logical and readable flowchart.

 

3. Many works use inflection point distance and angle to assist specific tasks. Please specify the differences and advantages of the proposed MPD.

 

4. There are many formatting and typing errors in this manuscript.

 

 

5. English writing should be thoroughly revised.

Author Response

Dear Editors and Reviewers,

Thank you very much for taking the time to review this manuscript. We really appreciate all your comments and kind suggestions! Please find our itemized responses in the attachment and our revisions in the resubmitted files.

 

Thanks again!

Author Response File: Author Response.pdf

Reviewer 2 Report

The paper entitled "Mean Inflection Point Distance: Artificial Intelligence Mapping Accuracy Evaluation Index" presents a new metric (MPD) to evaluate the goodness of semantic segmentation applied to the extraction of building boundaries from aerial images.

It is based on a reasoned, reference-supported questioning of the MIoU metric, focusing on the fact that the necessary attention is paid not to the external shape (perimeter) of the object to be extracted but to the ratio of true positives to false positives and false negatives.

The metric presented to measure the goodness of segmentation proposes measuring the difference between the inflection points of representative building vertices (buildings being the object of detection in the proof of concept), as is done in the field of medical image analysis.
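Although the paper's exact inflection-point matching algorithm is not reproduced in this review, a minimal illustrative sketch of the general idea (a mean distance between polygon vertices, here using a simple nearest-vertex matching and invented coordinates rather than the authors' method) could look like this:

```python
import numpy as np

def mean_vertex_distance(pred_vertices, gt_vertices):
    """Mean distance from each predicted vertex to its nearest ground-truth vertex.

    A stand-in for the paper's inflection-point matching, for illustration only.
    """
    pred = np.asarray(pred_vertices, dtype=float)  # (N, 2) predicted polygon vertices
    gt = np.asarray(gt_vertices, dtype=float)      # (M, 2) ground-truth polygon vertices
    # Pairwise Euclidean distances between every predicted and ground-truth vertex.
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    # For each predicted vertex, keep only the distance to the closest ground-truth vertex.
    return d.min(axis=1).mean()

# Example: a slightly shifted square against its ground truth (hypothetical coordinates).
gt = [(0, 0), (10, 0), (10, 10), (0, 10)]
pred = [(0.5, 0.2), (10.3, 0.1), (9.8, 10.4), (0.1, 9.7)]
print(mean_vertex_distance(pred, gt))  # a few tenths of a pixel
```

A distance-based score of this kind is lower for better segmentations, i.e. the opposite orientation to IoU-style overlap ratios, which is relevant to the correlation question raised below.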

Although the article starts from a state of the art, it lacks references related to semantic segmentation (frameworks, use cases) applied to the extraction of mappable objects, as noted in the first sentence of the abstract. Only 4 references are provided and their contributions are not described. References are presented that do not fit with what is said in the text. Many references are of the arXiv type, i.e. open publications that have NOT been peer-reviewed. The authors are urged to look for references in impact journals as a filter for arXiv papers. They are also urged to work more carefully on the references, because in many cases they are incomplete. Before finishing with the references, it should be pointed out that when [24] or [25] are cited for findContour or Douglas-Peucker, the referenced texts do not seem to have any relation to them. Is there a problem with the reference manager?

 

In the experiments section, three frameworks are directly chosen to perform semantic segmentation without justifying the choice: PointRend [5] (shouldn't it be reference 38?), SwinTransformer [6], and Mask-RCNN [27]. Why these and not others? Review ALL references and complete them.

In the abstract, reference should be made to the proof of concept performed, mentioning the semantic segmentation frameworks used.

From the point of view of analysis and discussion of results:

Comparing the results in Tables 1-3 shows a high correlation between the IoU and Boundary IoU metrics. While in many cases this correlation also holds for the MSD, MPD_EP and MPD metrics, in others the correlation is inverse; at least this can be deduced from the results shown in Table 4. Can the authors say something about this? Can it be affirmed? That would mean that the higher the IoU, the lower the MPD. Which value is better for the MPD metric, and are the two metrics saying the same thing? Clearly NOT: the results in Tables 1-3 show contradictory values in this regard.
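One way to make the correlation question concrete (purely illustrative, with invented per-building values rather than the paper's Tables 1-3) is to compute a correlation coefficient between the metric columns:

```python
import numpy as np

# Hypothetical per-building scores standing in for Tables 1-3 (not the paper's numbers).
iou = np.array([0.91, 0.85, 0.78, 0.88, 0.80])
mpd = np.array([1.2, 2.5, 3.9, 1.8, 3.1])  # pixels; lower is better for a distance metric

r = np.corrcoef(iou, mpd)[0, 1]
print(f"Pearson r = {r:.2f}")
# A strongly negative r would support "higher IoU, lower MPD"; a weak or positive r
# would indicate that the two metrics rank the results differently.
```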

From a practical point of view, the authors must have developed some kind of code to implement the MPD metric. For IoU there are different implementation options depending on the environment in which the networks are used (TensorFlow, Keras or PyTorch); how did they implement it? For an experiment to be reproducible, all the necessary data must be given and its reproduction must be made easy, even with code. In this sense I propose that the authors publish the code used in a GitHub or similar repository to facilitate the verification of their contributions and methodological proposals.
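For context, a minimal framework-agnostic sketch of the kind of binary-mask IoU the reviewer is asking about (not the authors' actual implementation, which is precisely what the requested repository would reveal) might be:

```python
import numpy as np

def mask_iou(pred_mask, gt_mask):
    """IoU of two binary segmentation masks (arrays of 0/1 or bool)."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union if union > 0 else 1.0  # empty vs. empty counts as perfect

# Example with two 4x4 masks (hypothetical data).
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:4] = True
gt = np.zeros((4, 4), dtype=bool);   gt[1:3, 0:3] = True
print(mask_iou(pred, gt))  # 0.5: 4 shared pixels out of 8 in the union
```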

Author Response

Dear Editors and Reviewers,

Thank you very much for taking the time to review this manuscript. We really appreciate all your comments and kind suggestions! Please find our itemized responses in the attachment and our revisions in the resubmitted files.

 

Thanks again!

Author Response File: Author Response.pdf

Reviewer 3 Report

This paper proposes a new evaluation method. Over the years, IoU and mIoU have shown many inherent problems and may not effectively improve the reliability of mapping algorithms. The evaluation algorithm proposed in this paper is an innovation and provides a new scheme for the semantic segmentation and evaluation of remote sensing images. Therefore, the article can be published.

Author Response

Dear Editors and Reviewers,

Thank you very much for taking the time to review this manuscript. We really appreciate all your comments and kind suggestions! Please find our itemized responses in the attachment and our revisions in the resubmitted files.

 

Thanks again!

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Thanks to the authors.

All my concerns have been addressed. I recommend the paper for publication.

Besides, some minor suggestions are listed below:

 

1. In line 149, the second contribution could be presented as "We define and formalize the edge inflection point of the vector contour". Such a description helps highlight the novelty. After "define and formalize", I suggest adding another sentence to clarify the importance of this step.

2. In the caption of Fig. 6, what (a) and (b) denote should be supplied (refer to the caption of Fig. 7).

3. If possible, replace the figures with higher-resolution versions in "png" or "tif" format.

 

4. In the Introduction, reference DOI: 10.3390/rs14164065, which proposed a semantic segmentation network of remote sensing images for land cover mapping, is encouraged to be cited.

Author Response

Dear Editors and Reviewers,

Thank you very much for taking the time to review this manuscript. We really appreciate all your comments and kind suggestions! Please find our itemized responses below and our revisions in the resubmitted files.

 

Thanks again!

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors have substantially improved the article by adjusting the title, rewriting the abstract, improving the state of the art of the semantic segmentation part and better describing the methodology. They have also improved the analysis and discussion of the results, including the analysis of the accuracies and the HD and MPD metrics.

 

I insist on the need for the authors to publish the code once it is organised, so that the experiments are reproducible and comparisons of results can be made.

 

Author Response

Dear Editors and Reviewers,

Thank you very much for taking the time to review this manuscript. We really appreciate all your comments and kind suggestions! Please find our itemized responses below and our revisions in the resubmitted files.

 

Thanks again!

Author Response File: Author Response.pdf
