Article
Peer-Review Record

Toward a Data Fusion Index for the Assessment and Enhancement of 3D Multimodal Reconstruction of Built Cultural Heritage

Remote Sens. 2023, 15(9), 2408; https://doi.org/10.3390/rs15092408
by Anthony Pamart 1,*, Violette Abergel 1, Livio de Luca 1 and Philippe Veron 2
Reviewer 1: Anonymous
Reviewer 3: Anonymous
Submission received: 4 April 2023 / Revised: 25 April 2023 / Accepted: 28 April 2023 / Published: 4 May 2023

Round 1

Reviewer 1 Report

The paper aims at defining a method based on density estimation to compute a Multimodal Enhancement Fusion Index (MEFI) revealing layers behind 3D coordinates in the field of Digital Cultural Heritage.

MEFI shows – through color-coded features – whether the underlying data are isolated and sparse or redundant and dense, and adds semantic layers to provide qualitative information. The metadata schema used is MEMoS. A customized 3D viewer is also presented.

The aim is to create a heat map revealing where a model potentially has more information and where it is lacking data, through a fusion index enhancing the underlying information stored in multi-source 3D reconstructions (a quantitative index reflecting an overlapping score from spatial and geometric features), increasing the informative value and interpretative potential of multi-source point clouds.
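For readers unfamiliar with the idea, a minimal sketch of such a density-driven, color-coded index over a point cloud is given below. This is an illustrative assumption, not the authors' MEFI implementation; the radius, colormap and library choices are ours.

```python
# Minimal sketch (not the authors' MEFI pipeline): per-point neighbour density
# over a merged multi-source point cloud, normalized to [0, 1] and color-coded
# so dense (redundant) regions stand out from sparse (isolated) ones.
import numpy as np
from scipy.spatial import cKDTree
from matplotlib import colormaps


def density_index(points: np.ndarray, radius: float = 0.05) -> np.ndarray:
    """Count neighbours within `radius` for each point and normalize to [0, 1]."""
    tree = cKDTree(points)
    counts = np.array([len(n) for n in tree.query_ball_point(points, r=radius)])
    return (counts - counts.min()) / max(np.ptp(counts), 1)


def color_code(index: np.ndarray, cmap: str = "viridis") -> np.ndarray:
    """Map the normalized index to RGB colors for a heat-map-style rendering."""
    return colormaps[cmap](index)[:, :3]


if __name__ == "__main__":
    pts = np.random.rand(10_000, 3)         # placeholder for a merged multi-source cloud
    idx = density_index(pts, radius=0.05)   # high where data overlap, low where sparse
    rgb = color_code(idx)                   # per-point colors to attach to the cloud
```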

 

The topic is extremely relevant nowadays in the field of Cultural Heritage, since massive (often multi-source) digitization is producing unexploited and “unrelated” data, potentially crucial to support knowledge, interpretation and conservation.

Innovativeness and significance lie in the approach: working on multimodal datasets as “containers” of meaningful information as a step prior to semantic exploration. Today there is much need for this awareness.

 

In the introduction, the research framework is clearly outlined. The state of the art is comprehensive and up-to-date, addressing data fusion in the framework of 3D Multimodal Reconstruction for Cultural Heritage applications.

Out of 69 references, 32 date from the last five years. The cited references are relevant to the research.

The case study analysed to evaluate the MEFI index in a real scenario is well explained and methods clearly described.

Results and discussions are appropriate and include limits and future works.

 

Minor remarks and typos:

- line 28: “collected and growing” instead of “collected ang growing”.

- check line 160 (“a comparison and analysis cross-analysis tools”);

- check line 243 (“WHO, WHEN, WHERE, WHAT, HOW, WHY and WHY”): WHICH is missing;

- line 304: the full stop at the end of the sentence is missing;

- line 520: space to be deleted before semicolon.

Author Response

Dear Reviewer,

Thank you for your review; please find our answers below, in red within the text.

The paper aims at defining a method based on density estimation to compute a Multimodal Enhancement Fusion Index (MEFI) revealing layers behind 3D coordinates in the field of Digital Cultural Heritage.

MEFI shows – through color-coded features – whether the underlying data are isolated and sparse or redundant and dense, and adds semantic layers to provide qualitative information. The metadata schema used is MEMoS. A customized 3D viewer is also presented.

The aim is to create a heat map revealing where a model potentially has more information and where it is lacking data, through a fusion index enhancing the underlying information stored in multi-source 3D reconstructions (a quantitative index reflecting an overlapping score from spatial and geometric features), increasing the informative value and interpretative potential of multi-source point clouds.

The topic is extremely relevant nowadays in the field of Cultural Heritage, since massive (often multi-source) digitization is producing unexploited and “unrelated” data, potentially crucial to support knowledge, interpretation and conservation.

Innovativeness and significance lie in the approach: working on multimodal datasets as “containers” of meaningful information as a step prior to semantic exploration. Today there is much need for this awareness.

In the introduction, the research framework is clearly outlined. The state of the art is comprehensive and up-to-date, addressing data fusion in the framework of 3D Multimodal Reconstruction for Cultural Heritage applications.

Answer: Thank you for your complete and precise understanding of the work presented.

Out of 69 references, 32 date from the last five years. The cited references are relevant to the research.

Answer: In our opinion, citing recent works can sometimes be less meaningful than citing the original research that marked its time, or than citing the specific libraries or algorithms used. References more than five years old can still be worth citing when the subject is either under-explored or over-explored. In particular, the data-fusion-related references are indeed "old", but as revealed by the new reference [70]: i) fusion techniques are nowadays addressed almost exclusively through advanced deep-learning approaches (therefore out of the scope of this study); ii) Cultural Heritage, or even Remote Sensing, is not among the major application domains for such methods (unlike military, robotics and healthcare). However, where relevant, some changes have been made in the bibliography.

Updated:

  • Hackel, T., Wegner, J. D., & Schindler, K. (2017). Joint classification and contour extraction of large 3D point clouds. ISPRS Journal of Photogrammetry and Remote Sensing, 130, 231–245. [33]
  • Crameri, F., Shephard, G. E., & Heron, P. J. (2020). The misuse of colour in science communication. Nature Communications, 11(1), 5444. [49]
  • Liu, Y., & Heer, J. (2018, April). Somewhere over the rainbow: An empirical assessment of quantitative colormaps. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–12). [50]
  • Tournon, S., Baillet, V., Chayani, M., Dutailly, B., Granier, X., et al. (2021, June). The French National 3D Data Repository for Humanities: Features, Feedback and Open Questions. Computer Applications and Quantitative Methods in Archaeology (CAA) 2021, Limassol (virtual), Cyprus. hal-03267055. [55]

Added: 67, 69 and 70

The case study analysed to evaluate the MEFI index in a real scenario is well explained and methods clearly described.

Results and discussions are appropriate and include limits and future works.

Minor remarks and typos:

- line 28: “collected and growing” instead of “collected ang growing”.

- check line 160 (“a comparison and analysis cross-analysis tools”);

- check line 243 (“WHO, WHEN, WHERE, WHAT, HOW, WHY and WHY”): WHICH is missing;

- line 304: the full stop at the end of the sentence is missing;

- line 520: space to be deleted before semicolon.

 

Answer: All corrections have been made for these points.

In addition, a Data Availability statement has been added (all data are accessible via a DOI under the Etalab 2.0 Licence), and an Acknowledgments section has been added.

Reviewer 2 Report

The article is interesting, topical and in line with the theme of the magazine. It is a detailed and comparative study of the problem of the quantity of point cloud data, as well as its subsequent processing. I wonder if it would not be interesting to carry out mesh comparison studies, even if they come from the point cloud. I have no objections, except that some of the images cause a very large file size.

Author Response

Dear Reviewer,

Thank you for your review; please find our answers below, in red within the text.

The article is interesting, topical and in line with the theme of the magazine. It is a detailed and comparative study of the problem of the quantity of point cloud data, as well as its subsequent processing. I wonder if it would not be interesting to carry out mesh comparison studies, even if they come from the point cloud. I have no objections, except that some of the images cause a very large file size.

Answer: Thank you for the sharp review. Indeed, mesh integration would be interesting and quite straightforward, at least on the computing side (based on vertices); since resampling is applied, the visualization and transfer of attributes could be trickier but still doable. A sentence has been added (line 570) to include this remark. Images have been resampled to decrease the file size.
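As a rough illustration of this point (a hypothetical sketch under our own assumptions, not the sentence added at line 570), per-point attributes computed on a resampled cloud could be carried back to mesh vertices by nearest-neighbour lookup:

```python
# Hypothetical sketch: transfer a per-point index from a (possibly resampled)
# cloud onto mesh vertices via nearest-neighbour lookup, so the mesh can be
# colored with the same attribute as the point cloud.
import numpy as np
from scipy.spatial import cKDTree


def transfer_index_to_mesh(cloud_points: np.ndarray, cloud_index: np.ndarray,
                           mesh_vertices: np.ndarray) -> np.ndarray:
    """Assign each mesh vertex the index value of its closest cloud point."""
    _, nearest = cKDTree(cloud_points).query(mesh_vertices, k=1)
    return cloud_index[nearest]
```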

In addition, a Data Availability statement has been added (all data are accessible via a DOI under the Etalab 2.0 Licence), and an Acknowledgments section has been added.

Reviewer 3 Report

The paper focuses on a very interesting topic related to how to evaluate, from a quantitative point of view, previous 2D/3D documentation data of a CH object. It comes with the definition of a fusion index to support information extraction and the decision-making process by making explicit the data density and data typologies for each portion of the building.

Given the "work in progress" approach of the paper, the "future work" section provides an adequate and appealing overview of possible improvements; specifically, the ones related to the evaluation of qualitative metrics seem to be crucial.

Two notes:

- Typo at line 28: "ang" instead of "and".

- At line 40 the concept of data fusion is presented for the first time in the structure of the paper. Since it is then further presented in the state of the art, it would be good either to refer to the following paragraph or to insert references here as well, just to frame from the very beginning what is meant by data fusion in the CH field.

 

 

Author Response

Dear Reviewer,

Thank you for your review; please find our answers below, in red within the text.

The paper focuses on a very interesting topic related to how to evaluate, from a quantitative point of view, previous 2D/3D documentation data of a CH object. It comes with the definition of a fusion index to support information extraction and the decision-making process by making explicit the data density and data typologies for each portion of the building.

Given the "work in progress" approach of the paper, the "future work" section provides an adequate and appealing overview of possible improvements; specifically, the ones related to the evaluation of qualitative metrics seem to be crucial.

Answer: Thank you for understanding so well the motivation behind this work, which is indeed at an embryonic stage.

Two notes:

- Typo at line 28: "ang" instead of "and".

- At line 40 the concept of data fusion is presented for the first time in the structure of the paper. Since it is then further presented in the state of the art, it would be good either to refer to the following paragraph or to insert references here as well, just to frame from the very beginning what is meant by data fusion in the CH field.

 

Answer: Those two notes have been addressed.

In addition, a Data Availability statement has been added (all data are accessible via a DOI under the Etalab 2.0 Licence), and an Acknowledgments section has been added.
