Article
Peer-Review Record

Cross-Viewpoint Semantic Mapping: Integrating Human and Robot Perspectives for Improved 3D Semantic Reconstruction

Sensors 2023, 23(11), 5126; https://doi.org/10.3390/s23115126
by László Kopácsi 1,2, Benjámin Baffy 2,†, Gábor Baranyi 2, Joul Skaf 2, Gábor Sörös 3, Szilvia Szeier 2,‡, András Lőrincz 2,* and Daniel Sonntag 1,4,*
Reviewer 2: Anonymous
Submission received: 28 April 2023 / Revised: 16 May 2023 / Accepted: 22 May 2023 / Published: 27 May 2023

Round 1

Reviewer 1 Report

This paper presents a cross-viewpoint semantic mapping approach that integrates human and robot perspectives for improved 3D semantic reconstruction. I have the following observations.

1. In Figure 1, the predicted semantic segmentation is shown. There are cases where holes are present that lead to false segmentation results. How do the authors deal with such cases?

2. On page 2, line 66, the implementation is said to be available on GitHub. However, the linked repository is titled "3D Semantic Label Transfer and Matching in Human-Robot Collaboration". Does it contain the implementation of the work presented in the paper under review?

3. In Figure 2 on page 4, the overall pipeline diagram is too simple and lacks detail. It should be revised to include the details of each stage.

4. In Section 3.2.2, superpixel-based projection into 3D relies on image-processing techniques to estimate the depth of objects in a scene. However, this technique can have limitations when the scene contains objects that are partially occluded or have complex geometries. How do the authors deal with such issues?
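For context on this point, a minimal sketch of superpixel-based depth projection in general (not the authors' implementation) could look like the following. It assumes scikit-image's SLIC, a per-pixel depth map, and pinhole intrinsics fx, fy, cx, cy, all of which are illustrative placeholders; superpixels that are occluded or have no valid depth simply cannot be aggregated, which is the failure mode raised above.

```python
# Illustrative sketch only: segment the RGB image into superpixels, take a
# robust (median) depth per superpixel, and back-project each superpixel
# centroid with the pinhole camera model. All variable names are hypothetical.
import numpy as np
from skimage.segmentation import slic

def superpixel_points_3d(rgb, depth, fx, fy, cx, cy, n_segments=500):
    """Return one 3D point per superpixel (camera coordinates)."""
    labels = slic(rgb, n_segments=n_segments, compactness=10, start_label=0)
    points = []
    for sp in np.unique(labels):
        mask = labels == sp
        d = depth[mask]
        d = d[d > 0]                      # ignore missing depth ("holes")
        if d.size == 0:
            continue                      # occluded / unobserved superpixel
        z = np.median(d)                  # robust to partial occlusion
        v, u = np.nonzero(mask)           # pixel rows (v) and columns (u)
        x = (u.mean() - cx) * z / fx      # back-project the centroid
        y = (v.mean() - cy) * z / fy
        points.append((x, y, z))
    return np.array(points)
```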

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

This article discusses the use of allocentric semantic 3D maps in human-machine interactions and proposes a method for acquiring semantic labels for images taken from unusual perspectives.

* Strengths

- This research aims to address the issue of reduced segmentation performance at a robot's low viewpoint.

- To address this issue, the research explores various methods and proposes practical solutions.

* Weaknesses

- There are few comparable studies to compare against, as little work has been published on this topic.

 

I have a few points below: 

- When describing the viewpoints of humans and robots in the abstract, it would be helpful to specifically mention that the robot's viewpoint is that of a small robot. Without this context, the difference in viewpoints between humans and robots may not be fully understood.

- It would be helpful to list useful applications where the proposed techniques can be applied in the discussion section.

- (line 133) "... adopted from from previous work [8]." -> "... adopted from previous work [8]."

- It would be helpful to have labels for the x-axis and y-axis in Figure 4.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The required changes have been made.
