Peer-Review Record

A New Framework for Generating Indoor 3D Digital Models from Point Clouds

Remote Sens. 2024, 16(18), 3462; https://doi.org/10.3390/rs16183462
by Xiang Gao 1, Ronghao Yang 1,*, Xuewen Chen 1, Junxiang Tan 1, Yan Liu 2, Zhaohua Wang 1, Jiahao Tan 1 and Huan Liu 3
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 7 August 2024 / Revised: 13 September 2024 / Accepted: 16 September 2024 / Published: 18 September 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The manuscript presents a method for reconstructing building floor plans and 3D digital models from indoor point clouds. It is interesting.

After carefully reading the manuscript, it is suggested to revise the following items to improve the manuscript.

1. It is suggested to give the accuracy of the proposed method in the Abstract.

2. The keywords are too few and do not reflect the essence of this study; it is suggested that more be added.

3. In line 90, "where the rooms are adjacent., reconstructing the topological relationships", the symbol "." is needless.

4. It is proposed to merge "1. Introduction" and "2. Related Works" into "1. Introduction".

5. In "Table 1. The floor plan precision and recall rate of our method on the GibLayout dataset", the difference between Precision and Recall of Scene 1 is 0.07. It is recommended to explain why such a large difference is generated.

6. In line 554, "7. Discussion" should be replaced by "7. Conclusions".

7. The innovation or novel points of this study should be summarized in the Conclusions.

Author Response

Comments 1: It is suggested to give the accuracy of the proposed method in the Abstract.

Response 1: Thank you for your suggestion. We have added the accuracy of the proposed method to the abstract. The revisions are as follows:

(Line 24~line 26): The mean precision and recall for the floor plan are both 0.93, and the Point-Surface-Distance (PSD) and standard deviation of PSD for the 3D model are 0.044 m and 0.066 m, respectively.

 

Comments 2: The keywords are too few and do not reflect the essence of this study; it is suggested that more be added.

Response 2: Thanks for this advice. We have added "scan-to-BIM" and "indoor reconstruction" to the keywords.

 

Comments 3: In line 90, “where the rooms are adjacent., reconstructing the topological relationships”, the symbol “.” is needless. In line 554, “7. Discussion” should be replaced by “7. Conclusions”.

Response 3: Thanks for your advice. We have made revisions to the manuscript according to your advice.

 

Comments 4: It is proposed to merge "1. Introduction" and "2. Related Works" into "1. Introduction".

Response 4: Thanks for your advice. The combined section would be too long if these two parts were merged, so we are sorry that we did not make this change.

 

Comments 5: In "Table 1. The floor plan precision and recall rate of our method on the GibLayout dataset", the difference between Precision and Recall of Scene 1 is 0.07. It is recommended to explain why such a large difference is generated.

Response 5: Thanks for this advice. This is because, in narrow areas, regularization of the boundary points may generate redundant short edges, causing a decrease in accuracy. We have added an explanation in the manuscript. The revisions are as follows:

(Line 500~line 502): In Scene 1, the difference between precision and recall is 0.07. This is because, in narrow areas, regularization of the boundary points may generate redundant short edges, causing a decrease in accuracy.

 

Comments 6: The innovation or novel points of this study should be summarized in Conclusions.

Response 6: Thanks for this advice. We summarized our innovation points in the conclusion section. The revisions are as follows:

(Line 598~line 606): Different from other common methods, the proposed method is based on roofs, so it is not affected by indoor clutter and furniture points. We innovatively perform an overlap analysis between the roof and the room instance map to segment the entire roof and obtain each room's roof. To reconstruct the topological relationships between the rooms, a new and robust method was proposed to detect the doors. It first extracts potential areas where a door may exist, which narrows down the detection range and reduces the interference of point clouds from other areas. In addition, we also reconstructed the windows in the scene, which enriches the semantic information of the model and expands its application scope.

 

Author Response File: Author Response.docx

Reviewer 2 Report

Comments and Suggestions for Authors

The author proposes a new framework for generating 3D interior models from point clouds, effectively alleviating the influence of clutter and furniture point clouds and robustly realizing high-precision models that match the real scene. However, the following questions should be considered before acceptance:

In my opinion, the description in the abstract section of the manuscript is not suited to a scientific paper at all, and looks more like a guide to technical steps. The abstract should include the proposed method, innovations, performance metrics, and potential value to the field. Therefore, I suggest that the authors make changes to this section.

In the author's contribution introduction, a few clarifications are sought: Firstly, while the author presents three methods, they are essentially extensions based on the Randla-net algorithm. Could you clarify whether the proposed method is primarily an enhancement or an innovative approach? Secondly, as these methods target distinct aspects such as house scenes, roof boundaries, and door/window edges due to their differing characteristics, was there contemplation on defining an algorithm capable of directly reconstructing the complete house image? Thirdly, the authors specify that the method falls under spatial segmentation, labeling it as a new method, yet the subsequent details do not explicitly highlight what distinguishes it as "new."

Regarding Figures 12 and 13, the author emphasizes that this method can also draw topological relationships, so I suggest that the categories in (a), (b), (c) and (d) be highlighted with non-colored boxes or highlighting patterns, so that readers can better understand the differences and topological relationships.

The comparison with the other two methods is described in Figure 14. Are the other two methods based on LiDAR point clouds? If possible, give a brief description of the indicators for both methods.

Part VII should be a "conclusion" rather than a "discussion".

 

Comments on the Quality of English Language

Minor editing 

Author Response

Comments 1: In my opinion, the description in the abstract section of the manuscript is not suited to a scientific paper at all, and looks more like a guide to technical steps. The abstract should include the proposed method, innovations, performance metrics, and potential value to the field. Therefore, I suggest that the authors make changes to this section.

Response 1: Thank you for your suggestion. We have made revisions to the abstract according to your advice. The revisions are as follows:

(Line 11~line 26): 3D indoor models have wide applications in fields such as indoor navigation, civil engineering, virtual reality, and so on. With the development of LiDAR technology, automatic reconstruction of indoor models from point clouds has gained significant attention. We propose a new framework for generating indoor 3D digital models from point clouds. The proposed method first generates the room instance map of the indoor scene. Walls are detected and projected onto a horizontal plane to form line segments. These segments are extended and intersected, and, by solving an integer programming problem, line segments are selected to create room polygons. The polygons are converted into a raster image, and image connectivity detection is used to generate the room instance map. Then the roofs of the point cloud are extracted and used to perform an overlap analysis with the generated room instance map to segment the entire roof point cloud, obtaining the roof for each room. Room boundaries are defined by extracting and regularizing the roof point cloud boundaries. Finally, by detecting doors and windows in the scene in two steps, we generate the floor plans and 3D models separately. Experiments with the GibLayout dataset show that our method is robust to clutter and furniture point clouds, achieving high-accuracy models that match real scenes. The mean precision and recall for the floor plan are both 0.93, and the Point-Surface-Distance (PSD) and standard deviation of PSD for the 3D model are 0.044 m and 0.066 m, respectively.

 

Comments 2: In the author's contribution introduction, a few clarifications are sought: Firstly, while the author presents three methods, they are essentially extensions based on the Randla-net algorithm. Could you clarify whether the proposed method is primarily an enhancement or an innovative approach?

Response 2: Thanks for this advice. In fact, we only use Randla-net to extract roofs from the scene. Our innovation mainly lies in the spatial segmentation method and the reconstruction of room topology relationships. We have made modifications to the contribution section to emphasize our innovation. The revisions are as follows:

(Line 105~line 117):

(1) A new method for spatial segmentation is proposed. This method first selects the walls of rooms by minimizing an energy function to generate the room instance map for initial spatial segmentation. Then, by innovatively performing an overlap analysis between the roof and the room instance map, more accurate spatial segmentation can be obtained.

(2) A new method for reconstructing the topological relationships between rooms is proposed. It first extracts potential areas where an opening may exist, which narrows down the detection range. Then, by performing density filtering, the precise position of the opening is detected. The openings are combined with the boundaries of the rooms to reconstruct the topological relationships between rooms.

(3) A method for detecting doors and windows in indoor scenes is proposed. It first detects openings on the boundaries of rooms, then classifies them as windows or doors by analyzing their length and height, and finally parameterizes them.

 

Comments 3: Secondly, as these methods target distinct aspects such as house scenes, roof boundaries, and door/window edges due to their differing characteristics, was there contemplation on defining an algorithm capable of directly reconstructing the complete house image?

Response 3: Thanks for your question. Our research focuses on the reconstruction of the permanent structures in the scene, such as walls, doors and windows, floors, and ceilings, which are crucial for the perception of the scene. We have summarized the methods you mentioned in the introduction section. This type of method reconstructs the model by taking the entire scene as input, and the reconstructed model often lacks semantic information and the topological relationships between rooms, which limits the application scope of the model.

 

Comments 4: Thirdly, the authors specify that the method falls under spatial segmentation, labeling it as a new method, yet the subsequent details do not explicitly highlight what distinguishes it as "new."

Response 4: Thanks for your advice. We provided a more detailed explanation of the innovation of our method in the introduction section and emphasized our innovation in the conclusion section. The revisions are as follows:

(Line 86~line 97): Our method belongs to the category of spatial segmentation methods. We propose a new method for space partitioning. First, we detect walls in the scene and project them onto the ground to generate line segments. Then, the line segments are extended to intersect and generate a set of candidate line segments. We select line segments to obtain an initial spatial partitioning by minimizing a constraint-based energy function. After that, the selected line segments are converted into an image, and by using image connectivity detection we obtain the room instance map. Our method is based on roof point clouds, as in indoor scenes roof point clouds are often not obstructed by indoor clutter and are preserved relatively intact. We use the Randla-net [16] semantic segmentation network to extract the roof. We innovatively perform an overlap analysis between the roof and the room instance map to segment the roof point cloud for a more accurate spatial partitioning. Through this, the entire roof is segmented into the roofs of the individual rooms.
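For illustration, a minimal Python sketch (not the authors' code) of the rasterization and image connectivity step described above; the segment list, grid resolution, and image size are assumed values:

```python
# Rasterize the selected wall segments and label the enclosed regions to form a
# room instance map. All parameter values below are illustrative assumptions.
import numpy as np
import cv2

def room_instance_map(segments, resolution=0.05, size=(400, 400)):
    """segments: list of ((x0, y0), (x1, y1)) wall segments in metres."""
    img = np.zeros(size, dtype=np.uint8)
    for (x0, y0), (x1, y1) in segments:
        p0 = (int(x0 / resolution), int(y0 / resolution))
        p1 = (int(x1 / resolution), int(y1 / resolution))
        cv2.line(img, p0, p1, color=255, thickness=1)     # walls act as barriers
    free = (img == 0).astype(np.uint8)                     # non-wall pixels
    _, labels = cv2.connectedComponents(free, connectivity=4)
    return labels  # label 0 covers wall pixels; other labels are candidate rooms

# Example: a single 4 m x 3 m room drawn as four wall segments.
walls = [((0, 0), (4, 0)), ((4, 0), (4, 3)), ((4, 3), (0, 3)), ((0, 3), (0, 0))]
print(np.unique(room_instance_map(walls)))
```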

(Line 598~line 606): Different from other common methods, the proposed method is based on roofs, so it is not affected by indoor clutter and furniture points. We innovatively perform an overlap analysis between the roof and the room instance map to segment the entire roof and obtain each room's roof. To reconstruct the topological relationships between the rooms, a new and robust method was proposed to detect the doors. It first extracts potential areas where a door may exist, which narrows down the detection range and reduces the interference of point clouds from other areas. In addition, we also reconstructed the windows in the scene, which enriches the semantic information of the model and expands its application scope.

 

Comments 5: Similar to Figure 12,13, the author emphasizes that this method can also draw topological relationships, so I suggest that the categories in (a), (b), (c) and (d) be highlighted with non-colored boxes or highlighting patterns, so that readers can better understand the differences and topological relationships.

Response 5: Thanks for this advice. In Figure 12 and Figure 13, we visualize the doors connecting different rooms in the floor plan as openings. Through these openings, we can clearly understand the topological relationships between the rooms. In the generated 3D digital model, the doors and windows are visualized using blue lines and displayed from different angles.

 

Comments 6: The comparison of the other two methods is described in Figure 14. Are the other two methods based on LiDAR point clouds? If possible, give a brief description of the indicators for both methods.

Response 6: The input for these two methods is a point density image generated from the point clouds, which are obtained with RGBD sensors. As shown in Figure 1 of this response, the left image is the point density image generated by projecting the point cloud onto a plane, and the right image is the generated floor plan. We use the same method to generate a point density image of the test data and use it as input for the comparison methods.
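For illustration, a minimal Python sketch of how such a point density image can be produced; the 5 cm cell size is an assumed value, not one taken from the paper:

```python
# Project a point cloud onto the XY plane and count points per grid cell.
import numpy as np

def point_density_image(points, cell=0.05):
    """points: (N, 3) array; returns a grayscale point density image."""
    x, y = points[:, 0], points[:, 1]
    x_bins = np.arange(x.min(), x.max() + cell, cell)
    y_bins = np.arange(y.min(), y.max() + cell, cell)
    density, _, _ = np.histogram2d(x, y, bins=[x_bins, y_bins])
    # Normalise to 0-255 so the result can be saved as an 8-bit image.
    return (255 * density / max(density.max(), 1)).astype(np.uint8)
```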

Author Response File: Author Response.docx

Reviewer 3 Report

Comments and Suggestions for Authors

The article suggests a new framework for the generation of indoor 3D digital models, targeting places where GNSS does not work.

The article makes no mention of photogrammetry, IMU, or SLAM for enhancing indoor solutions, which are all part of modern indoor 3-D modeling. Therefore, while there are ideas in the article that are outside of these techniques, some consideration needs to be made to identify whether these techniques would enhance the focus of the article. The mentioned technologies could certainly reduce the noise the authors refer to. I finally see a reference to items like SLAM at line 423, but it does little to indicate data quality, which obviously enhances extraction.

I struggle with what is completely automated and what is at least partially human driven. I struggle with the lack of any mention of accuracy in a metric sense of "how far off from the real value?" I struggle with the fact that this is simply from one dataset and thus there is no way to predict how it would work in general.

I am therefore not discounting this for publication; I am simply sure a lot of items need to be clarified. As seen below, the English corrections are so numerous that I gave up listing them and simply say they need correction.

Line 1 - these are a small portion of applications so maybe ",etc."
Line 23 - define high accuracy
Line 28 indoor twice is redundant
Line 33 for example, the first time you define an abbreviation, say what it is, like MLS
Line 33 define high precision and high density
Line 53 "always miss" is poor English; maybe "always has noise that is undetected by automated algorithms"
Line 57 project"ed"
Line 58 "ground" is strange term for indoors; same issue later search for ground
Line 67 "author" should be"user"
Line 69 coverage"."
Line 90 ",." I think ,
Line 92 no "ed"
Line 126 "the" floor plan
Line 131-132 Sentence is not a sentence and I do not understand it.
Line 133 "and" finally
Line 141 project"s"
Line 142 extend"s"
Line 143 perform"s"
I will quit identifying all the missing "s"s as it exists throughout the article
Line 150 propose"d" same issue will exist later so I will quit identifying.
Line 150-153 make 2 sentences
Line 155 label no "s"
Line 187 "grid sampled"?  But scanner data is not gridded exactly?
Line 186 section: I cannot tell what is completely automated and what is user defined? What happens if the order is changed?
Line 248 why can you not use intensity instead of color?
Line 263 formula no "r"
Line 304 "the" 3-D model
Line 351 remove "the"
Line 361 why 1.5m?
Line 424-425 why reduce?  Would not using all data be better?
Line 482 how do you define a value for the threshold?
Line 495 same as 482
Line 496-497 define completely and accurately? Is accurately by feature or by a distance quality value?
Line 511 what are the units of 0.05?
Line 524-525 How does one determine these numbers in general for other data sets?
Line 526 Is this not an arbitrary number

Comments on the Quality of English Language

A lot of work is required; see above, and I gave up identifying the same problems over and over again. The authors should hire an accomplished English reviewer to improve the manuscript after they address the other issues that have been identified.

Author Response

Comments 1: The article makes no mention of photogrammetry, IMU, or SLAM for enhancing indoor solutions, which are all part of modern indoor 3-D modeling. Therefore, while there are ideas in the article that are outside of these techniques, some consideration needs to be made to identify whether these techniques would enhance the focus of the article. The mentioned technologies could certainly reduce the noise the authors refer to. I finally see a reference to items like SLAM at line 423, but it does little to indicate data quality, which obviously enhances extraction.

Response 1: Thank you for your suggestion. We have added the information you mentioned to introduction. The revisions are as follows:

(Line 34~line 45): With the development of LiDAR scanning technology [4, 5], various types of laser scanning devices, such as consumer grade RGBD (RGB and depth) sensors, lightweight terrestrial laser scanners (TLS), and hand-held laser scanning (HLS) or backpack laser scanning (BLS), have been used for data acquisition of indoor scenes. RGBD images (e.g., Microsoft Kinect) acquired by a camera and a depth sensor are widely used in 3-D visual applications. However, RGBD sensors are limited by the small scanning range and high level of noise. TLS scanners can obtain high-quality data but suffer from low mapping efficiency because of the laborious scan station resetting and registration procedures. By combining inertial measurement unit (IMU) and simultaneous localization and mapping (SLAM) technologies, Mobile Laser Scanning (MLS) can continuously obtain high-precision and high-density point clouds while moving around. They can be applied to indoor scenes that have complex layouts and can minimize the effects of occlusions.

 

Comments 2: I struggle with what is completely automated and what is at least partially human driven. I struggle with the lack of any mention of accuracy in a metric sense of "how far off from the real value?" I struggle with the fact that this is simply from one dataset and thus there is no way to predict how it would work in general.

Response 2: We are sorry that we may not have explained our method clearly. All of our methods are implemented through programs. We only need to input the obtained point cloud and set the key parameters, and the program can then output the generated floor plan and 3D model. We analyzed the key parameters in the parameter setting and analysis section. The parameters in our method are highly correlated with the density of the point cloud. Therefore, we use voxel sampling to preprocess the input point cloud. Voxel sampling can reduce the number of points and adjust the point cloud density to our required range, which helps automate the parameter settings.

Regarding the metric sense of "how far off from the real value?" that you mentioned: we use the Point-Surface-Distance (PSD) and the standard deviation of PSD (PSDstd) to evaluate the reconstructed 3D model. PSD reflects the shape similarity between the reconstructed model and the point cloud. It calculates the average distance from the point cloud to the reconstructed model surface. The smaller the PSD, the closer the shape of the reconstructed model is to the point cloud. As shown in the table below, the mean PSD and standard deviation of PSD of the five reconstructed models were 0.044 m and 0.066 m, respectively. This indicates that the shape of the reconstructed model is restored very well and is close to the shape of the point clouds.

 

          Scene1   Scene2   Scene3   Scene4   Scene5   Mean
PSD       0.028    0.037    0.055    0.049    0.053    0.044
PSDstd    0.031    0.047    0.103    0.058    0.090    0.066

(values in meters)

We have added this to the manuscript. The revisions are as follows:

(Line 505~line 512): We use the Point-Surface-Distance (PSD) and the standard deviation of PSD (PSDstd) to evaluate the reconstructed 3D model. PSD reflects the shape similarity between the reconstructed model and the point cloud. It calculates the average distance from the point cloud to the reconstructed model surface. The smaller the PSD, the closer the shape of the reconstructed model is to the point cloud. As shown in Table 2, the mean PSD and standard deviation of PSD of the five reconstructed models were 0.044 and 0.066, respectively, which are both lower than 0.1. This indicates that the shape of the reconstructed model is restored very well and is close to the shape of the point clouds.
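For illustration, a minimal Python sketch of one way to approximate PSD (not the authors' evaluation code): the reconstructed model surface is assumed to be densely sampled into mesh_samples, and nearest-neighbour distances stand in for true point-to-surface distances:

```python
# Approximate Point-Surface-Distance (PSD) and its standard deviation.
import numpy as np
from scipy.spatial import cKDTree

def psd(points, mesh_samples):
    """points: (N, 3) input cloud; mesh_samples: (M, 3) points sampled on the model surface."""
    tree = cKDTree(mesh_samples)
    d, _ = tree.query(points)      # nearest surface-sample distance for each point
    return d.mean(), d.std()       # PSD and PSDstd, in metres
```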

 

Our research focuses on the reconstruction of small-scale buildings, and the selected dataset contains common indoor scenes, including layouts such as rings, corridors, and non-Manhattan scenes. Our method can reconstruct such scenes, but it cannot reconstruct curved structures in the scene. This would require the introduction of curve and surface fitting algorithms, and we have added a discussion on this part. The revisions are as follows:

(Line 575~line 578): The reconstruction of curved structures: Our method can reconstruct common indoor scenes, including layouts such as rings, corridors, and non-Manhattan scenes, but it cannot reconstruct curved structures in the scene. We plan to introduce curve and surface fitting algorithms to reconstruct more abundant structures.

 

Comments 3:

Line 187 "grid sampled"?  But scanner data is not gridded exactly?

Line 186 section: I cannot tell what is completely automated and what is user defined? What happens if the order is changed?

Line 248 why can you not use intensity instead of color?

Response 3: These three questions all concern the methodology, so we respond to them together. We are sorry that the unclear description of our method caused confusion. We have modified the description of our method to make it clearer. First, it should be "voxel sampled"; we use voxel sampling to preprocess the point clouds. Second, as mentioned earlier, all of our methods are implemented through programs. We only need to input the obtained point cloud and set the key parameters, and the program then automatically outputs the generated floor plan and 3D model. In addition, we are not certain which part of the method is referred to as its "order". If it refers to the order of the RANSAC, Euclidean clustering, and region growing algorithms: if the order were changed, the final segmented roof point cloud might classify roofs from different rooms that lie on the same plane into the same category. Third, the training dataset does not contain intensity information. In fact, with our method we can obtain a more accurate roof point cloud.

The revisions are as follows:

(Line 198~line 220): As shown in Figure 1, our method's overall workflow begins with the input of voxel-sampled point clouds. We first use Randla-net to extract the wall and roof point clouds and filter out wrong points by analyzing the normal vectors of the extracted points. Then we detect planes in the wall point clouds and project them onto a horizontal plane to generate 2D line segments. By extending these line segments, we obtain a set of intersecting line segments. We need to select the line segments that form the rough boundaries of the rooms from these segments. We transform the problem of selecting line segments into an energy minimization problem. As shown in Figure 1(e), by minimizing the energy function E, we obtain the approximate layout of the rooms. We convert these lines into a raster image and use image connectivity detection to segment the regions, generating the room instance map. As shown in Figure 1(f), different colors in the room instance map represent different room areas in the indoor scene. Next, we segment the planes in the extracted roof point clouds. We sequentially use the RANSAC [34], Euclidean clustering, and region-growing [35] algorithms to segment the roof point clouds into small patches. By performing an overlap analysis between these patches and the room instance map, we obtain the roof point cloud for each room. Then, the Alphashape [36] algorithm is used to extract the boundaries of the roof point clouds for each room. After that, we simplify and regularize the ordered boundary points to generate boundary polygons for each room. The generated polygons are independent and do not have topological relationships between rooms. We propose a method to detect openings that connect different rooms, reconstruct the topological relationships between rooms, and generate the floor plan of the scene. Finally, we detect doors and windows in the scene and the heights of the walls. By combining the walls, doors, and windows, we generate the final 3D digital model.
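For illustration, a minimal Open3D sketch of the plane extraction and clustering step on the roof cloud (one plausible reading of the pipeline above, not the authors' implementation; all thresholds are assumed values):

```python
# Iteratively extract planar patches with RANSAC, then split each plane's
# inliers into spatially separate patches (DBSCAN used as a stand-in for
# Euclidean clustering), roughly mirroring the roof segmentation step above.
import numpy as np
import open3d as o3d

def roof_patches(roof_pcd, dist=0.03, min_points=500, max_planes=20):
    patches, rest = [], roof_pcd
    for _ in range(max_planes):
        if len(rest.points) < min_points:
            break
        _, inliers = rest.segment_plane(distance_threshold=dist,
                                        ransac_n=3, num_iterations=1000)
        plane = rest.select_by_index(inliers)
        rest = rest.select_by_index(inliers, invert=True)
        labels = np.asarray(plane.cluster_dbscan(eps=0.2, min_points=30))
        for lab in range(labels.max() + 1):
            patches.append(plane.select_by_index(np.where(labels == lab)[0]))
    return patches
```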

Comments 4: Line 361 why 1.5m?

Response 4: The height of ordinary residential doors is generally between 2 m and 2.4 m, so we choose a height below 2 m to be able to detect all doors. The height cannot be set too low, otherwise voids formed by occlusion may be detected as doors.

 

Comments 5: Line 424-425 why reduce?  Would not using all data be better?

Response 5: Voxel sampling can reduce the number of points and balance the density of the point cloud without changing its shape. It is a common point cloud preprocessing method. Reducing the number of points decreases the execution time of the algorithm. More importantly, it balances the density of the points and limits the minimum distance between points to the range we set. A point cloud with uniform density makes our detection of doors and windows more accurate. Knowing the minimum distance between points can automate many parameter settings, such as setting the radius of the Alphashape algorithm for detecting roof boundaries to three times the minimum distance, and obtaining an approximate value of 20 points per meter from the minimum distance between points.
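For illustration, a minimal Open3D sketch of this preprocessing step; the file name and the 5 cm voxel size are assumed values:

```python
# Voxel downsampling: one averaged point per occupied voxel, which caps the
# point density without changing the overall shape of the cloud.
import open3d as o3d

pcd = o3d.io.read_point_cloud("indoor_scan.ply")   # hypothetical input file
down = pcd.voxel_down_sample(voxel_size=0.05)
print(len(pcd.points), "->", len(down.points))
```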

 

Comments 6:

Line 482 how do you define a value for the threshold?

Line 495 same as 482

Response 6: We defined the value by referring to the paper "Building Floorplan Reconstruction Based on Integer Linear Programming". We set it to 0.1 m. We have clarified this in the paper.

 

Comments 7: Line 496-497 define completely and accurately?  Is accurately by feature or by a distance quality value?

Response 7: When the length and width of a reconstructed door, and the distance between the plane of the door and the corresponding door in the ground truth (GT), differ by less than 0.1 m, we consider the door to be correctly reconstructed. We counted the number of correctly reconstructed doors in each scene; across all scenes, only two doors out of a total of 31 were not correctly reconstructed.

 

Comments 8: Line 511 what are the units of 0.05?

Response 8: The unit is meters. We have added units to the manuscript.

 

Comments 9: Line 524-525 How does one determine these numbers in general for other data sets?

Response 9: The setting of this parameter is related to the distance from the top of the opening to the roof. We can roughly calculate this distance and divide it by the size of the voxel to obtain a reference value. Then, we can set the threshold slightly larger than the reference value. In fact, the experimental results are not sensitive to this parameter, as the density of points in the area where an opening exists is significantly lower than in the surrounding areas. It is easy to identify the opening as long as the threshold is not too small. We have provided suggestions for parameter settings in the parameter analysis section and clarified this in the manuscript. The revisions are as follows:

(Line 545~line 551): When performing density filtering on points on adjacent boundaries, the threshold for the number of points and the number of intervals into which the line segment is divided will affect the detection of openings. In our experiments, we set the number of intervals to 100 and the threshold for the number of points to 15. The setting of the threshold for the number of points is related to the distance from the top of the opening to the roof. We can roughly calculate this distance and divide it by the size of the voxel to obtain a reference value. Then, we can set the threshold slightly larger than the reference value.
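For illustration, a minimal Python sketch of such a density filter (not the authors' implementation): points near a boundary segment are binned along the segment, and contiguous low-count runs are reported as candidate openings. The interval count (100) and point threshold (15) follow the values quoted above; everything else is an assumption:

```python
import numpy as np

def detect_openings(points, seg_start, seg_end, n_intervals=100, threshold=15):
    """points: (N, 3) wall points near the boundary; seg_start/seg_end: 2D endpoints."""
    start = np.asarray(seg_start, dtype=float)
    direction = np.asarray(seg_end, dtype=float) - start
    length = np.linalg.norm(direction)
    # Normalised position of every point along the boundary segment (0..1).
    t = (points[:, :2] - start) @ (direction / length) / length
    counts, edges = np.histogram(t, bins=n_intervals, range=(0.0, 1.0))
    low = counts < threshold                    # sparse intervals suggest an opening
    openings, i = [], 0
    while i < n_intervals:
        if low[i]:
            j = i
            while j + 1 < n_intervals and low[j + 1]:
                j += 1
            # Contiguous low-density run -> candidate opening, in metres along the wall.
            openings.append((edges[i] * length, edges[j + 1] * length))
            i = j
        i += 1
    return openings
```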

 

Comments 10: Line 526 Is this not an arbitrary number

Response 10: The reason we set it to 0.2 m is that windows are usually at a certain height from the ground, while doors are usually connected to the ground. Considering the possible existence of a sill, we set this value to 0.2 m, which is slightly higher than the ground. We have clarified this in the manuscript. The revisions are as follows:

(Line 551~line 557): When distinguishing doors and windows among openings, we set the criterion that if the height of the opening's lowest endpoint from the ground is less than 0.2 meters, we classify the opening as a door; otherwise, it is classified as a window. This is because windows are usually at a certain height from the ground, while doors are usually connected to the ground. Considering the possible existence of a sill, we set this value to 0.2 m.
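For illustration, a minimal sketch of the door/window rule quoted above (the 0.2 m sill tolerance is the value stated in the revision):

```python
def classify_opening(bottom_height_m, sill_tolerance=0.2):
    """An opening whose lowest point is within 0.2 m of the floor is treated as a door."""
    return "door" if bottom_height_m < sill_tolerance else "window"

print(classify_opening(0.05))   # door
print(classify_opening(0.90))   # window
```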

 

4. Response to Comments on the Quality of English Language

Line 1 - these are a small portion of applications so maybe ",etc."

Line 23 - define high accuracy

Line 28 indoor twice is redundant

Line 33 for example, the first time you define an abbreviation, say what it is, like MLS

Line 33 define high precision and high density

Line 53 "always miss" is poor English; maybe "always has noise that is undetected by automated algorithms"

Line 57 project"ed"

Line 58 "ground" is strange term for indoors; same issue later search for ground

Line 67 "author" should be"user"

Line 69 coverage"."

Line 90 ",." I think ,

Line 92 no "ed"

Line 126 "the" floor plan

Line 131-132 Sentence is not a sentence and I do not understand it.

Line 133 "and" finally

Line 141 project"s"

Line 142 extend"s"

Line 143 perform"s"

I will quit identifying all the missing "s"s as it exists throughout the article

Line 150 propose"d" same issue will exist later so I will quit identifying.

Line 150-153 make 2 sentences

Line 155 label no "s"

Line 263 formula no "r"

Line 304 "the" 3-D model

Line 351 remove "the"

Response : Thank you for your careful review. We have made corrections to all the errors you mentioned. Also, we have rechecked the entire paper and corrected similar errors.

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

The author carefully reviewed my questions and responded to me carefully while making changes in the manuscript that I found satisfactory. Therefore, in my opinion, the manuscript meets the requirements of the journal and can be accepted in its current form.

Reviewer 3 Report

Comments and Suggestions for Authors

My criticisms have been addressed in an exceptional fashion.

Comments on the Quality of English Language

I think a professional English reviewer could improve the readability.
