Article
Peer-Review Record

3D Snow Sculpture Reconstruction Based on Structured-Light 3D Vision Measurement

Appl. Sci. 2021, 11(8), 3324; https://doi.org/10.3390/app11083324
by Wancun Liu 1,2, Liguo Zhang 3,4,*, Xiaolin Zhang 1 and Lianfu Han 5,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 6 January 2021 / Revised: 26 March 2021 / Accepted: 29 March 2021 / Published: 7 April 2021
(This article belongs to the Special Issue Intelligent Processing on Image and Optical Information, Volume II)

Round 1

Reviewer 1 Report

LINE 210-230:
In my experience, the R-G color space has a problem on the laser line itself. If a colour camera is used, the laser is so bright that the G (and B) components also have values close to 255 (the pixels are saturated). Consequently R-G = 255 - 250 ≈ 0, and the laser line in the middle of the Gaussian profile is partially removed from the image.
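To make this concern concrete, a minimal NumPy sketch of the saturation effect (the pixel values are invented for illustration, not taken from the manuscript):

import numpy as np

# Illustrative cross-section of a laser stripe in a colour image (made-up values).
# Away from the stripe the red channel dominates; at the stripe centre both R and G
# saturate near 255, so the R-G difference collapses and the true peak is lost.
R = np.array([60, 120, 200, 255, 255, 255, 200, 120, 60], dtype=np.int16)
G = np.array([30,  40,  60, 250, 252, 250,  60,  40, 30], dtype=np.int16)

diff = np.clip(R - G, 0, 255)
print(diff)           # [ 30  80 140   5   3   5 140  80  30]
print(diff.argmax())  # 2 -> the maximum sits on the stripe flank, not at the saturated centre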

LINE 262-267:
Can you explain in more detail the relation between the center-of-mass algorithm and the Canny edge detection? Can you explain the point-pairs? Do you calculate the center of mass for the pixels between the point-pairs? Please clarify. I do not understand how you arrive at several mass centers in a particular column.
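As a purely illustrative reading of this question (not taken from the manuscript), a column-wise centre of mass computed between a pair of Canny edge rows might look like the following Python sketch:

import numpy as np

# Hypothetical illustration only: within one image column, take the pixels lying
# between a pair of Canny edge rows (a "point-pair") and compute their
# intensity-weighted centre of mass, giving a sub-pixel stripe position.
def center_of_mass(column: np.ndarray, edge_pair: tuple) -> float:
    top, bottom = edge_pair                        # rows of the two Canny edges
    rows = np.arange(top, bottom + 1)
    weights = column[top:bottom + 1].astype(float)
    return float((rows * weights).sum() / weights.sum())

# Example: a column with a bright blob between rows 3 and 7.
col = np.array([0, 0, 10, 80, 200, 250, 190, 70, 5, 0], dtype=np.uint8)
print(center_of_mass(col, (3, 7)))                 # ~4.96, the sub-pixel stripe centre

Several edge point-pairs in one column would then yield several such centres, which is the situation the question above asks about.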

RESULTS:
I have some concerns about the results:
- If the measured object is of size 3000 mm, the triangulation angle is 45 degrees, and the camera optical axis intersects the laser plane on the object surface, that means that:
(1) An object of size 3000 mm is sampled with 640 pixels. In that case, 3000/640 = 4.69 mm/pixel. With such low sampling, the described accuracy cannot be achieved.
(2) If we assume a pixel size of 6.5 micrometers, then the detector in the camera is 4.16 mm wide, which means that we need a lens with a focal length of f = 3000 mm × 4.16 mm / (3000 mm + 4.16 mm) ≈ 4 mm at a distance of 3000 mm from the snow sculpture (a short numeric check of these figures follows below). A small sensor plus a wide-field lens does not provide a high-quality image; there should be a lot of distortion. The camera-laser distance is also very large due to the 45-degree triangulation angle, ca. 3000 mm. This would be a huge scanner.
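A short Python check of the arithmetic above, using the assumed values stated in this comment (640-pixel-wide VGA sensor, 6.5 µm pixel pitch, 3 m object at roughly 3 m working distance); these are assumptions, not values reported by the authors:

# Back-of-envelope check of the figures quoted above (assumed, not measured, values).
object_size  = 3000.0    # mm, lateral extent of the sculpture
working_dist = 3000.0    # mm, camera-to-object distance
pixels       = 640       # horizontal resolution of a VGA sensor
pixel_pitch  = 0.0065    # mm (6.5 micrometers)

sampling = object_size / pixels                             # ~4.69 mm per pixel on the object
sensor   = pixels * pixel_pitch                             # ~4.16 mm sensor width
focal    = working_dist * sensor / (object_size + sensor)   # thin-lens estimate, ~4.15 mm

print(f"{sampling:.2f} mm/pixel, {sensor:.2f} mm sensor, f = {focal:.2f} mm")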

Please explain the optical setup and especially the calibration. What is the measurement uncertainty?

I doubt the veracity of the results shown for the described equipment.

Author Response

Dear Reviewer,

Thank you for your email of 2 February 2021 regarding the review results of our paper, entitled “3D Snow Sculpture Reconstruction Based on Structured-light 3D Vision Measurement”. We would also like to thank the anonymous reviewers for their efforts in reviewing and improving our manuscript. In response to the review comments, we have revised our manuscript by adding a more rigorous formulation, comparison experiments, and additional references. We have marked the main changes of the revision in blue for your convenience.

Our replies to the comments and questions of each reviewer are listed in the following pages. We hope that our revisions can satisfy the reviewers.

Sincerely,

Wancun Liu, Liguo Zhang, Xiaolin Zhang, and Lianfu Han

Author Response File: Author Response.docx

Reviewer 2 Report

The manuscript describes a method for outdoor 3D reconstruction of snow sculptures using structured-light techniques. The manuscript is technically sound, and the authors have done a good job of quantifying the performance of the method relative to other methods. Furthermore, the technique is very convenient and applicable in a real-world outdoor setting. I recommend that the article be accepted. Minor suggestions: I recommend the authors improve the quality of the writing, as there are several spots that should be written better. For example: line 17 “anti-noise property”, line 36 “not second”, line 97 “continuous video”, line 233 “We can learn from Grundland et al [20].” Also, in line 48 a statement such as “obviously there is high demand…” needs to be followed by citations.

Author Response

Dear Reviewer,

Thank you for your email of 2 February 2021 regarding the review results of our paper, entitled “3D Snow Sculpture Reconstruction Based on Structured-light 3D Vision Measurement”. We would also like to thank the anonymous reviewers for their efforts in reviewing and improving our manuscript. In response to the review comments, we have revised our manuscript by adding a more rigorous formulation, comparison experiments, and additional references. We have marked the main changes of the revision in blue for your convenience.

Our replies to the comments and questions of each reviewer are listed in the following pages. We hope that our revisions can satisfy the reviewers.

Sincerely,

Wancun Liu, Liguo Zhang, Xiaolin Zhang, and Lianfu Han

Author Response File: Author Response.docx

Reviewer 3 Report

The authors demonstrate the applicability of a structured-light 3D imaging method with a laser stripe outdoors.

They show that their method works for snow sculptures in the field.

It seems that the work is genuine and that the authors succeeded in making it work by clever stripe extraction.

The major weakness is that it has been tested with only two snow sculptures, both in the morning; it would have been nice to test more actual snow sculptures and also different outdoor light conditions.

It is desirable to give the lighting conditions, ideally with measurements (e.g. illuminance in lux) or weather conditions, to estimate applicability.

English is OK

I have a few comments:

The authors use a red laser; green lasers are normally available at higher output powers. Would that improve the signal-to-noise ratio, or is it counterbalanced by the higher amount of sunlight in the green?

The abstract is OK; it may be worth mentioning the laser stripe method and that stripe extraction was the key to this work.

line 120: Fig. 2 needs to be fitted to the page properly

line 140: "laser stripe is flooded" -> "laser stripe is overlaid"

line 173: signal noise ratio -> signal to noise ratio

line 179: I assume this is not a true luminance measurement according to V_lambda (?)

line 233: strange sentence "We can learn from..."

line 237: watch page break (check throughout manuscript)

line 255: why use "laser line" here? (check throughout manuscript)

line 316: laser stripe -> stripe ? (check throughout manuscript)

line 395: please give more information on the light background

- what time, or what sun elevation angle
- what level of cloudiness
- what level of fogginess

etc.

ideally give an illuminance measurement in lux as well

This is essential to evaluate whether this method is applicable to a wide range of scenarios or whether it just worked once.

line 410: table should not go over page break

line 460:

please discuss the influence of illumination conditions in more detail

if this method only works reliably in the morning, that reduces its applicability

Author Response

Dear Reviewer,

Thank you for your email of 2 February 2021 regarding the review results of our paper, entitled “3D Snow Sculpture Reconstruction Based on Structured-light 3D Vision Measurement”. We would also like to thank the anonymous reviewers for their efforts in reviewing and improving our manuscript. In response to the review comments, we have revised our manuscript by adding a more rigorous formulation, comparison experiments, and additional references. We have marked the main changes of the revision in blue for your convenience.

Our replies to the comments and questions of each reviewer are listed in the following pages. We hope that our revisions can satisfy the reviewers.

Sincerely,

Wancun Liu, Liguo Zhang, Xiaolin Zhang, and Lianfu Han

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

The authors address laser triangulation. Despite the fact that this method is well researched, the detection of laser lines in images captured with a color camera under demanding lighting conditions is not so simple. The authors present the development of a new method for laser line detection in color images. I have a couple of concerns:

 

Chapter 2.2. Monochromatic value space transformation

Why do you develop the theory around kurtosis (Eqs. 1 to 6) if, in the end, you simply use only the R-G color space?

In my experience, the R-G color space has a problem ON THE LASER LINE. If you use a colour camera, the laser is so bright that the G (and B) components also have values close to 255 (the pixels are saturated). For example, R-G = 255 - 250 ≈ 0. The brightest part of the laser line (as shown in Figure 6) is thus partially removed from the image.


RESULTS:
What is the size of the measuring field of your structured-light sensor? I guess ... In chapter 3.1.1, Standard cube measurement, you claim:

"In the experiment, a standard cube with black and white grid is designed. The size of the cube is 200 mm × 200 mm× 200 mm, the mesh of which is 20 mm × 20 mm, and the uncertainty is 0.01 mm."

That means that your sensor should have a small measuring field, e.g. ca. 250 mm in width, especially because you use a VGA camera (line 368).

However, you show outstanding performance when measuring large objects around 3 m in size. How do you achieve this with your system (VGA camera, small measuring field)?

Please explain the optical setup (distance between laser and camera, triangulation angle, size of measuring field), the scanning procedure, and especially the calibration!
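For context, a minimal Python sketch of how the per-pixel resolution changes between a ~250 mm calibration field and a 3 m object, assuming the 640-pixel VGA width mentioned above; the field sizes are the estimates given in this comment, not figures from the paper:

# Rough lateral resolution of a 640-pixel-wide (VGA) image at two fields of view.
pixels = 640
for field_mm in (250.0, 3000.0):   # calibration-cube field vs. full-sculpture field
    print(f"{field_mm:6.0f} mm field -> {field_mm / pixels:.2f} mm per pixel")
# ~0.39 mm/pixel over a 250 mm field, but ~4.69 mm/pixel if a 3 m object fills the
# frame, which is why the scanning procedure and calibration need to be spelled out.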
  

Author Response

Dear Reviewer,

We would like to thank you for your efforts in reviewing and improving our manuscript.

 Our replies to the comments and questions of each reviewer are listed in the following pages. We hope that our revisions can satisfy the reviewers.

Sincerely,

Wancun Liu, Liguo Zhang, Xiaolin Zhang, and Lianfu Han

Author Response File: Author Response.docx
