Article
Peer-Review Record

Object Segmentation for Autonomous Driving Using iseAuto Data

Electronics 2022, 11(7), 1119; https://doi.org/10.3390/electronics11071119
by Junyi Gu *, Mauro Bellone, Raivo Sell and Artjom Lind
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 28 February 2022 / Revised: 18 March 2022 / Accepted: 25 March 2022 / Published: 1 April 2022
(This article belongs to the Special Issue Cyber-Physical Systems in Smart Cities)

Round 1

Reviewer 1 Report

The paper focuses on the problem of object segmentation of humans and vehicles in an autonomous driving environment, for which a newly captured camera-lidar dataset is presented. The paper is an extended version of the authors' ITSC conference paper. The introduced dataset is more general with respect to day/night and fair/rain environmental settings. The authors evaluate their proposed dataset in various settings, including fully supervised, fine-tuned, and semi-supervised training. Domain knowledge transfer is studied between the Waymo dataset and the introduced iseAuto dataset.

 

Overall the paper is well written. To begin with, the authors describe the limitations of active and passive sensors, i.e., lidar and camera, and emphasize how fusion could be the solution, especially for low-texture flat areas. A reasonable coverage of the related work on sensor fusion and semi-supervised learning is then provided. The dataset and the capturing vehicle's specifications are sufficiently described, along with the different splits of the iseAuto and Waymo datasets. The paper concludes with a description of the employed metrics and the results, demonstrating the importance of semi-supervised learning and domain transfer. Overall I enjoyed reading the paper and am inclined to recommend it for acceptance.

 

I have minor suggestions to further improve the quality of the paper.

  1. The authors should describe how the journal version of the paper extends beyond the ITSC conference paper. Also, why is the reference for the conference paper not provided? If it is under submission, the authors should mention that.
  2. Figure 3 in the paper is very unclear; either revise the figure or add more description to its caption.

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors have provided a great literature survey of both the available datasets and the algorithms applied in the domain. I have the following suggestions:

  1. The authors state in the paper that the human class is less represented. Is there a way to improve prediction on the human class through transfer learning from other datasets?
  2. The code and model should be provided to the research community through GitHub.
  3. It would be good if the paper could show how an increase in data improves the predictions of the algorithm. This would give an idea of the minimum additional data required for a substantial improvement, which would be useful for other researchers.

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 3 Report

The manuscript provides a dataset (named iseAuto) based on LiDAR and camera acquisition for object segmentation problems in autonomous driving. Using semi-supervised transfer learning for vehicle segmentation with LiDAR-camera fusion improved the performance from 76% in the iseAuto baseline to 79%.

Some minor comments for final publication:

1) Are all the sensors in Fig. 1 LiDAR? Identify the type of each sensor in Fig. 1.

2) I suggest comparing the specifications of your dataset against other available datasets in a table.

3) For better comparison, make the best results bold in Tables 2-7.

 

Author Response

Please see the attachment

Author Response File: Author Response.pdf
