Research on the Path Planning Algorithm of a Manipulator Based on GMM/GMR-MPRM
Round 1
Reviewer 1 Report
The paper introduces an adaptive path planning algorithm addressing the problem of task reproduction in the process of Learning from Demonstration (LfD).
The paper is very well structured, with formal analysis and simulation.
However, the paper could be improved by adding to the introduction some other platforms on which the approach could be applied, to strengthen the soundness of the paper and of the proposed approach. For example:
Russo, M.; Cafolla, D.; Ceccarelli, M. Design and experiments of a novel humanoid robot with parallel architectures. Robotics 2018, 7 (4), 79. DOI: 10.3390/robotics7040079

Ceccarelli, M.; Cafolla, D.; Carbone, G.; Russo, M.; Cigola, M.; Senatore, L.J.; Gallozzi, A.; Maccio, R.D.; Ferrante, F.; Bolici, F.; Supino, S.; Colella, N.; Bianchi, M.; Intrisano, C.; Recinto, G.; Micheli, A.; Vistocco, D.; Nuccio, M.R.; Porcelli, M. HeritageBot service robot assisting in cultural heritage. Proceedings of the 2017 1st IEEE International Conference on Robotic Computing (IRC 2017), pp. 440-445. DOI: 10.1109/IRC.2017.84
Figure 4 and Figure 5 should be improved, since the numbers in them are not clear enough.
The experimental part should be detailed further, with more real-scenario photos.
Author Response
Please see the attachment.
Author Response File: Author Response.docx
Reviewer 2 Report
The paper proposes an adaptive online path planner based on a Gaussian Mixture Model (GMM), Gaussian Mixture Regression (GMR), and a Probabilistic Roadmap (PRM) to solve issues related to real task execution.
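To make the GMM/GMR step of the planner concrete, a minimal sketch follows. This is not the authors' implementation: the synthetic sine-shaped demonstration trajectory, the helper name `gmr`, and the choice of 5 mixture components are all illustrative assumptions. It shows the standard GMR construction: fit a GMM over the joint (time, position) space, then regress the expected position at each time from the component-wise conditional Gaussians.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical demonstration data: a noisy 1-D trajectory x(t),
# standing in for recorded end-effector positions from LfD.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
x = np.sin(2 * np.pi * t) + 0.02 * rng.standard_normal(t.shape)
data = np.column_stack([t, x])

# GMM step: fit a mixture over the joint (t, x) space.
gmm = GaussianMixture(n_components=5, covariance_type="full",
                      random_state=0).fit(data)

def gmr(gmm, t_query):
    """GMR step: E[x | t] under the fitted joint GMM."""
    t_query = np.atleast_1d(np.asarray(t_query, dtype=float))
    K = len(gmm.weights_)
    h = np.empty((K, len(t_query)))           # component responsibilities
    cond_means = np.empty((K, len(t_query)))  # per-component E[x | t]
    for k in range(K):
        mu_t, mu_x = gmm.means_[k]
        s_tt = gmm.covariances_[k][0, 0]
        s_xt = gmm.covariances_[k][1, 0]
        # Responsibility: prior weight times marginal density of t.
        h[k] = gmm.weights_[k] * np.exp(
            -0.5 * (t_query - mu_t) ** 2 / s_tt) / np.sqrt(2 * np.pi * s_tt)
        # Conditional mean of x given t for component k.
        cond_means[k] = mu_x + (s_xt / s_tt) * (t_query - mu_t)
    h /= h.sum(axis=0, keepdims=True)
    return (h * cond_means).sum(axis=0)

# Regressed (smoothed) trajectory, usable as a reference path for planning.
x_hat = gmr(gmm, t)
```

In the paper's pipeline, the GMR output would serve as the demonstrated reference path that the PRM then adapts around obstacles; this sketch only covers the regression part.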
Comments:
- consider citing [1], optimal motion planner;
- consider citing [2], in which the motion planner is learned offline and then taken to the real robot for the task execution;
- in the Introduction section, better highlight the main contributions of the paper with a bullet list;
- w.r.t. teaching the robot a task via human demonstrations, consider discussing your method w.r.t. [3];
- is a video of the experimental results available? It would help in understanding the paper's results;
- what if a (partially) new trajectory has to be executed, or a (partially) new environment is encountered? Is there any possibility of transferring the already available knowledge to plan the new motion?
- is the algorithm capable of dealing with dynamic obstacles?
- please better explain the difficulties in setting up the method in a real task (parameters to be tuned, ...);
- please better describe the simulation and experimental tests in terms of robot control (control frequency, ...);
- please better specify how much time is needed to plan the robot motion;
- please check the English.
[1] Chen, Yuqing, Loris Roveda, and David J. Braun. "Efficiently computable constrained optimal feedback controllers." IEEE Robotics and Automation Letters 4.1 (2018): 121-128.
[2] Shahid, Asad Ali, et al. "Learning continuous control actions for robotic grasping with reinforcement learning." 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2020.
[3] Roveda, Loris, et al. "Human–robot collaboration in sensorless assembly task learning enhanced by uncertainties adaptation via Bayesian Optimization." Robotics and Autonomous Systems 136 (2021): 103711.
Author Response
Please see the attachment.
Author Response File: Author Response.docx
Round 2
Reviewer 2 Report
The paper can now be accepted.