Article
Peer-Review Record

Design of an Eye-in-Hand Smart Gripper for Visual and Mechanical Adaptation in Grasping

Appl. Sci. 2022, 12(10), 5024; https://doi.org/10.3390/app12105024
by Li-Wei Cheng, Shih-Wei Liu and Jen-Yuan Chang *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 20 April 2022 / Revised: 11 May 2022 / Accepted: 14 May 2022 / Published: 16 May 2022
(This article belongs to the Special Issue Application of Compliant Mechanisms in Robotics)

Round 1

Reviewer 1 Report

This study developed a general robotic gripper with three fingers for adaptive actuation and an eye-in-hand vision system. The manuscript is well arranged and nicely presented. I recommend a minor revision before official acceptance.

- For the abstract, I suggest the authors give a short background introduction to the improvements made in this work.

- For the introduction, can the authors make it clearer and more concise?

Author Response

Dear Reviewer:

Thank you very much for your valuable time in reviewing the manuscript. Your comments and suggestions for improving the manuscript are very much appreciated. The attached file contains our humble replies to each of the questions and suggestions raised in your review.

Author Response File: Author Response.pdf

Reviewer 2 Report

This paper presents designs of link-structure-based shape-adaptive grippers, with thorough modeling and analysis of the grasping movement. It also utilizes a camera positioned in the center of the gripper to assist in identifying the shape and motion of grasping targets. I appreciate the effort of the authors in presenting in-depth details about their design exploration, modeling, coordinate mapping, and image processing. However, the authors could do better in highlighting the novelty and core contributions of this paper, as many of the methods included are already known. In particular, the abstract and the introduction section should be further improved, as the current versions are obscure and fail to follow a clear logical flow.

 

In addition, this paper can be further improved if the authors can resolve the following questions:

1. Line 10: What is “visual offset”? Can you be more specific?

2. Line 18: Regarding "… when the camera is too close to the object", I do not understand how this solves the out-of-sight problem.

3. Line 104: I cannot agree with the claim: “...occlusion by the gripper itself can be easily avoided with the eye-in-hand setup”. If the camera is surrounded by fingers, wouldn’t it have a narrow field of view and thus be subject to vision blocking?

4. Line 109: What do you mean by “visual adaptivity”? Can you be more specific?

5. Line 371: Such details about converting RGB to HSV format are unnecessary, as this conversion is already well known.

6. Line 398: "The shape of the target object is determined by calculating the number of vertices". Such a criterion will only work for objects with simple, 2D-like geometry (see the illustrative vertex-counting sketch after this list). In Section 5, the grasping strategy, which uses only two- and three-finger modes, seems overly simple.

7. Line 554: The need for prior knowledge of the conveyor speed suggests a limitation of such an "eye-in-hand" configuration.

8. The validation of the entire system is limited. It would be better if the authors could conduct a thorough experiment at the end of the paper showing the success rate of object grasping from the conveyor at different moving speeds.
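For clarity on comment 6: the following is a minimal, hypothetical sketch of the kind of vertex-counting shape criterion being questioned (not the authors' actual code), based on OpenCV contour approximation. It illustrates why such a criterion only distinguishes simple 2D silhouettes such as triangles, rectangles, and circles.

```python
import cv2

def classify_shape(contour, epsilon_ratio=0.04):
    """Classify a 2D contour by counting the vertices of its polygonal approximation."""
    perimeter = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, epsilon_ratio * perimeter, True)
    vertices = len(approx)
    if vertices == 3:
        return "triangle"
    if vertices == 4:
        return "rectangle"
    # A high vertex count on a closed contour is treated as circle-like.
    return "circle" if vertices > 6 else "polygon"

# Usage sketch: "target.png" is a placeholder image, not a file from the paper.
img = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    print(classify_shape(c))
```

Occluded or irregular 3D objects do not reduce to such clean vertex counts, which is the concern raised in comment 6.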

 

Minor issues:

9. Line 20: the grammar of “The results indicate useful…” is incorrect

10. Line 69: “To properly design an adaptive gripper, we designed a gripper based on …” is verbose.

11. Line 98: “Easy implementation and simple calibration are other advantages of eye-to-hand systems” does not fit the context before and after.

12. The angle θ1 is missing in Figure 3.

13. Figure 8: "Me" should be "Mθ".

Author Response

Dear Reviewer,

Thank you so much for your valuable time in reviewing the manuscript. Your comments and suggestions for improving the manuscript are very much appreciated. Please find in the attached PDF file our humble replies to each of the questions and suggestions raised in your review. Thank you!

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

I appreciate the authors' active responses to the reviewers' questions. The only remaining issue is that I am confused by the negative error rate shown in Table 3. An explanation and the formula used to compute the error rate should be included in the manuscript. Aside from that, I am satisfied with how the authors addressed those questions and have no further questions.
