
Table of Contents

Robotics, Volume 3, Issue 3 (September 2014), Pages 235-329

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.

Research

Open AccessArticle Neural Networks Integrated Circuit for Biomimetics MEMS Microrobot
Robotics 2014, 3(3), 235-246; doi:10.3390/robotics3030235
Received: 31 March 2014 / Revised: 23 May 2014 / Accepted: 18 June 2014 / Published: 25 June 2014
Cited by 2
Abstract
In this paper, we propose a neural networks integrated circuit (NNIC) that serves as the driving waveform generator for a biomimetic microelectromechanical systems (MEMS) microrobot measuring 4.0 mm in width, 2.7 mm in length, and 2.5 mm in height. The microrobot was made from a silicon wafer processed by microfabrication technology. The mechanical system of the robot was equipped with small rotary actuators, link mechanisms and six legs to realize ant-like switching behavior. The NNIC generates the driving waveform using synchronization phenomena analogous to those in biological neural networks. The driving waveform can operate the actuators of the MEMS microrobot directly, so the NNIC bare chip controls the robot without any software programs or A/D converters. The microrobot performed forward and backward locomotion, and changed direction when an external single trigger pulse was input. The locomotion speed of the microrobot was 26.4 mm/min with a step width of 0.88 mm. The power consumption of the system was 250 mWh at a room temperature of 298 K.
(This article belongs to the Special Issue Advances in Biomimetic Robotics)
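As a back-of-envelope check on the figures in the abstract (the step-rate value is our own inference, not reported by the authors): a locomotion speed of 26.4 mm/min with a step width of 0.88 mm implies roughly 30 steps per minute.

```python
# Hypothetical sanity check using the two measurements quoted in the abstract.
speed_mm_per_min = 26.4   # measured locomotion speed
step_width_mm = 0.88      # measured step width
steps_per_min = speed_mm_per_min / step_width_mm
print(round(steps_per_min, 3))  # about 30 steps per minute
```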
Open AccessArticle IMU and Multiple RGB-D Camera Fusion for Assisting Indoor Stop-and-Go 3D Terrestrial Laser Scanning
Robotics 2014, 3(3), 247-280; doi:10.3390/robotics3030247
Received: 19 February 2014 / Revised: 24 April 2014 / Accepted: 17 June 2014 / Published: 11 July 2014
Cited by 4
Abstract
Autonomous Simultaneous Localization and Mapping (SLAM) is an important topic in many engineering fields. Since stop-and-go systems are typically slow and full-kinematic systems may lack accuracy and integrity, this paper presents a novel hybrid “continuous stop-and-go” mobile mapping system called Scannect. A 3D terrestrial LiDAR system is integrated with a MEMS IMU and two Microsoft Kinect sensors to map indoor urban environments. The Kinects’ depth maps were processed using a new point-to-plane ICP that minimizes the reprojection error of the infrared camera and projector pair in an implicit iterative extended Kalman filter (IEKF). A new formulation of the 5-point visual odometry method is tightly coupled in the implicit IEKF without increasing the dimensions of the state space. The Scannect can map and navigate in areas with textureless walls and provides an effective means for mapping large areas with many occlusions. Mapping long corridors (total travel distance of 120 m) took approximately 30 minutes and achieved a Mean Radial Spherical Error of 17 cm before smoothing or global optimization.
(This article belongs to the Special Issue Robot Vision)
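The point-to-plane ICP mentioned in the abstract minimizes the distance from each transformed source point to the tangent plane at its matched destination point. The sketch below shows that residual in its standard form (our illustration, not the Scannect implementation, which additionally folds the camera–projector reprojection error into an implicit IEKF):

```python
import numpy as np

def point_to_plane_error(R, t, src, dst, normals):
    """Sum of squared point-to-plane residuals.

    R: 3x3 rotation, t: length-3 translation,
    src, dst: (N, 3) matched point sets, normals: (N, 3) unit normals at dst.
    """
    diff = src @ R.T + t - dst                        # transformed source minus target
    residuals = np.einsum('ij,ij->i', diff, normals)  # signed distances to tangent planes
    return float(np.sum(residuals ** 2))

# Identical clouds under the identity transform give zero error.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
nrm = np.tile([0.0, 0.0, 1.0], (3, 1))
print(point_to_plane_error(np.eye(3), np.zeros(3), pts, pts, nrm))  # 0.0
```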
Open AccessArticle Rotation Matrix to Operate a Robot Manipulator for 2D Analog Tracking Objects Using Electrooculography
Robotics 2014, 3(3), 289-309; doi:10.3390/robotics3030289
Received: 27 February 2014 / Revised: 17 May 2014 / Accepted: 26 June 2014 / Published: 23 July 2014
Cited by 3
Abstract
Performing special tasks in daily activities using electrooculography (EOG) is being developed in various areas. In this paper, simple rotation matrices were introduced to help the operator move a 2-DoF planar robot manipulator. The EOG sensor, NF 5201, has two output channels (Ch1 and Ch2), as well as one ground channel and one reference channel. The robot movement was the indicator that this system could follow gaze motion based on EOG. As preliminary experiments, operators gazed at five training target points each on the horizontal and vertical lines; these experiments, based on the directions, distances and areas of gaze motions, were used to obtain the relationship between EOG and gaze motion distance for four directions: up, down, right and left. The maximum angle was 46° for the horizontal and 38° for the vertical. The rotation matrices for the horizontal and vertical signals were combined so as to track objects diagonally. For verification, the errors between actual and desired target positions were calculated as Euclidean distances over a test set of 20 random target points. The results indicated that this system could track an object with average angle errors of 3.31° on the x-axis and 3.58° on the y-axis.
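The two ingredients named in the abstract, a 2D rotation matrix that combines the horizontal and vertical channel estimates into diagonal motion, and the Euclidean distance used to score tracking error, can be sketched as follows (an illustration only; the angle and test points below are hypothetical, not the paper's data):

```python
import numpy as np

def rotate_2d(vec, theta_rad):
    """Apply a 2D rotation matrix to a planar displacement vector."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    return np.array([[c, -s], [s, c]]) @ np.asarray(vec, dtype=float)

def euclidean_error(actual, desired):
    """Euclidean distance between actual and desired target positions."""
    a = np.asarray(actual, dtype=float)
    d = np.asarray(desired, dtype=float)
    return float(np.linalg.norm(a - d))

# Rotating a purely horizontal displacement by 45 degrees yields a diagonal one.
diag = rotate_2d([1.0, 0.0], np.pi / 4)
print(euclidean_error([3.0, 4.0], [0.0, 0.0]))  # 5.0
```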

Review


Open AccessReview The Role of Indocyanine Green for Robotic Partial Nephrectomy: Early Results, Limitations and Future Directions
Robotics 2014, 3(3), 281-288; doi:10.3390/robotics3030281
Received: 27 February 2014 / Revised: 30 June 2014 / Accepted: 9 July 2014 / Published: 16 July 2014
Abstract
The surgical management of small renal masses has continued to evolve, particularly with the advent of robotic partial nephrectomy (RPN). Recent studies at high-volume institutions utilizing near-infrared imaging with indocyanine green (ICG) fluorescent dye to delineate renal tumor anatomy have generated interest among robotic surgeons in improving warm ischemia times and positive margin rates for RPN. To date, early studies suggest that the positive margin rate using ICG is comparable to that of traditional RPN; however, this technology improves visualization of the renal vasculature, allowing selective clamping or zero ischemia. The precise combination of fluorescent compound, dose, and optimal tumor anatomy for ICG RPN has yet to be elucidated.
(This article belongs to the Special Issue Medical Robotics and Systems)
Open AccessReview A Review of Camera Viewpoint Automation in Robotic and Laparoscopic Surgery
Robotics 2014, 3(3), 310-329; doi:10.3390/robotics3030310
Received: 1 May 2014 / Revised: 18 July 2014 / Accepted: 19 July 2014 / Published: 14 August 2014
Cited by 7
Abstract
Complex teleoperative tasks, such as surgery, generally require human control. However, teleoperating a robot using indirect visual information poses many technical challenges because the user is expected to control the movement(s) of the camera(s) in addition to the robot’s arms and other elements. For humans, camera positioning is difficult, error-prone, and a drain on the user’s available resources and attention. This paper reviews the state of the art of autonomous camera control with a focus on surgical applications. We also propose potential avenues of research in this field that will support the transition from direct slaved control to truly autonomous robotic camera systems.
(This article belongs to the Special Issue Medical Robotics and Systems)

Journal Contact

MDPI AG
Robotics Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
robotics@mdpi.com
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18