Robotics, Volume 3, Issue 3 (September 2014) – 5 articles, Pages 235–329

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.

Research

Article
Neural Networks Integrated Circuit for Biomimetics MEMS Microrobot
by Ken Saito, Kazuaki Maezumi, Yuka Naito, Tomohiro Hidaka, Kei Iwata, Yuki Okane, Hirozumi Oku, Minami Takato and Fumio Uchikoba
Robotics 2014, 3(3), 235-246; https://doi.org/10.3390/robotics3030235 - 25 Jun 2014
Cited by 8 | Viewed by 10635
Abstract
In this paper, we propose a neural networks integrated circuit (NNIC) that serves as the driving-waveform generator for a biomimetic microelectromechanical systems (MEMS) microrobot measuring 4.0 mm in width, 2.7 mm in length and 2.5 mm in height. The microrobot was made from a silicon wafer by microfabrication technology. The mechanical system of the robot was equipped with small rotary actuators, link mechanisms and six legs to realize ant-like switching behavior. The NNIC generates the driving waveform using synchronization phenomena such as those found in biological neural networks, and the waveform can drive the actuators of the MEMS microrobot directly. Therefore, the NNIC bare chip realizes robot control without using any software programs or A/D converters. The microrobot performed forward and backward locomotion, and also changed direction, when an external single trigger pulse was input. The locomotion speed of the microrobot was 26.4 mm/min with a step width of 0.88 mm. The power consumption of the system was 250 mWh at a room temperature of 298 K. Full article
(This article belongs to the Special Issue Advances in Biomimetic Robotics)
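The locomotion figures quoted in the abstract imply a step rate that is easy to check. The snippet below is only a back-of-the-envelope sketch using the reported speed and step width; the gait frequency itself is not stated in the abstract.

```python
# Implied step rate from the figures quoted in the abstract
# (a rough consistency check, not code from the paper).
speed_mm_per_min = 26.4   # reported locomotion speed
step_width_mm = 0.88      # reported step width

steps_per_min = speed_mm_per_min / step_width_mm
print(f"Implied step rate: {steps_per_min:.0f} steps/min "
      f"({steps_per_min / 60:.2f} Hz)")
# -> Implied step rate: 30 steps/min (0.50 Hz)
```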

Article
IMU and Multiple RGB-D Camera Fusion for Assisting Indoor Stop-and-Go 3D Terrestrial Laser Scanning
by Jacky C.K. Chow, Derek D. Lichti, Jeroen D. Hol, Giovanni Bellusci and Henk Luinge
Robotics 2014, 3(3), 247-280; https://doi.org/10.3390/robotics3030247 - 11 Jul 2014
Cited by 42 | Viewed by 15574
Abstract
Autonomous Simultaneous Localization and Mapping (SLAM) is an important topic in many engineering fields. Since stop-and-go systems are typically slow and full-kinematic systems may lack accuracy and integrity, this paper presents a novel hybrid “continuous stop-and-go” mobile mapping system called Scannect. A 3D terrestrial LiDAR system is integrated with a MEMS IMU and two Microsoft Kinect sensors to map indoor urban environments. The Kinects’ depth maps were processed using a new point-to-plane ICP that minimizes the reprojection error of the infrared camera and projector pair in an implicit iterative extended Kalman filter (IEKF). A new formulation of the 5-point visual odometry method is tightly coupled in the implicit IEKF without increasing the dimensions of the state space. The Scannect can map and navigate in areas with textureless walls and provides an effective means for mapping large areas with lots of occlusions. Mapping long corridors (total travel distance of 120 m) took approximately 30 minutes and achieved a Mean Radial Spherical Error of 17 cm before smoothing or global optimization. Full article
(This article belongs to the Special Issue Robot Vision)
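For readers unfamiliar with the point-to-plane objective that this paper builds on, the sketch below shows one linearized update step of standard point-to-plane ICP. It is not the authors' method: Scannect minimizes the infrared camera/projector reprojection error inside an implicit IEKF, whereas this sketch solves the classical small-angle least-squares form; the function and variable names are illustrative.

```python
import numpy as np

def point_to_plane_icp_step(src, dst, normals):
    """One linearized point-to-plane ICP update (small-angle approximation).

    src, dst : (N, 3) matched source and destination points
    normals  : (N, 3) unit normals at the destination points
    Returns (R, t) reducing sum(((R @ p + t - q) . n)**2) for these matches.
    """
    # Linearized system: [cross(p, n), n] @ [w, t] = -(p - q) . n
    A = np.hstack([np.cross(src, normals), normals])        # (N, 6) Jacobian
    b = -np.einsum('ij,ij->i', src - dst, normals)           # (N,) residuals
    x, *_ = np.linalg.lstsq(A, b, rcond=None)                # [w (rotation), t]
    w, t = x[:3], x[3:]
    theta = np.linalg.norm(w)                                # rotation angle
    if theta < 1e-12:
        return np.eye(3), t
    k = w / theta                                            # rotation axis
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    # Rodrigues' formula turns the small rotation vector into a matrix
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    return R, t
```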

Article
Rotation Matrix to Operate a Robot Manipulator for 2D Analog Tracking Objects Using Electrooculography
by Muhammad Ilhamdi Rusydi, Takeo Okamoto, Satoshi Ito and Minoru Sasaki
Robotics 2014, 3(3), 289-309; https://doi.org/10.3390/robotics3030289 - 23 Jul 2014
Cited by 16 | Viewed by 10576
Abstract
Performing special tasks in daily activities using electrooculography (EOG) is being developed in various areas. In this paper, simple rotation matrices are introduced to help an operator move a 2-DoF planar robot manipulator. The EOG sensor, NF 5201, has two output channels (Ch1 and Ch2), as well as one ground channel and one reference channel. The robot's movement served as the indicator that the system could follow gaze motion based on EOG. As preliminary experiments, operators gazed at five training target points each along the horizontal and vertical lines; these experiments were based on the directions, distances and areas of gaze motions, and were used to obtain the relationships between EOG and gaze-motion distance for four directions: up, down, right and left. The maximum angle was 46° for the horizontal and 38° for the vertical. The rotation matrices for the horizontal and vertical signals were combined so as to track objects diagonally. For verification, the errors between actual and desired target positions were calculated using the Euclidean distance over a test set of 20 random target points. The results indicate that the system can track an object with average angle errors of 3.31° on the x-axis and 3.58° on the y-axis. Full article
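The verification step described above (Euclidean distance between actual and desired target positions) and the combination of the two channel signals through a rotation matrix can be sketched as follows. The gains and mixing angle are hypothetical placeholders, since the abstract does not give the calibration values derived from the training targets.

```python
import numpy as np

def rot2d(theta_deg):
    """Plain 2D rotation matrix."""
    c, s = np.cos(np.deg2rad(theta_deg)), np.sin(np.deg2rad(theta_deg))
    return np.array([[c, -s],
                     [s,  c]])

# Hypothetical per-channel gains (degrees of gaze per unit EOG amplitude);
# the paper obtains these from the five training target points per axis.
GAIN_CH1, GAIN_CH2 = 0.05, 0.04
MIX_ANGLE_DEG = 45.0   # assumed channel mixing angle, not stated in the abstract

def gaze_from_eog(ch1, ch2):
    """Combine the two EOG channels into horizontal/vertical gaze angles (deg)."""
    return rot2d(MIX_ANGLE_DEG) @ np.array([GAIN_CH1 * ch1, GAIN_CH2 * ch2])

def tracking_error(actual_xy, desired_xy):
    """Euclidean distance between actual and desired 2D target positions."""
    return float(np.linalg.norm(np.asarray(actual_xy) - np.asarray(desired_xy)))

# Example: error between where the manipulator pointed and the target.
print(tracking_error((10.2, 4.9), (10.0, 5.0)))   # ~0.22
```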

Review

Review
The Role of Indocyanine Green for Robotic Partial Nephrectomy: Early Results, Limitations and Future Directions
by Zachary Klaassen, Qiang Li, Rabii Madi and Martha K. Terris
Robotics 2014, 3(3), 281-288; https://doi.org/10.3390/robotics3030281 - 16 Jul 2014
Cited by 4 | Viewed by 8470
Abstract
The surgical management of small renal masses has continued to evolve, particularly with the advent of robotic partial nephrectomy (RPN). Recent studies at high-volume institutions utilizing near-infrared imaging with indocyanine green (ICG) fluorescent dye to delineate renal tumor anatomy have generated interest among robotic surgeons in improving warm ischemia times and positive margin rates for RPN. To date, early studies suggest that the positive margin rate using ICG is comparable to that of traditional RPN; however, this technology improves visualization of the renal vasculature, allowing selective clamping or zero ischemia. The precise combination of fluorescent compound, dose and optimal tumor anatomy for ICG RPN has yet to be elucidated. Full article
(This article belongs to the Special Issue Medical Robotics and Systems)

Review
A Review of Camera Viewpoint Automation in Robotic and Laparoscopic Surgery
by Abhilash Pandya, Luke A. Reisner, Brady King, Nathan Lucas, Anthony Composto, Michael Klein and Richard Darin Ellis
Robotics 2014, 3(3), 310-329; https://doi.org/10.3390/robotics3030310 - 14 Aug 2014
Cited by 56 | Viewed by 14743
Abstract
Complex teleoperative tasks, such as surgery, generally require human control. However, teleoperating a robot using indirect visual information poses many technical challenges because the user is expected to control the movement(s) of the camera(s) in addition to the robot’s arms and other elements. For humans, camera positioning is difficult, error-prone, and a drain on the user’s available resources and attention. This paper reviews the state of the art of autonomous camera control with a focus on surgical applications. We also propose potential avenues of research in this field that will support the transition from direct slaved control to truly autonomous robotic camera systems. Full article
(This article belongs to the Special Issue Medical Robotics and Systems)