

Intelligent Sensors - 2010

A special issue of Sensors (ISSN 1424-8220).

Deadline for manuscript submissions: closed (30 June 2010) | Viewed by 344587

Special Issue Editor


Prof. Dr. Wilmar Hernandez
Guest Editor
Facultad de Ingeniería y Ciencias Aplicadas, Campus Queri, Universidad de Las Américas—Ecuador, Calle José Queri s/n, entre Avenue De los Granados y Eloy Alfaro, Quito 170504, Ecuador
Interests: signal processing; estimation; control for sensors; robust and optimal sensor systems and their applications; statistical analysis of the information obtained from sensor measurements; signal conditioning techniques for intelligent sensors

Special Issue Information

Dear Colleagues,

The objective of this special issue is to present high-quality research results on intelligent (or smart) sensors. Full research papers with new results, as well as comprehensive reviews of the state of the art of intelligent sensors and their applications, are encouraged. There are no restrictions on the topics of interest for this special issue.

Authors are encouraged to submit, in as much detail as possible, experimental and theoretical results of their research on this topic and/or its industrial applications, including (but not limited to) automotive, consumer, environmental, medical, biological, chemical, electrical, mechatronic, robotic, nautical, aeronautical and space measurement systems. In addition, authors working on intelligent materials are encouraged to submit full research papers.

Prof. Dr. Wilmar Hernandez
Guest Editor


Published Papers (32 papers)


Research


1229 KiB  
Article
Pervasive Monitoring—An Intelligent Sensor Pod Approach for Standardised Measurement Infrastructures
by Bernd Resch, Manfred Mittlboeck and Michael Lippautz
Sensors 2010, 10(12), 11440-11467; https://doi.org/10.3390/s101211440 - 13 Dec 2010
Cited by 17 | Viewed by 11123
Abstract
Geo-sensor networks have traditionally been built up in closed monolithic systems, thus limiting trans-domain usage of real-time measurements. This paper presents the technical infrastructure of a standardised embedded sensing device, which has been developed in the course of the Live Geography approach. The sensor pod implements data provision standards of the Sensor Web Enablement initiative, including an event-based alerting mechanism and location-aware Complex Event Processing functionality for detection of threshold transgression and quality assurance. The goal of this research is that the resultant highly flexible sensing architecture will bring sensor network applications one step further towards the realisation of the vision of a “digital skin for planet earth”. The developed infrastructure can potentially have far-reaching impacts on sensor-based monitoring systems through the deployment of ubiquitous and fine-grained sensor networks. This in turn allows for the straight-forward use of live sensor data in existing spatial decision support systems to enable better-informed decision-making. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
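The event-based alerting the abstract describes is, at its core, a threshold-transgression check over a live measurement stream. Below is a minimal, library-free sketch of such a check; the sensor identifiers, limits and callback are hypothetical placeholders and stand in for the standardised Sensor Web Enablement eventing pipeline the paper actually implements.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Observation:
    sensor_id: str      # hypothetical identifier, e.g. "pod-07/pm10"
    value: float
    timestamp: float    # seconds since epoch

def threshold_alerts(stream: Iterable[Observation],
                     limits: dict[str, tuple[float, float]],
                     notify: Callable[[Observation, str], None]) -> None:
    """Emit an alert whenever an observation leaves its [low, high] band."""
    for obs in stream:
        low, high = limits.get(obs.sensor_id, (float("-inf"), float("inf")))
        if obs.value < low:
            notify(obs, f"below lower limit {low}")
        elif obs.value > high:
            notify(obs, f"above upper limit {high}")

# Toy usage with a hypothetical particulate-matter sensor:
if __name__ == "__main__":
    stream = [Observation("pod-07/pm10", v, t) for t, v in enumerate([12.0, 18.0, 55.0])]
    threshold_alerts(stream, {"pod-07/pm10": (0.0, 50.0)},
                     lambda obs, msg: print(obs.sensor_id, obs.value, msg))
```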

3312 KiB  
Article
On the Relevance of Using Bayesian Belief Networks in Wireless Sensor Networks Situation Recognition
by Antoine B. Bagula, Isaac Osunmakinde and Marco Zennaro
Sensors 2010, 10(12), 11001-11020; https://doi.org/10.3390/s101211001 - 03 Dec 2010
Cited by 4 | Viewed by 8671
Abstract
Achieving situation recognition in ubiquitous sensor networks (USNs) is an important issue that has been poorly addressed by both the research and practitioner communities. This paper describes some steps taken to address this issue by effecting USN middleware intelligence using an emerging situation awareness (ESA) technology. We propose a situation recognition framework where temporal probabilistic reasoning is used to derive and emerge situation awareness in ubiquitous sensor networks. Using data collected from outdoor environment monitoring in the city of Cape Town, we illustrate the use of the ESA technology in terms of sensor system operating conditions and environmental situation recognition. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

4961 KiB  
Article
Multi-Sensor Person Following in Low-Visibility Scenarios
by Jorge Sales, Raúl Marín, Enric Cervera, Sergio Rodríguez and Javier Pérez
Sensors 2010, 10(12), 10953-10966; https://doi.org/10.3390/s101210953 - 03 Dec 2010
Cited by 21 | Viewed by 11997
Abstract
Person following with mobile robots has traditionally been an important research topic. It has been solved, in most cases, by the use of machine vision or laser rangefinders. In some special circumstances, such as a smoky environment, the use of optical sensors is not a good solution. This paper proposes and compares alternative sensors and methods to perform person following in low-visibility conditions, such as smoky environments in firefighting scenarios. The use of laser rangefinder and sonar sensors is proposed in combination with a vision system that can determine the amount of smoke in the environment. The smoke detection algorithm provides the robot with the ability to use a different combination of sensors to perform robot navigation and person following depending on the visibility in the environment. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

1096 KiB  
Article
A General Purpose Feature Extractor for Light Detection and Ranging Data
by Yangming Li and Edwin B. Olson
Sensors 2010, 10(11), 10356-10375; https://doi.org/10.3390/s101110356 - 17 Nov 2010
Cited by 32 | Viewed by 9831
Abstract
Feature extraction is a central step of processing Light Detection and Ranging (LIDAR) data. Existing detectors tend to exploit characteristics of specific environments: corners and lines from indoor (rectilinear) environments, and trees from outdoor environments. While these detectors work well in their intended environments, their performance in different environments can be poor. We describe a general purpose feature detector for both 2D and 3D LIDAR data that is applicable to virtually any environment. Our method adapts classic feature detection methods from the image processing literature, specifically the multi-scale Kanade-Tomasi corner detector. The resulting method is capable of identifying highly stable and repeatable features at a variety of spatial scales without knowledge of the environment, and produces principled uncertainty estimates and corner descriptors at the same time. We present results on both software simulation and standard datasets, including the 2D Victoria Park and Intel Research Center datasets, and the 3D MIT DARPA Urban Challenge dataset. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
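The Kanade-Tomasi detector referenced in the abstract scores each point by the smaller eigenvalue of the local gradient structure tensor; large minimum eigenvalues indicate features that are well constrained in both directions. The sketch below computes that response for a 2D range image rendered from LIDAR returns; the rendering step and the scale pyramid are omitted, and the window size is an illustrative assumption rather than the paper's setting.

```python
import numpy as np
from scipy.ndimage import convolve

def kanade_tomasi_response(img: np.ndarray, win: int = 5) -> np.ndarray:
    """Minimum-eigenvalue corner response of the structure tensor, per pixel."""
    gy, gx = np.gradient(img.astype(float))
    # Structure tensor entries, smoothed over a box window of size `win`.
    kernel = np.ones((win, win)) / (win * win)
    sxx = convolve(gx * gx, kernel)
    syy = convolve(gy * gy, kernel)
    sxy = convolve(gx * gy, kernel)
    # Closed-form smaller eigenvalue of the 2x2 tensor [[sxx, sxy], [sxy, syy]].
    tr = sxx + syy
    det = sxx * syy - sxy * sxy
    disc = np.sqrt(np.maximum(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 - disc

# Features would then be local maxima of the response above a threshold (not shown).
```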

260 KiB  
Article
Sensor Data Fusion for Accurate Cloud Presence Prediction Using Dempster-Shafer Evidence Theory
by Jiaming Li, Suhuai Luo and Jesse S. Jin
Sensors 2010, 10(10), 9384-9396; https://doi.org/10.3390/s101009384 - 18 Oct 2010
Cited by 15 | Viewed by 7761
Abstract
Sensor data fusion technology can be used to best extract useful information from multiple sensor observations. It has been widely applied in various applications such as target tracking, surveillance, robot navigation, signal and image processing. This paper introduces a novel data fusion approach in a multiple radiation sensor environment using Dempster-Shafer evidence theory. The methodology is used to predict cloud presence based on the inputs of radiation sensors. Different radiation data have been used for the cloud prediction. The potential application areas of the algorithm include renewable power for virtual power stations, where the prediction of cloud presence is the most challenging issue for photovoltaic output. The algorithm is validated by comparing the predicted cloud presence with the corresponding sunshine occurrence data that were recorded as the benchmark. Our experiments have indicated that, compared to approaches using individual sensors, the proposed data fusion approach can increase the correct rate of cloud prediction by ten percent and decrease the unknown rate of cloud prediction by twenty-three percent. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
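Dempster's rule of combination, which underlies the fusion described above, merges basic mass assignments from independent sources and renormalizes by the conflicting mass. A minimal two-hypothesis sketch ({cloud, clear}, plus their union standing for "unknown") follows; the example masses are hypothetical and are not the paper's radiation-derived values.

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two basic mass assignments over the same frame of discernment."""
    combined: dict = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass assigned to disjoint hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

CLOUD, CLEAR = frozenset({"cloud"}), frozenset({"clear"})
UNKNOWN = CLOUD | CLEAR
# Hypothetical masses from two radiation sensors:
sensor_a = {CLOUD: 0.6, CLEAR: 0.1, UNKNOWN: 0.3}
sensor_b = {CLOUD: 0.5, CLEAR: 0.2, UNKNOWN: 0.3}
print(dempster_combine(sensor_a, sensor_b))
```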

1683 KiB  
Article
Intelligent Sensor Positioning and Orientation Through Constructive Neural Network-Embedded INS/GPS Integration Algorithms
by Kai-Wei Chiang and Hsiu-Wen Chang
Sensors 2010, 10(10), 9252-9285; https://doi.org/10.3390/s101009252 - 15 Oct 2010
Cited by 29 | Viewed by 10737
Abstract
Mobile mapping systems have been widely applied for acquiring spatial information in applications such as spatial information systems and 3D city models. Nowadays the most common technologies used for positioning and orientation of a mobile mapping system include a Global Positioning System (GPS) as the major positioning sensor and an Inertial Navigation System (INS) as the major orientation sensor. In the classical approach, the limitations of the Kalman Filter (KF) method and the overall price of multi-sensor systems have limited the popularization of most land-based mobile mapping applications. Although intelligent sensor positioning and orientation schemes consisting of Multi-layer Feed-forward Neural Networks (MFNNs), one of the most famous Artificial Neural Networks (ANNs), and KF/smoothers have been proposed in order to enhance the performance of low cost Micro Electro Mechanical System (MEMS) INS/GPS integrated systems, the automation of the MFNNs applied has not proven as easy as initially expected. Therefore, this study not only addresses the problems of insufficient automation in the conventional methodology that has been applied in MFNN-KF/smoother algorithms for INS/GPS integrated systems proposed in previous studies, but also exploits and analyzes the idea of developing alternative intelligent sensor positioning and orientation schemes that integrate various sensors in more automatic ways. The proposed schemes are implemented using one of the most famous constructive neural networks, the Cascade Correlation Neural Network (CCNN), to overcome the limitations of conventional techniques based on KF/smoother algorithms as well as previously developed MFNN-smoother schemes. The CCNNs applied also have the advantage of a more flexible topology compared to MFNNs. Based on the experimental data utilized, the preliminary results presented in this article illustrate the effectiveness of the proposed schemes compared to smoother algorithms as well as the MFNN-smoother schemes. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

617 KiB  
Article
Design of Belief Propagation Based on FPGA for the Multistereo CAFADIS Camera
by Eduardo Magdaleno, Jonás Philipp Lüke, Manuel Rodríguez and José Manuel Rodríguez-Ramos
Sensors 2010, 10(10), 9194-9210; https://doi.org/10.3390/s101009194 - 15 Oct 2010
Cited by 9 | Viewed by 10004
Abstract
In this paper we describe a fast, specialized hardware implementation of the belief propagation algorithm for the CAFADIS camera, a new plenoptic sensor patented by the University of La Laguna. This camera captures the lightfield of the scene and can be used to find out at which depth each pixel is in focus. The algorithm has been designed for FPGA devices using VHDL. We propose a parallel and pipelined architecture to implement the algorithm without external memory. Although the BRAM resources of the device increase considerably, we can maintain real-time restrictions by using extremely high-performance signal processing capability through parallelism and by accessing several memories simultaneously. The quantified results with 16-bit precision show that performance is very close to that of the original Matlab implementation of the algorithm. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
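Loopy belief propagation for depth labeling, which the paper maps onto an FPGA, repeatedly passes messages between neighbouring pixels; each outgoing message combines the local data cost, the incoming messages from the other neighbours, and a smoothness penalty. The software sketch below shows min-sum message updates on a 4-connected grid; the data costs, the truncated-linear smoothness term and the iteration count are generic assumptions, not the CAFADIS-specific design.

```python
import numpy as np

OPPOSITE = {"right": "left", "left": "right", "down": "up", "up": "down"}

def bp_min_sum(data_cost: np.ndarray, n_iters: int = 10,
               lam: float = 1.0, trunc: float = 2.0) -> np.ndarray:
    """Loopy min-sum belief propagation; data_cost has shape (H, W, L) for L depth labels."""
    H, W, L = data_cost.shape
    labels = np.arange(L, dtype=float)
    # Truncated-linear smoothness cost between every pair of labels.
    smooth = lam * np.minimum(np.abs(labels[:, None] - labels[None, :]), trunc)
    # msgs[d][y, x] is the message arriving at pixel (y, x) travelling in direction d.
    msgs = {d: np.zeros((H, W, L)) for d in OPPOSITE}
    for _ in range(n_iters):
        for d in OPPOSITE:
            # A pixel's outgoing message excludes what it received from the target neighbour.
            others = data_cost + sum(m for k, m in msgs.items() if k != OPPOSITE[d])
            # Minimise over the sender's label for every receiver label.
            new = (others[:, :, :, None] + smooth[None, None, :, :]).min(axis=2)
            new -= new.min(axis=2, keepdims=True)   # keep message values bounded
            if d == "right":
                msgs[d][:, 1:] = new[:, :-1]
            elif d == "left":
                msgs[d][:, :-1] = new[:, 1:]
            elif d == "down":
                msgs[d][1:, :] = new[:-1, :]
            else:  # "up"
                msgs[d][:-1, :] = new[1:, :]
    belief = data_cost + sum(msgs.values())
    return belief.argmin(axis=2)                    # per-pixel depth label
```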

1070 KiB  
Article
Stereo Vision Tracking of Multiple Objects in Complex Indoor Environments
by Marta Marrón-Romera, Juan C. García, Miguel A. Sotelo, Daniel Pizarro, Manuel Mazo, José M. Cañas, Cristina Losada and Álvaro Marcos
Sensors 2010, 10(10), 8865-8887; https://doi.org/10.3390/s101008865 - 28 Sep 2010
Cited by 22 | Viewed by 11320
Abstract
This paper presents a novel system capable of solving the problem of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information related to each object in the robot's environment; then it achieves a classification between building elements (ceiling, walls, columns and so on) and the rest of the items in the robot's surroundings. All objects in the robot's surroundings, both dynamic and static, are considered to be obstacles, except the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used in order to obtain a multimodal representation of the speed and position of the detected obstacles. Performance of the final system has been tested against state-of-the-art proposals; the test results validate the authors' proposal. The designed algorithms and procedures provide a solution for those applications where similar multimodal data structures are found. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

603 KiB  
Article
Wireless Intelligent Sensors Management Application Protocol-WISMAP
by Juan Carlos Cuevas-Martinez, Manuel Angel Gadeo-Martos, Jose Angel Fernandez-Prieto, Joaquin Canada-Bago and Antonio Jesus Yuste-Delgado
Sensors 2010, 10(10), 8827-8849; https://doi.org/10.3390/s101008827 - 28 Sep 2010
Cited by 11 | Viewed by 10426
Abstract
Although many recent studies have focused on the development of new applications for wireless sensor networks, less attention has been paid to knowledge-based sensor nodes. The objective of this work is the development in a real network of a new distributed system in which every sensor node can execute a set of applications, such as fuzzy rule-based systems, measures, and actions. The sensor software is based on a multi-agent structure that is composed of three components: management, application control, and communication agents; a service interface, which provides applications the abstraction of sensor hardware and other components; and an application layer protocol. The results show the effectiveness of the communication protocol and that the proposed system is suitable for a wide range of applications. As real-world applications, this work presents an example of a fuzzy rule-based system and a noise pollution monitoring application that obtains a fuzzy noise indicator. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
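A fuzzy rule base of the kind the nodes execute maps crisp sensor readings through membership functions, fires rules, and defuzzifies the aggregated result. The sketch below is a generic, self-contained example with singleton consequents for a noise indicator; the membership breakpoints and the rules are hypothetical and are not taken from WISMAP.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def noise_indicator(level_db: float) -> float:
    """Fuzzy inference: sound level in dB -> noise indicator in [0, 1]."""
    # Hypothetical input memberships.
    quiet = tri(level_db, 20, 35, 55)
    moderate = tri(level_db, 45, 60, 75)
    loud = tri(level_db, 65, 85, 110)
    # Rules: IF quiet THEN low (0.1); IF moderate THEN medium (0.5); IF loud THEN high (0.9).
    # Weighted-average defuzzification over the singleton consequents.
    firing = [(quiet, 0.1), (moderate, 0.5), (loud, 0.9)]
    total = sum(w for w, _ in firing)
    return sum(w * c for w, c in firing) / total if total else 0.0

print(noise_indicator(70.0))   # a reading between "moderate" and "loud"
```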

508 KiB  
Article
Color Regeneration from Reflective Color Sensor Using an Artificial Intelligent Technique
by Ömer Galip Saracoglu and Hayriye Altural
Sensors 2010, 10(9), 8363-8374; https://doi.org/10.3390/s100908363 - 03 Sep 2010
Cited by 21 | Viewed by 11137
Abstract
A low-cost optical sensor based on reflective color sensing is presented. Artificial neural network models are used to improve the color regeneration from the sensor signals. Analog voltages of the sensor are successfully converted to RGB colors. The artificial intelligent models presented in this work enable color regeneration from the analog outputs of the color sensor. In addition, inverse modeling supported by an intelligent technique enables the sensor probe to be used as a colorimetric sensor that relates color changes to analog voltages. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

1800 KiB  
Article
Identifying and Tracking Pedestrians Based on Sensor Fusion and Motion Stability Predictions
by Basam Musleh, Fernando García, Javier Otamendi, José Mª Armingol and Arturo De la Escalera
Sensors 2010, 10(9), 8028-8053; https://doi.org/10.3390/s100908028 - 27 Aug 2010
Cited by 41 | Viewed by 13065
Abstract
The lack of trustworthy sensors makes development of Advanced Driver Assistance System (ADAS) applications a tough task. It is necessary to develop intelligent systems by combining reliable sensors and real-time algorithms to send the proper, accurate messages to the drivers. In this article, an application to detect and predict the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track the pedestrian within the detection zones of the sensors and predict their position in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in zigzag fashion in front of a vehicle. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
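The statistical validation gates mentioned above accept a measurement for a tracked pedestrian only if its Mahalanobis distance to the predicted measurement, under the innovation covariance, falls below a chi-squared threshold. A generic sketch of that test follows; the 2D position measurement model and the 95% gate probability are assumptions for illustration.

```python
import numpy as np
from scipy.stats import chi2

def in_validation_gate(z: np.ndarray, z_pred: np.ndarray,
                       S: np.ndarray, prob: float = 0.95) -> bool:
    """Accept measurement z if its Mahalanobis distance to z_pred lies inside the gate."""
    innovation = z - z_pred
    d2 = innovation @ np.linalg.solve(S, innovation)    # squared Mahalanobis distance
    return bool(d2 <= chi2.ppf(prob, df=z.size))

# Example: predicted pedestrian position with a hypothetical innovation covariance S.
z_pred = np.array([2.0, 5.0])
S = np.diag([0.09, 0.16])
print(in_validation_gate(np.array([2.2, 5.3]), z_pred, S))   # True: inside the gate
print(in_validation_gate(np.array([4.0, 5.0]), z_pred, S))   # False: gated out
```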

3457 KiB  
Article
Class Separation Improvements in Pixel Classification Using Colour Injection
by Edward Blanco, Manuel Mazo, Luis Bergasa, Sira Palazuelos, Jose Rodríguez, Cristina Losada and Jose Martín
Sensors 2010, 10(8), 7803-7842; https://doi.org/10.3390/s100807803 - 20 Aug 2010
Viewed by 11797
Abstract
This paper presents an improvement in colour image segmentation in the Hue Saturation (HS) sub-space. The authors propose to inject (add) a colour vector in the Red Green Blue (RGB) space to increase the class separation in the HS plane. The goal of the work is the development of an algorithm to obtain the optimal colour vector for injection that maximizes the separation between the classes in the HS plane. The chromatic Chrominance-1 Chrominance-2 sub-space (of the Luminance Chrominance-1 Chrominance-2 (YC1C2) space) is used to obtain the optimal vector to add. The proposal is applied to each frame of a colour image sequence in real time. It has been tested in applications with reduced contrast between the colours of the background and the object, and particularly when the size of the object is very small in comparison with the size of the captured scene. Numerous tests have confirmed that this proposal improves the segmentation process, considerably reducing the effects of the variation of the light intensity of the scene. Several tests have been made on skin segmentation in applications for sign language recognition via computer vision, where an accurate segmentation of hands and face is required. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
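The core operation described above is simple: add a constant colour vector to every RGB pixel, then measure how far apart the classes move in the Hue-Saturation plane. A minimal sketch of that measurement is shown below; the injected vector, the sample pixels and the centroid-distance criterion are illustrative assumptions, not the paper's YC1C2-based optimisation.

```python
import colorsys
import numpy as np

def hs_centroid(pixels_rgb: np.ndarray) -> np.ndarray:
    """Mean (hue, saturation) of an (N, 3) array of RGB pixels in [0, 1]."""
    hs = np.array([colorsys.rgb_to_hsv(*p)[:2] for p in pixels_rgb])
    return hs.mean(axis=0)

def class_separation(class_a: np.ndarray, class_b: np.ndarray,
                     inject: np.ndarray) -> float:
    """Distance between class centroids in the HS plane after injecting a colour vector.

    Euclidean distance is used for simplicity; hue is circular in general.
    """
    a = np.clip(class_a + inject, 0.0, 1.0)
    b = np.clip(class_b + inject, 0.0, 1.0)
    return float(np.linalg.norm(hs_centroid(a) - hs_centroid(b)))

# Hypothetical skin-like and background-like pixel samples:
skin = np.array([[0.80, 0.60, 0.50], [0.85, 0.65, 0.55]])
background = np.array([[0.55, 0.50, 0.45], [0.60, 0.55, 0.50]])
print(class_separation(skin, background, inject=np.zeros(3)))                  # no injection
print(class_separation(skin, background, inject=np.array([0.0, 0.0, 0.2])))    # inject blue
```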

1647 KiB  
Article
A Universal Intelligent System-on-Chip Based Sensor Interface
by Virgilio Mattoli, Alessio Mondini, Barbara Mazzolai, Gabriele Ferri and Paolo Dario
Sensors 2010, 10(8), 7716-7747; https://doi.org/10.3390/s100807716 - 17 Aug 2010
Cited by 20 | Viewed by 11734
Abstract
The need for real-time/reliable/low-maintenance distributed monitoring systems, e.g., wireless sensor networks, has become more and more evident in many applications in the environmental, agro-alimentary, medical, and industrial fields. The growing interest in technologies related to sensors is an important indicator of these new needs. The design and the realization of complex and/or distributed monitoring systems is often difficult due to the multitude of different electronic interfaces presented by the sensors available on the market. To address these issues the authors propose the concept of a Universal Intelligent Sensor Interface (UISI), a new low-cost system based on a single commercial chip able to convert a generic transducer into an intelligent sensor with multiple standardized interfaces. The device presented offers a flexible analog and/or digital front-end, able to interface different transducer typologies (such as conditioned, unconditioned, resistive, current output, capacitive and digital transducers). The device also provides enhanced processing and storage capabilities, as well as a configurable multi-standard output interface (including a plug-and-play interface based on IEEE 1451.3). In this work the general concept of the UISI and the design of the reconfigurable hardware are presented, together with experimental test results validating the proposed device. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

5202 KiB  
Article
A Software Architecture for Adaptive Modular Sensing Systems
by Andrew C. Lyle and Michael D. Naish
Sensors 2010, 10(8), 7514-7560; https://doi.org/10.3390/s100807514 - 10 Aug 2010
Cited by 4 | Viewed by 9448
Abstract
By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

2073 KiB  
Article
A Field Programmable Gate Array-Based Reconfigurable Smart-Sensor Network for Wireless Monitoring of New Generation Computer Numerically Controlled Machines
by Sandra Veronica Moreno-Tapia, Luis Alberto Vera-Salas, Roque Alfredo Osornio-Rios, Aurelio Dominguez-Gonzalez, Ion Stiharu and Rene de Jesus Romero-Troncoso
Sensors 2010, 10(8), 7263-7286; https://doi.org/10.3390/s100807263 - 03 Aug 2010
Cited by 11 | Viewed by 12069
Abstract
Computer numerically controlled (CNC) machines have evolved to adapt to increasing technological and industrial requirements. To cover these needs, new generation machines have to perform monitoring strategies by incorporating multiple sensors. Since in most applications the online processing of the variables is essential, the use of smart sensors is necessary. The contribution of this work is the development of a wireless network platform of reconfigurable smart sensors for CNC machine applications complying with the measurement requirements of new generation CNC machines. Four different smart sensors are put under test in the network and their corresponding signal processing techniques are implemented in a Field Programmable Gate Array (FPGA)-based sensor node. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

426 KiB  
Article
Investigation of the Frequency Shift of a SAD Circuit Loop and the Internal Micro-Cantilever in a Gas Sensor
by Liu Guan, Jiahao Zhao, Shijie Yu, Peng Li and Zheng You
Sensors 2010, 10(7), 7044-7056; https://doi.org/10.3390/s100707044 - 23 Jul 2010
Cited by 4 | Viewed by 9656
Abstract
Micro-cantilever sensors for mass detection using resonance frequency have attracted considerable attention over the last decade in the field of gas sensing. For such a sensing system, an oscillator circuit loop is conventionally used to actuate the micro-cantilever and trace the frequency shifts. In this paper, gas experiments are introduced to investigate the mechanical resonance frequency shifts of the micro-cantilever within the circuit loop (mechanical resonance frequency, MRF) and the resonating frequency shifts of the electric signal in the oscillator circuit (system working frequency, SWF). A silicon beam with a piezoelectric zinc oxide layer is employed in the experiment, and a Self-Actuating-Detecting (SAD) circuit loop is built to drive the micro-cantilever and to follow the frequency shifts. The differences between the two resonating frequencies and their shifts are discussed and analyzed, and a coefficient related to the two frequency shifts is confirmed. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

373 KiB  
Article
Study on a Two-Dimensional Scanning Micro-Mirror and Its Application in a MOEMS Target Detector
by Chi Zhang, Zheng You, Hu Huang and Guanhua Li
Sensors 2010, 10(7), 6848-6860; https://doi.org/10.3390/s100706848 - 16 Jul 2010
Cited by 15 | Viewed by 9627
Abstract
A two-dimensional (2D) scanning micro-mirror for target detection and measurement has been developed. This new micro-mirror is used in a MOEMS target detector to replace the conventional scanning detector. The micro-mirror is fabricated by MEMS process and actuated by a piezoelectric actuator. To achieve large deflection angles, the micro-mirror is excited in the resonance modes. It has two degrees of freedom and changes the direction of the emitted laser beam for a regional 2D scanning. For the deflection angles measurement, piezoresistors are integrated in the micro-mirror and the deflection angles of each direction can be detected independently and precisely. Based on the scanning micro-mirror and the phase-shift ranging technology, a MOEMS target detector has been developed in a size of 90 mm × 35 mm × 50 mm. The experiment shows that the target can be detected in the scanning field and the relative range and orientation can be measured by the MOEMS target detector. For the target distance up to 3 m with a field of view about 20º × 20º, the measurement resolution is about 10.2 cm in range, 0.15º in the horizontal direction and 0.22º in the vertical direction for orientation. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

885 KiB  
Article
Bathymetry Determination via X-Band Radar Data: A New Strategy and Numerical Results
by Francesco Serafino, Claudio Lugni, Jose Carlos Nieto Borge, Virginia Zamparelli and Francesco Soldovieri
Sensors 2010, 10(7), 6522-6534; https://doi.org/10.3390/s100706522 - 06 Jul 2010
Cited by 36 | Viewed by 10712
Abstract
This work deals with the question of sea state monitoring using marine X-band radar images and focuses its attention on the problem of sea depth estimation. We present and discuss a technique to estimate bathymetry by exploiting the dispersion relation for surface gravity waves. This estimation technique is based on the correlation between the measured and the theoretical sea wave spectra, and a simple analysis of the approach is performed through test cases with synthetic data. In more detail, the reliability of the estimation technique is verified through simulated data sets concerned with different values of bathymetry and surface currents for two types of sea spectrum: JONSWAP and Pierson-Moskowitz. The results show that the estimated bathymetry is fairly accurate for low depth values, while the estimate becomes less accurate as the bathymetry increases, due to the less significant role of the bathymetry in shaping the sea surface waves as the water depth increases. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
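The estimation hinges on the linear dispersion relation for surface gravity waves, omega^2 = g k tanh(k h) (Doppler-shifted by the surface current when one is present): given wavenumber-frequency pairs extracted from the radar image spectrum, the depth h is the value that best satisfies the relation. The least-squares sketch below uses synthetic (k, omega) samples and ignores the current term, so it is only a simplified illustration of the inversion.

```python
import numpy as np
from scipy.optimize import minimize_scalar

G = 9.81  # gravitational acceleration, m/s^2

def omega_theory(k: np.ndarray, depth: float) -> np.ndarray:
    """Linear dispersion relation for surface gravity waves (no surface current)."""
    return np.sqrt(G * k * np.tanh(k * depth))

def estimate_depth(k: np.ndarray, omega: np.ndarray, max_depth: float = 100.0) -> float:
    """Depth minimising the squared misfit between measured and theoretical frequencies."""
    cost = lambda h: float(np.sum((omega - omega_theory(k, h)) ** 2))
    return minimize_scalar(cost, bounds=(0.1, max_depth), method="bounded").x

# Synthetic check: waves over 12 m of water, lightly perturbed.
k = np.linspace(0.05, 0.5, 20)
omega = omega_theory(k, 12.0) + np.random.default_rng(0).normal(0, 0.01, k.size)
print(estimate_depth(k, omega))   # close to 12.0
```

Note how the tanh term saturates for large k h, which is consistent with the abstract's observation that the estimate degrades as the water depth grows.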

177 KiB  
Article
Optimization of the Sampling Periods and the Quantization Bit Lengths for Networked Estimation
by Young Soo Suh, Young Sik Ro and Hee Jun Kang
Sensors 2010, 10(7), 6406-6420; https://doi.org/10.3390/s100706406 - 29 Jun 2010
Viewed by 7127
Abstract
This paper is concerned with networked estimation, where sensor data are transmitted over a network of limited transmission rate. The transmission rate depends on the sampling periods and the quantization bit lengths. To investigate how the sampling periods and the quantization bit lengths affect the estimation performance, an equation to compute the estimation performance is provided. An algorithm is proposed to find a combination of sampling periods and quantization bit lengths that gives good estimation performance while satisfying the transmission rate constraint. The proposed algorithm is verified through a numerical example. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
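The search the abstract describes can be pictured as follows: each sensor gets a sampling period and a quantization bit length, the pair fixes that sensor's bit rate, and the chosen combination must stay within the network's total rate while minimizing an estimation-error cost. The exhaustive-search sketch below uses a placeholder cost function; the paper derives its cost from the networked estimator itself, which is not reproduced here.

```python
from itertools import product

def bit_rate(period_s: float, bits: int) -> float:
    """Bits per second used by one sensor."""
    return bits / period_s

def optimise(periods: list, bit_lengths: list, n_sensors: int,
             rate_limit: float, cost) -> tuple:
    """Exhaustive search over per-sensor (period, bits) pairs under a total rate limit."""
    best, best_cost = None, float("inf")
    for combo in product(product(periods, bit_lengths), repeat=n_sensors):
        if sum(bit_rate(p, b) for p, b in combo) > rate_limit:
            continue                       # violates the transmission rate constraint
        c = cost(combo)
        if c < best_cost:
            best, best_cost = combo, c
    return best, best_cost

# Placeholder cost: coarser sampling and fewer bits both degrade estimation.
toy_cost = lambda combo: sum(p + 2.0 ** (-b) for p, b in combo)
print(optimise([0.01, 0.02, 0.05], [4, 8, 12], n_sensors=2,
               rate_limit=1000.0, cost=toy_cost))
```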

2603 KiB  
Article
EMMNet: Sensor Networking for Electricity Meter Monitoring
by Zhi-Ting Lin, Jie Zheng, Yu-Sheng Ji, Bao-Hua Zhao, Yu-Gui Qu, Xu-Dong Huang and Xiu-Fang Jiang
Sensors 2010, 10(7), 6307-6323; https://doi.org/10.3390/s100706307 - 24 Jun 2010
Cited by 5 | Viewed by 9787
Abstract
Smart sensors are emerging as a promising technology for a large number of application domains. This paper presents a collection of requirements and guidelines that serve as a basis for a general smart sensor architecture to monitor electricity meters. It also presents an electricity meter monitoring network, named EMMNet, comprised of data collectors, data concentrators, hand-held devices, a centralized server, and clients. EMMNet provides long-distance communication capabilities, which make it suitable for complex urban environments. In addition, the operational cost of EMMNet is low compared with other existing remote meter monitoring systems based on GPRS. A new dynamic tree protocol based on the application requirements, which can significantly improve the reliability of the network, is also proposed. We are currently conducting tests on five networks and investigating network problems for further improvements. Evaluation results indicate that EMMNet enhances the efficiency and accuracy in the reading, recording, and calibration of electricity meters. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

259 KiB  
Article
A New Collaborative Knowledge-Based Approach for Wireless Sensor Networks
by Joaquin Canada-Bago, Jose Angel Fernandez-Prieto, Manuel Angel Gadeo-Martos and Juan Ramón Velasco
Sensors 2010, 10(6), 6044-6062; https://doi.org/10.3390/s100606044 - 17 Jun 2010
Cited by 9 | Viewed by 9568
Abstract
This work presents a new approach for collaboration among sensors in Wireless Sensor Networks. These networks are composed of a large number of sensor nodes with constrained resources: limited computational capability, memory, power sources, etc. Nowadays, there is a growing interest in the integration of Soft Computing technologies into Wireless Sensor Networks. However, little attention has been paid to integrating Fuzzy Rule-Based Systems into collaborative Wireless Sensor Networks. The objective of this work is to design a collaborative knowledge-based network, in which each sensor executes an adapted Fuzzy Rule-Based System, which presents significant advantages such as: experts can define interpretable knowledge with uncertainty and imprecision, collaborative knowledge can be separated from control or modeling knowledge and the collaborative approach may support neighbor sensor failures and communication errors. As a real-world application of this approach, we demonstrate a collaborative modeling system for pests, in which an alarm about the development of olive tree fly is inferred. The results show that knowledge-based sensors are suitable for a wide range of applications and that the behavior of a knowledge-based sensor may be modified by inferences and knowledge of neighbor sensors in order to obtain a more accurate and reliable output. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

3672 KiB  
Article
Fast Scene Recognition and Camera Relocalisation for Wide Area Augmented Reality Systems
by Tao Guan, Liya Duan, Yongjian Chen and Junqing Yu
Sensors 2010, 10(6), 6017-6043; https://doi.org/10.3390/s100606017 - 14 Jun 2010
Cited by 5 | Viewed by 8760
Abstract
This paper focuses on online scene learning and fast camera relocalisation which are two key problems currently limiting the performance of wide area augmented reality systems. Firstly, we propose to use adaptive random trees to deal with the online scene learning problem. The algorithm can provide more accurate recognition rates than traditional methods, especially with large scale workspaces. Secondly, we use the enhanced PROSAC algorithm to obtain a fast camera relocalisation method. Compared with traditional algorithms, our method can significantly reduce the computation complexity, which facilitates to a large degree the process of online camera relocalisation. Finally, we implement our algorithms in a multithreaded manner by using a parallel-computing scheme. Camera tracking, scene mapping, scene learning and relocalisation are separated into four threads by using multi-CPU hardware architecture. While providing real-time tracking performance, the resulting system also possesses the ability to track multiple maps simultaneously. Some experiments have been conducted to demonstrate the validity of our methods. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

769 KiB  
Article
A Fiber Optic Doppler Sensor and Its Application in Debonding Detection for Composite Structures
by Fucai Li, Hideaki Murayama, Kazuro Kageyama, Guang Meng, Isamu Ohsawa and Takehiro Shirai
Sensors 2010, 10(6), 5975-5993; https://doi.org/10.3390/s100605975 - 14 Jun 2010
Cited by 17 | Viewed by 8892
Abstract
Debonding is one of the most important damage forms in fiber-reinforced composite structures. This work was devoted to the debonding damage detection of lap splice joints in carbon fiber reinforced plastic (CFRP) structures, based on guided ultrasonic wave signals captured by a fiber optic Doppler (FOD) sensor with a spiral shape. Interferometers based on two types of laser sources, namely the He-Ne laser and the infrared semiconductor laser, are proposed and compared in this study for the purpose of measuring the Doppler frequency shift of the FOD sensor. Locations of the FOD sensors are optimized based on the mechanical characteristics of the lap splice joint. The FOD sensors are subsequently used to detect the guided ultrasonic waves propagating in the CFRP structures. By taking advantage of signal processing approaches, features of the guided wave signals can be revealed. The results demonstrate that debonding in the lap splice joint results in an arrival time delay of the first package in the guided wave signals, which can serve as a characteristic for debonding damage inspection and damage extent estimation. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

1337 KiB  
Article
Data Acquisition, Analysis and Transmission Platform for a Pay-As-You-Drive System
by Luciano Boquete, José Manuel Rodríguez-Ascariz, Rafael Barea, Joaquín Cantos, Juan Manuel Miguel-Jiménez and Sergio Ortega
Sensors 2010, 10(6), 5395-5408; https://doi.org/10.3390/s100605395 - 01 Jun 2010
Cited by 27 | Viewed by 11973
Abstract
This paper presents a platform used to acquire, analyse and transmit data from a vehicle to a Control Centre as part of a Pay-As-You-Drive system. The aim is to monitor vehicle usage (how much, when, where and how) and, based on this information, assess the associated risk and set an appropriate insurance premium. To determine vehicle usage, the system analyses the driver's respect for speed limits, driving style (aggressive or non-aggressive), mobile telephone use and the number of vehicle passengers. An electronic system on board the vehicle acquires these data, processes them and transmits them by mobile telephone (GPRS/UMTS) to a Control Centre, at which the insurance company assesses the risk associated with vehicles monitored by the system. The system provides insurance companies and their customers with an enhanced service and could potentially increase responsible driving habits and reduce the number of road accidents. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

2746 KiB  
Article
Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors
by Arturo Gil, Óscar Reinoso, Mónica Ballesta, Miguel Juliá and Luis Payá
Sensors 2010, 10(5), 5209-5232; https://doi.org/10.3390/s100505209 - 25 May 2010
Cited by 22 | Viewed by 11528
Abstract
In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
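In a Rao-Blackwellized particle filter, each particle carries a hypothesis of the robot path, and conditioned on that path every landmark is estimated by its own small Kalman filter; particle weights are updated from how well predicted landmark observations match the measurements, followed by resampling. The skeleton below shows only that weight-update and resampling structure, with the motion and landmark-update models passed in as stubs, since those models are specific to the paper's stereo/descriptor setup.

```python
import numpy as np

class Particle:
    def __init__(self, pose: np.ndarray):
        self.pose = pose        # hypothesised robot pose [x, y, theta]
        self.landmarks = {}     # landmark id -> (mean, covariance), one EKF each
        self.weight = 1.0

def resample(particles: list) -> list:
    """Low-variance (systematic) resampling proportional to particle weights."""
    w = np.array([p.weight for p in particles])
    w /= w.sum()
    n = len(particles)
    positions = (np.arange(n) + np.random.uniform()) / n
    idx, cum, out = 0, np.cumsum(w), []
    for pos in positions:
        while pos > cum[idx]:
            idx += 1
        clone = Particle(particles[idx].pose.copy())
        clone.landmarks = {k: (m.copy(), P.copy())
                           for k, (m, P) in particles[idx].landmarks.items()}
        out.append(clone)       # clones start again with weight 1.0
    return out

def rbpf_step(particles, control, measurements, motion_model, update_landmark):
    """One SLAM step: sample motion, update per-landmark EKFs, reweight, resample."""
    for p in particles:
        p.pose = motion_model(p.pose, control)          # sample from the motion model
        for lm_id, z in measurements.items():
            likelihood = update_landmark(p, lm_id, z)   # EKF update; returns p(z | particle)
            p.weight *= likelihood
    return resample(particles)
```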

382 KiB  
Article
Common Criteria Related Security Design Patterns—Validation on the Intelligent Sensor Example Designed for Mine Environment
by Andrzej Bialas
Sensors 2010, 10(5), 4456-4496; https://doi.org/10.3390/s100504456 - 30 Apr 2010
Cited by 14 | Viewed by 10478
Abstract
The paper discusses the security issues of intelligent sensors that are able to measure and process data and communicate with other information technology (IT) devices or systems. Such sensors are often used in high risk applications. To improve their robustness, the sensor systems should be developed in a restricted way to provide them with assurance. One such assurance creation methodology is the Common Criteria (ISO/IEC 15408), used for IT products and systems. The contribution of the paper is a Common Criteria compliant and pattern-based method for the security development of intelligent sensors. The paper concisely presents this method and its evaluation for a sensor detecting methane in a mine, focusing on the definition and solution of the intelligent sensor's security problem. The aim of the validation is to evaluate and improve the introduced method. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

3738 KiB  
Article
FPGA-Based Fused Smart Sensor for Dynamic and Vibration Parameter Extraction in Industrial Robot Links
by Carlos Rodriguez-Donate, Luis Morales-Velazquez, Roque Alfredo Osornio-Rios, Gilberto Herrera-Ruiz and Rene de Jesus Romero-Troncoso
Sensors 2010, 10(4), 4114-4129; https://doi.org/10.3390/s100404114 - 26 Apr 2010
Cited by 23 | Viewed by 13749
Abstract
Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the interest parameters, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA). Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
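The processing chain the smart sensor implements in hardware (oversampling, averaging decimation, FIR smoothing, finite differences) can be prototyped in a few lines of array code before being committed to an FPGA. The sketch below reproduces that chain generically for an encoder position signal; the filter lengths, sampling rate and toy trace are illustrative assumptions, not the values used on the actual sensor node.

```python
import numpy as np

def decimate_by_averaging(x: np.ndarray, factor: int) -> np.ndarray:
    """Average consecutive blocks of `factor` oversampled readings."""
    n = (len(x) // factor) * factor
    return x[:n].reshape(-1, factor).mean(axis=1)

def fir_smooth(x: np.ndarray, taps: int = 8) -> np.ndarray:
    """Simple moving-average FIR filter (unity DC gain)."""
    return np.convolve(x, np.ones(taps) / taps, mode="same")

def derivatives(position: np.ndarray, dt: float) -> tuple:
    """Velocity and acceleration by central finite differences."""
    velocity = np.gradient(position, dt)
    acceleration = np.gradient(velocity, dt)
    return velocity, acceleration

# Toy encoder trace oversampled at 10 kHz, decimated to 1 kHz.
fs_over, factor = 10_000, 10
t = np.arange(0, 1, 1 / fs_over)
raw_counts = 1000 * np.sin(2 * np.pi * 0.5 * t) + np.random.default_rng(1).normal(0, 2, t.size)
pos = fir_smooth(decimate_by_averaging(raw_counts, factor))
vel, acc = derivatives(pos, dt=factor / fs_over)
```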

3085 KiB  
Article
Localization of Mobile Robots Using Odometry and an External Vision Sensor
by Daniel Pizarro, Manuel Mazo, Enrique Santiso, Marta Marron, David Jimenez, Santiago Cobreces and Cristina Losada
Sensors 2010, 10(4), 3655-3680; https://doi.org/10.3390/s100403655 - 13 Apr 2010
Cited by 38 | Viewed by 13224
Abstract
This paper presents a sensor system for robot localization based on the information obtained from a single camera attached in a fixed place external to the robot. Our approach firstly obtains the 3D geometrical model of the robot based on the projection of its natural appearance in the camera while the robot performs an initialization trajectory. This paper proposes a structure-from-motion solution that uses the odometry sensors inside the robot as a metric reference. Secondly, an online localization method based on a sequential Bayesian inference is proposed, which uses the geometrical model of the robot as a link between image measurements and pose estimation. The online approach is resistant to hard occlusions and the experimental setup proposed in this paper shows its effectiveness in real situations. The proposed approach has many applications in both the industrial and service robot fields. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

1084 KiB  
Article
FPGA-Based Fused Smart-Sensor for Tool-Wear Area Quantitative Estimation in CNC Machine Inserts
by Miguel Trejo-Hernandez, Roque Alfredo Osornio-Rios, Rene de Jesus Romero-Troncoso, Carlos Rodriguez-Donate, Aurelio Dominguez-Gonzalez and Gilberto Herrera-Ruiz
Sensors 2010, 10(4), 3373-3388; https://doi.org/10.3390/s100403373 - 07 Apr 2010
Cited by 38 | Viewed by 11126
Abstract
Manufacturing processes are of great relevance nowadays, when there is a constant claim for better productivity with high quality at low cost. The contribution of this work is the development of a fused smart-sensor, based on an FPGA, to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier, and a 3-axis accelerometer. Results from experimentation show that the fusion of both parameters makes it possible to obtain three times better accuracy when compared with the accuracy obtained from the current and vibration signals used individually. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

739 KiB  
Article
Multi-Camera Sensor System for 3D Segmentation and Localization of Multiple Mobile Robots
by Cristina Losada, Manuel Mazo, Sira Palazuelos, Daniel Pizarro and Marta Marrón
Sensors 2010, 10(4), 3261-3279; https://doi.org/10.3390/s100403261 - 01 Apr 2010
Cited by 22 | Viewed by 11090
Abstract
This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

Review


639 KiB  
Review
A Smart Checkpointing Scheme for Improving the Reliability of Clustering Routing Protocols
by Hong Min, Jinman Jung, Bongjae Kim, Yookun Cho, Junyoung Heo, Sangho Yi and Jiman Hong
Sensors 2010, 10(10), 8938-8952; https://doi.org/10.3390/s101008938 - 29 Sep 2010
Cited by 4 | Viewed by 10744
Abstract
In wireless sensor networks, system architectures and applications are designed to consider both resource constraints and scalability, because such networks are composed of numerous sensor nodes with various sensors and actuators, small memories, low-power microprocessors, radio modules, and batteries. Clustering routing protocols based on data aggregation schemes aimed at minimizing packet numbers have been proposed to meet these requirements. In clustering routing protocols, the cluster head plays an important role. The cluster head collects data from its member nodes and aggregates the collected data. To improve reliability and reduce recovery latency, we propose a checkpointing scheme for the cluster head. In the proposed scheme, backup nodes monitor and checkpoint the current state of the cluster head periodically. We also derive the checkpointing interval that maximizes reliability while using the same amount of energy consumed by clustering routing protocols that operate without checkpointing. Experimental comparisons with existing non-checkpointing schemes show that our scheme reduces both energy consumption and recovery latency. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
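A convenient first-order rule for choosing a checkpoint interval, due to Young, balances the per-checkpoint cost against the expected rework after a failure: T is approximately sqrt(2C/lambda), where C is the checkpoint cost and lambda the failure rate. This is a generic approximation rather than the energy-constrained interval the authors derive, but it illustrates the trade-off the abstract describes; the numbers in the example are hypothetical.

```python
import math

def young_checkpoint_interval(checkpoint_cost_s: float, failure_rate_per_s: float) -> float:
    """Young's approximation: interval minimising checkpoint overhead plus expected rework."""
    return math.sqrt(2.0 * checkpoint_cost_s / failure_rate_per_s)

# Hypothetical cluster-head numbers: 0.5 s to checkpoint state to a backup node,
# one failure every 6 hours on average.
interval = young_checkpoint_interval(0.5, 1.0 / (6 * 3600))
print(f"checkpoint roughly every {interval:.0f} s")   # about 147 s
```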

421 KiB  
Review
Intelligent Chiral Sensing Based on Supramolecular and Interfacial Concepts
by Katsuhiko Ariga, Gary J. Richards, Shinsuke Ishihara, Hironori Izawa and Jonathan P. Hill
Sensors 2010, 10(7), 6796-6820; https://doi.org/10.3390/s100706796 - 13 Jul 2010
Cited by 59 | Viewed by 14543
Abstract
Of the known intelligently-operating systems, the majority can undoubtedly be classed as being of biological origin. One of the notable differences between biological and artificial systems is the important fact that biological materials consist mostly of chiral molecules. While most biochemical processes routinely discriminate chiral molecules, differentiation between chiral molecules in artificial systems is currently one of the challenging subjects in the field of molecular recognition. Therefore, one of the important challenges for intelligent man-made sensors is to prepare a sensing system that can discriminate chiral molecules. Because intermolecular interactions and detection at surfaces are respectively parts of supramolecular chemistry and interfacial science, chiral sensing based on supramolecular and interfacial concepts is a significant topic. In this review, we briefly summarize recent advances in these fields, including supramolecular hosts for color detection on chiral sensing, indicator-displacement assays, kinetic resolution in supramolecular reactions with analyses by mass spectrometry, use of chiral shape-defined polymers, such as dynamic helical polymers, molecular imprinting, thin films on surfaces of devices such as QCM, functional electrodes, FET, and SPR, the combined technique of magnetic resonance imaging and immunoassay, and chiral detection using scanning tunneling microscopy and cantilever technology. In addition, we will discuss novel concepts in recent research including the use of achiral reagents for chiral sensing with NMR, and mechanical control of chiral sensing. The importance of integration of chiral sensing systems with rapidly developing nanotechnology and nanomaterials is also emphasized. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
