Special Issue "Intelligent Sensors - 2010"

A special issue of Sensors (ISSN 1424-8220).

Deadline for manuscript submissions: closed (30 June 2010)

Special Issue Editor

Guest Editor
Prof. Dr. Wilmar Hernandez

Department of Computer Science and Electronics, Universidad Tecnica Particular de Loja-UTPL, Campus UTPL, Calle San Cayetano Alto s/n, PO Box: 1101608, Loja, Loja, Ecuador
Interests: intelligent sensors; mechanical sensors; electronics; instrumentation; optimal signal processing; robust and optimal control

Special Issue Information

Dear Colleagues,

The objective of this special issue is to provide high-quality research results on intelligent (or smart) sensors. Full research papers with new results, or comprehensive reviews of the state of the art of intelligent sensors and their applications, are encouraged for submission. There are no restrictions on the topics of interest of this special issue.

Authors are encouraged to submit experimental and theoretical results of their research on this topic in as much detail as possible, and/or their industrial applications, including (but not limited to): automotive, consumer, environmental, medical, biological, chemical, electrical, mechatronic, robotic, nautical, aeronautical and space measurement systems. In addition, authors who are working on intelligent materials are encouraged to submit full research papers.

Prof. Dr. Wilmar Hernandez
Guest Editor

Published Papers (32 papers)

Research

Open Access Article Pervasive Monitoring—An Intelligent Sensor Pod Approach for Standardised Measurement Infrastructures
Sensors 2010, 10(12), 11440-11467; doi:10.3390/s101211440
Received: 8 October 2010 / Revised: 19 November 2010 / Accepted: 9 December 2010 / Published: 13 December 2010
Cited by 13 | PDF Full-text (1229 KB) | HTML Full-text | XML Full-text
Abstract
Geo-sensor networks have traditionally been built up in closed monolithic systems, thus limiting trans-domain usage of real-time measurements. This paper presents the technical infrastructure of a standardised embedded sensing device, which has been developed in the course of the Live Geography approach. The sensor pod implements data provision standards of the Sensor Web Enablement initiative, including an event-based alerting mechanism and location-aware Complex Event Processing functionality for detection of threshold transgression and quality assurance. The goal of this research is that the resultant highly flexible sensing architecture will bring sensor network applications one step further towards the realisation of the vision of a “digital skin for planet earth”. The developed infrastructure can potentially have far-reaching impacts on sensor-based monitoring systems through the deployment of ubiquitous and fine-grained sensor networks. This in turn allows for the straightforward use of live sensor data in existing spatial decision support systems to enable better-informed decision-making. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Figures

Open Access Article Multi-Sensor Person Following in Low-Visibility Scenarios
Sensors 2010, 10(12), 10953-10966; doi:10.3390/s101210953
Received: 24 August 2010 / Revised: 16 September 2010 / Accepted: 25 November 2010 / Published: 3 December 2010
Cited by 13 | PDF Full-text (4961 KB) | HTML Full-text | XML Full-text
Abstract
Person following with mobile robots has traditionally been an important research topic. It has been solved, in most cases, by the use of machine vision or laser rangefinders. In some special circumstances, such as a smoky environment, the use of optical sensors is not a good solution. This paper proposes and compares alternative sensors and methods to perform person following in low-visibility conditions, such as smoky environments in firefighting scenarios. The use of a laser rangefinder and sonar sensors is proposed in combination with a vision system that can determine the amount of smoke in the environment. The smoke detection algorithm provides the robot with the ability to use a different combination of sensors to perform robot navigation and person following depending on the visibility in the environment. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Figures

Open Access Article On the Relevance of Using Bayesian Belief Networks in Wireless Sensor Networks Situation Recognition
Sensors 2010, 10(12), 11001-11020; doi:10.3390/s101211001
Received: 8 October 2010 / Revised: 1 November 2010 / Accepted: 22 November 2010 / Published: 3 December 2010
Cited by 2 | PDF Full-text (3312 KB) | HTML Full-text | XML Full-text
Abstract
Achieving situation recognition in ubiquitous sensor networks (USNs) is an important issue that has been poorly addressed by both the research and practitioner communities. This paper describes some steps taken to address this issue by effecting USN middleware intelligence using an emerging situation awareness (ESA) technology. We propose a situation recognition framework where temporal probabilistic reasoning is used to derive and emerge situation awareness in ubiquitous sensor networks. Using data collected from an outdoor environment monitoring in the city of Cape Town, we illustrate the use of the ESA technology in terms of sensor system operating conditions and environmental situation recognition. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Open Access Article A General Purpose Feature Extractor for Light Detection and Ranging Data
Sensors 2010, 10(11), 10356-10375; doi:10.3390/s101110356
Received: 17 September 2010 / Revised: 7 October 2010 / Accepted: 30 October 2010 / Published: 17 November 2010
Cited by 13 | PDF Full-text (1096 KB) | HTML Full-text | XML Full-text
Abstract
Feature extraction is a central step of processing Light Detection and Ranging (LIDAR) data. Existing detectors tend to exploit characteristics of specific environments: corners and lines from indoor (rectilinear) environments, and trees from outdoor environments. While these detectors work well in their intended environments, their performance in different environments can be poor. We describe a general purpose feature detector for both 2D and 3D LIDAR data that is applicable to virtually any environment. Our method adapts classic feature detection methods from the image processing literature, specifically the multi-scale Kanade-Tomasi corner detector. The resulting method is capable of identifying highly stable and repeatable features at a variety of spatial scales without knowledge of the environment, and produces principled uncertainty estimates and corner descriptors at the same time. We present results on both software simulation and standard datasets, including the 2D Victoria Park and Intel Research Center datasets, and the 3D MIT DARPA Urban Challenge dataset. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
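The Kanade-Tomasi detector mentioned in the abstract scores each point by the smaller eigenvalue of the local structure tensor. As a rough illustration of that corner response, here is the classic single-scale image form (not the authors' multi-scale LIDAR adaptation; the test image and window size are invented for the example):

```python
import numpy as np

def min_eigen_response(img, win=1):
    """Kanade-Tomasi corner response: the smaller eigenvalue of the
    structure tensor, accumulated over a (2*win+1)^2 window per pixel."""
    img = img.astype(float)
    Ix = np.gradient(img, axis=1)          # horizontal image gradient
    Iy = np.gradient(img, axis=0)          # vertical image gradient
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    h, w = img.shape
    resp = np.zeros_like(img)
    for y in range(win, h - win):
        for x in range(win, w - win):
            sl = np.s_[y - win:y + win + 1, x - win:x + win + 1]
            M = np.array([[Ixx[sl].sum(), Ixy[sl].sum()],
                          [Ixy[sl].sum(), Iyy[sl].sum()]])
            resp[y, x] = np.linalg.eigvalsh(M)[0]  # eigvalsh sorts ascending
    return resp

# A white square on a black background: its corners score highest
img = np.zeros((12, 12))
img[4:8, 4:8] = 1.0
r = min_eigen_response(img)
```

Flat regions score zero and edges score low (the tensor is nearly rank-one there); only corners, where the gradient varies in two directions, produce a large smaller eigenvalue.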
Open Access Article Sensor Data Fusion for Accurate Cloud Presence Prediction Using Dempster-Shafer Evidence Theory
Sensors 2010, 10(10), 9384-9396; doi:10.3390/s101009384
Received: 30 August 2010 / Revised: 15 September 2010 / Accepted: 25 September 2010 / Published: 18 October 2010
Cited by 10 | PDF Full-text (260 KB) | HTML Full-text | XML Full-text
Abstract
Sensor data fusion technology can be used to best extract useful information from multiple sensor observations. It has been widely applied in various applications such as target tracking, surveillance, robot navigation, and signal and image processing. This paper introduces a novel data fusion approach in a multiple radiation sensor environment using Dempster-Shafer evidence theory. The methodology is used to predict cloud presence based on the inputs of radiation sensors. Different radiation data have been used for the cloud prediction. The potential application areas of the algorithm include renewable power for virtual power stations, where the prediction of cloud presence is the most challenging issue for photovoltaic output. The algorithm is validated by comparing the predicted cloud presence with the corresponding sunshine occurrence data that were recorded as the benchmark. Our experiments have indicated that, compared to approaches using individual sensors, the proposed data fusion approach can increase the correct rate of cloud prediction by ten percent and decrease the unknown rate of cloud prediction by twenty-three percent. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
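The core of Dempster-Shafer fusion is Dempster's rule of combination: mass products over intersecting hypotheses, renormalised by the total conflict. A minimal sketch of the rule (the sensor names and mass values below are hypothetical, not taken from the paper):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:                      # masses agree on a non-empty hypothesis
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:                          # disjoint hypotheses: conflicting mass
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    norm = 1.0 - conflict              # renormalise by (1 - conflict)
    return {k: v / norm for k, v in combined.items()}

CLOUD, CLEAR = frozenset({"cloud"}), frozenset({"clear"})
EITHER = CLOUD | CLEAR  # the full frame of discernment (ignorance)

# Hypothetical masses from two radiation sensors
m_pyranometer = {CLOUD: 0.6, CLEAR: 0.1, EITHER: 0.3}
m_infrared = {CLOUD: 0.5, CLEAR: 0.2, EITHER: 0.3}

fused = dempster_combine(m_pyranometer, m_infrared)
```

Assigning some mass to the whole frame (`EITHER`) lets each sensor express uncertainty; fusion then shifts that "unknown" mass towards the hypothesis the sensors agree on, which is exactly the behaviour the abstract reports (higher correct rate, lower unknown rate).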
Open Access Article Design of Belief Propagation Based on FPGA for the Multistereo CAFADIS Camera
Sensors 2010, 10(10), 9194-9210; doi:10.3390/s101009194
Received: 18 August 2010 / Revised: 20 September 2010 / Accepted: 29 September 2010 / Published: 15 October 2010
Cited by 5 | PDF Full-text (617 KB) | HTML Full-text | XML Full-text
Abstract
In this paper we describe a fast, specialized hardware implementation of the belief propagation algorithm for the CAFADIS camera, a new plenoptic sensor patented by the University of La Laguna. This camera captures the lightfield of the scene and can be used to find out at which depth each pixel is in focus. The algorithm has been designed for FPGA devices using VHDL. We propose a parallel and pipelined architecture to implement the algorithm without external memory. Although the BRAM resources of the device increase considerably, we can maintain real-time restrictions by using extremely high-performance signal processing capability through parallelism and by accessing several memories simultaneously. The quantified results with 16-bit precision show that performance is very close to that of the original Matlab implementation of the algorithm. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Figures

Open Access Article Intelligent Sensor Positioning and Orientation Through Constructive Neural Network-Embedded INS/GPS Integration Algorithms
Sensors 2010, 10(10), 9252-9285; doi:10.3390/s101009252
Received: 6 September 2010 / Revised: 30 September 2010 / Accepted: 14 October 2010 / Published: 15 October 2010
Cited by 10 | PDF Full-text (1683 KB) | HTML Full-text | XML Full-text
Abstract
Mobile mapping systems have been widely applied for acquiring spatial information in applications such as spatial information systems and 3D city models. Nowadays, the most common technologies used for positioning and orientation of a mobile mapping system include a Global Positioning System (GPS) as the major positioning sensor and an Inertial Navigation System (INS) as the major orientation sensor. In the classical approach, the limitations of the Kalman Filter (KF) method and the overall price of multi-sensor systems have limited the popularization of most land-based mobile mapping applications. Although intelligent sensor positioning and orientation schemes consisting of Multi-layer Feed-forward Neural Networks (MFNNs), one of the most famous Artificial Neural Networks (ANNs), and KF/smoothers have been proposed in order to enhance the performance of low-cost Micro Electro Mechanical System (MEMS) INS/GPS integrated systems, the automation of the MFNN applied has not proven as easy as initially expected. Therefore, this study not only addresses the problems of insufficient automation in the conventional methodology that has been applied in MFNN-KF/smoother algorithms for INS/GPS integrated systems proposed in previous studies, but also exploits and analyzes the idea of developing alternative intelligent sensor positioning and orientation schemes that integrate various sensors in more automatic ways. The proposed schemes are implemented using one of the most famous constructive neural networks, the Cascade Correlation Neural Network (CCNN), to overcome the limitations of conventional techniques based on KF/smoother algorithms as well as previously developed MFNN-smoother schemes. The CCNNs applied also have the advantage of a more flexible topology compared to MFNNs. Based on the experimental data utilized, the preliminary results presented in this article illustrate the effectiveness of the proposed schemes compared to smoother algorithms as well as the MFNN-smoother schemes. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Open Access Article Wireless Intelligent Sensors Management Application Protocol-WISMAP
Sensors 2010, 10(10), 8827-8849; doi:10.3390/s101008827
Received: 10 August 2010 / Revised: 13 September 2010 / Accepted: 15 September 2010 / Published: 28 September 2010
Cited by 9 | PDF Full-text (603 KB) | HTML Full-text | XML Full-text
Abstract
Although many recent studies have focused on the development of new applications for wireless sensor networks, less attention has been paid to knowledge-based sensor nodes. The objective of this work is the development, in a real network, of a new distributed system in which every sensor node can execute a set of applications, such as fuzzy rule-based systems, measurements, and actions. The sensor software is based on a multi-agent structure that is composed of three components: management, application control, and communication agents; a service interface, which provides applications the abstraction of sensor hardware and other components; and an application layer protocol. The results show the effectiveness of the communication protocol and that the proposed system is suitable for a wide range of applications. As real-world applications, this work presents an example of a fuzzy rule-based system and a noise pollution monitoring application that obtains a fuzzy noise indicator. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Figures

Open Access Article Stereo Vision Tracking of Multiple Objects in Complex Indoor Environments
Sensors 2010, 10(10), 8865-8887; doi:10.3390/s101008865
Received: 31 August 2010 / Revised: 7 September 2010 / Accepted: 25 September 2010 / Published: 28 September 2010
Cited by 11 | PDF Full-text (1070 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
This paper presents a novel system capable of solving the problem of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information related to each object in the robot’s environment; then it achieves a classification between building elements (ceiling, walls, columns and so on) and the rest of the items in the robot’s surroundings. All objects in the robot’s surroundings, both dynamic and static, are considered obstacles, except for the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used in order to obtain a multimodal representation of the speed and position of detected obstacles. Performance of the final system has been tested against state-of-the-art proposals; test results validate the authors’ proposal. The designed algorithms and procedures provide a solution to those applications where similar multimodal data structures are found. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Figures

Open Access Article Color Regeneration from Reflective Color Sensor Using an Artificial Intelligent Technique
Sensors 2010, 10(9), 8363-8374; doi:10.3390/s100908363
Received: 20 July 2010 / Revised: 10 August 2010 / Accepted: 20 August 2010 / Published: 3 September 2010
Cited by 13 | PDF Full-text (508 KB) | HTML Full-text | XML Full-text
Abstract
A low-cost optical sensor based on reflective color sensing is presented. Artificial neural network models are used to improve the color regeneration from the sensor signals. Analog voltages of the sensor are successfully converted to RGB colors. The artificial intelligent models presented in this work enable color regeneration from analog outputs of the color sensor. In addition, inverse modeling supported by an intelligent technique enables the sensor probe to be used as a colorimetric sensor that relates color changes to analog voltages. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Figures

Open Access Article Identifying and Tracking Pedestrians Based on Sensor Fusion and Motion Stability Predictions
Sensors 2010, 10(9), 8028-8053; doi:10.3390/s100908028
Received: 1 July 2010 / Revised: 9 August 2010 / Accepted: 26 August 2010 / Published: 27 August 2010
Cited by 20 | PDF Full-text (1800 KB) | HTML Full-text | XML Full-text
Abstract
The lack of trustworthy sensors makes development of Advanced Driver Assistance System (ADAS) applications a tough task. It is necessary to develop intelligent systems by combining reliable sensors and real-time algorithms to send the proper, accurate messages to the drivers. In this article, an application to detect and predict the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track the pedestrian within the detection zones of the sensors and predict their position in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in zigzag fashion in front of a vehicle. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Figures

Open Access Article Class Separation Improvements in Pixel Classification Using Colour Injection
Sensors 2010, 10(8), 7803-7842; doi:10.3390/s100807803
Received: 25 June 2010 / Revised: 20 July 2010 / Accepted: 4 August 2010 / Published: 20 August 2010
PDF Full-text (3457 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents an improvement in colour image segmentation in the Hue Saturation (HS) sub-space. The authors propose to inject (add) a colour vector in the Red Green Blue (RGB) space to increase the class separation in the HS plane. The goal of the work is the development of an algorithm to obtain the optimal colour vector for injection that maximizes the separation between the classes in the HS plane. The chromatic Chrominance-1 Chrominance-2 sub-space (of the Luminance Chrominance-1 Chrominance-2 (YC1C2) space) is used to obtain the optimal vector to add. The proposal is applied on each frame of a colour image sequence in real time. It has been tested in applications with reduced contrast between the colours of the background and the object, and particularly when the size of the object is very small in comparison with the size of the captured scene. Numerous tests have confirmed that this proposal improves the segmentation process, considerably reducing the effects of the variation of the light intensity of the scene. Several tests have been made on skin segmentation in applications for sign language recognition via computer vision, where an accurate segmentation of hands and face is required. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Figures
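The colour-injection idea can be sketched with a toy example: two classes whose RGB values are proportional are indistinguishable in the HS plane, but become separable once a colour vector is added in RGB before the conversion. This only illustrates the effect; the paper's actual contribution, finding the optimal injection vector via the YC1C2 sub-space, is not reproduced here, and the sample colours and injection vector are invented:

```python
import colorsys

def hs_after_injection(rgb, inject):
    """Add an injection vector in RGB (clipped to [0, 1]),
    then return the (hue, saturation) coordinates of the result."""
    r, g, b = (min(1.0, max(0.0, c + d)) for c, d in zip(rgb, inject))
    h, s, _ = colorsys.rgb_to_hsv(r, g, b)
    return h, s

def hs_distance(p, q):
    """Euclidean distance in the HS plane, with hue wrap-around."""
    dh = min(abs(p[0] - q[0]), 1.0 - abs(p[0] - q[0]))
    return (dh ** 2 + (p[1] - q[1]) ** 2) ** 0.5

# Two hypothetical classes: proportional RGB triples, so identical in HS
class_a = (0.60, 0.50, 0.45)
class_b = (0.48, 0.40, 0.36)

no_inject = hs_distance(hs_after_injection(class_a, (0.0, 0.0, 0.0)),
                        hs_after_injection(class_b, (0.0, 0.0, 0.0)))
with_inject = hs_distance(hs_after_injection(class_a, (0.0, 0.0, 0.3)),
                          hs_after_injection(class_b, (0.0, 0.0, 0.3)))
```

Without injection the HS distance between the classes is essentially zero; after adding the (hypothetical) blue vector the two classes land at different hue/saturation coordinates and can be thresholded apart.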

Open Access Article A Universal Intelligent System-on-Chip Based Sensor Interface
Sensors 2010, 10(8), 7716-7747; doi:10.3390/s100807716
Received: 11 June 2010 / Revised: 6 August 2010 / Accepted: 12 August 2010 / Published: 17 August 2010
Cited by 17 | PDF Full-text (1647 KB) | HTML Full-text | XML Full-text
Abstract
The need for real-time/reliable/low-maintenance distributed monitoring systems, e.g., wireless sensor networks, has been becoming more and more evident in many applications in the environmental, agro-alimentary, medical, and industrial fields. The growing interest in technologies related to sensors is an important indicator of these new needs. The design and realization of complex and/or distributed monitoring systems are often difficult due to the multitude of different electronic interfaces presented by the sensors available on the market. To address these issues, the authors propose the concept of a Universal Intelligent Sensor Interface (UISI), a new low-cost system based on a single commercial chip able to convert a generic transducer into an intelligent sensor with multiple standardized interfaces. The device presented offers a flexible analog and/or digital front-end, able to interface different transducer typologies (such as conditioned, unconditioned, resistive, current-output, capacitive and digital transducers). The device also provides enhanced processing and storage capabilities, as well as a configurable multi-standard output interface (including a plug-and-play interface based on IEEE 1451.3). In this work the general concept of the UISI and the design of reconfigurable hardware are presented, together with experimental test results validating the proposed device. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Figures

Open Access Article A Software Architecture for Adaptive Modular Sensing Systems
Sensors 2010, 10(8), 7514-7560; doi:10.3390/s100807514
Received: 8 June 2010 / Revised: 14 July 2010 / Accepted: 5 August 2010 / Published: 10 August 2010
Cited by 3 | PDF Full-text (5202 KB) | HTML Full-text | XML Full-text
Abstract
By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Open Access Article A Field Programmable Gate Array-Based Reconfigurable Smart-Sensor Network for Wireless Monitoring of New Generation Computer Numerically Controlled Machines
Sensors 2010, 10(8), 7263-7286; doi:10.3390/s100807263
Received: 26 May 2010 / Revised: 7 July 2010 / Accepted: 29 July 2010 / Published: 3 August 2010
Cited by 8 | PDF Full-text (2073 KB) | HTML Full-text | XML Full-text
Abstract
Computer numerically controlled (CNC) machines have evolved to adapt to increasing technological and industrial requirements. To cover these needs, new generation machines have to perform monitoring strategies by incorporating multiple sensors. Since in most applications the online processing of the variables is essential, the use of smart sensors is necessary. The contribution of this work is the development of a wireless network platform of reconfigurable smart sensors for CNC machine applications complying with the measurement requirements of new generation CNC machines. Four different smart sensors are put under test in the network and their corresponding signal processing techniques are implemented in a Field Programmable Gate Array (FPGA)-based sensor node. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Open Access Article Investigation of the Frequency Shift of a SAD Circuit Loop and the Internal Micro-Cantilever in a Gas Sensor
Sensors 2010, 10(7), 7044-7056; doi:10.3390/s100707044
Received: 15 June 2010 / Revised: 30 June 2010 / Accepted: 10 July 2010 / Published: 23 July 2010
Cited by 3 | PDF Full-text (426 KB) | HTML Full-text | XML Full-text
Abstract
Micro-cantilever sensors for mass detection using resonance frequency have attracted considerable attention over the last decade in the field of gas sensing. For such a sensing system, an oscillator circuit loop is conventionally used to actuate the micro-cantilever and trace the frequency shifts. In this paper, gas experiments are introduced to investigate the mechanical resonance frequency (MRF) shifts of the micro-cantilever within the circuit loop and the resonating frequency shifts of the electric signal in the oscillator circuit (the system working frequency, SWF). A silicon beam with a piezoelectric zinc oxide layer is employed in the experiment, and a Self-Actuating-Detecting (SAD) circuit loop is built to drive the micro-cantilever and to follow the frequency shifts. The differences between the two resonating frequencies and their shifts are discussed and analyzed, and a coefficient relating the two frequency shifts is confirmed. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Figures

Open Access Article Study on a Two-Dimensional Scanning Micro-Mirror and Its Application in a MOEMS Target Detector
Sensors 2010, 10(7), 6848-6860; doi:10.3390/s100706848
Received: 23 June 2010 / Revised: 16 July 2010 / Accepted: 16 July 2010 / Published: 16 July 2010
Cited by 9 | PDF Full-text (373 KB) | HTML Full-text | XML Full-text
Abstract
A two-dimensional (2D) scanning micro-mirror for target detection and measurement has been developed. This new micro-mirror is used in a MOEMS target detector to replace the conventional scanning detector. The micro-mirror is fabricated by MEMS process and actuated by a piezoelectric actuator. To
[...] Read more.
A two-dimensional (2D) scanning micro-mirror for target detection and measurement has been developed. This new micro-mirror is used in a MOEMS target detector to replace the conventional scanning detector. The micro-mirror is fabricated by a MEMS process and actuated by a piezoelectric actuator. To achieve large deflection angles, the micro-mirror is excited in its resonance modes. It has two degrees of freedom and changes the direction of the emitted laser beam for regional 2D scanning. For deflection angle measurement, piezoresistors are integrated in the micro-mirror so that the deflection angle in each direction can be detected independently and precisely. Based on the scanning micro-mirror and phase-shift ranging technology, a MOEMS target detector has been developed with a size of 90 mm × 35 mm × 50 mm. The experiment shows that a target can be detected in the scanning field and its relative range and orientation can be measured by the MOEMS target detector. For target distances up to 3 m with a field of view of about 20° × 20°, the measurement resolution is about 10.2 cm in range, 0.15° in the horizontal direction and 0.22° in the vertical direction for orientation. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
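The phase-shift ranging the detector relies on rests on a standard relation: for an intensity-modulated beam, the round-trip phase shift Δφ at modulation frequency f maps to distance d = c·Δφ/(4πf). A minimal sketch (function and parameter names are illustrative, not from the paper):

```python
import math

C = 3.0e8  # speed of light, m/s

def range_from_phase(delta_phi, f_mod):
    """Convert a round-trip phase shift delta_phi (rad), measured at
    modulation frequency f_mod (Hz), into target distance (m).
    The factor 4*pi accounts for the out-and-back path."""
    return C * delta_phi / (4 * math.pi * f_mod)
```

At 10 MHz modulation, a half-cycle phase shift (π rad) corresponds to 7.5 m, so the 3 m working range of the detector stays within one unambiguous interval.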
Open AccessArticle Bathymetry Determination via X-Band Radar Data: A New Strategy and Numerical Results
Sensors 2010, 10(7), 6522-6534; doi:10.3390/s100706522
Received: 27 May 2010 / Revised: 15 June 2010 / Accepted: 25 June 2010 / Published: 6 July 2010
Cited by 15 | PDF Full-text (885 KB) | HTML Full-text | XML Full-text
Abstract
This work deals with the question of sea state monitoring using marine X-band radar images and focuses its attention on the problem of sea depth estimation. We present and discuss a technique to estimate bathymetry by exploiting the dispersion relation for surface gravity
[...] Read more.
This work deals with the question of sea state monitoring using marine X-band radar images and focuses on the problem of sea depth estimation. We present and discuss a technique to estimate bathymetry by exploiting the dispersion relation for surface gravity waves. This estimation technique is based on the correlation between the measured and the theoretical sea wave spectra, and a simple analysis of the approach is performed through test cases with synthetic data. In more detail, the reliability of the estimation technique is verified through simulated data sets concerned with different values of bathymetry and surface currents for two types of sea spectrum: JONSWAP and Pierson-Moskowitz. The results show that the estimated bathymetry is fairly accurate for low depth values, while the estimate becomes less accurate as the bathymetry increases, due to the less significant role of the bathymetry on the sea surface waves as the water depth increases. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
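The dispersion relation the technique exploits, ω² = g·k·tanh(k·h), can be inverted directly for depth when a single wavenumber–frequency pair is measured; the paper's spectral-correlation method is more robust, but this sketch shows the underlying relation (and why deep water is hard: as tanh saturates, h drops out):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def depth_from_dispersion(omega, k):
    """Invert the linear dispersion relation for surface gravity
    waves, omega**2 = G * k * tanh(k * h), for the water depth h.
    Only recoverable while omega**2 < G * k; at the deep-water
    limit the depth no longer influences the waves."""
    ratio = omega ** 2 / (G * k)
    if ratio >= 1.0:
        raise ValueError("deep-water limit: depth not recoverable")
    return math.atanh(ratio) / k
```

Round-tripping a synthetic (h, k) pair through the forward relation recovers the depth exactly, mirroring the paper's observation that shallow depths are the well-conditioned case.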
Open AccessArticle Optimization of the Sampling Periods and the Quantization Bit Lengths for Networked Estimation
Sensors 2010, 10(7), 6406-6420; doi:10.3390/s100706406
Received: 1 February 2010 / Revised: 17 April 2010 / Accepted: 12 May 2010 / Published: 29 June 2010
PDF Full-text (177 KB) | HTML Full-text | XML Full-text
Abstract
This paper is concerned with networked estimation, where sensor data are transmitted over a network of limited transmission rate. The transmission rate depends on the sampling periods and the quantization bit lengths. To investigate how the sampling periods and the quantization bit lengths
[...] Read more.
This paper is concerned with networked estimation, where sensor data are transmitted over a network of limited transmission rate. The transmission rate depends on the sampling periods and the quantization bit lengths. To investigate how the sampling periods and the quantization bit lengths affect the estimation performance, an equation to compute the estimation performance is provided. An algorithm is proposed to find a combination of sampling periods and quantization bit lengths that gives good estimation performance while satisfying the transmission rate constraint. The proposed algorithm is verified through a numerical example. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
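The structure of such a search can be sketched as a brute-force enumeration over candidate (period, bit-length) pairs under the bit-rate constraint b/T per sensor. The cost model below (error growing with the period and with the quantization step 2⁻ᵇ) is a hypothetical stand-in for the paper's estimation-performance equation, used only to make the sketch runnable:

```python
from itertools import product

def best_combination(periods, bits, rate_limit):
    """Exhaustive search over (sampling period, bit length) pairs for
    two sensors under a total transmission-rate constraint.

    Hypothetical cost model: per-sensor error ~ T + 2**-b, summed.
    """
    best, best_cost = None, float("inf")
    for (t1, b1), (t2, b2) in product(product(periods, bits), repeat=2):
        if b1 / t1 + b2 / t2 > rate_limit:
            continue  # combination violates the rate budget (bits/s)
        cost = (t1 + 2.0 ** -b1) + (t2 + 2.0 ** -b2)
        if cost < best_cost:
            best, best_cost = ((t1, b1), (t2, b2)), cost
    return best, best_cost
```

Even this toy version shows the trade-off: the rate budget forces either coarser quantization or slower sampling, and the best split depends on how the cost weights the two.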
Open AccessArticle EMMNet: Sensor Networking for Electricity Meter Monitoring
Sensors 2010, 10(7), 6307-6323; doi:10.3390/s100706307
Received: 11 May 2010 / Revised: 27 May 2010 / Accepted: 6 June 2010 / Published: 24 June 2010
Cited by 4 | PDF Full-text (2603 KB) | HTML Full-text | XML Full-text
Abstract
Smart sensors are emerging as a promising technology for a large number of application domains. This paper presents a collection of requirements and guidelines that serve as a basis for a general smart sensor architecture to monitor electricity meters. It also presents an
[...] Read more.
Smart sensors are emerging as a promising technology for a large number of application domains. This paper presents a collection of requirements and guidelines that serve as a basis for a general smart sensor architecture to monitor electricity meters. It also presents an electricity meter monitoring network, named EMMNet, comprised of data collectors, data concentrators, hand-held devices, a centralized server, and clients. EMMNet provides long-distance communication capabilities, which make it suitable for complex urban environments. In addition, the operational cost of EMMNet is low compared with other existing remote meter monitoring systems based on GPRS. A new dynamic tree protocol based on the application requirements, which can significantly improve the reliability of the network, is also proposed. We are currently conducting tests on five networks and investigating network problems for further improvements. Evaluation results indicate that EMMNet enhances the efficiency and accuracy in the reading, recording, and calibration of electricity meters. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Open AccessArticle A New Collaborative Knowledge-Based Approach for Wireless Sensor Networks
Sensors 2010, 10(6), 6044-6062; doi:10.3390/s100606044
Received: 28 April 2010 / Revised: 15 May 2010 / Accepted: 5 June 2010 / Published: 17 June 2010
Cited by 7 | PDF Full-text (259 KB) | HTML Full-text | XML Full-text
Abstract
This work presents a new approach for collaboration among sensors in Wireless Sensor Networks. These networks are composed of a large number of sensor nodes with constrained resources: limited computational capability, memory, power sources, etc. Nowadays, there is a growing interest in the
[...] Read more.
This work presents a new approach for collaboration among sensors in Wireless Sensor Networks. These networks are composed of a large number of sensor nodes with constrained resources: limited computational capability, memory, power sources, etc. Nowadays, there is a growing interest in the integration of Soft Computing technologies into Wireless Sensor Networks. However, little attention has been paid to integrating Fuzzy Rule-Based Systems into collaborative Wireless Sensor Networks. The objective of this work is to design a collaborative knowledge-based network, in which each sensor executes an adapted Fuzzy Rule-Based System, which presents significant advantages such as: experts can define interpretable knowledge with uncertainty and imprecision, collaborative knowledge can be separated from control or modeling knowledge and the collaborative approach may support neighbor sensor failures and communication errors. As a real-world application of this approach, we demonstrate a collaborative modeling system for pests, in which an alarm about the development of olive tree fly is inferred. The results show that knowledge-based sensors are suitable for a wide range of applications and that the behavior of a knowledge-based sensor may be modified by inferences and knowledge of neighbor sensors in order to obtain a more accurate and reliable output. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
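The flavour of a Fuzzy Rule-Based System running on a node can be sketched with triangular membership functions and Mamdani-style min for AND. The two rules, the variables, and all thresholds below are hypothetical illustrations, not the pest-model rules from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fly_alarm(temp_c, humidity):
    """Two hypothetical rules for an olive-fly alarm:
      IF temp is warm AND humidity is high THEN alarm is high
      IF temp is warm AND humidity is low  THEN alarm is medium
    Returns the activation degree of each rule (min = fuzzy AND)."""
    warm   = tri(temp_c,   15.0, 25.0,  35.0)
    high_h = tri(humidity,  50.0, 80.0, 110.0)
    low_h  = tri(humidity, -10.0, 20.0,  50.0)
    return {"high": min(warm, high_h), "medium": min(warm, low_h)}
```

In the collaborative setting described above, a node could also feed the inferred activations of neighbouring sensors into its own rule base, which is what lets neighbours compensate for a failed sensor.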
Open AccessArticle A Fiber Optic Doppler Sensor and Its Application in Debonding Detection for Composite Structures
Sensors 2010, 10(6), 5975-5993; doi:10.3390/s100605975
Received: 30 April 2010 / Revised: 20 May 2010 / Accepted: 29 May 2010 / Published: 14 June 2010
Cited by 5 | PDF Full-text (769 KB) | HTML Full-text | XML Full-text
Abstract
Debonding is one of the most important damage forms in fiber-reinforced composite structures. This work was devoted to the debonding damage detection of lap splice joints in carbon fiber reinforced plastic (CFRP) structures, which is based on guided ultrasonic wave signals captured by
[...] Read more.
Debonding is one of the most important damage forms in fiber-reinforced composite structures. This work was devoted to the detection of debonding damage in lap splice joints of carbon fiber reinforced plastic (CFRP) structures, based on guided ultrasonic wave signals captured by a fiber optic Doppler (FOD) sensor with a spiral shape. Interferometers based on two types of laser sources, namely the He-Ne laser and the infrared semiconductor laser, are proposed and compared in this study for the purpose of measuring the Doppler frequency shift of the FOD sensor. The locations of the FOD sensors are optimized based on the mechanical characteristics of the lap splice joint. The FOD sensors are subsequently used to detect the guided ultrasonic waves propagating in the CFRP structures. By taking advantage of signal processing approaches, features of the guided wave signals can be revealed. The results demonstrate that debonding in the lap splice joint results in an arrival time delay of the first package in the guided wave signals, which can serve as a characteristic for debonding damage inspection and damage extent estimation. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
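The arrival-time-delay feature that the damage indicator relies on can be estimated generically by cross-correlating a measured waveform against a reference; this is a standard signal-processing sketch, not the paper's specific processing chain:

```python
import numpy as np

def arrival_delay(reference, measured, fs):
    """Estimate the arrival-time delay (s) of `measured` relative to
    `reference` (equal-length signals, sample rate fs in Hz) from the
    peak of their cross-correlation."""
    corr = np.correlate(measured, reference, mode="full")
    lag = np.argmax(corr) - (len(reference) - 1)  # lag in samples
    return lag / fs
```

Applied to the first wave package before and after debonding, a positive delay would flag the damage and its growth.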
Open AccessArticle Fast Scene Recognition and Camera Relocalisation for Wide Area Augmented Reality Systems
Sensors 2010, 10(6), 6017-6043; doi:10.3390/s100606017
Received: 29 April 2010 / Revised: 29 May 2010 / Accepted: 2 June 2010 / Published: 14 June 2010
PDF Full-text (3672 KB) | HTML Full-text | XML Full-text
Abstract
This paper focuses on online scene learning and fast camera relocalisation which are two key problems currently limiting the performance of wide area augmented reality systems. Firstly, we propose to use adaptive random trees to deal with the online scene learning problem. The
[...] Read more.
This paper focuses on online scene learning and fast camera relocalisation, two key problems currently limiting the performance of wide area augmented reality systems. Firstly, we propose to use adaptive random trees to deal with the online scene learning problem. The algorithm can provide more accurate recognition rates than traditional methods, especially with large scale workspaces. Secondly, we use the enhanced PROSAC algorithm to obtain a fast camera relocalisation method. Compared with traditional algorithms, our method can significantly reduce the computational complexity, which greatly facilitates online camera relocalisation. Finally, we implement our algorithms in a multithreaded manner by using a parallel-computing scheme. Camera tracking, scene mapping, scene learning and relocalisation are separated into four threads on a multi-CPU hardware architecture. While providing real-time tracking performance, the resulting system also possesses the ability to track multiple maps simultaneously. Experiments have been conducted to demonstrate the validity of our methods. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Open AccessArticle Data Acquisition, Analysis and Transmission Platform for a Pay-As-You-Drive System
Sensors 2010, 10(6), 5395-5408; doi:10.3390/s100605395
Received: 30 March 2010 / Revised: 15 April 2010 / Accepted: 13 May 2010 / Published: 1 June 2010
Cited by 5 | PDF Full-text (1337 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a platform used to acquire, analyse and transmit data from a vehicle to a Control Centre as part of a Pay-As-You-Drive system. The aim is to monitor vehicle usage (how much, when, where and how) and,
[...] Read more.
This paper presents a platform used to acquire, analyse and transmit data from a vehicle to a Control Centre as part of a Pay-As-You-Drive system. The aim is to monitor vehicle usage (how much, when, where and how) and, based on this information, assess the associated risk and set an appropriate insurance premium. To determine vehicle usage, the system analyses the driver's respect for speed limits, driving style (aggressive or non-aggressive), mobile telephone use and the number of vehicle passengers. An electronic system on board the vehicle acquires these data, processes them and transmits them by mobile telephone (GPRS/UMTS) to a Control Centre, at which the insurance company assesses the risk associated with vehicles monitored by the system. The system provides insurance companies and their customers with an enhanced service and could potentially increase responsible driving habits and reduce the number of road accidents. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Open AccessArticle Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors
Sensors 2010, 10(5), 5209-5232; doi:10.3390/s100505209
Received: 11 February 2010 / Revised: 31 March 2010 / Accepted: 14 April 2010 / Published: 25 May 2010
Cited by 14 | PDF Full-text (2746 KB) | HTML Full-text | XML Full-text
Abstract
In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves
[...] Read more.
In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves through the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements of them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Open AccessArticle Common Criteria Related Security Design Patterns—Validation on the Intelligent Sensor Example Designed for Mine Environment
Sensors 2010, 10(5), 4456-4496; doi:10.3390/s100504456
Received: 5 March 2010 / Revised: 15 April 2010 / Accepted: 28 April 2010 / Published: 30 April 2010
Cited by 7 | PDF Full-text (382 KB) | HTML Full-text | XML Full-text
Abstract
The paper discusses the security issues of intelligent sensors that are able to measure and process data and communicate with other information technology (IT) devices or systems. Such sensors are often used in high risk applications. To improve their robustness, the sensor systems
[...] Read more.
The paper discusses the security issues of intelligent sensors that are able to measure and process data and communicate with other information technology (IT) devices or systems. Such sensors are often used in high risk applications. To improve their robustness, the sensor systems should be developed in a restricted way to provide them with assurance. One such assurance creation methodology is Common Criteria (ISO/IEC 15408), used for IT products and systems. The contribution of the paper is a Common Criteria compliant and pattern-based method for the security development of intelligent sensors. The paper concisely presents this method and its evaluation for a sensor detecting methane in a mine, focusing on the security problem definition and solution for the intelligent sensor. The aim of the validation is to evaluate and improve the introduced method. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Open AccessArticle FPGA-Based Fused Smart Sensor for Dynamic and Vibration Parameter Extraction in Industrial Robot Links
Sensors 2010, 10(4), 4114-4129; doi:10.3390/s100404114
Received: 2 March 2010 / Revised: 20 April 2010 / Accepted: 20 April 2010 / Published: 26 April 2010
Cited by 18 | PDF Full-text (3738 KB) | HTML Full-text | XML Full-text
Abstract
Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate
[...] Read more.
Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the interest parameters, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA). Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
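Two stages of the described processing chain, averaging decimation of oversampled data and finite differences for motion dynamics, are standard enough to sketch. This is an illustrative software analogue of what the paper implements in FPGA hardware, with hypothetical function names:

```python
import numpy as np

def decimate_average(x, factor):
    """Averaging decimation: replace each consecutive block of
    `factor` oversampled values by its mean, reducing rate and
    quantization noise together."""
    n = (len(x) // factor) * factor  # drop any incomplete tail block
    return x[:n].reshape(-1, factor).mean(axis=1)

def finite_difference(x, dt):
    """Central finite differences (one-sided at the ends) to estimate
    the derivative of a sampled position signal, as needed for
    velocity/acceleration (motion dynamics) estimation."""
    return np.gradient(x, dt)
```

Chaining the two (decimate, then differentiate, then differentiate again) gives smoothed velocity and acceleration estimates from raw encoder or accelerometer samples.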
Open AccessArticle Localization of Mobile Robots Using Odometry and an External Vision Sensor
Sensors 2010, 10(4), 3655-3680; doi:10.3390/s100403655
Received: 15 January 2010 / Revised: 3 March 2010 / Accepted: 31 March 2010 / Published: 13 April 2010
Cited by 15 | PDF Full-text (3085 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a sensor system for robot localization based on the information obtained from a single camera attached in a fixed place external to the robot. Our approach firstly obtains the 3D geometrical model of the robot based on the projection of
[...] Read more.
This paper presents a sensor system for robot localization based on the information obtained from a single camera attached in a fixed place external to the robot. Our approach firstly obtains the 3D geometrical model of the robot based on the projection of its natural appearance in the camera while the robot performs an initialization trajectory. This paper proposes a structure-from-motion solution that uses the odometry sensors inside the robot as a metric reference. Secondly, an online localization method based on a sequential Bayesian inference is proposed, which uses the geometrical model of the robot as a link between image measurements and pose estimation. The online approach is resistant to hard occlusions and the experimental setup proposed in this paper shows its effectiveness in real situations. The proposed approach has many applications in both the industrial and service robot fields. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Open AccessArticle FPGA-Based Fused Smart-Sensor for Tool-Wear Area Quantitative Estimation in CNC Machine Inserts
Sensors 2010, 10(4), 3373-3388; doi:10.3390/s100403373
Received: 10 February 2010 / Accepted: 29 March 2010 / Published: 7 April 2010
Cited by 19 | PDF Full-text (1084 KB) | HTML Full-text | XML Full-text
Abstract
Manufacturing processes are of great relevance nowadays, when there is a constant claim for better productivity with high quality at low cost. The contribution of this work is the development of a fused smart-sensor, based on FPGA to improve the online quantitative estimation
[...] Read more.
Manufacturing processes are of great relevance nowadays, when there is constant demand for better productivity with high quality at low cost. The contribution of this work is the development of an FPGA-based fused smart-sensor to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier, and a 3-axis accelerometer. Results from experimentation show that the fusion of both parameters makes it possible to obtain three times better accuracy than that obtained from the current and vibration signals used individually. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
Open AccessArticle Multi-Camera Sensor System for 3D Segmentation and Localization of Multiple Mobile Robots
Sensors 2010, 10(4), 3261-3279; doi:10.3390/s100403261
Received: 5 February 2010 / Revised: 23 March 2010 / Accepted: 26 March 2010 / Published: 1 April 2010
Cited by 16 | PDF Full-text (739 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space).
[...] Read more.
This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
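The greedy iterative scheme described above, fixing one group of variables while minimizing over another and repeating until convergence, has a simple generic skeleton. The two update functions below are placeholders for the paper's boundary/motion/depth steps; the quadratic example in the usage note is purely illustrative:

```python
def alternating_minimization(step_x, step_y, x0, y0, tol=1e-9, max_iter=100):
    """Greedy alternating scheme: `step_x` minimizes the objective
    over x with y fixed, `step_y` over y with x fixed. Iterate until
    neither variable moves by more than `tol`."""
    x, y = x0, y0
    for _ in range(max_iter):
        x_new = step_x(y)
        y_new = step_y(x_new)
        if abs(x_new - x) < tol and abs(y_new - y) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y
```

For example, minimizing (x − y)² + (y − 3)² gives the closed-form updates x ← y and y ← (x + 3)/2, which converge to x = y = 3; the paper's three-variable version (boundaries, motion, depth) cycles through three such steps instead of two.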

Review

Jump to: Research

Open AccessReview A Smart Checkpointing Scheme for Improving the Reliability of Clustering Routing Protocols
Sensors 2010, 10(10), 8938-8952; doi:10.3390/s101008938
Received: 22 July 2010 / Revised: 13 September 2010 / Accepted: 28 September 2010 / Published: 29 September 2010
Cited by 3 | PDF Full-text (639 KB) | HTML Full-text | XML Full-text
Abstract
In wireless sensor networks, system architectures and applications are designed to consider both resource constraints and scalability, because such networks are composed of numerous sensor nodes with various sensors and actuators, small memories, low-power microprocessors, radio modules, and batteries. Clustering routing protocols based
[...] Read more.
In wireless sensor networks, system architectures and applications are designed to consider both resource constraints and scalability, because such networks are composed of numerous sensor nodes with various sensors and actuators, small memories, low-power microprocessors, radio modules, and batteries. Clustering routing protocols based on data aggregation schemes aimed at minimizing packet numbers have been proposed to meet these requirements. In clustering routing protocols, the cluster head plays an important role. The cluster head collects data from its member nodes and aggregates the collected data. To improve reliability and reduce recovery latency, we propose a checkpointing scheme for the cluster head. In the proposed scheme, backup nodes monitor and checkpoint the current state of the cluster head periodically. We also derive the checkpointing interval that maximizes reliability while using the same amount of energy consumed by clustering routing protocols that operate without checkpointing. Experimental comparisons with existing non-checkpointing schemes show that our scheme reduces both energy consumption and recovery latency. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)
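The interval trade-off at the heart of the scheme can be illustrated with a deliberately simplified energy model: each checkpoint costs a fixed energy, so shorter intervals improve recovery (less rolled-back work) but spend more power. This model and its names are hypothetical stand-ins for the paper's derivation:

```python
def checkpoint_interval(e_checkpoint, e_budget_per_s):
    """Hypothetical energy-balance model: one checkpoint costs
    e_checkpoint joules, so interval T spends e_checkpoint / T watts.
    The shortest interval (best reliability) that still matches the
    power budget solves e_checkpoint / T = budget."""
    return e_checkpoint / e_budget_per_s

def expected_rollback(interval):
    """With failures uniformly distributed in time, the expected
    amount of cluster-head work lost at a failure is half an
    interval."""
    return interval / 2.0
```

Under this toy model a 0.5 J checkpoint and a 50 mW budget give a 10 s interval and 5 s expected rollback; the paper instead derives the interval that maximizes reliability at energy parity with non-checkpointing protocols.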
Open AccessReview Intelligent Chiral Sensing Based on Supramolecular and Interfacial Concepts
Sensors 2010, 10(7), 6796-6820; doi:10.3390/s100706796
Received: 17 June 2010 / Revised: 7 July 2010 / Accepted: 8 July 2010 / Published: 13 July 2010
Cited by 37 | PDF Full-text (421 KB) | HTML Full-text | XML Full-text
Abstract
Of the known intelligently-operating systems, the majority can undoubtedly be classed as being of biological origin. One of the notable differences between biological and artificial systems is the important fact that biological materials consist mostly of chiral molecules. While most biochemical processes routinely
[...] Read more.
Of the known intelligently-operating systems, the majority can undoubtedly be classed as being of biological origin. One of the notable differences between biological and artificial systems is the important fact that biological materials consist mostly of chiral molecules. While most biochemical processes routinely discriminate chiral molecules, differentiation between chiral molecules in artificial systems is currently one of the challenging subjects in the field of molecular recognition. Therefore, one of the important challenges for intelligent man-made sensors is to prepare a sensing system that can discriminate chiral molecules. Because intermolecular interactions and detection at surfaces are respectively parts of supramolecular chemistry and interfacial science, chiral sensing based on supramolecular and interfacial concepts is a significant topic. In this review, we briefly summarize recent advances in these fields, including supramolecular hosts for color detection on chiral sensing, indicator-displacement assays, kinetic resolution in supramolecular reactions with analyses by mass spectrometry, use of chiral shape-defined polymers, such as dynamic helical polymers, molecular imprinting, thin films on surfaces of devices such as QCM, functional electrodes, FET, and SPR, the combined technique of magnetic resonance imaging and immunoassay, and chiral detection using scanning tunneling microscopy and cantilever technology. In addition, we will discuss novel concepts in recent research including the use of achiral reagents for chiral sensing with NMR, and mechanical control of chiral sensing. The importance of integration of chiral sensing systems with rapidly developing nanotechnology and nanomaterials is also emphasized. Full article
(This article belongs to the Special Issue Intelligent Sensors - 2010)

Journal Contact

MDPI AG
Sensors Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
sensors@mdpi.com
Tel.: +41 61 683 77 34
Fax: +41 61 302 89 18