Special Issue "Image Sensors"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (31 October 2009)

Special Issue Editor

Editorial Advisor
Prof. Dr. Peter Seitz

LEO C12, Leonhardstrasse 27, 8092 Zürich
Fax: +41 44 633 82 17
Interests: semiconductor image sensors; smart pixels; high-performance photosensing; low-noise; high-speed and high-dynamic-range image sensing; photonic microsystems; optical metrology and measurement systems; optical time-of-flight 3D range cameras; organic semiconductors; polymer optoelectronics; monolithic photonic microsystems based on organic semiconductors; entrepreneurship, management, creativity, intellectual property and project management

Keywords

  • image sensors
  • charge-coupled devices (CCD)
  • contact image sensors (CIS)
  • complementary metal–oxide–semiconductors (CMOS)
  • image sensor performance and improvement potential
  • pixel sensors
  • color sensing
  • thermal imaging
  • X-ray sensor arrays

Published Papers (20 papers)


Research

Jump to: Review

Open Access Article: A Rigid Image Registration Based on the Nonsubsampled Contourlet Transform and Genetic Algorithms
Sensors 2010, 10(9), 8553-8571; doi:10.3390/s100908553
Received: 15 June 2010 / Revised: 5 September 2010 / Accepted: 8 September 2010 / Published: 14 September 2010
Cited by 4
Abstract
Image registration is a fundamental task used in image processing to match two or more images taken at different times, from different sensors or from different viewpoints. The objective is to find, within a huge search space of geometric transformations, an acceptably accurate solution in a reasonable time to provide better registered images. Exhaustive search is computationally expensive, and the computational cost increases exponentially with the number of transformation parameters and the size of the data set. In this work, we present an efficient image registration algorithm that uses genetic algorithms within a multi-resolution framework based on the Non-Subsampled Contourlet Transform (NSCT). An adaptable genetic algorithm for registration is adopted in order to minimize the search space. This approach is used within a hybrid scheme applying two techniques: fitness sharing and elitism. Two NSCT-based methods are proposed for registration, and a comparative study is established between these methods and a wavelet-based one. Because the NSCT is a shift-invariant multidirectional transform, the second method is adopted for its faster search. Simulation results clearly show that both proposed techniques are promising for image registration compared to the wavelet approach, with the second technique yielding the best performance of all. Moreover, to demonstrate their effectiveness, these registration techniques have been successfully applied to register SPOT, IKONOS and Synthetic Aperture Radar (SAR) images. The algorithm has also been shown to work well for multi-temporal satellite images, even in the presence of noise. Full article
(This article belongs to the Special Issue Image Sensors)
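As a minimal illustration of the idea above, the following Python toy evolves an integer (dy, dx) translation by genetic search with elitism; the NSCT multi-resolution pyramid and fitness sharing of the actual method are omitted, and the fitness function (negative sum of squared differences) is an assumption for the sketch.

```python
import numpy as np

def register_translation_ga(ref, moving, span=10, pop=40, gens=60, seed=1):
    """Toy genetic search for the integer (dy, dx) shift aligning `moving`
    to `ref`; fitness is the negative sum of squared differences."""
    rng = np.random.default_rng(seed)

    def fitness(shift):
        return -np.sum((ref - np.roll(moving, shift, axis=(0, 1))) ** 2)

    population = rng.integers(-span, span + 1, size=(pop, 2))
    for _ in range(gens):
        scores = np.array([fitness(tuple(ind)) for ind in population])
        elite = population[np.argsort(scores)[::-1][: pop // 2]]   # elitism
        parents = elite[rng.integers(0, len(elite), pop - len(elite))]
        children = np.clip(parents + rng.integers(-1, 2, parents.shape),
                           -span, span)                            # +/-1 mutation
        population = np.vstack([elite, children])
    scores = np.array([fitness(tuple(ind)) for ind in population])
    return tuple(int(v) for v in population[np.argmax(scores)])

# Smooth synthetic scene, so the similarity landscape has a usable gradient.
y, x = np.mgrid[0:32, 0:32]
ref = np.exp(-((y - 16.0) ** 2 + (x - 16.0) ** 2) / 40.0)
moving = np.roll(ref, (-3, 5), axis=(0, 1))    # reference shifted by (-3, 5)
dy, dx = register_translation_ga(ref, moving)  # the GA should recover (3, -5)
```

Keeping the best half of each generation untouched is the elitism the abstract mentions; it guarantees the best candidate found so far is never lost while mutated copies explore its neighbourhood.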
Open Access Article: Relaxation Time Estimation from Complex Magnetic Resonance Images
Sensors 2010, 10(4), 3611-3625; doi:10.3390/s100403611
Received: 20 December 2009 / Revised: 23 February 2010 / Accepted: 24 March 2010 / Published: 9 April 2010
Cited by 16
Abstract
Magnetic Resonance (MR) imaging techniques are used to measure biophysical properties of tissues. As clinical diagnoses are mainly based on the evaluation of contrast in MR images, relaxation times assume a fundamental role, providing a major source of contrast. Moreover, they can give useful information in cancer diagnostics. In this paper we present a statistical technique to estimate relaxation times exploiting complex-valued MR images. Working in the complex domain instead of the amplitude domain allows us to model the data as bivariate Gaussian distributed, and thus to implement a simple Least Squares (LS) estimator on the available complex data. The proposed estimator proves to be simple, accurate and unbiased. Full article
(This article belongs to the Special Issue Image Sensors)
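The complex-domain least-squares idea above can be sketched for a mono-exponential decay; the model, echo times and grid search are illustrative assumptions, not the authors' exact formulation. Given a candidate T2, the complex amplitude has a closed-form LS solution, so the fit reduces to a one-dimensional search over T2.

```python
import numpy as np

def estimate_t2(signal, times, t2_grid):
    """Least-squares T2 estimate from complex-valued measurements.

    Model: s_k = c * exp(-t_k / T2) with an unknown complex amplitude c.
    For each candidate T2 the optimal c is closed-form, so the fit
    reduces to a 1-D search over T2.
    """
    best_t2, best_res = t2_grid[0], np.inf
    for t2 in t2_grid:
        d = np.exp(-times / t2)                  # real decay basis
        c = np.vdot(d, signal) / np.vdot(d, d)   # closed-form complex amplitude
        res = np.sum(np.abs(signal - c * d) ** 2)
        if res < best_res:
            best_res, best_t2 = res, t2
    return best_t2

# Simulated echo train: T2 = 80 ms, arbitrary common phase, mild complex noise.
rng = np.random.default_rng(0)
times = np.linspace(10.0, 200.0, 12)             # echo times in ms
clean = 1.5 * np.exp(1j * 0.7) * np.exp(-times / 80.0)
noisy = clean + 0.01 * (rng.standard_normal(12) + 1j * rng.standard_normal(12))
t2_hat = estimate_t2(noisy, times, np.arange(40.0, 121.0, 1.0))
```

Working directly on the complex samples keeps the additive noise Gaussian, which is what makes a plain LS fit appropriate; fitting the magnitude instead would introduce a Rician bias at low signal levels.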
Open Access Article: Improving the Ability of Image Sensors to Detect Faint Stars and Moving Objects Using Image Deconvolution Techniques
Sensors 2010, 10(3), 1743-1752; doi:10.3390/s100301743
Received: 22 December 2009 / Revised: 25 January 2010 / Accepted: 3 February 2010 / Published: 3 March 2010
Cited by 5
Abstract
In this paper we show how image deconvolution techniques can increase the ability of image sensors, such as CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor or to increasing the effective telescope aperture by more than 30% without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and monitor dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor. Full article
(This article belongs to the Special Issue Image Sensors)
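The paper does not commit to a specific deconvolution algorithm; as one plausible stand-in, here is a Richardson-Lucy sketch (1-D for brevity) showing how deconvolution concentrates the flux of a blurred point source back into its central pixel, which is what raises the detection significance of a faint star.

```python
import numpy as np

def richardson_lucy(image, psf, iterations=30):
    """Richardson-Lucy deconvolution (1-D for brevity): iteratively
    redistributes blurred flux given a known PSF."""
    psf_flip = psf[::-1]
    estimate = np.full_like(image, image.mean())
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
    return estimate

# A faint "star": a delta function spread by a Gaussian PSF plus a flat sky.
x = np.arange(-5, 6)
psf = np.exp(-0.5 * (x / 1.5) ** 2)
psf /= psf.sum()
star = np.zeros(64)
star[32] = 100.0
observed = np.convolve(star, psf, mode="same") + 1.0   # + sky background
restored = richardson_lucy(observed, psf)
```

The multiplicative update keeps the estimate non-negative, a physically sensible constraint for photon counts.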
Open Access Article: Geometric Stability and Lens Decentering in Compact Digital Cameras
Sensors 2010, 10(3), 1553-1572; doi:10.3390/s100301553
Received: 25 December 2009 / Revised: 26 January 2010 / Accepted: 20 February 2010 / Published: 1 March 2010
Cited by 6
Abstract
A study of the geometric stability and decentering present in the sensor-lens systems of six identical compact digital cameras has been conducted. With regard to geometric stability, the variation of internal geometry parameters (principal distance, principal point position and distortion parameters) was considered. With regard to lens decentering, the amount of radial and tangential displacement resulting from decentering distortion was related to the precision of the camera and to the offset of the principal point from the geometric center of the sensor. The study was conducted with data obtained from 372 calibration processes (62 per camera). The tests were performed for each camera in three situations: during continuous use of the cameras, after camera power off/on, and after full extension and retraction of the zoom lens. Additionally, 360 new calibrations were performed in order to study the variation of the internal geometry when the camera is rotated. The aim of this study was to relate the level of stability and decentering in a camera with the precision and quality that can be obtained. An additional goal was to provide practical recommendations on the photogrammetric use of such cameras. Full article
(This article belongs to the Special Issue Image Sensors)
Open Access Article: Field Map Reconstruction in Magnetic Resonance Imaging Using Bayesian Estimation
Sensors 2010, 10(1), 266-279; doi:10.3390/s100100266
Received: 30 October 2009 / Revised: 24 December 2009 / Accepted: 25 December 2009 / Published: 30 December 2009
Cited by 19
Abstract
Field inhomogeneities in Magnetic Resonance Imaging (MRI) can cause blur or image distortion, as they produce an off-resonance frequency at each voxel. These effects can be corrected if an accurate field map is available. Field maps can be estimated starting from the phase of multiple complex MRI data sets. In this paper we present a technique based on statistical estimation to reconstruct a field map exploiting two or more scans. The proposed approach implements a Bayesian estimator in conjunction with the Graph Cuts optimization method. The effectiveness of the method has been demonstrated on simulated and real data. Full article
(This article belongs to the Special Issue Image Sensors)
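As background to the Bayesian approach above, the elementary two-echo field-map relation can be sketched directly; this baseline (no priors, no Graph Cuts, no unwrapping) is an illustrative simplification, not the paper's estimator.

```python
import numpy as np

def field_map_two_echo(img1, img2, delta_te):
    """Off-resonance frequency map (Hz) from two complex echoes.

    The phase accrued between echo times is 2*pi*f*delta_TE, so
    f = angle(img2 * conj(img1)) / (2*pi*delta_TE). Valid only while
    the phase difference stays within (-pi, pi] (no wrapping).
    """
    return np.angle(img2 * np.conj(img1)) / (2.0 * np.pi * delta_te)

# Synthetic voxel grid with a known 25 Hz offset in one corner.
f_true = np.zeros((4, 4))
f_true[0, 0] = 25.0
delta_te = 2.5e-3                               # 2.5 ms echo spacing
img1 = np.ones((4, 4)) * np.exp(1j * 0.3)       # arbitrary common phase
img2 = img1 * np.exp(2j * np.pi * f_true * delta_te)
f_hat = field_map_two_echo(img1, img2, delta_te)
```

The common phase in `img1` cancels in the conjugate product, which is why only the inter-echo phase difference matters; phase wrapping and noise are precisely what motivate the Bayesian/Graph-Cuts machinery in the paper.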
Open Access Article: A 3D Sensor Based on a Profilometrical Approach
Sensors 2009, 9(12), 10326-10340; doi:10.3390/s91210326
Received: 2 November 2009 / Revised: 7 December 2009 / Accepted: 15 December 2009 / Published: 21 December 2009
Cited by 3
Abstract
An improved method is presented that uses Fourier- and wavelet-transform-based analysis to infer and extract 3D information from an object via fringe projection. The method requires a single image containing a projected sinusoidal white-light fringe pattern of known spatial frequency, and this known frequency is used to avoid discontinuities in high-frequency fringes. Several computer simulations and experiments have been carried out to verify the analysis. The comparison between numerical simulations and experiments has proved the validity of the proposed method. Full article
(This article belongs to the Special Issue Image Sensors)
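A minimal sketch of single-image fringe demodulation in the Fourier domain, assuming a known carrier frequency; the wavelet branch and the authors' exact filtering choices are omitted, and the band edges used here are an illustrative assumption.

```python
import numpy as np

def demodulate_fringe(signal, carrier_freq):
    """Fourier-transform profilometry on one fringe row: keep the
    positive-frequency lobe around the known carrier, inverse-transform,
    and subtract the carrier ramp; the residual phase tracks height."""
    n = len(signal)
    spec = np.fft.fft(signal)
    freqs = np.fft.fftfreq(n)
    band = np.abs(freqs - carrier_freq) < carrier_freq / 2.0  # positive lobe
    analytic = np.fft.ifft(np.where(band, spec, 0.0))
    x = np.arange(n)
    return np.unwrap(np.angle(analytic)) - 2.0 * np.pi * carrier_freq * x

n = 512
x = np.arange(n)
f0 = 1.0 / 16.0                                   # known fringe frequency
phi = 0.8 * np.exp(-(((x - 256.0) / 60.0) ** 2))  # smooth "height" bump
fringe = 0.5 + 0.4 * np.cos(2.0 * np.pi * f0 * x + phi)
phi_hat = demodulate_fringe(fringe, f0)
phi_hat = phi_hat - phi_hat[0]                    # remove the global phase offset
```

Knowing the carrier frequency is what allows the positive lobe to be isolated cleanly, which is the role the known projected-fringe frequency plays in the abstract.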
Open Access Article: Nonrigid Registration of Brain Tumor Resection MR Images Based on Joint Saliency Map and Keypoint Clustering
Sensors 2009, 9(12), 10270-10290; doi:10.3390/s91210270
Received: 23 October 2009 / Revised: 1 December 2009 / Accepted: 9 December 2009 / Published: 17 December 2009
Cited by 9
Abstract
This paper proposes a novel global-to-local nonrigid brain MR image registration method to compensate for the brain shift and the unmatchable outliers caused by tumor resection. The mutual information between corresponding salient structures, which are enhanced by the joint saliency map (JSM), is maximized to achieve a global rigid registration of the two images. Detected and clustered at paired contiguous matching areas in the globally registered images, the paired pools of DoG keypoints, in combination with the JSM, provide a useful cluster-to-cluster correspondence to guide local control-point correspondence detection and outlier keypoint rejection. Lastly, a quasi-inverse-consistent deformation is smoothly approximated to locally register the brain images by mapping the clustered control points with compact-support radial basis functions. The 2D implementation of the method can model the brain shift in brain tumor resection MR images, though the theory holds for the 3D case. Full article
(This article belongs to the Special Issue Image Sensors)
Open Access Article: Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera
Sensors 2009, 9(12), 10080-10096; doi:10.3390/s91210080
Received: 3 November 2009 / Revised: 24 November 2009 / Accepted: 3 December 2009 / Published: 11 December 2009
Cited by 78
Abstract
3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000, were performed and are reported in this paper. Two main aspects are treated. The first is the calibration of the distance measurements of the SR-4000 camera, covering the evaluation of the camera warm-up period, the distance measurement error, and the influence on distance measurements of the camera orientation with respect to the observed object. The second concerns the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field made of high-contrast targets. Full article
(This article belongs to the Special Issue Image Sensors)
Open Access Article: Non-Linearity in Wide Dynamic Range CMOS Image Sensors Utilizing a Partial Charge Transfer Technique
Sensors 2009, 9(12), 9452-9467; doi:10.3390/s91209452
Received: 22 September 2009 / Revised: 27 October 2009 / Accepted: 4 November 2009 / Published: 26 November 2009
Cited by 6
Abstract
The partial charge transfer technique can expand the dynamic range of a CMOS image sensor by synthesizing two types of signal, namely the long and short accumulation time signals. However, the short accumulation time signal obtained from the partial transfer operation suffers from non-linearity with respect to the incident light. In this paper, an analysis of the non-linearity in the partial charge transfer technique is carried out, and the relationship between dynamic range and non-linearity is studied. The results show that the non-linearity is caused by two factors: the current diffusion, which has an exponential relation with the potential barrier, and the initial condition of the photodiodes, which shows that the error in the high-illumination region increases as the ratio of the long to the short accumulation time rises. Moreover, raising the saturation level of the photodiodes also increases the error in the high-illumination region. Full article
(This article belongs to the Special Issue Image Sensors)
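The long/short synthesis mentioned above can be sketched as follows; the exposure ratio, saturation level and per-pixel values are illustrative, and the linear rescaling of the short signal is exactly the step the paper's non-linearity analysis concerns.

```python
def synthesize_wdr(long_sig, short_sig, exposure_ratio, saturation):
    """Wide-dynamic-range synthesis from dual accumulation times: use the
    long-accumulation sample until it clips, then rescale the short one
    by the accumulation-time ratio. The rescaling assumes the short
    signal is linear in illumination, which the paper shows is only
    approximately true for partial charge transfer."""
    return [l if l < saturation else s * exposure_ratio
            for l, s in zip(long_sig, short_sig)]

# Toy pixels: accumulation-time ratio 8, 12-bit saturation at 4095.
long_sig = [100, 2300, 4095, 4095]     # last two pixels saturate
short_sig = [12, 288, 700, 480]
combined = synthesize_wdr(long_sig, short_sig, 8, 4095)
```

Any deviation of the short signal from linearity is multiplied by the exposure ratio at this step, which is why the error grows with the long-to-short accumulation-time ratio.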
Open Access Article: Asphalted Road Temperature Variations Due to Wind Turbine Cast Shadows
Sensors 2009, 9(11), 8863-8883; doi:10.3390/s91108863
Received: 6 July 2009 / Revised: 9 October 2009 / Accepted: 31 October 2009 / Published: 5 November 2009
Abstract
The contribution of this paper is a technique that, in certain circumstances, allows one to avoid the removal of dynamic shadows in the visible spectrum by making use of images in the infrared spectrum. This technique emerged from a real problem concerning the autonomous navigation of a vehicle in a wind farm. In this environment, the dynamic shadows cast by the wind turbines’ blades make it necessary to include a shadow removal stage in the preprocessing of the visible spectrum images in order to avoid the shadows being misclassified as obstacles. In the thermal images, dynamic shadows completely disappear, something that does not always occur in the visible spectrum, even when the preprocessing is executed. Thus, a fusion of thermal and visible bands is performed. Full article
(This article belongs to the Special Issue Image Sensors)
Open Access Article: A Novel Morphometry-Based Protocol of Automated Video-Image Analysis for Species Recognition and Activity Rhythms Monitoring in Deep-Sea Fauna
Sensors 2009, 9(11), 8438-8455; doi:10.3390/s91108438
Received: 25 August 2009 / Revised: 1 October 2009 / Accepted: 13 October 2009 / Published: 26 October 2009
Cited by 25
Abstract
The understanding of ecosystem dynamics in deep-sea areas is to date limited by technical constraints on sampling repetition. We have elaborated a morphometry-based protocol for automated video-image analysis where animal movement tracking (by frame subtraction) is accompanied by species identification from the animals’ outlines using Fourier Descriptors and Standard K-Nearest Neighbours methods. One week of footage from a permanent video station located at 1,100 m depth in Sagami Bay (Central Japan) was analysed. Out of 150,000 frames (1 per 4 s), a subset of 10,000 was analyzed by a trained operator to increase the efficiency of the automated procedure. Error estimation of the automated and trained-operator procedures was computed as a measure of protocol performance. Three moving species were identified as the most recurrent: Zoarcid fishes (eelpouts), red crabs (Paralomis multispina), and snails (Buccinum soyomaruae). Species identification with KNN thresholding produced better results in automated motion detection. Results are discussed in light of the technological bottleneck that still deeply conditions the exploration of the deep sea. Full article
(This article belongs to the Special Issue Image Sensors)
Open Access Article: Sensor Calibration Based on Incoherent Optical Fiber Bundles (IOFB) Used For Remote Image Transmission
Sensors 2009, 9(10), 8215-8229; doi:10.3390/s91008215
Received: 26 August 2009 / Revised: 11 September 2009 / Accepted: 25 September 2009 / Published: 19 October 2009
Cited by 6
Abstract
Image transmission using incoherent optical fiber bundles (IOFB) requires prior calibration to obtain the spatial in-out fiber correspondence in order to reconstruct the image captured by the pseudo-sensor. This information is recorded in a Look-Up Table (LUT), used later for reordering the fiber positions and reconstructing the original image. This paper presents a method based on line-scan to obtain the in-out correspondence. The results demonstrate that this technique yields a remarkable reduction in processing time and increased image quality by introducing a fiber detection algorithm, an intensity compensation process and finally, a single interpolation algorithm. Full article
(This article belongs to the Special Issue Image Sensors)
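Once calibration has produced the Look-Up Table, reconstruction reduces to a single gather operation, as this sketch shows; the permutation stands in for the unknown fiber scrambling, and the array sizes are illustrative.

```python
import numpy as np

def reorder_with_lut(pseudo_image, lut):
    """Reconstruct the scene from a scrambled fiber-bundle image:
    lut[i] names the pseudo-sensor pixel holding output position i."""
    return pseudo_image[lut]

rng = np.random.default_rng(0)
scene = np.arange(16.0)          # "true" image, flattened
perm = rng.permutation(16)       # unknown in-out fiber scrambling
pseudo = scene[perm]             # what the pseudo-sensor records
lut = np.argsort(perm)           # calibration yields the inverse mapping
restored_scene = reorder_with_lut(pseudo, lut)
```

All of the effort in the paper goes into estimating the table itself (fiber detection, intensity compensation, interpolation); applying it afterwards is this single indexing step per frame.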
Open Access Article: Fundamentals of in Situ Digital Camera Methodology for Water Quality Monitoring of Coast and Ocean
Sensors 2009, 9(7), 5825-5843; doi:10.3390/s90705825
Received: 8 May 2009 / Revised: 16 June 2009 / Accepted: 15 July 2009 / Published: 22 July 2009
Cited by 8
Abstract
Conventional digital cameras, the Nikon Coolpix885® and the SeaLife ECOshot®, were used as in situ optical instruments for water quality monitoring. Measured response spectra showed that these digital cameras are basically three-band radiometers. The response values in the red, green and blue bands, quantified by RGB values of digital images of the water surface, were comparable to measurements of irradiance levels at red, green and cyan/blue wavelengths of water leaving light. Different systems were deployed to capture upwelling light from below the surface, while eliminating direct surface reflection. Relationships between RGB ratios of water surface images, and water quality parameters were found to be consistent with previous measurements using more traditional narrow-band radiometers. This current paper focuses on the method that was used to acquire digital images, derive RGB values and relate measurements to water quality parameters. Field measurements were obtained in Galway Bay, Ireland, and in the Southern Rockall Trough in the North Atlantic, where both yellow substance and chlorophyll concentrations were successfully assessed using the digital camera method. Full article
(This article belongs to the Special Issue Image Sensors)
Open Access Article: A High Resolution Color Image Restoration Algorithm for Thin TOMBO Imaging Systems
Sensors 2009, 9(6), 4649-4668; doi:10.3390/s90604649
Received: 20 May 2009 / Revised: 5 June 2009 / Accepted: 5 June 2009 / Published: 15 June 2009
Cited by 3
Abstract
In this paper, we present a blind image restoration algorithm to reconstruct a high resolution (HR) color image from multiple low resolution (LR), degraded and noisy images captured by thin (<1 mm) TOMBO imaging systems. The proposed algorithm is an extension of our grayscale algorithm reported in [1] to the case of color images. In this color extension, each Point Spread Function (PSF) of each captured image is assumed to be different from one color component to another and from one imaging unit to the other. For the task of image restoration, we use all spectral information in each captured image to restore each output pixel in the reconstructed HR image, i.e., we use the most efficient global category of point operations. First, the composite RGB color components of each captured image are extracted. A blind estimation technique is then applied to estimate the spectra of each color component and its associated blurring PSF. The estimation process is formulated in a way that significantly minimizes the interchannel cross-correlations and additive noise. The estimated PSFs, together with advanced interpolation techniques, are then combined to compensate for blur and reconstruct a HR color image of the original scene. Finally, a histogram normalization process adjusts the balance between image color components, brightness and contrast. Simulated and experimental results reveal that the proposed algorithm is capable of restoring HR color images from degraded, LR and noisy observations even at low Signal-to-Noise Energy Ratios (SNERs). The proposed algorithm uses the FFT and only two fundamental image restoration constraints, making it suitable for silicon integration with the TOMBO imager. Full article
(This article belongs to the Special Issue Image Sensors)
Open Access Article: CMOS Image Sensor with a Built-in Lane Detector
Sensors 2009, 9(3), 1722-1737; doi:10.3390/s90301722
Received: 16 February 2009 / Revised: 8 March 2009 / Accepted: 11 March 2009 / Published: 12 March 2009
Cited by 2
Abstract
This work develops a new current-mode mixed-signal Complementary Metal-Oxide-Semiconductor (CMOS) imager, which can capture images and simultaneously produce vehicle lane maps. The adopted lane detection algorithm, which was modified to be compatible with hardware requirements, can achieve a high recognition rate of up to approximately 96% under various weather conditions. Instead of a Personal Computer (PC) based system or an embedded platform equipped with an expensive high-performance Reduced Instruction Set Computer (RISC) or Digital Signal Processor (DSP) chip, the proposed imager, which needs no extra Analog-to-Digital Converter (ADC) circuits to transform signals, is a compact, lower-cost key-component chip. It is also an innovative component device that can be integrated into intelligent automotive lane departure systems. The chip size is 2,191.4 x 2,389.8 μm, and the package is a 40-pin Dual In-line Package (DIP). The pixel cell size is 18.45 x 21.8 μm and the core size of the photodiode is 12.45 x 9.6 μm; the resulting fill factor is 29.7%. Full article
(This article belongs to the Special Issue Image Sensors)
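The reported geometry can be checked arithmetically (units assumed to be micrometres, since millimetre-scale pixels would be physically implausible):

```python
# Fill factor = photodiode area / pixel area, from the reported dimensions.
pixel_area = 18.45 * 21.8        # pixel cell, um^2 (micrometres assumed)
photodiode_area = 12.45 * 9.6    # photodiode core, um^2
fill_factor = photodiode_area / pixel_area   # about 0.297, i.e., 29.7%
```

The ratio matches the 29.7% fill factor quoted in the abstract, confirming the dimensions are internally consistent.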
Open Access Article: 1T Pixel Using Floating-Body MOSFET for CMOS Image Sensors
Sensors 2009, 9(1), 131-147; doi:10.3390/s90100131
Received: 31 October 2008 / Revised: 9 December 2008 / Accepted: 30 December 2008 / Published: 7 January 2009
Cited by 1
Abstract
We present a single-transistor pixel for CMOS image sensors (CIS). It is a floating-body MOSFET structure, which is used as both photo-sensing device and source-follower transistor, and can be controlled to store and evacuate charges. Our investigation into this 1T pixel structure includes modeling to obtain an analytical description of the conversion gain. Model validation has been done by comparing theoretical predictions and experimental results. In addition, the 1T pixel structure has been implemented in different configurations, including rectangular-gate and ring-gate designs, and with variations of the oxidation parameters of the fabrication process. The pixel characteristics are presented and discussed. Full article
(This article belongs to the Special Issue Image Sensors)
Open Access Article: Sparse Detector Imaging Sensor with Two-Class Silhouette Classification
Sensors 2008, 8(12), 7996-8015; doi:10.3390/s8127996
Received: 4 November 2008 / Revised: 1 December 2008 / Accepted: 4 December 2008 / Published: 8 December 2008
Cited by 26
Abstract
This paper presents the design and test of a simple active near-infrared sparse detector imaging sensor. The prototype of the sensor is novel in that it can capture remarkable silhouettes or profiles of a wide variety of moving objects, including humans, animals, and vehicles, using a sparse detector array comprised of only sixteen sensing elements deployed in a vertical configuration. The prototype sensor was built to collect silhouettes for a variety of objects and to evaluate several algorithms for classifying the data obtained from the sensor into two classes: human versus non-human. Initial tests show that the classification of individually sensed objects into two classes can be achieved with accuracy greater than ninety-nine percent (99%) with a subset of the sixteen detectors, using a representative dataset consisting of 512 signatures. The prototype also includes a Web service interface so that the sensor can be tasked in a network-centric environment. The sensor appears to be a low-cost alternative to traditional high-resolution focal plane array imaging sensors for some applications. After a power optimization study, appropriate packaging, and testing with more extensive datasets, the sensor may be a good candidate for deployment across vast geographic regions for a myriad of intelligent electronic fence and persistent surveillance applications, including perimeter security scenarios. Full article
(This article belongs to the Special Issue Image Sensors)
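A two-class nearest-neighbour vote over 16-element signatures can be sketched as below; the synthetic "human" and "vehicle" profiles are invented for illustration and are not the paper's dataset, nor necessarily one of the classifiers the authors evaluated.

```python
import numpy as np

def knn_classify(train_x, train_y, sample, k=3):
    """k-nearest-neighbour vote over 16-element detector signatures."""
    dists = np.sum((train_x - sample) ** 2, axis=1)
    votes = train_y[np.argsort(dists)[:k]]
    return int(np.round(votes.mean()))

# Invented signatures: "humans" excite the upper half of the vertical
# 16-element array, "vehicles" the lower half.
rng = np.random.default_rng(0)
human = np.concatenate([np.zeros(8), np.ones(8)])
vehicle = np.concatenate([np.ones(8), np.zeros(8)])
train_x = np.vstack([human + 0.1 * rng.standard_normal((20, 16)),
                     vehicle + 0.1 * rng.standard_normal((20, 16))])
train_y = np.array([1] * 20 + [0] * 20)          # 1 = human, 0 = non-human
pred = knn_classify(train_x, train_y, human + 0.1 * rng.standard_normal(16))
```

With only sixteen features per object, even such a simple distance-based vote is cheap enough to run on modest embedded hardware, which fits the low-cost deployment scenario the abstract describes.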
Open Access Article: Pattern Recognition via PCNN and Tsallis Entropy
Sensors 2008, 8(11), 7518-7529; doi:10.3390/s8117518
Received: 5 September 2008 / Revised: 7 November 2008 / Accepted: 17 November 2008 / Published: 25 November 2008
Cited by 22
Abstract
In this paper a novel feature extraction method for image processing via PCNN and Tsallis entropy is presented. We describe the mathematical model of the PCNN and the basic concept of Tsallis entropy in order to find a recognition method for isolated objects. Experiments show that the novel feature is translation and scale independent, while rotation independence is a bit weak at diagonal angles of 45° and 135°. Parameters of the application on face recognition are acquired by bacterial chemotaxis optimization (BCO), and the highest classification rate is 72.5%, which demonstrates its acceptable performance and potential value. Full article
(This article belongs to the Special Issue Image Sensors)
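The Tsallis entropy used by the feature above is a one-parameter generalization of Shannon entropy, S_q = (1 − Σ p_i^q) / (q − 1), recovering the Shannon form as q → 1. A minimal sketch of the formula itself (the choice of q = 2 here is purely illustrative, not the paper's setting):

```python
import numpy as np

def tsallis_entropy(p, q=2.0):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1) of a distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # ignore zero-probability bins
    if q == 1.0:
        return float(-np.sum(p * np.log(p)))  # Shannon limit as q -> 1
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

# Uniform distribution over 4 states, q = 2:
# S_2 = (1 - 4 * (1/4)^2) / (2 - 1) = 0.75
print(tsallis_entropy([0.25, 0.25, 0.25, 0.25], q=2.0))  # -> 0.75
```

In the image setting, p would typically be the normalized histogram of pixel intensities (or of PCNN firing times), so the entropy summarizes the output of each PCNN iteration as a single feature value.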

Review

Jump to: Research

Open AccessReview Toward 100 Mega-Frames per Second: Design of an Ultimate Ultra-High-Speed Image Sensor
Sensors 2010, 10(1), 16-35; doi:10.3390/s100100016
Received: 3 November 2009 / Revised: 3 December 2009 / Accepted: 8 December 2009 / Published: 24 December 2009
Cited by 19 | PDF Full-text (4509 KB) | HTML Full-text | XML Full-text
Abstract
Our experience in the design of an ultra-high-speed image sensor targeting the theoretical maximum frame rate is summarized. The imager is the backside-illuminated in situ storage image sensor (BSI ISIS). It is confirmed that the critical factor limiting the highest frame rate is the signal electron transit time from the generation layer at the back side of each pixel to the input gate of the in situ storage area on the front side. The theoretical maximum frame rate is estimated at 100 Mega-frames per second (Mfps) by a transient simulation study. The sensor has a spatial resolution of 140,800 pixels, with 126 linear storage elements installed in each pixel. Very high sensitivity is ensured by backside illumination technology and cooling. The ultra-high frame rate is achieved by the in situ storage image sensor (ISIS) structure on the front side. In this paper, we summarize the technologies developed to achieve the theoretical maximum frame rate, including: (1) a special p-well design with triple injections that generates a smooth electric field from the backside towards the collection gate on the front side, resulting in a much shorter electron transit time; (2) a design technique that reduces RC delay by dedicating an extra metal layer exclusively to the electrodes responsible for ultra-high-speed image capture; (3) a CCD-specific complementary on-chip inductance minimization technique using a pair of stacked differential bus lines. Full article
(This article belongs to the Special Issue Image Sensors)
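The figures quoted in the abstract fix the sensor's timing budget: at 100 Mfps each frame lasts 10 ns, and with 126 in-pixel storage elements the continuous recording window is 1.26 µs. A back-of-the-envelope check of that arithmetic:

```python
# Timing implied by the BSI ISIS numbers quoted above:
# 100 Mfps target and 126 in-pixel linear storage elements.
frame_rate = 100e6          # frames per second (Mfps target)
storage_per_pixel = 126     # linear storage elements per pixel

frame_period_ns = 1e9 / frame_rate                        # time per frame
record_window_us = storage_per_pixel / frame_rate * 1e6   # continuous capture window

print(frame_period_ns, record_window_us)  # -> 10.0 1.26
```

The 10 ns frame period is why the backside-to-frontside electron transit time dominates: every signal electron must reach the in situ storage before the next frame begins.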

Open AccessReview CMOS Image Sensors for High Speed Applications
Sensors 2009, 9(1), 430-444; doi:10.3390/s90100430
Received: 28 November 2008 / Revised: 4 January 2009 / Accepted: 5 January 2009 / Published: 13 January 2009
Cited by 69 | PDF Full-text (482 KB) | HTML Full-text | XML Full-text
Abstract
Recent advances in deep submicron CMOS technologies and improved pixel designs have enabled CMOS-based imagers to surpass charge-coupled device (CCD) imaging technology for mainstream applications. The parallel outputs that CMOS imagers can offer, in addition to complete camera-on-a-chip solutions enabled by fabrication in standard CMOS technologies, result in compelling advantages in speed and system throughput. Since there is a practical limit on the minimum pixel size (4–5 μm) due to limitations in the optics, CMOS technology scaling can allow an increased number of transistors to be integrated into the pixel to improve both detection and signal processing. Such smart pixels demonstrate the potential of CMOS technology for imaging applications, allowing CMOS imagers to achieve the image quality and global-shutter performance necessary to meet the demands of ultrahigh-speed applications. In this paper, a review of CMOS-based high-speed imager design is presented and the various implementations that target ultrahigh-speed imaging are described. This work also discusses the design, layout, and simulation results of an ultrahigh-acquisition-rate CMOS active-pixel sensor imager that can capture 8 frames at a rate of more than a billion frames per second (fps). Full article
(This article belongs to the Special Issue Image Sensors)

Journal Contact

MDPI AG
Sensors Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
sensors@mdpi.com
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18