Imaging: Sensors and Technologies

A special issue of Sensors (ISSN 1424-8220).

Deadline for manuscript submissions: closed (30 September 2016) | Viewed by 300474

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor


Prof. Dr. Gonzalo Pajares Martinsanz
Guest Editor
Department of Software Engineering and Artificial Intelligence, Faculty of Informatics, Complutense University of Madrid, 28040 Madrid, Spain
Interests: computer vision; image processing; pattern recognition; 3D image reconstruction; spatio-temporal image change detection and tracking; fusion and registering from imaging sensors; super-resolution from low-resolution image sensors

Special Issue Information

Dear Colleagues,

The usefulness of imaging sensors in many different areas is self-evident. Actively or passively, these sensors capture electromagnetic radiation or acoustic echoes across the whole spectrum which, conveniently arranged into images, allow the extraction of useful information.

Medicine, biology, industry, agriculture, surveillance, security, visual inspection, monitoring, target tracking, photogrammetry, robotics, and navigation aids in manned or unmanned vehicles are among the areas where advances in imaging sensors and technologies play an important role.

Methods and procedures designed to make imaging devices operational and cost-effective enable the extraction of relevant information and improve the efficiency of such systems.

The following is a list of the main topics covered by this Special Issue, with emphasis on the sensors, devices, and technologies oriented toward specific image processing applications. The Special Issue is not, however, limited to these topics:

• Active or passive sensors and technologies based on physical designs, including CCD, EMCCD, CMOS, NMOS, and photodiodes.

• Mono-, multi-, and hyper-spectral sensors for spectral analysis: ultraviolet, visible, infrared, thermal, laser, or X-ray.

• Sensors and technologies for radiography, tomography, magnetic resonance, neuroimaging, and microscopy.

• Sensors and technologies for 3D recovery: stereoscopy, time-of-flight (ToF), and laser.

• Multiple and temporal imaging sensors and technologies: video, panoramic.

• Radar and SAR devices.

• Acoustic sensors and devices: ultrasound and sonar.

• Image acquisition and formation: physical and geometric sensory arrangement, optical systems, and spectral filters.

Prof. Dr. Gonzalo Pajares Martinsanz
Guest Editor

Related Journal: Journal of Imaging

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Published Papers (38 papers)


Research


Article
Real-Time Detection of Sporadic Meteors in the Intensified TV Imaging Systems
by Stanislav Vítek and Maria Nasyrova
Sensors 2018, 18(1), 77; https://doi.org/10.3390/s18010077 - 29 Dec 2017
Cited by 4 | Viewed by 4019
Abstract
The automatic observation of the night sky through wide-angle video systems with the aim of detecting meteors and fireballs is currently among routine astronomical observations. The observation is usually done in multi-station or network mode, so it is possible to estimate the direction and speed of the body's flight. The high velocity of a meteor flying through the atmosphere dictates an important requirement on the camera systems, namely a high frame rate. Thanks to high frame rates, such imaging systems produce a large amount of data, of which only a small fragment has scientific potential. This paper focuses on methods for the real-time detection of fast moving objects in video sequences recorded by intensified TV systems with frame rates of about 60 frames per second. The goal of our effort is to remove all unnecessary data during the daytime and free up hard-drive capacity for the next observation. The processing of data from the MAIA (Meteor Automatic Imager and Analyzer) system is demonstrated in the paper. Full article

Article
Depth Errors Analysis and Correction for Time-of-Flight (ToF) Cameras
by Ying He, Bin Liang, Yu Zou, Jin He and Jun Yang
Sensors 2017, 17(1), 92; https://doi.org/10.3390/s17010092 - 05 Jan 2017
Cited by 67 | Viewed by 9737
Abstract
Time-of-Flight (ToF) cameras, a technology which has developed rapidly in recent years, are 3D imaging sensors that provide a depth image as well as an amplitude image at a high frame rate. As a ToF camera is limited by the imaging conditions and external environment, its captured data are always subject to certain errors. This paper analyzes the influence of typical external distractions, including material, color, distance, lighting, etc., on the depth error of ToF cameras. Our experiments indicated that factors such as lighting, color, material, and distance have different effects on the depth error of ToF cameras. However, since the forms of the errors are uncertain, it is difficult to summarize them in a unified law. To further improve the measurement accuracy, this paper proposes an error correction method based on a Particle Filter-Support Vector Machine (PF-SVM). The experimental results showed that this method can effectively reduce the depth error of ToF cameras to 4.6 mm within the full measurement range (0.5–5 m). Full article

Article
Target Detection over the Diurnal Cycle Using a Multispectral Infrared Sensor
by Huijie Zhao, Zheng Ji, Na Li, Jianrong Gu and Yansong Li
Sensors 2017, 17(1), 56; https://doi.org/10.3390/s17010056 - 29 Dec 2016
Cited by 9 | Viewed by 5530
Abstract
When detecting a target over the diurnal cycle, a conventional infrared thermal sensor might lose the target due to thermal crossover, which can happen at any time of day when the infrared image contrast between target and background becomes indistinguishable due to temperature variation. In this paper, the benefits of using a multispectral infrared sensor over the diurnal cycle are shown. Firstly, a brief theoretical analysis is presented of how thermal crossover affects a conventional thermal sensor, the conditions under which it occurs, and why mid-infrared (3–5 μm) multispectral technology is effective. Secondly, we describe the prototype design and how the multispectral technology is employed to help solve the thermal crossover detection problem. Thirdly, several targets were set up outdoors and imaged in a field experiment over a 24-h period. The experimental results show that the multispectral infrared imaging system can enhance the contrast of the detected images and effectively overcome the failure of the conventional infrared sensor during the diurnal cycle, which is of great significance for infrared surveillance applications. Full article

Article
Test of the Practicality and Feasibility of EDoF-Empowered Image Sensors for Long-Range Biometrics
by Sheng-Hsun Hsieh, Yung-Hui Li and Chung-Hao Tien
Sensors 2016, 16(12), 1994; https://doi.org/10.3390/s16121994 - 25 Nov 2016
Cited by 4 | Viewed by 4911
Abstract
For many practical applications of image sensors, how to extend the depth-of-field (DoF) is an important research topic; if successfully implemented, it could be beneficial in various applications, from photography to biometrics. In this work, we examine the feasibility and practicability of a well-known extended-DoF (EDoF) technique, wavefront coding, by building real-time long-range iris recognition and performing large-scale iris recognition. The key to the success of long-range iris recognition includes a long DoF and image-quality invariance toward various object distances, requirements strict and harsh enough to test the practicality and feasibility of EDoF-empowered image sensors. Besides image sensor modification, we also explored the possibility of varying enrollment/testing pairs. With 512 iris images from 32 Asian people as the database, 400-mm focal length and F/6.3 optics over a 3 m working distance, our results prove that a sophisticated coding design scheme plus homogeneous enrollment/testing setups can effectively overcome the blurring caused by phase modulation and omit Wiener-based restoration. In our experiments, which are based on 3328 iris images in total, the EDoF factor can achieve a result 3.71 times better than the original system without a loss of recognition accuracy. Full article

Article
Expanding the Detection of Traversable Area with RealSense for the Visually Impaired
by Kailun Yang, Kaiwei Wang, Weijian Hu and Jian Bai
Sensors 2016, 16(11), 1954; https://doi.org/10.3390/s16111954 - 21 Nov 2016
Cited by 76 | Viewed by 12303
Abstract
The introduction of RGB-Depth (RGB-D) sensors into the visually impaired people (VIP)-assisting area has stirred great interest among researchers. However, the detection range of RGB-D sensors is limited by a narrow depth field angle and sparse depth maps at distance, which hampers broader and longer traversability awareness. This paper proposes an effective approach to expand the detection of traversable area based on an RGB-D sensor, the Intel RealSense R200, which is compatible with both indoor and outdoor environments. The depth image of the RealSense is enhanced with IR image large-scale matching and RGB image-guided filtering. A preliminary traversable area is obtained with RANdom SAmple Consensus (RANSAC) segmentation and surface normal vector estimation. A seeded growing region algorithm, combining the depth image and RGB image, greatly enlarges the preliminary traversable area. This is critical not only for avoiding close obstacles, but also for allowing superior path planning in navigation. The proposed approach has been tested on a score of indoor and outdoor scenarios. Moreover, the approach has been integrated into an assistance system, which consists of a wearable prototype and an audio interface. Furthermore, the presented approach proved useful and reliable in a field test with eight visually impaired volunteers. Full article
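The RANSAC segmentation step named above is, generically, a repeated sample-and-score plane fit over the point cloud. A minimal numpy sketch of that generic step (not the authors' implementation; iteration count and inlier threshold are assumptions):

```python
import numpy as np

def ransac_plane(points, n_iter=200, dist_thresh=0.03, rng=None):
    """Fit a dominant plane to an Nx3 point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0.
    Illustrative only; thresholds are assumed, not taken from the paper.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, skip
            continue
        normal /= norm
        d = -normal.dot(p0)
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers
```

The surface normals of the inlier set can then seed the region-growing stage that enlarges the preliminary traversable area.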

Article
Full-Field Optical Coherence Tomography Using Galvo Filter-Based Wavelength Swept Laser
by Muhammad Faizan Shirazi, Pilun Kim, Mansik Jeon and Jeehyun Kim
Sensors 2016, 16(11), 1933; https://doi.org/10.3390/s16111933 - 17 Nov 2016
Cited by 6 | Viewed by 6622
Abstract
We report a wavelength swept laser-based full-field optical coherence tomography system for measuring the surfaces and thicknesses of refractive and reflective samples. The system consists of a galvo filter-based wavelength swept laser and a simple Michelson interferometer. Combinations of reflective and refractive samples are used to demonstrate the performance of the system. By synchronizing the camera with the source, the cross-sectional information of the samples can be seen after each sweep of the swept source. This system can be effective for the thickness measurement of optical thin films as well as for the depth investigation of samples in industrial applications. A resolution target with a glass cover slip and a step height standard target are imaged, showing the cross-sectional and topographical information of the samples. Full article

Article
A Selective Change Driven System for High-Speed Motion Analysis
by Jose A. Boluda, Fernando Pardo and Francisco Vegara
Sensors 2016, 16(11), 1875; https://doi.org/10.3390/s16111875 - 08 Nov 2016
Cited by 4 | Viewed by 4793
Abstract
Vision-based sensing algorithms are computationally demanding tasks due to the large amount of data acquired and processed. Visual sensors deliver a great deal of information, much of it redundant and contributing nothing new. A Selective Change Driven (SCD) sensing system is based on a sensor that delivers, ordered by the magnitude of their change, only those pixels that have changed most since the last read-out. This allows the information stream to be adjusted to the available computational capability. Following this strategy, a new SCD processing architecture for high-speed motion analysis, based on processing pixels instead of full frames, has been developed and implemented in a Field-Programmable Gate Array (FPGA). The programmable device controls the data stream, delivering a new object distance calculation for every new pixel. The acquisition, processing and delivery of a new object distance takes just 1.7 μs. Obtaining a similar result using a conventional frame-based camera would require a device working at roughly 500 Kfps, which is far from being practical or even feasible. This system, built with the recently-developed 64 × 64 CMOS SCD sensor, shows the potential of the SCD approach when combined with a hardware processing system. Full article
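As a quick consistency check on the throughput quoted above (our arithmetic, not a figure from the paper), one distance update every 1.7 μs corresponds to

```latex
\frac{1}{1.7\ \mu\mathrm{s}} \approx 5.9 \times 10^{5}\ \text{updates per second},
```

so a frame-based camera refreshing its whole output at the same per-pixel latency would indeed need a frame rate on the order of the quoted 500 Kfps.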

Article
A 3D Optical Surface Profilometer Using a Dual-Frequency Liquid Crystal-Based Dynamic Fringe Pattern Generator
by Kyung-Il Joo, Mugeon Kim, Min-Kyu Park, Heewon Park, Byeonggon Kim, JoonKu Hahn and Hak-Rin Kim
Sensors 2016, 16(11), 1794; https://doi.org/10.3390/s16111794 - 27 Oct 2016
Cited by 7 | Viewed by 6864
Abstract
We propose a liquid crystal (LC)-based 3D optical surface profilometer that can utilize multiple fringe patterns to extract an enhanced 3D surface depth profile. To avoid the optical phase ambiguity and enhance the 3D depth extraction, 16 interference patterns were generated by the LC-based dynamic fringe pattern generator (DFPG) using four-step phase shifting and four-step spatial frequency varying schemes. The DFPG had one common slit with an electrically controllable birefringence (ECB) LC mode and four switching slits with a twisted nematic LC mode. The spatial frequency of the projected fringe pattern could be controlled by selecting one of the switching slits. In addition, moving fringe patterns were obtainable by applying voltages to the ECB LC layer, which varied the phase difference between the common and the selected switching slits. Notably, the DFPG switching time required to project 16 fringe patterns was minimized by utilizing the dual-frequency modulation of the driving waveform to switch the LC layers. We calculated the phase modulation of the DFPG and reconstructed the depth profile of 3D objects using a discrete Fourier transform method and geometric optical parameters. Full article
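For context, in a generic four-step phase-shifting scheme of the kind invoked above (the standard technique, not necessarily this paper's exact formulation), four fringe images I₁…I₄ offset in phase by π/2 yield the wrapped phase

```latex
\varphi(x,y) = \arctan\!\left(\frac{I_4(x,y)-I_2(x,y)}{I_1(x,y)-I_3(x,y)}\right),
```

and repeating the scheme at four spatial frequencies, as done here, resolves the 2π ambiguity of the wrapped phase; hence the 16 projected patterns.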

Article
Geometric Calibration and Validation of Kompsat-3A AEISS-A Camera
by Doocheon Seo, Jaehong Oh, Changno Lee, Donghan Lee and Haejin Choi
Sensors 2016, 16(10), 1776; https://doi.org/10.3390/s16101776 - 24 Oct 2016
Cited by 16 | Viewed by 6769
Abstract
Kompsat-3A, which was launched on 25 March 2015, is a sister spacecraft of Kompsat-3, developed by the Korea Aerospace Research Institute (KARI). Kompsat-3A's AEISS-A (Advanced Electronic Image Scanning System-A) camera is similar to Kompsat-3's AEISS, but it was designed to provide PAN (panchromatic) resolution of 0.55 m, MS (multispectral) resolution of 2.20 m, and TIR (thermal infrared) resolution of 5.5 m. In this paper we present the geometric calibration and validation work for Kompsat-3A that was completed last year. A set of images over the test sites was acquired over two months and utilized for the work. The workflow includes boresight calibration, CCD (charge-coupled device) alignment and focal length determination, the merging of two CCD lines, and band-to-band registration. Then, the positional accuracies without any GCPs (ground control points) were validated for hundreds of test sites across the world using various image acquisition modes. In addition, we checked the planimetric accuracy by bundle adjustment with GCPs. Full article

Article
Robust Depth Image Acquisition Using Modulated Pattern Projection and Probabilistic Graphical Models
by Jaka Kravanja, Mario Žganec, Jerneja Žganec-Gros, Simon Dobrišek and Vitomir Štruc
Sensors 2016, 16(10), 1740; https://doi.org/10.3390/s16101740 - 19 Oct 2016
Cited by 1 | Viewed by 5050
Abstract
Depth image acquisition with structured light approaches in outdoor environments is a challenging problem due to external factors, such as ambient sunlight, which commonly affect the acquisition procedure. This paper presents a novel structured light sensor designed specifically for operation in outdoor environments. The sensor exploits a modulated sequence of structured light projected onto the target scene to counteract environmental factors and estimate a spatial distortion map in a robust manner. The correspondence between the projected pattern and the estimated distortion map is then established using a probabilistic framework based on graphical models. Finally, the depth image of the target scene is reconstructed using a number of reference frames recorded during the calibration process. We evaluate the proposed sensor on experimental data in indoor and outdoor environments and present comparative experiments with other existing methods, as well as commercial sensors. Full article

Article
Design and Evaluation of a Scalable and Reconfigurable Multi-Platform System for Acoustic Imaging
by Alberto Izquierdo, Juan José Villacorta, Lara Del Val Puente and Luis Suárez
Sensors 2016, 16(10), 1671; https://doi.org/10.3390/s16101671 - 11 Oct 2016
Cited by 26 | Viewed by 6125
Abstract
This paper proposes a scalable and multi-platform framework for signal acquisition and processing, which allows for the generation of acoustic images using planar arrays of MEMS (Micro-Electro-Mechanical Systems) microphones with low development and deployment costs. Acoustic characterization of the MEMS sensors was performed, and the beam pattern of a module based on an 8 × 8 planar array, and of several clusters of such modules, was obtained. A flexible framework, formed by an FPGA, an embedded processor, a desktop computer, and a graphics processing unit, was defined. The processing times of the algorithms used to obtain the acoustic images, including signal processing and wideband beamforming via FFT, were evaluated in each subsystem of the framework. Based on this analysis, three frameworks are proposed, defined by the specific subsystems used and the algorithms shared. Finally, a set of acoustic images obtained from sound reflected from a person are presented as a case study in the field of biometric identification. These results reveal the feasibility of the proposed system. Full article
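For orientation, wideband frequency-domain beamforming of the kind named above amounts to phase-aligning each microphone per frequency bin before summing. A minimal delay-and-sum sketch (the generic technique only; geometry, names, and parameters are our assumptions, not the authors' implementation):

```python
import numpy as np

def fft_beamform(frames, mic_xy, azel, fs, c=343.0):
    """Wideband delay-and-sum beamforming in the frequency domain.

    frames : (n_mics, n_samples) time-domain signals from the planar array
    mic_xy : (n_mics, 2) microphone positions in metres
    azel   : (azimuth, elevation) steering direction in radians
    Illustrative sketch only, not the authors' implementation.
    """
    az, el = azel
    # Steering direction projected onto the array plane
    u = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az)])
    delays = mic_xy @ u / c                        # per-mic delay (seconds)
    spec = np.fft.rfft(frames, axis=1)             # (n_mics, n_bins)
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / fs)
    # Phase-align every mic at each frequency bin, then sum across mics
    steering = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
    y = np.fft.irfft((spec * steering).sum(axis=0), n=frames.shape[1])
    return y / frames.shape[0]
```

Scanning the steering direction over a grid of azimuth/elevation angles and mapping each beam's output energy to a pixel yields the acoustic image.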

Article
Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera
by Thomas C. Wilkes, Andrew J. S. McGonigle, Tom D. Pering, Angus J. Taggart, Benjamin S. White, Robert G. Bryant and Jon R. Willmott
Sensors 2016, 16(10), 1649; https://doi.org/10.3390/s16101649 - 06 Oct 2016
Cited by 66 | Viewed by 22992
Abstract
Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements. Full article

Article
Design of a Sub-Picosecond Jitter with Adjustable-Range CMOS Delay-Locked Loop for High-Speed and Low-Power Applications
by Bilal I. Abdulrazzaq, Omar J. Ibrahim, Shoji Kawahito, Roslina M. Sidek, Suhaidi Shafie, Nurul Amziah Md. Yunus, Lini Lee and Izhal Abdul Halin
Sensors 2016, 16(10), 1593; https://doi.org/10.3390/s16101593 - 28 Sep 2016
Cited by 3 | Viewed by 7041
Abstract
A Delay-Locked Loop (DLL) with a modified charge pump circuit is proposed for generating high-resolution linear delay steps with sub-picosecond jitter performance and adjustable delay range. The small-signal model of the modified charge pump circuit is analyzed to bring forth the relationship between the DLL’s internal control voltage and output time delay. Circuit post-layout simulation shows that a 0.97 ps delay step within a 69 ps delay range with 0.26 ps Root-Mean Square (RMS) jitter performance is achievable using a standard 0.13 µm Complementary Metal-Oxide Semiconductor (CMOS) process. The post-layout simulation results show that the power consumption of the proposed DLL architecture’s circuit is 0.1 mW when the DLL is operated at 2 GHz. Full article

Article
A Low Power Digital Accumulation Technique for Digital-Domain CMOS TDI Image Sensor
by Changwei Yu, Kaiming Nie, Jiangtao Xu and Jing Gao
Sensors 2016, 16(10), 1572; https://doi.org/10.3390/s16101572 - 23 Sep 2016
Cited by 4 | Viewed by 6002
Abstract
In this paper, an accumulation technique suitable for digital-domain CMOS time delay integration (TDI) image sensors is proposed to reduce power consumption without degrading the imaging rate. Exploiting the slight variations of quantization codes among different pixel exposures of the same object, the pixel array is divided into two groups: one for coarse quantization of the high bits only, and the other for fine quantization of the low bits. The complete quantization codes are then composed from the results of both the coarse and fine quantization. This equivalent operation comparably reduces the total number of bits that must be quantized. In a 0.18 µm CMOS process, two versions of 16-stage digital-domain CMOS TDI image sensor chains based on a 10-bit successive approximation register (SAR) analog-to-digital converter (ADC), with and without the proposed technique, were designed. The simulation results show that the average power consumption per slice of the two versions is 6.47 × 10⁻⁸ J/line and 7.4 × 10⁻⁸ J/line, respectively, while the linearity of the two versions is 99.74% and 99.99%, respectively. Full article
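To picture the coarse-and-fine composition described above: digitally, merging the two partial results is conceptually a shift-and-OR (a schematic sketch with an assumed 6-bit/4-bit split, not the chip's actual logic):

```python
def compose_code(coarse_high, fine_low, low_bits=4):
    """Merge a coarse high-bit result with a fine low-bit result into one
    complete quantization code. Schematic only; the bit split is assumed."""
    return (coarse_high << low_bits) | (fine_low & ((1 << low_bits) - 1))

# e.g. a 10-bit code from 6 coarse bits and 4 fine bits
code = compose_code(0b101101, 0b0110)   # -> 0b1011010110
```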

Article
Long-Term Continuous Double Station Observation of Faint Meteor Showers
by Stanislav Vítek, Petr Páta, Pavel Koten and Karel Fliegel
Sensors 2016, 16(9), 1493; https://doi.org/10.3390/s16091493 - 14 Sep 2016
Cited by 6 | Viewed by 4693
Abstract
Meteor detection and analysis is an essential topic in the field of astronomy. In this paper, a high-sensitivity and high-time-resolution imaging device for the detection of faint meteoric events is presented. The instrument is based on a fast CCD camera and an image intensifier. Two such instruments form a double-station observation network. The MAIA (Meteor Automatic Imager and Analyzer) system has been in continuous operation since 2013 and has successfully captured hundreds of meteors belonging to different meteor showers, as well as sporadic meteors. A data processing pipeline for the efficient processing and evaluation of the massive amount of video sequences is also introduced in this paper. Full article

Article
A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology
by Ruiling Liu, Dexing Zhong, Hongqiang Lyu and Jiuqiang Han
Sensors 2016, 16(9), 1364; https://doi.org/10.3390/s16091364 - 25 Aug 2016
Cited by 12 | Viewed by 9818
Abstract
Surface-defect detection and dimension measurement of automotive bevel gears by manual inspection is costly, inefficient, slow, and inaccurate. In order to solve these problems, a synthetic bevel gear quality inspection system based on multi-camera vision technology was developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms, named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM) and Fast Rotation-Position (FRP), are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity, repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm, and the precision of dimension measurement is about 40–50 μm. One inspection process takes no more than 1.3 s. Both the precision and speed meet the requirements of real-time online inspection in bevel gear production. Full article
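The abstract does not spell out the NAD computation, so the following is only one plausible reading of the name, offered as an illustrative sketch (window size and threshold are invented placeholders, not values from the paper):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nad_defects(gray, win=9, thresh=18.0):
    """Flag candidate surface defects where a pixel deviates strongly from
    the average of its local neighborhood. A plausible reading of the
    paper's 'Neighborhood Average Difference' idea; parameters assumed."""
    local_mean = uniform_filter(gray.astype(np.float32), size=win)
    return np.abs(gray - local_mean) > thresh
```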

Article
Are We Ready to Build a System for Assisting Blind People in Tactile Exploration of Bas-Reliefs?
by Francesco Buonamici, Monica Carfagni, Rocco Furferi, Lapo Governi and Yary Volpe
Sensors 2016, 16(9), 1361; https://doi.org/10.3390/s16091361 - 24 Aug 2016
Cited by 16 | Viewed by 6039
Abstract
Nowadays, the creation of methodologies and tools for facilitating the 3D reproduction of artworks and, at the same time, making their exploration possible and more meaningful for blind users is becoming increasingly relevant in society. Accordingly, the creation of integrated systems including both tactile media (e.g., bas-reliefs) and interfaces capable of providing users with an experience cognitively comparable to the one originally envisioned by the artist may be considered the next step in enhancing artwork exploration. In light of this, the present work describes a first-attempt system designed to aid blind people (BP) in the tactile exploration of bas-reliefs. In detail, a consistent hardware layout, comprising a hand-tracking system based on the Kinect® sensor and an audio device, is proposed together with a number of methodologies, algorithms, and information related to physical design. Moreover, based on experimental tests of the developed system with regard to device position, some design alternatives are suggested and their pros and cons discussed. Full article

Article
Substrate and Passivation Techniques for Flexible Amorphous Silicon-Based X-ray Detectors
by Michael A. Marrs and Gregory B. Raupp
Sensors 2016, 16(8), 1162; https://doi.org/10.3390/s16081162 - 26 Jul 2016
Cited by 7 | Viewed by 7836
Abstract
Flexible active matrix display technology has been adapted to create new flexible photo-sensing electronic devices, including flexible X-ray detectors. Monolithic integration of amorphous silicon (a-Si) PIN photodiodes on a flexible substrate poses significant challenges associated with the intrinsic film stress of amorphous silicon. This paper examines how altering the device structure and diode passivation layers can greatly improve the electrical performance and the mechanical reliability of the device, thereby eliminating one of the major weaknesses of a-Si PIN diodes in comparison to alternative photodetector technologies, such as organic bulk heterojunction photodiodes and amorphous selenium. A dark current of 0.5 pA/mm² and a photodiode quantum efficiency of 74% are possible with a pixelated diode structure with a silicon nitride/SU-8 bilayer passivation structure on a 20 µm-thick polyimide substrate. Full article

Article
Uncertainty Comparison of Visual Sensing in Adverse Weather Conditions
by Shi-Wei Lo, Jyh-Horng Wu, Lun-Chi Chen, Chien-Hao Tseng, Fang-Pang Lin and Ching-Han Hsu
Sensors 2016, 16(7), 1125; https://doi.org/10.3390/s16071125 - 20 Jul 2016
Cited by 6 | Viewed by 6041
Abstract
This paper focuses on flood-region detection using monitoring images. However, adverse weather affects the outcome of image segmentation methods. In this paper, we present an experimental comparison of an outdoor visual sensing system using region-growing methods with two different growing rules, namely GrowCut and RegGro. For each growing rule, several tests on adverse-weather and lens-stained scenes were performed, analyzing the influence of different weather conditions on the outdoor visual sensing system under each growing rule. Furthermore, the experimental errors and uncertainties obtained with the growing rules were compared. The segmentation accuracy of flood regions yielded by the GrowCut, RegGro, and hybrid methods was 75%, 85%, and 87.7%, respectively. Full article

Article
A 75-ps Gated CMOS Image Sensor with Low Parasitic Light Sensitivity
by Fan Zhang and Hanben Niu
Sensors 2016, 16(7), 999; https://doi.org/10.3390/s16070999 - 29 Jun 2016
Cited by 5 | Viewed by 8462
Abstract
In this study, a 40 × 48 pixel global shutter complementary metal-oxide-semiconductor (CMOS) image sensor with an adjustable shutter time as low as 75 ps was implemented using a 0.5-μm mixed-signal CMOS process. The implementation consisted of a continuous contact ring around each p+/n-well photodiode in the pixel array in order to apply sufficient light shielding. The parasitic light sensitivity of the in-pixel storage node was measured to be 1/(8.5 × 10⁷) when illuminated by a 405-nm diode laser and 1/(1.4 × 10⁴) when illuminated by a 650-nm diode laser. The pixel pitch was 24 μm, the size of the square p+/n-well photodiode in each pixel was 7 μm per side, the measured random readout noise was 217 e⁻ rms, and the measured dynamic range of the pixel of the designed chip was 5500:1. The type of gated CMOS image sensor (CIS) that is proposed here can be used in ultra-fast framing cameras to observe non-repeatable fast-evolving phenomena. Full article

Article
Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System
by Jaehoon Jung, Inhye Yoon and Joonki Paik
Sensors 2016, 16(7), 982; https://doi.org/10.3390/s16070982 - 25 Jun 2016
Cited by 8 | Viewed by 9102
Abstract
This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. The proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. Full article

Article
Evaluation of a Wobbling Method Applied to Correcting Defective Pixels of CZT Detectors in SPECT Imaging
by Zhaoheng Xie, Suying Li, Kun Yang, Baixuan Xu and Qiushi Ren
Sensors 2016, 16(6), 772; https://doi.org/10.3390/s16060772 - 27 May 2016
Cited by 2 | Viewed by 5272
Abstract
In this paper, we propose a wobbling method to correct bad pixels in cadmium zinc telluride (CZT) detectors using information from related images. We built an automated device that realizes the wobbling correction for small-animal Single Photon Emission Computed Tomography (SPECT) imaging. The wobbling correction method was applied to various constellations of defective pixels. The corrected images are compared with the results of a conventional interpolation method, and the correction effectiveness is evaluated quantitatively using peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). In summary, the proposed wobbling method, equipped with the automatic mechanical system, provides better image quality for correcting defective pixels, and could be used for all pixelated detectors in molecular imaging. Full article

Article
Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition
by Chulhee Park and Moon Gi Kang
Sensors 2016, 16(5), 719; https://doi.org/10.3390/s16050719 - 18 May 2016
Cited by 37 | Viewed by 10218
Abstract
A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible-band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible-band component and the NIR-band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. Full article
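The decomposition described above can be caricatured as subtracting a per-channel NIR leakage term from the raw RGB planes. A toy sketch (the leakage coefficients are invented placeholders; the paper estimates the actual spectral mixing from the measured sensor characteristics):

```python
import numpy as np

def restore_rgb(raw_rgbn, k=(0.7, 0.6, 0.8)):
    """Subtract an estimated NIR contribution from each color channel.

    raw_rgbn : (H, W, 4) float array holding the R, G, B, N planes.
    k        : per-channel NIR leakage coefficients -- placeholder values,
               not the calibrated coefficients the paper derives.
    """
    rgb, nir = raw_rgbn[..., :3], raw_rgbn[..., 3:]
    visible = rgb - np.asarray(k) * nir     # broadcast (H, W, 1) * (3,)
    return np.clip(visible, 0.0, None)      # clamp negatives from noise
```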

Article
Extracting Objects for Aerial Manipulation on UAVs Using Low Cost Stereo Sensors
by Pablo Ramon Soria, Robert Bevec, Begoña C. Arrue, Aleš Ude and Aníbal Ollero
Sensors 2016, 16(5), 700; https://doi.org/10.3390/s16050700 - 14 May 2016
Cited by 21 | Viewed by 7310
Abstract
Giving unmanned aerial vehicles (UAVs) the possibility to manipulate objects vastly extends the range of possible applications. This applies to rotary wing UAVs in particular, where their capability of hovering enables a suitable position for in-flight manipulation. Their manipulation skills must be suitable for primarily natural, partially known environments, where UAVs mostly operate. We have developed an on-board object extraction method that calculates the information necessary for autonomous grasping of objects, without the need to provide a model of the object's shape. A local map of the work zone is generated using depth information, where object candidates are extracted by detecting areas that differ from our floor model. Their image projections are then evaluated using support vector machine (SVM) classification to recognize specific objects or reject bad candidates. Our method builds a sparse cloud representation of each object and calculates the object's centroid and dominant axis. This information is then passed to a grasping module. Our method works under the assumption that objects are static and not clustered, have visual features, and that the floor shape of the work-zone area is known. We used low-cost cameras to create the depth information; these produce noisy point clouds, but our method has proved robust enough to process this data and return accurate results. Full article

Article
Penetration Depth Measurement of Near-Infrared Hyperspectral Imaging Light for Milk Powder
by Min Huang, Moon S. Kim, Kuanglin Chao, Jianwei Qin, Changyeun Mo, Carlos Esquerre, Stephen Delwiche and Qibing Zhu
Sensors 2016, 16(4), 441; https://doi.org/10.3390/s16040441 - 25 Mar 2016
Cited by 32 | Viewed by 8789
Abstract
The increasingly common application of the near-infrared (NIR) hyperspectral imaging technique to the analysis of food powders has led to the need for optical characterization of samples. This study explored the feasibility of quantifying the penetration depth of NIR hyperspectral imaging light in milk powder. Hyperspectral NIR reflectance images were collected for eight different milk powder products, comprising five brands of non-fat milk powder and three brands of whole milk powder. For each milk powder, five different powder depths ranging from 1 mm to 5 mm were prepared on top of a base layer of melamine, to test spectral-based detection of the melamine through the milk. The relationship between the NIR reflectance spectra (937.5–1653.7 nm) and the penetration depth was investigated by means of the partial least squares-discriminant analysis (PLS-DA) technique, classifying pixels as milk-only or a mixture of milk and melamine. With increasing milk depth, classification model accuracy gradually decreased. The results from the 1-mm, 2-mm and 3-mm models showed that the average classification accuracy of the validation set for milk-melamine samples fell from 99.86% to 94.93% as the milk depth increased from 1 mm to 3 mm. As the milk depth increased to 4 mm and 5 mm, model performance deteriorated further, to accuracies as low as 81.83% and 58.26%, respectively. The results suggest that a 2-mm sample depth is recommended for the screening/evaluation of milk powders using an online NIR hyperspectral imaging system similar to the one used in this study. Full article
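Generically, the PLS-DA step named above is a PLS regression on class labels followed by thresholding. A minimal scikit-learn sketch (the generic technique only; the component count and 0.5 threshold are assumptions, not the paper's tuned values):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# X: (n_pixels, n_bands) NIR reflectance spectra per pixel
# y: 1 for milk-only pixels, 0 for milk+melamine pixels
def fit_plsda(X, y, n_components=10):
    """Binary PLS-DA: PLS regression on class labels. Component count
    is an assumed placeholder, not taken from the paper."""
    model = PLSRegression(n_components=n_components)
    model.fit(X, y.astype(float))
    return model

def predict_plsda(model, X):
    # Threshold the continuous PLS prediction at 0.5 to assign a class
    return (model.predict(X).ravel() > 0.5).astype(int)
```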

Article
A Crowd-Sourcing Indoor Localization Algorithm via Optical Camera on a Smartphone Assisted by Wi-Fi Fingerprint RSSI
by Wei Chen, Weiping Wang, Qun Li, Qiang Chang and Hongtao Hou
Sensors 2016, 16(3), 410; https://doi.org/10.3390/s16030410 - 19 Mar 2016
Cited by 24 | Viewed by 7551
Abstract
Indoor positioning based on existing Wi-Fi fingerprints is becoming more and more common. Unfortunately, the Wi-Fi fingerprint is susceptible to multipath interference, signal attenuation, and environmental changes, which leads to low accuracy. Meanwhile, with the recent advances in charge-coupled device (CCD) technologies and the processing speed of smartphones, indoor positioning using the optical camera on a smartphone has become an attractive research topic; however, the major challenge is its high computational complexity, which prevents real-time positioning. In this paper we introduce a crowd-sourcing indoor localization algorithm via an optical camera and orientation sensor on a smartphone to address these issues. First, we use the Wi-Fi fingerprint, based on the K Weighted Nearest Neighbor (KWNN) algorithm, to make a coarse estimate. Second, we adopt a mean-weighted exponent algorithm to fuse optical image features and orientation sensor data, as well as the KWNN result, on the smartphone to refine the estimate. Furthermore, a crowd-sourcing approach is utilized to update and supplement the positioning database. We perform several experiments comparing our approach with other positioning algorithms on a common smartphone to evaluate the performance of the proposed sensor-calibrated algorithm, and the results demonstrate that the proposed algorithm can significantly improve the accuracy, stability, and applicability of positioning. Full article
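In generic form, the KWNN coarse estimate finds the k stored fingerprints closest to the live RSSI reading and averages their coordinates with inverse-distance weights. A minimal sketch (our parameter choices, not the paper's):

```python
import numpy as np

def kwnn_locate(rssi, fp_rssi, fp_xy, k=4, eps=1e-6):
    """K Weighted Nearest Neighbor position estimate from a Wi-Fi
    fingerprint database. Generic form of the KWNN step named in the
    abstract; k and the weighting scheme are assumptions.

    rssi    : (n_aps,) online RSSI measurement
    fp_rssi : (n_ref, n_aps) stored fingerprints
    fp_xy   : (n_ref, 2) reference-point coordinates
    """
    d = np.linalg.norm(fp_rssi - rssi, axis=1)   # signal-space distance
    idx = np.argsort(d)[:k]                      # k nearest fingerprints
    w = 1.0 / (d[idx] + eps)                     # closer -> heavier weight
    return (fp_xy[idx] * w[:, None]).sum(axis=0) / w.sum()
```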

Article
Underwater Imaging Using a 1 × 16 CMUT Linear Array
by Rui Zhang, Wendong Zhang, Changde He, Yongmei Zhang, Jinlong Song and Chenyang Xue
Sensors 2016, 16(3), 312; https://doi.org/10.3390/s16030312 - 01 Mar 2016
Cited by 13 | Viewed by 6094
Abstract
A 1 × 16 capacitive micro-machined ultrasonic transducer (CMUT) linear array was designed, fabricated, and tested for underwater imaging in the low-frequency range. The linear array was fabricated using Si-SOI bonding techniques. Underwater transmission performance was tested in a water tank; the array has a resonant frequency of 700 kHz, with a pressure amplitude of 182 dB (μPa·m/V) at 1 m. The −3 dB main beam width of the designed dense linear array is approximately 5 degrees. The synthetic aperture focusing technique was applied to improve the resolution of the reconstructed images, with promising results. Thus, the proposed array was shown to be suitable for underwater imaging applications. Full article

Article
A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs
by Min-Kyu Kim, Seong-Kwan Hong and Oh-Kyong Kwon
Sensors 2016, 16(1), 27; https://doi.org/10.3390/s16010027 - 26 Dec 2015
Cited by 10 | Viewed by 9657
Abstract
This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4 bits after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations required for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and the random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB. Full article
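The quoted noise figures line up with the square-root averaging law stated in the abstract. With N samplings of noise standard deviation σ₁,

```latex
\sigma_N = \frac{\sigma_1}{\sqrt{N}}, \qquad
\frac{848.3\ \mu\mathrm{V}}{270.4\ \mu\mathrm{V}} \approx 3.1 \approx \sqrt{10},
```

so the measured reduction corresponds to roughly ten samplings per pixel (our back-of-envelope inference, not a figure reported by the paper).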

Article
Parallax-Robust Surveillance Video Stitching
by Botao He and Shaohua Yu
Sensors 2016, 16(1), 7; https://doi.org/10.3390/s16010007 - 25 Dec 2015
Cited by 44 | Viewed by 9055
Abstract
This paper presents a parallax-robust video stitching technique for temporally synchronized surveillance video. An efficient two-stage video stitching procedure is proposed to build wide Field-of-View (FOV) videos for surveillance applications. In the stitching model calculation stage, we develop a location-dependent layered warping algorithm to align the background scenes, which turns out to be more robust to parallax than traditional global projective warping methods. In the selective seam updating stage, we propose a change-detection-based optimal seam selection approach to avert the ghosting and artifacts caused by moving foregrounds. Experimental results demonstrate that our procedure can efficiently stitch multi-view videos into a wide-FOV video output without ghosting or noticeable seams. Full article

Article
Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays
by Javier Contreras, Josep Tornero, Isabel Ferreira, Rodrigo Martins, Luis Gomes and Elvira Fortunato
Sensors 2015, 15(12), 29938-29949; https://doi.org/10.3390/s151229779 - 30 Nov 2015
Cited by 2 | Viewed by 6617
Abstract
A MATLAB/SIMULINK software simulation model (structure and component blocks) was constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. The simulation allows changing most of its parameters, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, and the sample/scan/simulation time. In addition, the results show for the first time the possibility of scanning an object in 3D when using an a-Si:H thin-film 128-PSD array sensor and a hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speed and high resolution when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile image quality and realism can be achieved by increasing the number of elements of the PSD array as well as by achieving an optimal position response from the sensor, since the definition of the 3D object profile clearly depends on the correct and accurate position response of each detector as well as on the size of the PSD array. Full article
(This article belongs to the Special Issue Imaging: Sensors and Technologies)
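For context, sheet-of-light triangulation reduces to intersecting each camera ray with the laser plane; below is a minimal sketch under an idealized pinhole model (the symbols are generic, not the paper's calibration):

```python
import math

def sheet_of_light_depth(x_img: float, f: float, b: float, theta: float) -> float:
    """Depth of a laser-stripe point by camera-ray / laser-plane intersection.

    x_img : lateral stripe position on the PSD/image plane (same units as f)
    f     : focal length of the pinhole camera
    b     : baseline between the camera center and the laser source
    theta : tilt of the laser sheet relative to the optical axis

    The camera ray (x, z) = t * (x_img, f) meets the laser plane
    x = b - z * tan(theta) at z = b * f / (x_img + f * tan(theta)).
    """
    return b * f / (x_img + f * math.tan(theta))

# e.g., f = 16 mm, b = 200 mm, theta = 30 deg, stripe detected at x_img = 1.2 mm
print(sheet_of_light_depth(1.2, 16.0, 200.0, math.radians(30.0)))  # ~306.6 mm
```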

Article
An Indoor Obstacle Detection System Using Depth Information and Region Growth
by Hsieh-Chang Huang, Ching-Tang Hsieh and Cheng-Hsiang Yeh
Sensors 2015, 15(10), 27116-27141; https://doi.org/10.3390/s151027116 - 23 Oct 2015
Cited by 51 | Viewed by 9774
Abstract
This study proposes an obstacle detection method that uses depth information to allow the visually impaired to avoid obstacles when moving in an unfamiliar environment. The system is composed of three parts: scene detection, obstacle detection, and vocal announcement. A new ground-plane removal method is proposed that overcomes the over-segmentation problem: the system addresses over-segmentation by removing edges, and it solves the initial-seed-position problem of the region growth method using the Connected Component Method (CCM). The system can detect both static and dynamic obstacles, and it is simple, robust, and efficient. The experimental results show that the proposed system is both robust and convenient.
(This article belongs to the Special Issue Imaging: Sensors and Technologies)
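As a rough sketch of the pipeline's core idea (ground removal, edge suppression to avoid over-segmentation, and connected components as region seeds), here is a hypothetical Python fragment; it is not the authors' implementation, and all thresholds are placeholders:

```python
import numpy as np
from scipy import ndimage

def detect_obstacles(depth: np.ndarray, ground_mask: np.ndarray,
                     edge_mask: np.ndarray, min_area: int = 200):
    """Label candidate obstacles in a depth image (generic sketch).

    Remove ground pixels, suppress edges so regions do not fragment, then
    use connected components (CCM) both to seed and to delimit regions.
    """
    candidates = (~ground_mask) & (~edge_mask) & (depth > 0)
    labels, n = ndimage.label(candidates)     # one seed per connected component
    obstacles = []
    for i in range(1, n + 1):
        region = labels == i
        if region.sum() >= min_area:          # discard speckle
            zs = depth[region]
            obstacles.append({"label": i,
                              "mean_depth": float(zs.mean()),
                              "area_px": int(region.sum())})
    return obstacles
```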

Article
Time-Resolved Synchronous Fluorescence for Biomedical Diagnosis
by Xiaofeng Zhang, Andrew Fales and Tuan Vo-Dinh
Sensors 2015, 15(9), 21746-21759; https://doi.org/10.3390/s150921746 - 31 Aug 2015
Cited by 10 | Viewed by 7642
Abstract
This article presents our most recent advances in synchronous fluorescence (SF) methodology for biomedical diagnostics. The SF method is characterized by simultaneously scanning both the excitation and emission wavelengths while keeping a constant wavelength interval between them. Compared to conventional fluorescence spectroscopy, the SF method simplifies the emission spectrum while enabling greater selectivity, and it has been successfully used to detect subtle differences in the fluorescence emission signatures of biochemical species in cells and tissues. The SF method can be used in imaging to analyze dysplastic cells in vitro and tissue in vivo. Based on the SF method, here we demonstrate the feasibility of a time-resolved synchronous fluorescence (TRSF) method, which incorporates the intrinsic fluorescence decay characteristics of the fluorophores. Our prototype TRSF system has clearly shown its advantage in the spectro-temporal separation of fluorophores that were otherwise difficult to separate spectrally in SF spectroscopy. We envision that our previously tested SF imaging and the newly developed TRSF methods will combine their proven diagnostic potential in cancer diagnosis to further improve the efficacy of SF-based biomedical diagnostics.
(This article belongs to the Special Issue Imaging: Sensors and Technologies)
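The SF definition above has a direct computational analogue: sample an excitation-emission matrix (EEM) along the line em = ex + Δλ. A small sketch, assuming a uniformly sampled emission axis (variable names are illustrative):

```python
import numpy as np

def synchronous_fluorescence(eem: np.ndarray, ex_nm: np.ndarray,
                             em_nm: np.ndarray, delta_nm: float):
    """Constant-offset synchronous fluorescence spectrum from an EEM.

    eem[i, j] holds intensity at excitation ex_nm[i] and emission em_nm[j];
    the SF spectrum samples the EEM along em = ex + delta_nm.
    """
    step = em_nm[1] - em_nm[0]                 # assumes uniform emission grid
    sf_ex, sf_int = [], []
    for i, ex in enumerate(ex_nm):
        j = int(np.argmin(np.abs(em_nm - (ex + delta_nm))))
        if abs(em_nm[j] - (ex + delta_nm)) <= step:   # target within the grid
            sf_ex.append(ex)
            sf_int.append(eem[i, j])
    return np.asarray(sf_ex), np.asarray(sf_int)
```

The TRSF extension described in the abstract would add a decay-time axis to each (ex, em) sample, so the same diagonal cut is taken through a three-dimensional data cube rather than a matrix.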

Article
Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps
by Jing Liu, Chunpeng Li, Xuefeng Fan and Zhaoqi Wang
Sensors 2015, 15(8), 20894-20924; https://doi.org/10.3390/s150820894 - 21 Aug 2015
Cited by 8 | Viewed by 8076
Abstract
Depth estimation is a classical problem in computer vision that typically relies on either a depth sensor or stereo matching alone. The depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective. However, stereo matching can obtain more accurate results in richly textured regions and at object boundaries, where the depth sensor often fails. We fuse stereo matching and the depth sensor, exploiting their complementary characteristics to improve depth estimation. Here, texture information is incorporated as a constraint to restrict each pixel's scope of potential disparities and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model is used to represent the relationship between disparities at different pixels and segments; it is more robust to luminance variation because it treats information obtained from the depth sensor as prior knowledge. Segmentation is viewed as a soft constraint to reduce the ambiguities caused by under- or over-segmentation. Compared to the 3.27% average error rate of previous state-of-the-art methods, our method achieves an average error rate of 2.61% on the Middlebury datasets, i.e., almost 20% better precision than other fusion-based algorithms.
(This article belongs to the Special Issue Imaging: Sensors and Technologies)
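The complementarity argument (trust stereo where texture is rich, the depth sensor where it is not) can be caricatured in a few lines. This is only the weighting idea, not the paper's pseudo-two-layer model, and tau is an arbitrary scale:

```python
import numpy as np

def texture_confidence(gray: np.ndarray) -> np.ndarray:
    """Local gradient magnitude as a crude texture score: stereo is trusted
    in textured areas, the depth sensor in textureless ones."""
    gy, gx = np.gradient(gray.astype(np.float32))
    return np.sqrt(gx ** 2 + gy ** 2)

def fuse_disparity(d_stereo: np.ndarray, d_sensor: np.ndarray,
                   gray: np.ndarray, tau: float = 8.0) -> np.ndarray:
    """Pixelwise blend of two disparity maps by texture confidence."""
    c = np.clip(texture_confidence(gray) / tau, 0.0, 1.0)  # 0 = textureless
    return c * d_stereo + (1.0 - c) * d_sensor
```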

Article
Nonintrusive Finger-Vein Recognition System Using NIR Image Sensor and Accuracy Analyses According to Various Factors
by Tuyen Danh Pham, Young Ho Park, Dat Tien Nguyen, Seung Yong Kwon and Kang Ryoung Park
Sensors 2015, 15(7), 16866-16894; https://doi.org/10.3390/s150716866 - 13 Jul 2015
Cited by 23 | Viewed by 7864
Abstract
Biometrics is a technology that enables an individual to be identified based on human physiological and behavioral characteristics. Among biometric technologies, face recognition has been widely used because of its convenience and non-contact operation. However, its performance is affected by factors such as variation in illumination, facial expression, and head pose; therefore, fingerprint and iris recognition are preferred alternatives. However, the performance of the former can be adversely affected by skin condition, including scarring and dryness, while the latter has the disadvantages of high cost, large system size, and inconvenience to the user, who has to align their eyes with the iris camera. In an attempt to overcome these problems, finger-vein recognition has been vigorously researched, but an analysis of its accuracy according to various factors has received little attention. Therefore, we propose a nonintrusive finger-vein recognition system using a near-infrared (NIR) image sensor and analyze its accuracy with respect to various factors. The experimental results obtained with three databases show that our system can operate in real applications with high accuracy, and that the dissimilarity between the finger-veins of different people is larger than that between finger types and hands.
(This article belongs to the Special Issue Imaging: Sensors and Technologies)

Article
A High Performance Banknote Recognition System Based on a One-Dimensional Visible Light Line Sensor
by Young Ho Park, Seung Yong Kwon, Tuyen Danh Pham, Kang Ryoung Park, Dae Sik Jeong and Sungsoo Yoon
Sensors 2015, 15(6), 14093-14115; https://doi.org/10.3390/s150614093 - 15 Jun 2015
Cited by 16 | Viewed by 5402
Abstract
An algorithm for recognizing banknotes is required in many fields, such as banknote-counting machines and automatic teller machines (ATMs). Due to the size and cost limitations of banknote-counting machines and ATMs, the banknote image is usually captured by a one-dimensional (line) sensor instead of a conventional two-dimensional (area) sensor. Because the banknote image is captured by the line sensor while the banknote moves at high speed through the rollers inside the banknote-counting machine or ATM, misalignment, geometric distortion, and non-uniform illumination of the captured images frequently occur, degrading the banknote recognition accuracy. To overcome these problems, we propose a new method for recognizing banknotes. Experimental results using two-fold cross-validation on 61,240 United States dollar (USD) images show that the pre-classification error rate is 0% and the average error rate for the final recognition of the USD banknotes is 0.114%.
(This article belongs to the Special Issue Imaging: Sensors and Technologies)

Article
Monocular-Vision-Based Autonomous Hovering for a Miniature Flying Ball
by Junqin Lin, Baoling Han and Qingsheng Luo
Sensors 2015, 15(6), 13270-13287; https://doi.org/10.3390/s150613270 - 05 Jun 2015
Cited by 1 | Viewed by 5771
Abstract
This paper presents a method for detecting and controlling the autonomous hovering of a miniature flying ball (MFB) based on monocular vision. A camera is employed to estimate the three-dimensional position of the vehicle relative to the ground without auxiliary sensors such as inertial measurement units (IMUs). An image of the ground captured by the camera mounted directly under the miniature flying ball is set as a reference, and the position variations between subsequent frames and the reference image are calculated by comparing their corresponding points. A Kalman filter is used to predict the position of the miniature flying ball to handle situations such as a lost or erroneous frame. Finally, a PID controller is designed, and the performance of the entire system is tested experimentally. The results show that the proposed method can keep the aircraft in a stable hover.
(This article belongs to the Special Issue Imaging: Sensors and Technologies)
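A compact sketch of the two control pieces named above: a constant-velocity Kalman filter whose prediction step alone bridges lost or wrong frames, and a PID loop on the estimated position. The 1-D simplification and all gains are assumptions for illustration, not the paper's tuning:

```python
import numpy as np

class PID:
    """Textbook PID controller on a single axis."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

class ConstVelKF:
    """1-D constant-velocity Kalman filter; predict() alone bridges dropped frames."""
    def __init__(self, dt, q=1e-3, r=1e-2):
        self.x = np.zeros(2)                          # [position, velocity]
        self.P = np.eye(2)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity motion
        self.Q = q * np.eye(2)
        self.H = np.array([[1.0, 0.0]])               # we observe position only
        self.R = np.array([[r]])

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.array([z]) - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]

# Per frame: pos = kf.update(meas) if frame_ok else kf.predict()
#            thrust_cmd = pid.step(target_altitude, pos)
```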

Review

Review
Driver Distraction Using Visual-Based Sensors and Algorithms
by Alberto Fernández, Rubén Usamentiaga, Juan Luis Carús and Rubén Casado
Sensors 2016, 16(11), 1805; https://doi.org/10.3390/s16111805 - 28 Oct 2016
Cited by 82 | Viewed by 17284
Abstract
Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing use of in-vehicle information systems is critical, because these systems induce visual, biomechanical, and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have proven attractive for both drivers and researchers. Biomechanical, visual, and cognitive distractions are the types most commonly detected by video-based algorithms. Many distraction detection systems use only a single visual cue and may therefore be easily disturbed by occlusion or illumination changes. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems, but they should be complemented with additional visual cues (e.g., hand or body information) or even with distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should run on an embedded device or system inside the car. This is not a trivial task, and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility, and short time-to-market. This paper reviews the role of computer vision technology applied to the development of monitoring systems to detect distraction, including the key points for the development and implementation of the sensors that carry out this detection. Key open challenges and directions for future work are also addressed.
(This article belongs to the Special Issue Imaging: Sensors and Technologies)

Other

Technical Note
Forward-Looking Infrared Cameras for Micrometeorological Applications within Vineyards
by Marwan Katurji and Peyman Zawar-Reza
Sensors 2016, 16(9), 1518; https://doi.org/10.3390/s16091518 - 18 Sep 2016
Cited by 6 | Viewed by 4945
Abstract
We apply the principles of atmospheric surface-layer dynamics within a vineyard canopy to demonstrate the use of forward-looking infrared cameras measuring surface brightness temperature (spectral bandwidth of 7.5 to 14 μm) at a relatively high temporal rate of 10 s. The temporal surface brightness signal over a few hours of the stable nighttime boundary layer, intermittently interrupted by surges of turbulent heat flux, was shown to be related to meteorological measurements from an in situ eddy-covariance system and reflected the above-canopy wind variability. The infrared raster images were collected, and the resulting self-organized spatial clusters provided meteorological context when compared to the in situ data. The spatial brightness temperature pattern was explained in terms of the presence or absence of nighttime cloud cover, the down-welling of long-wave radiation, and the canopy turbulent heat flux. Time-sequential thermography, as demonstrated in this research, provides positive evidence for the application of thermal infrared cameras in the domain of micrometeorology and enhances our spatial understanding of turbulent eddy interactions with the surface.
(This article belongs to the Special Issue Imaging: Sensors and Technologies)