Search Results (435)

Search Parameters:
Keywords = CMOS image sensors

16 pages, 6701 KB  
Article
Novel Fabry-Pérot Filter Structures for High-Performance Multispectral Imaging with a Broadband from the Visible to the Near-Infrared
by Bo Gao, Tianxin Wang, Lu Chen, Shuai Wang, Chenxi Li, Fajun Xiao, Yanyan Liu and Weixing Yu
Sensors 2025, 25(19), 6123; https://doi.org/10.3390/s25196123 - 3 Oct 2025
Abstract
The integration of a pixelated Fabry–Pérot filter array onto the image sensor enables on-chip snapshot multispectral imaging, significantly reducing the size and weight of conventional spectral imaging equipment. However, a traditional Fabry–Pérot cavity, based on metallic or dielectric layers, exhibits a narrow bandwidth, which restricts its utility in broader applications. In this work, we propose novel Fabry–Pérot filter structures that employ dielectric thin films for phase modulation, enabling single-peak filtering across a broad operational wavelength range from 400 nm to 1100 nm. The proposed structures are easy to fabricate and compatible with complementary metal-oxide-semiconductor (CMOS) image sensors. Moreover, the structures show low sensitivity to oblique incidence angles of up to 30°, with minimal wavelength shifts. This advanced Fabry–Pérot filter design provides a promising pathway for expanding the operational wavelength range of snapshot spectral imaging systems, thereby potentially extending their application across numerous related fields.
(This article belongs to the Section Sensing and Imaging)
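The angular behaviour quoted above can be illustrated with the textbook resonance condition for an ideal Fabry–Pérot cavity, 2·n·L·cos(θ_internal) = m·λ. The following is only a minimal sketch of that standard relation; the cavity index, thickness, and order are arbitrary assumptions, not the paper's phase-modulated design.

```python
import math

def fp_peak_wavelength(n_cavity, thickness_nm, order, theta_deg=0.0):
    """Peak wavelength of an ideal Fabry-Perot cavity: 2*n*L*cos(theta_in) = m*lambda.
    theta_deg is the external incidence angle; Snell's law gives the internal angle."""
    theta_in = math.asin(math.sin(math.radians(theta_deg)) / n_cavity)
    return 2.0 * n_cavity * thickness_nm * math.cos(theta_in) / order

# Hypothetical cavity (n = 2.0, L = 200 nm, first order) -- not taken from the paper.
for angle in (0, 15, 30):
    print(f"{angle:2d} deg -> peak ~ {fp_peak_wavelength(2.0, 200.0, 1, angle):.1f} nm")
```

A higher effective cavity index keeps the internal angle, and hence the peak shift, small at oblique incidence, which is one common route to the low angular sensitivity reported above.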

15 pages, 6752 KB  
Article
An Area-Efficient Readout Circuit for a High-SNR Triple-Gain LOFIC CMOS Image Sensor
by Ai Otani, Hiroaki Ogawa, Ken Miyauchi, Yuki Morikawa, Hideki Owada, Isao Takayanagi and Shunsuke Okura
Sensors 2025, 25(19), 6093; https://doi.org/10.3390/s25196093 - 2 Oct 2025
Abstract
A lateral overflow integration capacitor (LOFIC) CMOS image sensor (CIS) can achieve high-dynamic-range (HDR) imaging by combining a low-conversion-gain (LCG) signal with a high-conversion-gain (HCG) signal. However, the signal-to-noise ratio (SNR) drops at the switching point from the HCG signal to the LCG signal due to the significant pixel noise in the LCG signal. To address this issue, a triple-gain LOFIC CIS with a middle-conversion-gain (MCG) signal has been introduced. In this work, we propose an area-efficient readout circuit for the triple-gain LOFIC CIS, using amplifier and capacitor sharing techniques to process the HCG, MCG, and LCG signals. A test chip of the proposed readout circuit was fabricated in a 0.18 μm CMOS process. The area overhead was only 7.6%, and the SNR drop was reduced by 8.05 dB compared to the readout circuit for a dual-gain LOFIC CIS.
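A minimal sketch of the kind of multi-gain HDR merge the abstract describes, assuming illustrative gain ratios and a 12-bit output range (neither is stated above). The SNR dip occurs at the hand-off points of such a merge, which the intermediate MCG step narrows.

```python
def merge_multi_gain(hcg, mcg, lcg, gains=(16.0, 4.0, 1.0), full_scale=4095):
    """Generic multi-gain HDR merge: take the highest-gain sample that is not
    saturated and refer it to low-gain units. Gain ratios and the 12-bit full
    scale are illustrative assumptions, not values from the paper."""
    for code, gain in zip((hcg, mcg, lcg), gains):
        if code < 0.9 * full_scale:      # leave headroom below saturation
            return code / gain
    return lcg / gains[-1]               # all paths saturated: fall back to LCG

print(merge_multi_gain(hcg=1200, mcg=310, lcg=80))      # dim pixel    -> HCG path
print(merge_multi_gain(hcg=4095, mcg=4090, lcg=1000))   # bright pixel -> LCG path
```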

18 pages, 12224 KB  
Article
A Phase-Adjustable Noise-Shaping SAR ADC for Mitigating Parasitic Capacitance Effects from PIP Capacitors
by Xuelong Ouyang, Hua Kuang, Dalin Kong, Zhengxi Cheng and Honghui Yuan
Sensors 2025, 25(19), 6029; https://doi.org/10.3390/s25196029 - 1 Oct 2025
Abstract
High parasitic capacitance from poly-insulator-poly capacitors in complementary metal oxide semiconductor (CMOS) processes presents a major bottleneck to achieving high-resolution successive approximation register (SAR) analog-to-digital converters (ADCs) in imaging systems. This study proposes a Phase-Adjustable SAR ADC that addresses this limitation through a reconfigurable architecture. The design utilizes a phase-adjustable logic unit to switch between a conventional SAR mode for high-speed operation and a noise-shaping (NS) SAR mode for high-resolution conversion, actively suppressing in-band quantization noise. An improved SAR logic unit facilitates the insertion of an adjustable phase while concurrently achieving an 86% area reduction in the core logic block. A prototype was fabricated and measured in a 0.35-µm CMOS process. In conventional mode, the ADC achieved a 7.69-bit effective number of bits at 2 MS/s. By activating the noise-shaping circuitry, performance was significantly enhanced to an 11.06-bit resolution, corresponding to a signal-to-noise-and-distortion ratio (SNDR) of 68.3 dB, at a 125 kS/s sampling rate. The results demonstrate that the proposed architecture effectively manages the trade-off between speed and accuracy, providing a practical method for realizing high-performance ADCs despite the inherent limitations of non-ideal passive components.
(This article belongs to the Section Sensing and Imaging)
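The resolution figures quoted above are consistent with the standard SNDR-to-ENOB conversion, ENOB = (SNDR - 1.76 dB) / 6.02; a quick check:

```python
def enob_from_sndr(sndr_db):
    """Standard conversion: ENOB = (SNDR - 1.76) / 6.02."""
    return (sndr_db - 1.76) / 6.02

print(f"{enob_from_sndr(68.3):.2f} bits")  # ~11.05, matching the ~11.06-bit figure above
```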

21 pages, 5572 KB  
Article
Real-Time Detection and Segmentation of the Iris At A Distance Scenarios Embedded in Ultrascale MPSoC
by Camilo Ruiz-Beltrán, Óscar Pons, Martín González-García and Antonio Bandera
Electronics 2025, 14(18), 3698; https://doi.org/10.3390/electronics14183698 - 18 Sep 2025
Abstract
Iris recognition is currently considered the most promising biometric method and has been applied in many fields. Current commercial and research systems typically use software solutions running on a dedicated computer, whose power consumption, size, and price are considerable. This paper presents a hardware-based embedded solution for real-time iris segmentation. From an algorithmic point of view, the system consists of two stages. The first employs a YOLOX detector trained on two classes: eyes and iris/pupil. The two classes overlap at the iris/pupil region, and this overlap is used to emphasise detection of the iris/pupil class. The second stage uses a lightweight U-Net network to segment the iris, applied only to the locations provided by the first stage. Designed to work in an Iris At A Distance (IAAD) scenario, the system includes quality parameters to discard low-contrast or low-sharpness detections. The whole system has been integrated on a single MultiProcessor System-on-Chip (MPSoC) using AMD’s Deep learning Processing Unit (DPU). This approach is capable of processing the more than 45 frames per second provided by a 16 Mpx CMOS digital image sensor. Experiments to determine the accuracy of the proposed system in terms of iris segmentation are performed on several publicly available databases, with satisfactory results.
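A minimal sketch of the two-stage detect-then-segment flow described above. Every function, threshold, and quality proxy here is a hypothetical placeholder, since the paper's quality metrics and model interfaces are not given in the abstract; detect and segment stand in for the YOLOX and U-Net models.

```python
import numpy as np

def contrast(crop):
    """Crude contrast proxy: normalised standard deviation of pixel values."""
    return float(np.std(crop)) / 255.0

def sharpness(crop):
    """Crude sharpness proxy: mean absolute second difference along rows."""
    return float(np.mean(np.abs(np.diff(crop.astype(float), n=2, axis=1)))) / 255.0

def process_frame(frame, detect, segment, min_contrast=0.05, min_sharpness=0.01):
    """Stage 1: detect iris/pupil boxes and drop low-quality crops (IAAD capture);
    Stage 2: run the lightweight segmenter only on the surviving crops."""
    masks = []
    for (x0, y0, x1, y1, label) in detect(frame):
        if label != "iris_pupil":
            continue
        crop = frame[y0:y1, x0:x1]
        if contrast(crop) < min_contrast or sharpness(crop) < min_sharpness:
            continue
        masks.append(segment(crop))
    return masks
```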

22 pages, 12949 KB  
Article
Accurate, Extended-Range Indoor Visible Light Positioning via High-Efficiency MPPM Modulation with Smartphone Multi-Sensor Fusion
by Dinh Quan Nguyen and Hoang Nam Nguyen
Photonics 2025, 12(9), 859; https://doi.org/10.3390/photonics12090859 - 27 Aug 2025
Abstract
Visible Light Positioning (VLP), leveraging Light-Emitting Diodes (LEDs) and smartphone CMOS cameras, provides a high-precision solution for indoor localization. However, existing systems face challenges in accuracy, latency, and robustness due to line-of-sight (LOS) limitations and inefficient signal encoding. To overcome these constraints, this paper introduces a real-time VLP framework that integrates Multi-Pulse Position Modulation (MPPM) with smartphone multi-sensor fusion. By employing MPPM, a high-efficiency encoding scheme, the proposed system transmits LED identifiers (LED-IDs) with reduced inter-symbol interference, enabling robust signal detection even under dynamic lighting conditions and at extended distances. The smartphone’s camera serves as the receiver, decoding the MPPM-encoded LED-IDs, while accelerometer and magnetometer data compensate for device orientation and motion-induced errors. Experimental results demonstrate that the MPPM-driven approach achieves a decoding success rate of over 97% at distances up to 2.4 m, while maintaining a frame processing rate of 30 FPS and sub-35 ms latency. Furthermore, the method reduces angular errors through sensor fusion, yielding 2D positioning accuracy below 10 cm and vertical errors under 16 cm across diverse smartphone orientations. The synergy of MPPM’s spectral efficiency and multi-sensor correction establishes a new benchmark for VLP systems, enabling scalable deployment in real-world environments without requiring complex infrastructure.
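MPPM encodes data by placing k optical pulses in n time slots per symbol, so each symbol carries floor(log2 C(n, k)) bits, which is where its efficiency advantage over single-pulse PPM comes from. The slot and pulse counts below are illustrative only; the paper's exact MPPM parameters are not stated in the abstract.

```python
from math import comb, floor, log2

def mppm_bits_per_symbol(n_slots, k_pulses):
    """MPPM: k pulses in n slots gives C(n, k) symbols, i.e. floor(log2(C(n, k))) bits."""
    return floor(log2(comb(n_slots, k_pulses)))

for n, k in [(8, 1), (8, 2), (16, 4)]:   # illustrative parameters only
    print(f"n={n}, k={k}: {mppm_bits_per_symbol(n, k)} bits/symbol")
```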

23 pages, 3846 KB  
Article
A Sea Surface Roughness Retrieval Model Using Multi Angle, Passive, Visible Spectrum Remote Sensing Images: Simulation and Analysis
by Mingzhu Song, Lizhou Li, Yifan Zhang, Xuechan Zhao and Junsheng Wang
Remote Sens. 2025, 17(17), 2951; https://doi.org/10.3390/rs17172951 - 25 Aug 2025
Abstract
Sea surface roughness (SSR) retrieval is a frontier topic in the field of ocean remote sensing, and SSR retrieval based on multi angle, passive, visible spectrum remote sensing images has been proven to have potential applications. Traditional multi angle retrieval models ignored the nonlinear relationship between radiation and digital signals, resulting in low accuracy in SSR retrieval using visible spectrum remote sensing images. Therefore, we analyze the transmission characteristics of signals and random noise in sea surface imaging, establish signal and noise transmission models for typical sea surface imaging visible spectrum remote sensing systems using Complementary Metal Oxide Semiconductor (CMOS) and Time Delay Integration-Charge Coupled Device (TDI-CCD) sensors, and propose a model for SSR retrieval using multi angle passive visible spectrum remote sensing images. The proposed model can effectively suppress the noise behavior in the imaging link and improve the accuracy of SSR retrieval. Simulation experiments show that, when simulating the retrieval of multi angle visible spectrum images obtained using CMOS or TDI-CCD imaging systems with four SSR levels of 0.02, 0.03, 0.04, and 0.05, the relative errors of the proposed model using two angles are reduced by 4.0%, 2.7%, 2.3%, and 2.0% and 6.5%, 4.3%, 3.7%, and 3.2%, compared with the relative errors of the model without considering noise behavior, which are 7.0%, 6.7%, 7.8%, and 9.0% and 9.5%, 8.3%, 9.0%, and 10.2%. When using more fitting data, the relative errors of the model are reduced by 5.0%, 2.7%, 2.5%, and 2.0% and 7.0%, 5.0%, 4.3%, and 3.2%, compared with the relative errors of the model without considering noise behavior, which are 8.5%, 7.0%, 8.0%, and 9.4%, and 10.0%, 8.7%, 9.3%, and 10.0%.

14 pages, 2144 KB  
Article
Analogs of the Prime Number Problem in a Shot Noise Suppression of the Soft-Reset Process
by Yutaka Hirose
Nanomaterials 2025, 15(17), 1297; https://doi.org/10.3390/nano15171297 - 22 Aug 2025
Abstract
The soft-reset process, or a sequence of charge emissions from a floating storage node through a transistor biased in the subthreshold condition, is modeled by a master (Kolmogorov–Bateman) equation. The Coulomb interaction energy after each one-charge emission leads to a stepwise potential increase, giving correlated emission rates represented by Boltzmann factors. The governing probability distribution function is of hypoexponential type, and its cumulants describe characteristics of the single-charge Coulomb interaction at room temperature on a mesoscopic scale. The cumulants are further extended into a complex domain. Starting from three fundamental assumptions, i.e., the generation of non-degenerate states due to single-charge Coulomb energy, the Markovian property of each emission event, and the independence of each state, a moment function is identified as a product of mutually prime elements (algebraically termed prime ideals) comprising the eigenvalues, or the lifetimes, of the emission states. Then, the algebraic structure of the moment function is found to be highly analogous to that of an integer uniquely factored into prime numbers. Treating the lifetimes as analogs of the prime numbers, two types of zeta functions are constructed. Standard analyses of the zeta functions, analogous to the prime number problem or the Riemann Hypothesis, are performed. For the zeta functions, the analyticity and poles are specified, and the functional equations are derived. Also, the zeta functions are found to be equivalent to the analytic extension of the cumulants. Finally, between the number of emitted charges and the lifetime, a logarithmic relation analogous to the prime number theorem is derived.
(This article belongs to the Special Issue The Interaction of Electron Phenomena on the Mesoscopic Scale)
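For orientation, the classical template being echoed is the Euler product of the Riemann zeta function; by analogy, a zeta-type function can be assembled from the emission-state lifetimes τ_k in place of the primes. The form below is only the generic shape of such a construction and may differ from the paper's exact definitions.

```latex
\zeta(s) = \prod_{p\ \mathrm{prime}} \left(1 - p^{-s}\right)^{-1}
\qquad\longleftrightarrow\qquad
\zeta_{\tau}(s) \sim \prod_{k} \left(1 - \tau_k^{-s}\right)^{-1}
```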

22 pages, 1904 KB  
Article
FPGA–STM32-Embedded Vision and Control Platform for ADAS Development on a 1:5 Scale Vehicle
by Karen Roa-Tort, Diego A. Fabila-Bustos, Macaria Hernández-Chávez, Daniel León-Martínez, Adrián Apolonio-Vera, Elizama B. Ortega-Gutiérrez, Luis Cadena-Martínez, Carlos D. Hernández-Lozano, César Torres-Pérez, David A. Cano-Ibarra, J. Alejandro Aguirre-Anaya and Josué D. Rivera-Fernández
Vehicles 2025, 7(3), 84; https://doi.org/10.3390/vehicles7030084 - 17 Aug 2025
Abstract
This paper presents the design, development, and experimental validation of a low-cost, modular, and scalable Advanced Driver Assistance System (ADAS) platform intended for research and educational purposes. The system integrates embedded computer vision and electronic control using an FPGA for accelerated real-time image processing and an STM32 microcontroller for sensor data acquisition and actuator management. The YOLOv3-Tiny model is implemented to enable efficient pedestrian and vehicle detection under hardware constraints, while additional vision algorithms are used for lane line detection, ensuring a favorable trade-off between accuracy and processing speed. The platform is deployed on a 1:5 scale gasoline-powered vehicle, offering a safe and cost-effective testbed for validating ADAS functionalities, such as lane tracking, pedestrian and vehicle identification, and semi-autonomous navigation. The methodology includes the integration of a CMOS camera, an FPGA development board, and various sensors (LiDAR, ultrasonic, and Hall-effect), along with synchronized communication protocols to ensure real-time data exchange between vision and control modules. A wireless graphical user interface (GUI) enables remote monitoring and teleoperation. Experimental results show competitive detection accuracy—exceeding 94% in structured environments—and processing latencies below 70 ms per frame, demonstrating the platform’s effectiveness for rapid prototyping and applied training. Its modularity and affordability position it as a powerful tool for advancing ADAS research and education, with high potential for future expansion to full-scale autonomous vehicle applications.
(This article belongs to the Special Issue Design and Control of Autonomous Driving Systems)

19 pages, 11068 KB  
Article
A Deep Learning Approach for Classifying Developmental Stages of Ixodes ricinus Ticks on Images Captured Using a Microscope’s High-Resolution CMOS Sensor
by Aleksandra Marzec, Anna Filipowska, Oliwia Humeniuk, Wojciech Filipowski and Paweł Raif
Sensors 2025, 25(16), 5038; https://doi.org/10.3390/s25165038 - 14 Aug 2025
Abstract
This article presents a deep learning approach for classifying the developmental stages (larvae, nymphs, adult females, and adult males) of Ixodes ricinus ticks, the most common tick species in Europe and a major vector of tick-borne pathogens, including Borrelia burgdorferi, Anaplasma phagocytophilum, and tick-borne encephalitis virus (TBEV). Each developmental stage plays a different role in disease transmission, with nymphs considered the most epidemiologically relevant stage due to their small size and high prevalence. We developed a convolutional neural network (CNN) model trained on a dataset of microscopic tick images collected in the area of Upper Silesia, Poland. Grad-CAM, an explainable AI (XAI) technique, was used to identify the regions of the image that most influenced the model’s decisions. This work is the first to utilize a CNN model for the identification of European tick fauna stages. Compared to existing solutions focused on North American tick species, our model addresses the specific challenge of distinguishing developmental stages within I. ricinus. This solution has the potential to be a valuable tool in entomology, healthcare, and tick-borne disease management.
(This article belongs to the Section Sensing and Imaging)
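The abstract does not name the network architecture, so the sketch below uses a small pretrained ResNet-18 with a four-class head (larva, nymph, adult female, adult male) purely as an illustrative stand-in for a CNN stage classifier; it is not the authors' model.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative stand-in only: the backbone choice is an assumption, not the paper's design.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 4)   # 4 developmental stages

dummy = torch.randn(1, 3, 224, 224)             # one resized microscope image
print(model(dummy).softmax(dim=1))              # per-stage class probabilities
```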

15 pages, 2685 KB  
Article
High-Speed 1024-Pixel CMOS Electrochemical Imaging Sensor with 40,000 Frames per Second for Dopamine and Hydrogen Peroxide Imaging
by Kevin A. White, Matthew A. Crocker and Brian N. Kim
Electronics 2025, 14(16), 3207; https://doi.org/10.3390/electronics14163207 - 13 Aug 2025
Abstract
Electrochemical sensing arrays enable the spatial study of dopamine levels throughout brain slices, the diffusion of electroactive molecules, and neurotransmitter secretion from single cells. The integration of complementary metal-oxide semiconductor (CMOS) devices in the development of electrochemical sensing devices enables large-scale parallel recordings, providing high throughput for drug screening studies, brain–machine interfaces, and single-cell electrophysiology. In this paper, an electrochemical sensor capable of recording at 40,000 frames per second using a CMOS sensor array with 1024 electrochemical detectors and a custom field-programmable gate array data acquisition system is detailed. A total of 1024 on-chip electrodes are monolithically integrated onto the designed CMOS chip through post-CMOS fabrication. Each electrode is paired with a dedicated transimpedance amplifier, providing 1024 parallel electrochemical sensors for high-throughput studies. To support the level of data generated by the electrochemical device, a powerful data acquisition system is designed to operate the sensor array as well as digitize and transmit the output of the CMOS chip. Using the presented electrochemical sensing system, the diffusion of both dopamine and hydrogen peroxide is successfully recorded at 40,000 frames per second across the 32 × 32 electrochemical detector array.
(This article belongs to the Special Issue Lab-on-Chip Biosensors)
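The data-acquisition burden implied by the figures above is easy to make concrete; the sample word width below is an assumption, since the ADC resolution is not stated in the abstract.

```python
channels = 1024          # 32 x 32 detector array (from the abstract)
frame_rate = 40_000      # frames per second (from the abstract)
bits_per_sample = 16     # assumption: ADC word width is not stated above

samples_per_s = channels * frame_rate
print(f"{samples_per_s / 1e6:.2f} Msamples/s")                       # 40.96
print(f"{samples_per_s * bits_per_sample / 8 / 1e6:.1f} MB/s raw")   # ~81.9 at 16 bit
```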

13 pages, 3882 KB  
Article
Thermal Damage Characterization of Detector Induced by Nanosecond Pulsed Laser Irradiation
by Zhilong Jian, Weijing Zhou, Hao Chang, Yingjie Ma, Xiaoyuan Quan and Zikang Wang
Photonics 2025, 12(8), 790; https://doi.org/10.3390/photonics12080790 - 5 Aug 2025
Abstract
Experimental and simulation analysis was conducted on the effects of 532 nm nanosecond laser-induced thermal damage on the front-side illuminated CMOS detector. The study examined CMOS detector output images at different stages of damage, including point damage, line damage, and complete failure, and correlated these with microscopic structural changes observed through optical and scanning electron microscopy. A finite element model was used to study the thermal–mechanical coupling effect during laser irradiation. The results indicated that at a laser energy density of 78.9 mJ/cm², localized melting occurs within photosensitive units in the epitaxial layer, manifesting as an irreversible white bright spot in the detector output image (point damage). When the energy density is further increased to 241.9 mJ/cm², metal routings across multiple pixel units melt, resulting in horizontal and vertical black lines in the output image (line damage). Upon reaching 2005.4 mJ/cm², the entire sensor area fails to output any valid image due to thermal stress-induced delamination of the silicon dioxide insulation layer, with cracks propagating to the metal routing and epitaxial layers, ultimately causing structural deformation and device failure (complete failure).
(This article belongs to the Section Lasers, Light Sources and Sensors)

16 pages, 1702 KB  
Article
Mobile and Wireless Autofluorescence Detection Systems and Their Application for Skin Tissues
by Yizhen Wang, Yuyang Zhang, Yunfei Li and Fuhong Cai
Biosensors 2025, 15(8), 501; https://doi.org/10.3390/bios15080501 - 3 Aug 2025
Cited by 1
Abstract
Skin autofluorescence (SAF) detection technology represents a noninvasive, convenient, and cost-effective optical detection approach. It can be employed for the differentiation of various diseases, including metabolic diseases and dermatitis, as well as for monitoring treatment efficacy. Distinct from diffuse reflection signals, the autofluorescence signals of biological tissues are relatively weak, making them challenging to capture with photoelectric sensors. Moreover, the absorption and scattering properties of biological tissues lead to substantial attenuation of the autofluorescence signal, thereby worsening the signal-to-noise ratio. This has limited the development and application of compact autofluorescence detection systems. In this study, a compact LED light source and a CMOS sensor were utilized as the excitation and detection devices for skin tissue autofluorescence, respectively, to construct a mobile and wireless skin tissue autofluorescence detection system. This system achieves detection of skin tissue autofluorescence with a high signal-to-noise ratio, driven by a simple power supply and a single-chip microcontroller. The detection time is less than 0.1 s. To enhance the stability of the system, a pressure sensor was incorporated. This pressure sensor monitors the pressure exerted by the skin on the detection system during the testing process, thereby improving the accuracy of the detection signal. The developed system features a compact structure, user-friendliness, and a favorable signal-to-noise ratio, holding significant application potential in future assessments of skin aging and the risk of diabetic complications.

17 pages, 4504 KB  
Article
A 1000 fps High-Dynamic-Range Global Shutter CMOS Image Sensor with Full Thermometer Code Current-Steering Ramp
by Liqiang Han, Ganlin Cheng, Xu Zhang, Gengyun Wang, Weijun Pan, Yao Yao, Guihai Yu, Ruimeng Zhang, Shuaichen Mu, Songbo Wu, Hongbo Bu, Liqun Dai, Ben Fan, Dan Wang, Wei Fan and Ruiming Chen
Sensors 2025, 25(14), 4483; https://doi.org/10.3390/s25144483 - 18 Jul 2025
Abstract
We present a 1024 × 512, 1000 fps, high-dynamic-range global shutter CMOS image sensor. The pixel is based on a voltage-domain global shutter architecture, featuring a pitch of 24 μm × 24 μm. Both high-gain and low-gain signals can be captured within a single frame. The combined dynamic range is 95 dB, and the full well capacity is 620 ke−. In this paper, we analyze the pixel noise performance as well as the non-linearity and image lag that arise from parasitic capacitance in the pixel. The ramp generator is based on a 12-bit full thermometer code current-steering DAC with high driving capability. We discuss the design considerations for the ramp generator, particularly addressing the phenomenon of non-linear response. Finally, the comparator design and the column readout noise are analyzed.
(This article belongs to the Section Electronic Sensors)
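Assuming the usual definition DR = 20·log10(full-well capacity / temporal noise floor), the figures quoted above imply a noise floor of roughly 11 e−; the definition is an assumption, since the abstract does not state how the 95 dB value was computed.

```python
full_well_e = 620_000   # combined full well capacity (from the abstract)
dr_db = 95.0            # combined dynamic range (from the abstract)

noise_floor_e = full_well_e / (10 ** (dr_db / 20))   # assumes DR = 20*log10(FWC / noise)
print(f"implied noise floor ~ {noise_floor_e:.1f} e-")
```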

16 pages, 2521 KB  
Article
A Multimodal CMOS Readout IC for SWIR Image Sensors with Dual-Mode BDI/DI Pixels and Column-Parallel Two-Step Single-Slope ADC
by Yuyan Zhang, Zhifeng Chen, Yaguang Yang, Huangwei Chen, Jie Gao, Zhichao Zhang and Chengying Chen
Micromachines 2025, 16(7), 773; https://doi.org/10.3390/mi16070773 - 30 Jun 2025
Abstract
This paper proposes a dual-mode CMOS analog front-end (AFE) circuit for short-wave infrared (SWIR) image sensors, which integrates a hybrid readout circuit (ROIC) and a 12-bit two-step single-slope analog-to-digital converter (TS-SS ADC). The ROIC dynamically switches between buffered-direct-injection (BDI) and direct-injection (DI) modes, thus balancing injection efficiency against power consumption. While the DI structure offers simplicity and low power, it suffers from unstable biasing and reduced injection efficiency under high background currents. Conversely, the BDI structure enhances injection efficiency and bias stability via an input buffer but incurs higher power consumption. To address this trade-off, a dual-mode injection architecture with mode-switching transistors is implemented. Mode selection is executed in-pixel via a low-leakage transmission gate and coordinated by the column timing controller, enabling low-current pixels to operate in low-noise BDI mode, whereas high-current pixels revert to the low-power DI mode. The TS-SS ADC employs a four-terminal comparator and dynamic reference voltage compensation to mitigate charge leakage and offset, which improves signal-to-noise ratio (SNR) and linearity. The prototype occupies 2.1 mm × 2.88 mm in a 0.18 µm CMOS process and serves a 64 × 64 array. The AFE achieves a dynamic range of 75.58 dB, noise of 249.42 μV, and 81.04 mW power consumption.
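The DI-versus-BDI trade-off described above is commonly summarized with the first-order injection-efficiency expressions below, where g_m is the injection transistor transconductance, R_d the photodiode shunt resistance, and A the buffer gain; this is the generic textbook model, not the paper's analysis.

```latex
\eta_{\mathrm{DI}} \approx \frac{g_m R_d}{1 + g_m R_d},
\qquad
\eta_{\mathrm{BDI}} \approx \frac{A\, g_m R_d}{1 + A\, g_m R_d}
```

In this picture the buffer multiplies the effective loop gain, pushing the injection efficiency toward unity at the cost of the buffer's bias current, which is the power/efficiency trade-off the dual-mode pixel switches between.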

14 pages, 3205 KB  
Article
A 209 ps Shutter-Time CMOS Image Sensor for Ultra-Fast Diagnosis
by Houzhi Cai, Zhaoyang Xie, Youlin Ma and Lijuan Xiang
Sensors 2025, 25(12), 3835; https://doi.org/10.3390/s25123835 - 19 Jun 2025
Cited by 1
Abstract
A conventional microchannel plate framing camera is typically utilized for inertial confinement fusion diagnosis. However, as a vacuum electronic device, it has inherent limitations, such as a complex structure and the inability to achieve single-line-of-sight imaging. To address these challenges, a CMOS image sensor that can be seamlessly integrated with an electronic pulse broadening system can provide a viable alternative to the microchannel plate detector. This paper introduces the design of an 8 × 8 pixel-array ultrashort shutter-time single-framing CMOS image sensor, which leverages silicon epitaxial processing and a 0.18 μm standard CMOS process. The focus of this study is on the photodiode and the readout pixel-array circuit. The photodiode, designed using the silicon epitaxial process, achieves a quantum efficiency exceeding 30% in the visible light band at a bias voltage of 1.8 V, with a temporal resolution greater than 200 ps for visible light. The readout pixel-array circuit, which is based on the 0.18 μm standard CMOS process, incorporates 5T structure pixel units, voltage-controlled delayers, clock trees, and row-column decoding and scanning circuits. Simulations of the pixel circuit demonstrate an optimal temporal resolution of 60 ps. Under the shutter condition with the best temporal resolution, the maximum output swing of the pixel circuit is 448 mV, and the output noise is 77.47 μV, resulting in a dynamic range of 75.2 dB for the pixel circuit; the small-signal responsivity is 1.93 × 10⁻⁷ V/e, and the full-well capacity is 2.3 Me. The maximum power consumption of the 8 × 8 pixel-array and its control circuits is 0.35 mW. Considering both the photodiode and the pixel circuit, the proposed CMOS image sensor achieves a temporal resolution better than 209 ps.
(This article belongs to the Special Issue Ultrafast Optoelectronic Sensing and Imaging)
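The pixel-circuit dynamic range quoted above follows directly from the stated swing and noise figures, DR = 20·log10(448 mV / 77.47 μV); a quick check:

```python
import math

swing_v = 448e-3     # maximum output swing (from the abstract)
noise_v = 77.47e-6   # output noise (from the abstract)

print(f"DR ~ {20 * math.log10(swing_v / noise_v):.1f} dB")   # ~75.2 dB, as quoted above
```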