Search Results (41)

Search Parameters:
Keywords = CMOS color sensors

18 pages, 5464 KB  
Article
Research on Flame Temperature Measurement Technique Combining Spectral Analysis and Two-Color Pyrometry
by Pan Pei, Xiaojian Hao, Shenxiang Feng, Tong Wei and Chenyang Xu
Appl. Sci. 2025, 15(11), 5864; https://doi.org/10.3390/app15115864 - 23 May 2025
Viewed by 680
Abstract
This work presents a method for measuring flame temperatures through an imaging technique that combines spectral analysis with two-color pyrometry. Initially, we employed Laser-Induced Breakdown Spectroscopy (LIBS) to analyze the radiation spectrum of nitrocellulose, selecting 694 nm and 768 nm as the two spectral lines for temperature measurement. Subsequently, we constructed a temperature measurement system utilizing two sCMOS cameras and conducted calibration within the range of 600 to 1000 °C, achieving a maximum temperature measurement uncertainty of 3.43%. Finally, we successfully performed two-dimensional temperature field detection and imaging of nitrocellulose flames of varying qualities, achieving a flame image resolution of 2048 (H) × 2048 (V). In comparison to traditional two-color infrared thermometers and Tunable Diode Laser Absorption Spectroscopy (TDLAS) technology, the maximum relative temperature measurement error was 2.1%. This work provides technical insights into the development of high-resolution, low-cost flame temperature imaging technology applicable across a wide range of fields.
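The two-color principle behind this measurement can be sketched in a few lines. Under the Wien approximation and a gray-body assumption (equal emissivity at both wavelengths), the intensity ratio at the two LIBS-selected lines inverts to a closed-form temperature. Everything here other than the 694 nm and 768 nm lines is an illustrative assumption, not the authors' implementation:

```python
import math

# Second radiation constant from Planck's law (m*K).
C2 = 1.4388e-2

def wien_intensity(lam, T, eps=1.0):
    """Spectral intensity under the Wien approximation (arbitrary units)."""
    return eps * lam ** -5 * math.exp(-C2 / (lam * T))

def ratio_to_temperature(ratio, lam1, lam2):
    """Invert the two-color intensity ratio I(lam1)/I(lam2) to temperature,
    assuming a gray body (equal emissivities at both wavelengths)."""
    return C2 * (1 / lam2 - 1 / lam1) / math.log(ratio * (lam1 / lam2) ** 5)

# Wavelengths selected in the paper via LIBS analysis; temperature is hypothetical.
lam1, lam2 = 694e-9, 768e-9
T_true = 1200.0  # K
ratio = wien_intensity(lam1, T_true) / wien_intensity(lam2, T_true)
T_est = ratio_to_temperature(ratio, lam1, lam2)
```

Applying the same inversion to each pixel pair of the two sCMOS frames would yield a two-dimensional temperature map of the kind the paper describes.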

11 pages, 3253 KB  
Article
Development of a Smartphone-Linked Immunosensing System for Oxytocin Determination
by Miku Sarubo, Yoka Suzuki, Yuka Numazaki and Hiroyuki Kudo
Biosensors 2025, 15(4), 261; https://doi.org/10.3390/bios15040261 - 18 Apr 2025
Viewed by 517
Abstract
We report an optical immunosensing system for oxytocin (OXT) based on image analysis of color reactions in an enzyme-linked immunosorbent assay (ELISA). We employed a miniaturized optical immunosensing unit that was functionally connected to an LED and a smartphone camera. Our system measures OXT levels using a metric called the RGBscore, which is derived from the red, green, and blue (RGB) information in the captured images. Because the RGBscore is calculated regressively using a brute-force method, the approach can be applied to smartphones with various CMOS image sensors and firmware. The lower detection limit was determined to be 5.26 pg/mL, and the measurement results showed a strong correlation (r = 0.972) with those obtained from conventional ELISA. These results suggest the system's potential for application in a simplified health management system for individuals.
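The abstract does not specify the exact form of the RGBscore, but one plausible reading of "brute-force" regression is a grid search over channel weights that maximizes correlation with calibration concentrations. The sketch below follows that reading; the function names and calibration data are hypothetical:

```python
from itertools import product

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def fit_rgb_score(rgb_means, concentrations, steps=11):
    """Brute-force search over channel weights in [0, 1] for the weighted
    RGB combination whose score correlates best (in |r|) with the
    known calibration concentrations."""
    grid = [i / (steps - 1) for i in range(steps)]
    best = None
    for wr, wg, wb in product(grid, repeat=3):
        scores = [wr * r + wg * g + wb * b for r, g, b in rgb_means]
        if len(set(scores)) < 2:      # constant score, correlation undefined
            continue
        r = abs(pearson(scores, concentrations))
        if best is None or r > best[0]:
            best = (r, (wr, wg, wb))
    return best  # (|r|, weights)

# Synthetic calibration: the blue channel tracks concentration linearly.
calib_rgb = [(120, 80, 10 + 5 * c) for c in range(6)]
calib_conc = list(range(6))
r_best, weights = fit_rgb_score(calib_rgb, calib_conc)
```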
(This article belongs to the Special Issue Biosensors Based on Microfluidic Devices—2nd Edition)

17 pages, 1973 KB  
Article
Research on Water Quality Chemical Oxygen Demand Detection Using Laser-Induced Fluorescence Image Processing
by Ying Guo, Zhaoshuo Tian, Zongjie Bi, Xiaohua Che and Songlin Yin
Sensors 2025, 25(5), 1404; https://doi.org/10.3390/s25051404 - 25 Feb 2025
Viewed by 770
Abstract
Chemical Oxygen Demand (COD) serves as a crucial metric for assessing the extent of water pollution attributable to organic substances. This study introduces an innovative approach for the detection of low-concentration COD in aqueous environments through the application of Laser-Induced Fluorescence (LIF) image processing. The technique employs an image sensor to capture fluorescence image data generated by organic compounds in water when excited by ultraviolet laser radiation. Subsequently, the COD value, indicative of the concentration of organic matter in the water, is derived via image processing techniques. Utilizing this methodology, an LIF image processing COD detection system has been developed. The system is primarily composed of a CMOS image sensor, an STM32 microprocessor, a laser emission module, and a display module. In this study, the system was employed to detect mixed solutions of sodium humate and glucose at varying concentrations, resulting in the acquisition of corresponding fluorescence images. By isolating color channels and processing the image data features, variations in RGB color characteristics were analyzed. The Partial Least Squares Regression (PLSR) analysis method was utilized to develop a predictive model for COD concentration values based on the average RGB color feature values from the characteristic regions of the fluorescence images. Within the COD concentration range of 0–12 mg/L, the system demonstrated a detection relative error of less than 10%. In summary, the system designed in this research, utilizing the LIF image processing method, exhibits high sensitivity, robust stability, miniaturization, and non-contact detection capabilities for low-concentration COD measurement. It is well-suited for rapid, real-time online water quality monitoring.
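The paper fits a PLSR model on mean RGB features; as a minimal stand-in for that pipeline, the sketch below extracts a single scalar feature (mean intensity of one channel over the characteristic region) and fits a 1-D least-squares calibration line to known COD values. The feature values and COD pairs are hypothetical, and ordinary least squares replaces PLSR purely for brevity:

```python
def mean_channel(region):
    """Mean intensity of one color channel over the characteristic
    region, given as a list of pixel rows."""
    vals = [v for row in region for v in row]
    return sum(vals) / len(vals)

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical calibration: mean red-channel value vs. known COD (mg/L),
# spanning the paper's 0-12 mg/L range.
features = [12.0, 30.5, 50.2, 71.1, 90.4]
cod = [0.0, 3.0, 6.0, 9.0, 12.0]
slope, intercept = fit_line(features, cod)
predicted = [slope * f + intercept for f in features]
```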
(This article belongs to the Section Optical Sensors)

12 pages, 2657 KB  
Article
A Compact Fluorescence System for Tumor Detection: Performance and Integration Potential
by Jean Pierre Ndabakuranye, John Raschke, Preston Avagiannis and Arman Ahnood
Biosensors 2025, 15(2), 95; https://doi.org/10.3390/bios15020095 - 7 Feb 2025
Viewed by 1231
Abstract
Fluorescence-guided surgery (FGS) is an innovative technique for accurately localizing tumors during surgery, particularly valuable in brain tumor detection. FGS uses advanced spectral and imaging tools to provide precise, quantitative fluorescence measurements that enhance surgical accuracy. However, the current challenge with these advanced tools lies in their lack of miniaturization, which limits their practicality in complex surgical environments. In this study, we present a miniaturized fluorescence detection system, developed using state-of-the-art CMOS color sensors, to overcome this challenge and improve brain tumor localization. Our 3.1 × 3 mm multispectral sensor platform measures fluorescence intensity ratios at 635 nm and 514 nm, producing a high-resolution fluorescence distribution map for a 16 mm × 16 mm area. This device shows a high correlation (R² > 0.98) with standard benchtop spectrometers, confirming its accuracy for real-time, on-chip fluorescence detection. With its compact size, our system has strong potential for integration with existing handheld surgical tools, aiming to improve outcomes in tumor resection and enhance intraoperative tumor visualization.
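The core measurement here is a per-pixel ratio of two spectral channels. A minimal sketch of such a ratio map, using nested lists as stand-ins for the 635 nm and 514 nm channel images (the data and the zero-division guard are illustrative assumptions, not the device's processing chain):

```python
def ratio_map(img_635, img_514, eps=1e-9):
    """Per-pixel 635 nm / 514 nm intensity ratio; higher values flag
    red-shifted fluorescence of the kind targeted in FGS."""
    return [
        [a / (b + eps) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(img_635, img_514)
    ]

# 3x3 synthetic channels: the center pixel is strongly red-shifted.
red_ch = [[1, 1, 1], [1, 9, 1], [1, 1, 1]]
green_ch = [[2, 2, 2], [2, 3, 2], [2, 2, 2]]
rmap = ratio_map(red_ch, green_ch)
```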
(This article belongs to the Special Issue Advanced Fluorescence Biosensors)

16 pages, 8593 KB  
Article
Smart Machine Vision System to Improve Decision-Making on the Assembly Line
by Carlos Americo de Souza Silva and Edson Pacheco Paladini
Machines 2025, 13(2), 98; https://doi.org/10.3390/machines13020098 - 27 Jan 2025
Cited by 2 | Viewed by 1786
Abstract
Technological advances in the production of printed circuit boards (PCBs) are increasing the number of components inserted on the surface. This has led the electronics industry to seek improvements in its inspection processes, often making it necessary to increase the level of automation on the production line. The use of machine vision for quality inspection within manufacturing processes has increasingly supported decision making in the approval or rejection of products that fall outside established quality standards. This study proposes a hybrid smart-vision inspection system with a machine vision concept and vision sensor equipment to verify 24 components and eight screw threads. The goal of this study is to increase automated inspection reliability and reduce non-conformity rates in the manufacturing process on the assembly line of automotive products using machine vision. The system uses a camera to collect real-time images of the assembly fixtures, which are connected to a CMOS color vision sensor. The method is highly accurate in complex industrial environments and demonstrates practical feasibility and effectiveness. The results indicate high performance in the failure mode defined during this study, obtaining the best inspection performance through a strategy using Vision Builder for automated inspection. This approach reduced the action priority by improving the failure mode and effect analysis (FMEA) method.
(This article belongs to the Topic Smart Production in Terms of Industry 4.0 and 5.0)

14 pages, 2037 KB  
Article
Design of a Deep Learning-Based Metalens Color Router for RGB-NIR Sensing
by Hua Mu, Yu Zhang, Zhenyu Liang, Haoqi Gao, Haoli Xu, Bingwen Wang, Yangyang Wang and Xing Yang
Nanomaterials 2024, 14(23), 1973; https://doi.org/10.3390/nano14231973 - 8 Dec 2024
Cited by 1 | Viewed by 1355
Abstract
Metalenses can achieve arbitrary light modulation by controlling the amplitude, phase, and polarization of the incident waves and have been applied across various fields. This paper presents a color router designed based on metalenses, capable of effectively separating spectra from visible light to near-infrared light. Traditional design methods for metalenses require extensive simulations, making them time-consuming. In this study, we propose a deep learning network capable of forward prediction across a broad wavelength range, combined with a particle swarm optimization algorithm, to design metalenses efficiently. The simulation results align closely with theoretical predictions. The designed color router can simultaneously meet the theoretical transmission phase of the target spectra, specifically for red, green, blue, and near-infrared light, and focus them into designated areas. Notably, the optical efficiency of this design reaches 40%, significantly surpassing the efficiency of traditional color filters.

8 pages, 3761 KB  
Proceeding Paper
Preservation and Archiving of Historic Murals Using a Digital Non-Metric Camera
by Suhas Muralidhar and Ashutosh Bhardwaj
Eng. Proc. 2024, 82(1), 60; https://doi.org/10.3390/ecsa-11-20519 - 26 Nov 2024
Cited by 1 | Viewed by 543
Abstract
Digital non-metric cameras with high-resolution capabilities are being used in various domains such as digital heritage, artifact documentation, art conservation, and engineering applications. In this study, a novel approach combining close-range photogrammetry (CRP) and mapping techniques is used to capture the depth of a mural digitally, serving as a database for the preservation and archiving of historic murals. The open hall next to the main sanctuary of the Virupaksha temple in Hampi, Karnataka, India, which is a UNESCO World Heritage site, depicts cultural events on a mural-covered ceiling. A mirrorless Sony Alpha 7 III camera with a full-frame 24 MP CMOS sensor, fitted with 50 mm and 24 mm lenses, was used to acquire digital photographs with an image size of 6000 × 6000 pixels. The suggested framework incorporates five main steps: data acquisition, color correction, image mosaicking, orthorectification, and image filtering. The results show a high level of accuracy and precision attained during the image capture and processing steps. A comparative study was performed in which the 24 mm lens orthoimage resulted in an image size of 9131 × 14,910 pixels and a pixel size of 1.05 mm, whereas the 50 mm lens produced a 14,283 × 21,676-pixel image and a pixel size of 0.596 mm of the mural on the ceiling. This degree of high spatial resolution is essential for maintaining the fine details of the artwork in the digital documentation as well as its historical context, subtleties, and painting techniques. The study’s findings demonstrate the effectiveness of using digital sensors with the close-range photogrammetry (CRP) technique as a useful method for recording and preserving historical ceiling murals.
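The reported pixel sizes on the mural follow from simple pinhole geometry: the ground sample distance scales with sensor pixel pitch and camera-to-subject distance, and inversely with focal length, which is why the 50 mm lens resolves finer detail than the 24 mm lens at the same distance. A sketch under assumed values (the camera-to-ceiling distance here is hypothetical; only the focal lengths and sensor class come from the abstract):

```python
def ground_sample_distance(pixel_pitch_m, focal_length_m, distance_m):
    """Footprint of one pixel on the object plane for a pinhole camera model."""
    return pixel_pitch_m * distance_m / focal_length_m

# Full-frame 24 MP sensor: roughly 35.8 mm of width over 6000 pixels.
pitch = 35.8e-3 / 6000          # ~5.97 um pixel pitch (assumed)
distance = 4.5                  # hypothetical camera-to-ceiling distance (m)
gsd_24 = ground_sample_distance(pitch, 24e-3, distance)
gsd_50 = ground_sample_distance(pitch, 50e-3, distance)
```

Whatever the actual distance, the ratio of the two pixel sizes is fixed at 50/24 ≈ 2.08, close to the 1.05 mm / 0.596 mm ≈ 1.76 reported (the two image sets need not share an identical distance).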

17 pages, 2884 KB  
Article
On-Chip Data Reduction and Object Detection for a Feature-Extractable CMOS Image Sensor
by Yudai Morikaku, Ryuichi Ujiie, Daisuke Morikawa, Hideki Shima, Kota Yoshida and Shunsuke Okura
Electronics 2024, 13(21), 4295; https://doi.org/10.3390/electronics13214295 - 31 Oct 2024
Cited by 1 | Viewed by 1225
Abstract
In order to improve image recognition technologies in an IoT environment, we propose a data reduction scheme for a feature-extractable CMOS image sensor and present simulation results for object recognition using feature data. We evaluated the accuracy of the simulated feature data in object recognition based on YOLOX trained with a feature dataset. According to our simulation results, the obtained object recognition accuracy was 56.6% with the large-scale COCO dataset, even though the amount of data was reduced by 97.7% compared to conventional RGB color images. When the dataset was replaced with the RAISE RAW image dataset for more accurate simulation, the object recognition accuracy improved to 76.3%. Furthermore, the feature-extractable CMOS image sensor can switch its operation mode between RGB color image mode and feature data mode. When the trigger for switching from feature data mode to RGB color image mode was set to the detection of a large-sized person, the feature data achieved an accuracy of 93.5% with the COCO dataset.
(This article belongs to the Section Computer Science & Engineering)

10 pages, 4937 KB  
Article
Silicon Nanowire Phototransistor Arrays for CMOS Image Sensor Applications
by Hyunsung Jun, Johyeon Choi and Jinyoung Hwang
Sensors 2023, 23(24), 9824; https://doi.org/10.3390/s23249824 - 14 Dec 2023
Viewed by 2347
Abstract
This paper introduces a new design of silicon nanowire (Si NW) phototransistor (PT) arrays conceived explicitly for improved CMOS image sensor performance, and comprehensive numerical investigations clarify the characteristics of the proposed devices. Each unit within this array architecture features a top-layer vertical Si NW optimized for the maximal absorption of incoming light across the visible spectrum. This absorbed light generates carriers, efficiently injected into the emitter–base junction of an underlying npn bipolar junction transistor (BJT). This process induces proficient amplification of the output collector current. By meticulously adjusting the diameters of the NWs, the PTs are tailored to exhibit distinct absorption characteristics, thus delineating the visible spectrum’s blue, green, and red regions. This specialization ensures enriched color fidelity, a sought-after trait in imaging devices. Notably, the synergetic combination of the Si NW and the BJT augments the electrical response under illumination, boasting a quantum efficiency exceeding 10. In addition, by refining parameters like the height of the NW and gradient doping depth, the proposed PTs deliver enhanced color purity and amplified output currents.
(This article belongs to the Special Issue Recent Advances in CMOS Image Sensor)

24 pages, 13986 KB  
Article
A 3.0 µm Pixels and 1.5 µm Pixels Combined Complementary Metal-Oxide Semiconductor Image Sensor for High Dynamic Range Vision beyond 106 dB
by Satoko Iida, Daisuke Kawamata, Yorito Sakano, Takaya Yamanaka, Shohei Nabeyoshi, Tomohiro Matsuura, Masahiro Toshida, Masahiro Baba, Nobuhiko Fujimori, Adarsh Basavalingappa, Sungin Han, Hidetoshi Katayama and Junichiro Azami
Sensors 2023, 23(21), 8998; https://doi.org/10.3390/s23218998 - 6 Nov 2023
Cited by 3 | Viewed by 2831
Abstract
We propose a new concept image sensor suitable for viewing and sensing applications. This is a report of a CMOS image sensor with a pixel architecture consisting of a 1.5 μm pixel with four-floating-diffusions-shared pixel structures and a 3.0 μm pixel with an in-pixel capacitor. These pixels are four small quadrate pixels and one big square pixel, also called quadrate–square pixels. They are arranged in a staggered pitch array. The 1.5 μm pixel pitch allows for a resolution high enough to recognize distant road signs. The 3 μm pixel with intra-pixel capacitance provides two types of signal outputs: a low-noise signal with high conversion efficiency and a highly saturated signal output, resulting in a high dynamic range (HDR). Two types of signals with long exposure times are read out from the vertical pixel, and four types of signals are read out from the horizontal pixel. In addition, two signals with short exposure times are read out again from the square pixel. A total of eight different signals are read out. This allows two rows to be read out simultaneously while reducing motion blur. This architecture achieves both an HDR of 106 dB and LED flicker mitigation (LFM), as well as being motion-artifact-free and motion-blur-less. As a result, moving subjects can be accurately recognized and detected with good color reproducibility in any lighting environment. This allows a single sensor to deliver the performance required for viewing and sensing applications.
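Two small calculations make the HDR claim concrete. The 106 dB figure corresponds to a ratio of about 200,000:1 between the largest and smallest detectable signals (20·log₁₀), and a common way to merge a low-noise high-gain read with a highly saturated low-sensitivity read is a per-pixel switch at the clip level. The merge below is a generic sketch, not this sensor's actual eight-signal pipeline, and all numeric parameters are hypothetical:

```python
import math

def dynamic_range_db(max_signal, noise_floor):
    """Dynamic range in dB as 20*log10(max/min detectable signal)."""
    return 20 * math.log10(max_signal / noise_floor)

def combine_hdr(high_gain, low_gain, clip_level, gain_ratio):
    """Per-pixel merge: trust the low-noise high-gain sample until it
    clips, then switch to the scaled low-sensitivity sample."""
    return [
        hg if hg < clip_level else lg * gain_ratio
        for hg, lg in zip(high_gain, low_gain)
    ]

# A ~200,000:1 signal ratio corresponds to roughly 106 dB.
dr = dynamic_range_db(200_000, 1.0)
merged = combine_hdr([10, 1023, 1023], [0.5, 60, 500],
                     clip_level=1023, gain_ratio=16)
```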

10 pages, 3989 KB  
Article
Image Reconstruction and Investigation of Factors Affecting the Hue and Wavelength Relation Using Different Interpolation Algorithms with Raw Data from a CMOS Sensor
by Eun-Min Kim, Kyung-Kwang Joo and Hyeon-Woo Park
Photonics 2023, 10(11), 1216; https://doi.org/10.3390/photonics10111216 - 31 Oct 2023
Viewed by 1455
Abstract
An image processing method was employed to obtain wavelength information using light irradiated during camera exposure. Physically, hue (H) and wavelength (W) are closely related. Once the H value is known through image pixel analysis, the wavelength can be obtained. In this paper, the H-W curve was investigated from 400 to 650 nm using raw image data from a complementary metal-oxide-semiconductor (CMOS) sensor. We reconstructed the H-W curve from raw image data based on a demosaicing method with 2 × 2 pixel images. To date, no study has reported on reconstructing the H-W curve using several different interpolation algorithms in the 400~650 nm wavelength region. In addition, several factors affecting the H-W curve with a raw digital image, such as exposure time, aperture, and International Organization for Standardization (ISO) settings, were investigated for the first time.
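The H side of the H-W relation is directly computable: Python's standard `colorsys` module converts RGB to HSV, and an inverse lookup against a measured H-W curve can then map hue back to wavelength. The calibration points below are invented placeholders standing in for the paper's measured curve, and linear interpolation is just one of the interpolation schemes the paper compares:

```python
import colorsys

def hue_degrees(r, g, b):
    """Hue in [0, 360) from 8-bit RGB values via HSV conversion."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h * 360

def hue_to_wavelength(h, calibration):
    """Piecewise-linear inversion of a measured H-W curve.
    calibration: list of (hue_deg, wavelength_nm) pairs."""
    pts = sorted(calibration)
    for (h0, w0), (h1, w1) in zip(pts, pts[1:]):
        if h0 <= h <= h1:
            t = (h - h0) / (h1 - h0)
            return w0 + t * (w1 - w0)
    raise ValueError("hue outside calibrated range")

# Hypothetical calibration points spanning the paper's 400-650 nm range
# (hue increases as wavelength decreases from red toward blue).
calib = [(0, 650), (60, 580), (120, 530), (240, 400)]
w = hue_to_wavelength(60, calib)
```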
(This article belongs to the Special Issue Advanced Photonic Sensing and Measurement II)

18 pages, 4458 KB  
Article
Crime Light Imaging (CLI): A Novel Sensor for Stand-Off Detection and Localization of Forensic Traces
by Andrea Chiuri, Roberto Chirico, Federico Angelini, Fabrizio Andreoli, Ivano Menicucci, Marcello Nuvoli, Cristina Cano-Trujillo, Gemma Montalvo and Violeta Lazic
Sensors 2023, 23(18), 7736; https://doi.org/10.3390/s23187736 - 7 Sep 2023
Cited by 1 | Viewed by 2670
Abstract
Stand-off detection of latent traces avoids the scene alteration that might occur during close inspection by handheld forensic lights. Here, we describe a novel sensor, named Crime Light Imaging (CLI), designed to perform high-resolution photography of targets at a distance of 2–10 m and to visualize some common latent traces. CLI is based on four high-power illumination LEDs and one color CMOS camera with a motorized objective plus frontal filters; the LEDs and camera could be synchronized to obtain short-exposure images weakly dependent on the ambient light. The sensor is integrated into a motorized platform, providing the target scanning and necessary information for 3D scene reconstruction. The whole system is portable and equipped with a user-friendly interface. The preliminary tests of CLI on fingerprints at a distance of 7 m showed an excellent image resolution and drastic contrast enhancement under green LED light. At the same distance, a small (1 µL) blood droplet on black tissue was captured by CLI under NIR LED illumination, while a trace from 15 µL of semen on white cotton became visible under UV LED illumination. These results represent the first demonstration of true stand-off photography of latent traces, thus opening the way for a completely new approach in crime scene forensic examination.
(This article belongs to the Special Issue Feature Papers in Optical Sensors 2023)

14 pages, 3562 KB  
Article
A Comprehensive Methodology for Optimizing Read-Out Timing and Reference DAC Offset in High Frame Rate Image Sensing Systems
by Jaehoon Jun
Sensors 2023, 23(16), 7048; https://doi.org/10.3390/s23167048 - 9 Aug 2023
Cited by 3 | Viewed by 2478 | Correction
Abstract
This paper presents a comprehensive timing optimization methodology for power-efficient high-resolution image sensors with column-parallel single-slope analog-to-digital converters (ADCs). The aim of the method is to optimize the read-out timing for each period in the image sensor’s operation, while considering various factors such as ADC decision time, slew rate, and settling time. By adjusting the ramp reference offset and optimizing the amplifier bandwidth of the comparator, the proposed methodology minimizes the power consumption of the amplifier array, which is one of the most power-hungry circuits in the system, while maintaining a small color linearity error and ensuring optimal performance. To demonstrate the effectiveness of the proposed method, a power-efficient 108 MP 3-D stacked CMOS image sensor with a 10-bit column-parallel single-slope ADC array was implemented and verified. The image sensor achieved a random noise of 1.4 e⁻rms, a column fixed-pattern noise of 66 ppm at an analog gain of 16, and a remarkable figure-of-merit (FoM) of 0.71 e⁻·nJ. This timing optimization methodology enhances energy efficiency in high-resolution image sensors, enabling higher frame rates and improved system performance. It could be adapted for various imaging applications requiring optimized performance and reduced power consumption, making it a valuable tool for designers aiming to achieve optimal performance in power-sensitive applications.
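In a single-slope ADC, a shared ramp sweeps across all columns while a per-column counter runs until the ramp crosses the sampled pixel voltage, so the counter value at the crossing is the digital code, and shifting the ramp's starting point (the reference DAC offset the paper tunes) shifts every code by the same amount. An idealized sketch of that conversion, with all voltages and parameters hypothetical:

```python
def single_slope_convert(v_pixel, v_ramp_start, lsb_volts, n_bits):
    """Idealized single-slope conversion: the code is the number of clock
    periods (LSBs) between the ramp start and the pixel voltage, clamped
    to the ADC's output range. Real counters quantize at the comparator
    crossing; round() stands in for that here."""
    max_code = (1 << n_bits) - 1
    code = round((v_pixel - v_ramp_start) / lsb_volts)
    return max(0, min(max_code, code))

# 10-bit ADC with a 1 mV LSB. Shifting the ramp (reference DAC) start by
# 12 mV shifts the output code by 12 LSBs -- the offset knob the paper
# tunes alongside read-out timing.
code_a = single_slope_convert(0.512, 0.0, 1e-3, 10)
code_b = single_slope_convert(0.512, 0.012, 1e-3, 10)
```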
(This article belongs to the Special Issue Integrated Circuit Design and Sensing Applications)

22 pages, 7771 KB  
Article
Evaluation of Microlenses, Color Filters, and Polarizing Filters in CIS for Space Applications
by Clémentine Durnez, Cédric Virmontois, Pierre Panuel, Aubin Antonsanti, Vincent Goiffon, Magali Estribeau, Olivier Saint-Pé, Valérian Lalucaa, Erick Berdin, Franck Larnaudie, Jean-Marc Belloir, Catalin Codreanu and Ludovic Chavanne
Sensors 2023, 23(13), 5884; https://doi.org/10.3390/s23135884 - 25 Jun 2023
Cited by 4 | Viewed by 3401
Abstract
For the last two decades, the CNES optoelectronics detection department and partners have evaluated space environment effects on a large panel of CMOS image sensors (CIS) from a wide range of commercial foundries and device providers. Many environmental tests have been realized in order to provide insights into detection chain degradation in modern CIS for space applications. CIS technology has drastically improved in the last decade, reaching very high performances in terms of quantum efficiency (QE) and spectral selectivity. These improvements are obtained thanks to the introduction of various components in the pixel optical stack, such as microlenses, color filters, and polarizing filters. However, since these parts have been developed only for commercial applications suitable for the on-ground environment, it is crucial to evaluate whether these technologies can handle space environments for future space imaging missions. Few results on this robustness exist in the literature. The objective of this article is to give an overview of CNES and partner experiments from numerous works, showing that the performance gain from the optical stack is greater than the degradation induced by the space environment. Consequently, optical stacks can be used for space missions because they are not the main contributor to the degradation in the detection chain.
(This article belongs to the Special Issue Recent Advances in CMOS Image Sensor)

18 pages, 6608 KB  
Article
VLSI Design Based on Block Truncation Coding for Real-Time Color Image Compression for IoT
by Shih-Lun Chen, He-Sheng Chou, Shih-Yao Ke, Chiung-An Chen, Tsung-Yi Chen, Mei-Ling Chan, Patricia Angela R. Abu, Liang-Hung Wang and Kuo-Chen Li
Sensors 2023, 23(3), 1573; https://doi.org/10.3390/s23031573 - 1 Feb 2023
Cited by 7 | Viewed by 3268
Abstract
It has always been a major issue for a hospital to acquire real-time information about a patient in emergency situations. Because of this, this research presents a novel high-compression-ratio, real-time image compression very-large-scale integration (VLSI) design for image sensors in the Internet of Things (IoT). The design consists of a YEF transform, color sampling, block truncation coding (BTC), threshold optimization, sub-sampling, prediction, quantization, and Golomb–Rice coding. Machine learning is used to train the BTC parameters toward an optimal solution, yielding two optimal reconstruction values and a bitmap for each 4 × 4 block. An image is divided into 4 × 4 blocks by BTC for numerical conversion and removal of inter-pixel redundancy. The sub-sampling, prediction, and quantization steps are performed to reduce redundant information. Finally, high-probability values are coded using Golomb–Rice coding. The proposed algorithm has a higher compression ratio than traditional BTC-based image compression algorithms. Moreover, this research also proposes a real-time image compression chip design based on a low-complexity, pipelined architecture using TSMC 0.18 μm CMOS technology. The operating frequency of the chip reaches 100 MHz. The core area and the number of logic gates are 598,880 μm² and 56.3 K, respectively. In addition, this design achieves 50 frames per second, which is suitable for real-time CMOS image sensor compression.
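The BTC stage described above is a classic, compact algorithm: each 4 × 4 block is reduced to a 16-bit bitmap plus two reconstruction levels. The sketch below implements the standard moment-preserving variant (Delp–Mitchell style), which preserves the block's mean and variance exactly; it illustrates the technique generically rather than the paper's machine-learning-tuned parameters:

```python
import math

def btc_encode(block):
    """Standard moment-preserving BTC on a flat list of 16 pixels:
    returns a 16-bit bitmap plus two reconstruction levels (a, b)
    chosen so the decoded block keeps the original mean and variance."""
    n = len(block)
    mean = sum(block) / n
    sigma = math.sqrt(sum((p - mean) ** 2 for p in block) / n)
    bitmap = [1 if p >= mean else 0 for p in block]
    q = sum(bitmap)
    if q == 0 or q == n:          # flat block: one level suffices
        return bitmap, mean, mean
    a = mean - sigma * math.sqrt(q / (n - q))
    b = mean + sigma * math.sqrt((n - q) / q)
    return bitmap, a, b

def btc_decode(bitmap, a, b):
    """Reconstruct the block from the bitmap and the two levels."""
    return [b if bit else a for bit in bitmap]

block = [12, 14, 200, 202, 13, 15, 201, 199,
         12, 16, 198, 203, 14, 13, 200, 201]
bitmap, a, b = btc_encode(block)
decoded = btc_decode(bitmap, a, b)
```

At 8 bits per level, the 4 × 4 block shrinks from 128 bits to 16 + 8 + 8 = 32 bits, a 4:1 ratio before the pipeline's later sub-sampling, prediction, and entropy-coding stages.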
(This article belongs to the Special Issue Sensors and Signal Processing for Biomedical Application)
