Article

A Novel Method to Increase LinLog CMOS Sensors’ Performance in High Dynamic Range Scenarios

by Antonio Martínez-Sánchez 1, Carlos Fernández 2,*, Pedro J. Navarro 2 and Andrés Iborra 2

1 Supercomputing and Algorithms Group, CSIC-UAL Associated Unit, University of Almeria, 04120 Almeria, Spain
2 DSIE, Universidad Politécnica de Cartagena, Campus Muralla del Mar, s/n. E-30202 Cartagena, Spain
* Author to whom correspondence should be addressed.
Sensors 2011, 11(9), 8412-8429; https://doi.org/10.3390/s110908412
Submission received: 28 July 2011 / Revised: 25 August 2011 / Accepted: 26 August 2011 / Published: 29 August 2011
(This article belongs to the Section Physical Sensors)

Abstract

Images from high dynamic range (HDR) scenes must be obtained with minimum loss of information. For this purpose it is necessary to take full advantage of the quantization levels provided by the CCD/CMOS image sensor. LinLog CMOS sensors satisfy this demand by offering an adjustable response curve that combines linear and logarithmic responses. This paper presents a novel method to quickly adjust the parameters that control the response curve of a LinLog CMOS image sensor. We propose to use an Adaptive Proportional-Integral-Derivative (APID) controller to adjust the exposure time of the sensor, together with control algorithms based on the saturation level and the entropy of the images. With this method the sensor’s maximum dynamic range (120 dB) can be used to acquire good quality images from HDR scenes with fast, automatic adaptation to scene conditions. Adaptation to a new scene is rapid, with a sensor response adjustment of less than eight frames when working in real-time video mode. At least 67% of the scene entropy can be retained with this method.


1. Introduction

The dynamic range (DR) of an image sensor defines the relation between the minimum and maximum light levels that it can detect [1]. Broadly speaking, the dynamic range of a CCD/CMOS sensor represents its capacity to retain scene information from both brightly lit and shaded areas. Common CCD/CMOS sensors present a linear response to scene irradiance. More advanced sensors try to increase the dynamic range by converting the linear response to a logarithmic-like response, as shown in Figure 1, providing image enhancement [2] as shown in Figure 2.

A high dynamic range is of major importance for computer vision systems that work with images taken from outdoor scenes—traffic control [3], security surveillance systems [4], outdoor visual inspection, etc. [5]. Researchers and manufacturers have recently developed a new generation of image sensors and new techniques that make it possible to increase the typical 60 dB range for a CCD sensor to 120 dB. Reference [6] reports several techniques to expand the dynamic range by pre-estimation of the sensor response curve. Reference [7] proposes to combine several RGB images of different exposures into one image with greater dynamic range. A US patent [8] claimed a CCD which reduces smear, thus providing greater dynamic range. Reference [9] reported a technique to convert a linear response of a CMOS sensor to a logarithmic response, and [10] proposed attaching a filter to the sensor to attenuate the light received by the pixels following a fixed pattern; after that, the image is processed to produce a new image with a greater dynamic range.

Although great progress has been made in the last decade concerning CMOS imaging, a logarithmic response shows large fixed pattern noise (FPN) and slower response times at low light levels, yielding limited sensitivity [11]. The main disadvantage of using a fixed logarithmic curve is that it reduces the contrast of the image as compared to a linear response, so that part of the scene information is lost, as seen in Figure 2(b).

2. The Lin-Log CMOS Sensor

The work described in this paper concerns the development of a novel method which combines different algorithms to adjust the parameters which control the response curve of a Lin-Log CMOS sensor in order to increase its performance in HDR scenes.

LinLog™ CMOS image technology was developed at the Swiss Federal Institute of Electronics and Microtechnology (Zurich, Switzerland). A LinLog CMOS sensor presents a linear response for low light levels and a logarithmic compression as light intensity increases. Linear response for low light levels assures high sensitivity, while compression for high light levels avoids saturation.

The transition between the two responses can be adjusted. Special attention is required to guarantee a smooth transition between them. Various cameras (e.g., the MV1-D1312-40-GB-12 from Photonfocus AG, equipped with the Photonfocus A1312-40 active pixel LinLog CMOS image sensor, which we have used for test purposes) use LinLog technology, and sensor response can be controlled by adjusting four parameters, hereafter designated T1, T2, V1 and V2. V1 and V2 represent the compression voltages applied to the sensor. T1 and T2 are normalized parameters, expressed as a fraction of the exposure time, and can be adjusted from 0 to a maximum value of 1; their values determine the percentage of exposure time during which V1 and V2 are applied [Figure 3(a)]. The values of these four parameters determine the LinLog response of the sensor [Figure 3(b)]. Note that the final LinLog response is a combination of: (1) the linear response, (2) the logarithmic response with strong compression (V1) and (3) the logarithmic response with weak compression (V2) [Figure 3(b)]. These responses are combined by adjusting the T1 and T2 values.

We have taken advantage of the control characteristics of the LinLog CMOS sensor to develop a real-time image improvement method for high dynamic range scenes. It is made up of three different algorithms: (1) an algorithm to control the exposure time, (2) an algorithm to avoid image saturation, and (3) an algorithm to maximize the image entropy.

3. Exposure Time Control Module

We have developed a module that controls the exposure time in order to assure that the average intensity level of the image tends to a set value (usually near the mean of the available intensity levels), thus offering automatic correction of the deviations caused by variable lighting conditions in the scene. For this purpose we have implemented an Adaptive Proportional-Integral-Derivative (APID) controller which compensates for non-linear effects at the time of image acquisition by adjusting the exposure time as scene lighting conditions vary. An adaptive control system [12] measures the process response, compares it with the response given by a reference process and is capable of adjusting process parameters to assure the desired response, as shown in Figure 4.

In our case, the process that is controlled is acquisition of an image by a LinLog CMOS sensor. The output is the intensity level of that image. To quantify this level we use Equations (1) and (2):

$$N_g = \frac{\left( \sum_{i=0}^{D-1} (i+1)\, H(i) \right) - N}{N\,(D-1)} \quad (1)$$
$$D = 2^d \quad (2)$$
where Ng ∈ [0,1] represents the intensity level, d is the number of bits per pixel and D is the number of gray levels. H(i) represents the i-th histogram entry and N the number of pixels in the image. Ng is used as an input parameter for a Proportional-Integral-Derivative (PID) controller [13–15] which controls some camera parameters (see Figure 5), as shown in Equations (3) and (4):
$$o(t) = K_p\, e(t) + K_i \int_0^t e(t)\, dt + K_d\, \frac{d e(t)}{d t} \quad (3)$$
$$e(t) = N_g^{o} - N_g(t) \quad (4)$$
where o(t) is the controller output (exposure time) and e(t) the error value (the difference between the real, $N_g(t)$, and set, $N_g^o$, intensity levels). Kp, Ki and Kd are gain values for the PID action. These gain values can be adjusted by means of either empirical or specific methods [16]. For implementation of the controller, there are a number of requirements to be considered (a sketch addressing them follows the list):
- The integral action must be set to a reference value.
- The time taken to calculate the integration error must be limited.
- The integral term must not continue to increase once the maximum or the minimum output values have been reached.
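To make these requirements concrete, the following minimal sketch (in Python, for illustration only; the actual system was implemented in NI LabVIEW, see the Conclusions) shows a PID exposure controller built around Equations (1)–(4). The class and parameter names are ours, and the anti-windup strategy is one common way of satisfying the third requirement:

    import numpy as np

    def mean_intensity(hist, n_pixels, d_bits):
        # Normalized intensity level Ng in [0, 1], Equations (1) and (2)
        D = 2 ** d_bits
        i = np.arange(D)
        return (np.sum((i + 1) * hist) - n_pixels) / (n_pixels * (D - 1))

    class PIDExposure:
        # PID controller of Equations (3) and (4) for the exposure time,
        # observing the three implementation requirements listed above
        def __init__(self, kp, ki, kd, setpoint, te_min, te_max, i_ref=0.0, i_max=1e3):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint = setpoint              # Ng^o
            self.te_min, self.te_max = te_min, te_max
            self.integral = i_ref                 # requirement 1: start from a reference value
            self.i_max = i_max                    # requirement 2: bound the accumulated error
            self.prev_error = 0.0

        def update(self, ng, dt=1.0):
            error = self.setpoint - ng            # e(t) = Ng^o - Ng(t), Equation (4)
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            out = (self.kp * error + self.ki * self.integral
                   + self.kd * derivative)        # o(t), Equation (3)
            saturated = out <= self.te_min or out >= self.te_max
            if not saturated:                     # requirement 3: anti-windup
                self.integral = np.clip(self.integral + error * dt, -self.i_max, self.i_max)
            return float(np.clip(out, self.te_min, self.te_max))

Each frame, the image histogram yields Ng through mean_intensity(), and update() returns the exposure time for the next acquisition.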

The next step is to model the process. If we ignore the LinLog effect, the total number of electrons collected by each pixel in the image can be defined as in Equation (5):

$$n_e = \frac{P_s\, A_p\, T_e}{h\, c}\, \lambda\, \eta(\lambda) \quad (5)$$
where ne is the number of electrons per pixel, Ap is the pixel area, Te is the exposure time, Ps is the power radiated onto the pixel area, h is the Planck constant, c is the speed of light and η(λ) is the quantum efficiency. The conversion of electrons to an output voltage and then to a quantization level in the A/D converter depends on sensor amplification, but it can be modelled by a constant, k, resulting in Nc = ne/k. As we can see, then, the only time-dependent variables are Ps and Te, Te being the output to be controlled, as shown in Equation (6):
$$N_c(t) = C\, P_s(t)\, T_e(t) \quad (6)$$

The gain of the process can thus be defined as CPs(t), where C represents the constants of Equation (5). The PID parameters Kp, Ki and Kd are functions of this gain [10], so their temporal variation is related to the gain variation. We can use Equation (7) to estimate the gain variation every time the system feeds back, τ being the feedback period. From now on, to simplify Equations (7), (8) and (9), we will write G(t) for CPs(t):

$$\Delta_t G(t) = G(t) - G(t-\tau) = \frac{N_c(t)}{T_e(t)} - \frac{N_c(t-\tau)}{T_e(t-\tau)} \quad (7)$$

At time t we use G(t−τ) to calculate the PID parameters, since G(t) cannot be calculated until Te(t) is known. To calculate G(t−τ) we use Nc(t−τ), Nc(t−2τ), Te(t−τ) and Te(t−2τ), as justified by Equation (8):

$$\tau \rightarrow 0 \;\Rightarrow\; \Delta_t G(t-\tau) \approx \Delta_t G(t) \quad (8)$$

Only parameters Kp and Ki are updated, as shown in Equation (9), where α, β, M and l are parameters to be fixed by the designer:

$$\Delta_t K_p = \begin{cases} \alpha\, \Delta_t G & \text{if } \Delta_t G \in [-l, l] \\ 0 & \text{otherwise} \end{cases}$$
$$\Delta_t K_i = \begin{cases} \beta\, \Delta_t G & \text{if } \Delta_t G \in [-M, -l) \cup (l, M] \\ \operatorname{sgn}(\Delta_t G)\, \beta\, M & \text{if } \Delta_t G \in (-\infty, -M) \cup (M, \infty) \\ 0 & \text{otherwise} \end{cases} \quad (9)$$
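Continuing the previous sketch, the adaptive part can be written directly from Equations (7) and (9). The function names are ours, and the parameter values in the comment are those reported below for Figure 6(b):

    def gain_variation(nc, te, nc_prev, te_prev):
        # Delta_t G, Equation (7), with G(t) = Nc(t)/Te(t)
        return nc / te - nc_prev / te_prev

    def adapt_gains(kp, ki, dg, alpha, beta, l, M):
        # Update of Kp and Ki, Equation (9);
        # e.g., l = 0.1, M = 0.6, alpha = 0.01, beta = 0.1 as in Figure 6(b)
        if abs(dg) <= l:
            kp += alpha * dg              # small gain variation: adapt Kp only
        elif abs(dg) <= M:
            ki += beta * dg               # medium variation: adapt Ki proportionally
        else:
            ki += np.sign(dg) * beta * M  # large variation: clamp the Ki step
        return kp, ki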

Figure 6(a) shows the response of the proposed APID controller ("*", blue) versus other controllers mentioned in the references: a Proportional-Integral (PI) controller [17,18] ("o", red) and a controller designated Incremental, based on increments that are proportional to the error ("-", green) [19]. In order to compare the responses, the PI and APID controllers were configured with the same constants (Kp = 0.01, Ki = 0.4 and Kd = 0) and the parameters of the Incremental controller were adjusted to achieve the best combination of speed and stability. Even so, unwanted oscillations may appear, and this has proven to be the slowest of the three controllers. During the first frames the PI and APID controllers showed the same response because they had the same initial configuration.

Figure 6(b) shows the performance details of the controllers versus a decrease of the process gain, ΔG = 1. The APID controller increases its internal gain, producing faster performance (l = 0.1, M = 0.6, α = 0.01, β = 0.1).

Figure 6(c) shows the performance details of the controllers versus an increase of the process gain, ΔG = 3. The APID controller reduces its internal gain to prevent overshoot and oscillations while maintaining its speed. By contrast, the PI controller is unable to prevent overshoot. Table 1 shows measurements of time response, overshoot and oscillation for the controller responses shown in Figure 6.

4. Saturation Control

We can detect image saturation when the saturation width $N_{SB}$ (see Equation (10)) reaches a given value. To reduce saturation we increase the voltage values that control the LinLog compression effect:

$$N_{SB} = \frac{H(D-1)}{N} \quad (10)$$
where H(D−1) is the (D−1)-th entry of the image histogram, H.

Figure 7 shows the algorithm for saturation control. This measures the saturation width given by Equation (10). V1 and V2 are increased or reduced depending on whether the measured value is greater or smaller than the set value.
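One iteration of that loop can be sketched as follows (continuing the Python sketches above; the voltage step and the voltage limits here are illustrative assumptions, and nsb_set plays the role of the set value):

    def saturation_width(hist, n_pixels):
        # N_SB, Equation (10): fraction of pixels in the last histogram bin
        return hist[-1] / n_pixels

    def saturation_control(hist, n_pixels, v1, v2, nsb_set,
                           v_step=0.1, v_min=14.0, v_max=19.0):
        # Figure 7: raise the compression voltages while the image is
        # saturated, relax them otherwise
        delta = v_step if saturation_width(hist, n_pixels) > nsb_set else -v_step
        v1 = float(np.clip(v1 + delta, v_min, v_max))
        v2 = float(np.clip(v2 + delta, v_min, v_max))
        return v1, v2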

Figure 8 shows the effect of varying V1 and V2 values.

5. Entropy Maximization

The concept of information entropy describes how much randomness (or uncertainty) there is in a signal or an image; in other words, how much information the signal or image provides. In these terms, the greater the information entropy of the image, the higher its quality will be [20].

Shannon’s entropy (information entropy) [21] is used to evaluate the quality of the acquired images. Assuming that images have a negligible noise content, the more detail (or uncertainty) there is, the better the image will be (the entropy value for a completely homogeneous image is 0). That is, without analyzing the image content, we assume (for two images obtained from an invariant scene) that the richer the information, the greater will be the entropy of the image.

The response curves, as shown in Figure 9, cause a loss of resolution in the bright areas of the image. Moreover, although the algorithm presented in Section 4 prevents saturation, it can reduce the contrast in dark areas of the image. To deal with this problem, we have developed an algorithm (see Figure 10) that maximizes the entropy (Equation (11)) of the image.

For this purpose we adjust T1 (Figure 11) to produce a slight linearization of the response curve for high irradiance. To reduce the complexity of the algorithm, T2 is set to its maximum and remains constant:

$$E(X) = \begin{cases} -\sum_{i=1}^{D} p(x_i) \log_2 p(x_i) & p(x) \neq 0 \\ 0 & p(x) = 0 \end{cases} \quad (11)$$
where E(X) is the entropy of the image X and p(xi) is the probability mass function of the grey level xi. The entropy is a measure of the information contained in the image. In this paper, we assume that an image of a scene has been taken with the optimum sensor configuration when its maximum entropy has been reached.
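Computed from the image histogram, Equation (11) takes only a few lines (a sketch; hist and n_pixels are as in the previous sections):

    def image_entropy(hist, n_pixels):
        # Shannon entropy E(X), Equation (11)
        p = hist / n_pixels            # probability mass of each grey level
        p = p[p > 0]                   # bins with p(x) = 0 contribute nothing
        return float(-np.sum(p * np.log2(p)))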

The main difficulty in developing an algorithm for entropy maximization [22] lies in the fact that it is not possible to fix a target entropy a priori, since this value depends on the scene. As shown in Figure 10, the algorithm is a local maximizer [23] and has desirable properties for our purpose. The most desirable property in this case is robustness; the control method based on the conjugate gradient ensures an asymptotic tendency toward the nearest local maximum with δ accuracy, and is furthermore an easy method to implement. For this reason it has already been used to control parameters of a camera sensor [24]. In other cases, non-adaptive PI controllers have been used [17,18], but they are not robust in non-linear systems. The second-order Taylor polynomial expansions of the gradient method (Newton, Levenberg-Marquardt, etc.) [25] present a higher convergence speed but are more prone to instabilities [26]. When the scene changes, the gradient direction may also change and, in a first step, the algorithm will get the maximization direction wrong, but this will be corrected in the next step. Therefore, the algorithm’s performance is robust if we assume that scene variation is slower than the period between algorithm steps.

The execution of the algorithm is stopped when a minimum variation in entropy, δ, is reached. To avoid undesired oscillations of image contrast, γ needs to be small. Even so, the algorithm developed here shows a quick response when working in continuous grabbing mode. We can see how V1, V2 and T1 are adjusted (Figure 12) and the improvement provided by the algorithm developed (Figure 13). A sketch of one iteration of this adjustment follows.
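The text does not reproduce the full flow chart of Figure 10, so the following is a plausible sketch of one iteration of such a local maximizer, using the step size γ and stopping threshold δ defined above:

    def maximize_entropy_step(t1, e_curr, e_prev, direction, gamma, delta):
        # One step of a Figure 10-style hill climb on T1: keep moving while
        # entropy grows, reverse when it falls, stop when the change is
        # below delta
        if abs(e_curr - e_prev) < delta:
            return t1, direction, True        # converged: stop the algorithm
        if e_curr < e_prev:
            direction = -direction            # last step reduced entropy: go back
        t1 = float(np.clip(t1 + direction * gamma, 0.0, 1.0))
        return t1, direction, False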

6. Results and Discussion

The proposed method comprises three algorithms to control the sensor response: the algorithm that controls exposure time is executed simultaneously with the other two (the algorithm that controls image saturation, adjusting V1 and V2, and the algorithm that maximizes the image entropy, adjusting T1); these last two algorithms are executed consecutively. Hence, the total time for the adjustment process will be the maximum of: (1) the exposure adjustment time and (2) the sum of the times of the two algorithms that control the LinLog parameters. The exposure time controller takes less than 10 frames to respond to the step inputs (Figure 6) with the sensor running at 27 fps, which makes it suitable for use in real-time outdoor vision (Figure 15).
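Read as pseudocode, this scheduling might look as follows (a sketch tying the earlier fragments together; the camera object, its grab_histogram() and set_linlog() calls, and the alternation condition are our assumptions, not the authors' implementation):

    def control_frame(camera, pid, s):
        # Exposure control runs every frame; the two LinLog algorithms
        # are executed consecutively (saturation first, then entropy)
        hist, n = camera.grab_histogram()           # hypothetical acquisition call
        camera.exposure = pid.update(mean_intensity(hist, n, d_bits=12))
        if saturation_width(hist, n) > s.nsb_set:
            s.v1, s.v2 = saturation_control(hist, n, s.v1, s.v2, s.nsb_set)
        else:
            e = image_entropy(hist, n)
            s.t1, s.direction, s.done = maximize_entropy_step(
                s.t1, e, s.e_prev, s.direction, s.gamma, s.delta)
            s.e_prev = e
        camera.set_linlog(s.v1, s.v2, s.t1)         # hypothetical parameter call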

To gauge the performance of the image saturation control and the entropy maximization algorithms, an experiment has been designed to determine both the response speed and the resulting image quality. For this purpose:

(a) The sensor response has been modelled as a function of irradiance by approximating the curves provided by the manufacturer (Figures 9 and 11), as seen in Equations (12) and (13):

$$N_c = \begin{cases} C_l\, I & I < I_v \\ \min \{\, C_l\, I,\; C_{1g} + C_{2g} \log_{10}(I),\; D \,\} & \text{otherwise} \end{cases} \quad (12)$$
where Nc is the grey (unitless) output level for each pixel and I is the effective irradiance on the pixel (we assume I = Ir − I0, where Ir is the real irradiance and I0 is the minimum irradiance detectable by the sensor; I0 depends on the configuration of the sensor exposure time parameter, and in our experiment we assume a fixed exposure time and I0 = 0). The other values of the above expression depend on the configuration of the Lin-Log parameters of the sensor and were obtained by approximating the curves and data provided by Photonfocus in the User Manual of the camera used in the tests (MV1-D1312):
$$C_{2g} = \frac{D - C_l\, I_v}{\log_{10}(I_T) - \log_{10}(I_v)}, \quad C_{1g} = D - C_{2g} \log_{10}(I_T), \quad I_T = (I_{h1} - I_{h2})\, T + I_{h2}$$
$$I_v = \frac{D - C_1\, I_{h1}}{C_l - C_1}, \quad I_{h1} = 10^{\frac{1}{2}(V-7)}, \quad I_{h2} = 10^{\frac{1}{2}(V-8)} \quad (13)$$
where Cl = 2.55 m²/W and C1 = 10,911 m²/W. V is the parameter V1 (it is assumed that V2 = V1 − 5), and T corresponds to the parameter T1 (it is assumed that T2 = 1). According to this model, the sensor has a dynamic range of 120 dB when configured in maximum compression mode (V = 19 and T = 1) and its response is linear when there is no compression (V = 14 and T = 0).

(b) Three synthetic scenes have been generated with patterns $I_p^i(x, y)$, i ∈ {1, 2, 3}, as illustrated in Figure 14. Each value of the pattern $I_p^i(x, y)$ represents the irradiance at the point (x, y) (Table 2). The dynamic range of each scene is shown in Table 2.

The characteristic entropy value has been calculated for each one of the scenes (Table 3). The entropy value (defined as pattern entropy) has been calculated using Equation (11). Pattern data are expressed in double precision floating point format.

(c) The synthetic scenes have been consecutively processed with the camera model to evaluate the temporal performance of the system both in the start-up and when responding to the scene changes. Figure 15 shows the time evolution of the LinLog parameters produced by the saturation control and entropy maximization algorithms.

Using a sensor model together with synthetic scenes allowed us to define a metric to quantify the retained entropy, which provides a reliable performance evaluation, useful for further comparisons.
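As an illustration, the following sketch generates the Table 2 patterns and computes the retained-entropy percentage. The spatial sampling of x and y and the histogram binning are our assumptions (the paper evaluates pattern entropy on double-precision data), and image_entropy() is the Section 5 sketch:

    def pos(f):
        # POS operator used by the Table 2 patterns
        return np.maximum(f, 0.0)

    def pattern_scene(i, size=512, span=4.0):
        # Synthetic irradiance patterns of Table 2 (axis range is assumed)
        x, y = np.meshgrid(np.linspace(0, span, size), np.linspace(0, span, size))
        if i == 1:
            return 1e5 * pos(np.cos(x ** 2) + np.cos(y ** 2))
        if i == 2:
            return 5e5 * pos(np.cos(x ** 3 + y ** 3))
        return 5e5 * pos(np.sin(x ** 2 + y ** 2))

    def retained_entropy_pct(pattern, image, bins=4096):
        # Percentage of the pattern entropy retained by the acquired image
        e_pattern = image_entropy(np.histogram(pattern, bins=bins)[0], pattern.size)
        e_image = image_entropy(np.histogram(image, bins=bins)[0], image.size)
        return 100.0 * e_image / e_pattern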

Figure 14 shows the synthetic scenes used to test the proposed method, together with the images obtained when the scenes are processed by the LinLog CMOS sensor’s model [Equations (12) and (13)]. Figure 15 shows how the control method adjusts the LinLog parameters as different scenes are presented to the sensor.

Table 3 shows the numeric results of the experiment. Besides the response time, it shows the recovered entropy once the synthetic scenes are processed by: (1) a model of a typical linear CMOS sensor (DR of 60 dB), adjusted so that I0 corresponds to grey level 0, and (2) the proposed LinLog model with its parameters adjusted using the proposed control method.

The pattern entropy values are inevitably reduced during the digital image generation process. Hence, the percentage of pattern entropy retained in the acquired image of the scene provides an objective measurement of the goodness of the sensor parameter control process. The higher the entropy of the acquired image, the better the control process, as more scene information is preserved. As we can see in Table 3, with the proposed method at least 67% of the pattern entropy can be retained in the images.

Images (a) to (f) in Figure 16 show how the proposed method performs over a very high dynamic range scene (the ceiling of our lab, with a powerful lighting source).

Figure 17 shows how the exposure time was controlled by the APID controller for a period of almost two hours, between 4:30 and 6:10 pm on a windy day with clouds crossing the camera field of view (producing illumination changes), to acquire images from the scene shown in Figure 13. Sunset lasted from 5:40 until 6:10 pm.

7. Conclusions

This paper presents a reliable method for optimizing LinLog CMOS sensor response and hence improving images acquired from high dynamic range scenes. Adaptation to environment conditions is automatic and very fast.

The implementation has been divided into three algorithms. The first makes it possible to control the exposure time by using an Adaptive PID (APID) controller; the second controls image saturation through appropriate compression of the response curve for bright scenes, and the third provides entropy maximization by slightly linearizing the response curve for high scene irradiance.

The simplicity of the control algorithms used in this method makes the computational cost of calculating the image parameters (histogram-based descriptors) negligible; therefore the computational cost of implementing the presented method practically coincides with the cost of calculating the histogram. As Table 3 shows, the control takes less than eight frames while yielding high quality images.

The method proposed in this paper has been implemented using NI LabVIEW [27], resulting in: (1) high-level hardware-independent development; (2) rapid prototyping due to the use of libraries (Real-Time, PID and FPGA libraries); and (3) rapid testing of the control application.

The hardware used to implement the system consisted of a Real-Time PowerPC Embedded Controller (cRIO-9022) and a reconfigurable chassis based on a Virtex-5 FPGA (cRIO-9114), both from National Instruments. The chosen system permits deterministic control and real-time execution of applications. The control system and the camera can be easily connected thanks to the embedded PowerPC’s GBit Ethernet and RS232 ports.

Acknowledgments

The work submitted here was carried out as part of the projects EXPLORE (TIN2009-08572) and SiLASVer (TRACE PET 2008-0131) funded by the Spanish National R&D&I Plan. It has also received funding from the Government of the Region of Murcia (Séneca Foundation).

References and Notes

  1. Reinhard, E.; Ward, G.; Pattanaik, S.; Debevec, P. High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting; Elsevier/Morgan Kaufmann: Amsterdam, The Netherlands, 2006; pp. 7–18.
  2. Bandoh, Y.; Qiu, G.; Okuda, M.; Daly, S.; Aach, T.; Au, O.C. Recent Advances in High Dynamic Range Imaging Technology. Proceedings of the 17th IEEE International Conference on Image Processing, ICIP’10, Hong Kong, China, 26–29 September 2010; pp. 3125–3128.
  3. Llorca, D.F.; Sánchez, S.; Ocaña, M.; Sotelo, M.A. Vision-based traffic data collection sensor for automotive applications. Sensors 2010, 10, 860–875.
  4. Foresti, G.L.; Micheloni, C.; Piciarelli, C.; Snidaro, L. Review: Visual sensor technology for advanced surveillance systems: Historical view, technological aspects and research activities in Italy. Sensors 2009, 9, 2252–2270.
  5. Navarro, P.J.; Iborra, A.; Fernández, C.; Sánchez, P.; Suardíaz, J. A sensor system for detection of hull surface defects. Sensors 2010, 10, 7067–7081.
  6. Battiato, S.; Castorina, A.; Mancuso, M. High dynamic range imaging for digital still camera: An overview. J. Electron. Imag. 2003, 12, 459–469.
  7. Brauers, J.; Schulte, N.; Bell, A.; Aach, T. Multispectral High Dynamic Range Imaging; IS&T/SPIE Electronic Imaging: San Jose, CA, USA, 2008; pp. 680704:1–680704:12.
  8. Hazelwood, M.; Hutton, S.; Weatherup, C. Smear Reduction in CCD Images. US Patent 7,808,534, 2010.
  9. Burghartz, J.N.; Graf, H.; Harendt, G.; Klinger, W.; Richter, H.; Strobel, M. HDR CMOS Imagers and Their Applications. Proceedings of the 8th International Conference on Solid-State and Integrated Circuit Technology, ICSICT’06, Shanghai, China, 22–26 October 2006; pp. 528–531.
  10. Nayar, S.K.; Mitsunaga, T. High Dynamic Range Imaging: Spatially Varying Pixel Exposures. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Hilton Head Island, SC, USA, 13–15 June 2000; Volume 1, pp. 472–479.
  11. Bigas, M.; Cabruja, E.; Forest, J.; Salvi, J. Review of CMOS image sensors. Microelectron. J. 2006, 37, 433–451.
  12. Dumont, G.A.; Huzmezan, M. Concepts, Methods and Techniques in Adaptive Control. Proceedings of the 2002 American Control Conference, 2002; pp. 1137–1150.
  13. Lipták, B.G. Instrument Engineers’ Handbook: Process Control; Chilton Book Company: Radnor, PA, USA, 1995; pp. 20–29.
  14. Mohan, M.; Sinha, A. Mathematical Model of the Simplest Fuzzy PID Controller with Asymmetric Fuzzy Sets. Proceedings of the 17th World Congress of the International Federation of Automatic Control, Seoul, Korea, 6–11 July 2008; pp. 15399–15404.
  15. Chaínho, J.; Pereira, P.; Rafael, S.; Pires, A.J. A Simple PID Controller with Adaptive Parameter in a dsPIC: A Case of Study. Proceedings of the 9th Spanish-Portuguese Congress on Electrical Engineering, Marbella, Spain, 30 June–2 July 2005.
  16. Hang, C.C.; Åström, K.J.; Ho, W.K. Refinements of the Ziegler-Nichols tuning formula. IEE Proc. D Control Theory Appl. 1991, 138, 111–118.
  17. Nourani-Vatani, N.; Roberts, J. Automatic Camera Exposure Control. Proceedings of the 2007 Australasian Conference on Robotics & Automation, Brisbane, Australia, 10–12 December 2007.
  18. Neves, J.A.; Cunha, B.; Pinho, A.; Pinheiro, I. Autonomous Configuration of Parameters in Robotic Digital Cameras. Proceedings of the 4th Iberian Conference on Pattern Recognition and Image Analysis, IbPRIA’09, Póvoa de Varzim, Portugal, 10–12 June 2009.
  19. Nilsson, M.; Weerasinghe, C.; Lichman, S.; Shi, Y.; Kharitonenko, I. Design and Implementation of a CMOS Sensor Based Video Camera Incorporating a Combined AWB/AEC Module. Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’03), Hong Kong, China, 6–10 April 2003; Volume 2, pp. 477–480.
  20. Tsai, D.Y.; Lee, Y.; Matsuyama, E. Information entropy measure for evaluation of image quality. J. Digit. Imaging 2008, 21, 338–347.
  21. Shannon, C.E.; Weaver, W. The Mathematical Theory of Communication; University of Illinois Press: Urbana, IL, USA, 1949.
  22. Gray, R.M. Entropy and Information Theory, 2nd ed.; Springer-Verlag: New York, NY, USA, 2010; pp. 17–44.
  23. Hendrix, E.M.T.; Toth, B.G. Introduction to Nonlinear and Global Optimization; Springer: Cambridge, UK, 2010.
  24. Moneta, C.A.; de Natale, F.G.B.; Vernazza, G. Adaptive Control in Visual Sensing. Proceedings of the IMACS International Symposium on Signal Processing, Robotics, and Neural Networks, Lille, France, 25–27 April 1994.
  25. Malis, E. Improving Vision-Based Control Using Efficient Second-Order Minimization Techniques. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA’04), New Orleans, LA, USA, 26 April–1 May 2004.
  26. Kabus, S.; Netsch, T.; Fischer, B.; Modersitzki, J. B-spline registration of 3D images with Levenberg-Marquardt optimization. Proc. SPIE 2004, 5370, 304–313.
  27. LabVIEW (Laboratory Virtual Instrumentation Engineering Workbench); National Instruments Corporation: Austin, TX, USA. Available online: http://www.ni.com/labview/ (accessed on 10 August 2011).
Figure 1. Two typical sensor responses: linear response (red) and logarithmic response (blue). Adjusting the sensor response to a logarithmic curve is a good strategy for increasing the dynamic range.
Figure 2. A logarithmic response improves the brighter areas of a scene, but reduces the contrast. Source: OMRON.
Figure 3. Response control for a LinLog CMOS sensor. Source: Photonfocus AG.
Figure 4. Model for an adaptive controller.
Figure 5. PID controller scheme.
Figure 6. Performance of PI, APID and Incremental controllers.
Figure 7. Algorithm for saturation control in a LinLog CMOS sensor.
Figure 8. Different images taken from a scene with increasing values of V1 and V2 from left to right. The last image on the right shows the local entropy map of the image on its left (maximum values in red, minimum values in blue).
Figure 9. By increasing V1 and V2 values we can increase the compression for high intensity levels (values for MV1-D1312 camera). Source: Photonfocus AG.
Figure 10. Algorithm to adjust the T1 value for a LinLog sensor (where “c” is the current iteration, “c − 1” is the result of the previous iteration (Ec−1 = 0 when the algorithm starts), “δ” is the entropy condition that stops the algorithm and “γ” is the step size).
Figure 11. Reduction of T1 value permits a more linear response for high level illumination, although the slope of the response is smaller (MV1-D1312 camera). Source: Photonfocus AG.
Figure 12. Algorithm adjusts V1, V2 and T1.
Figure 13. Image captured with the same values for V1 and V2, but with APID adjustment of T1 (centre). The local entropy map of the central image is shown on the right. Note that in the middle image the details in the scene are well defined with no loss of contrast, as compared with the image on the left.
Figure 14. Synthetic scenes (a, b, c), corresponding to patterns 1, 2 and 3 of Table 2, and details of images obtained by the CMOS sensor working in linear mode (d, e, f) and in LinLog mode (g, h, i), with LinLog parameters adjusted by the proposed method.
Figure 15. Adjustment of LinLog parameters as scene changes. For scene 1 the saturation control algorithm increases V1—from start-up settings—until saturation disappears; for scenes 2 and 3 saturation is kept under control, while T1 is adjusted to maximize the entropy.
Figure 16. Image (a) was acquired with the image sensor working in linear mode; exposure time was adjusted to capture details of the lighting source; Image (b) was also acquired with the image sensor working in linear mode, but here the exposure time was adjusted to capture details out of the lighting source; in this case, the details from the lighting source disappear; Image (c) was acquired in LinLog mode automatically adjusted using the proposed method; as we can see, both details of the source and of the scene are retained in the image; Local entropy maps (d), (e) and (f), which correspond to images (a), (b) and (c) respectively, help give an idea of the extent of the improvement in image (c).
Figure 17. Variations of the APID controller output error (red) and the exposure time (blue), both normalized, for the system in outdoor use. There are various causes of variations in short time periods (scene changes, clouds, etc.). The exposure time shows a rising trend because the period displayed runs up to sundown.
Table 1. Metrics for quantifying the controller responses shown in Figure 6. Settling time, expressed in frames, measures the run time until the error is lower than 2%. Overshoot measures the difference between the maximum response value and the reference level. Stationary oscillation measures the amplitude of non-attenuated oscillation. The last two metrics evaluate the robustness of the controllers and are expressed as a percentage of the reference level ($N_g^o$).

                              Incremental          PI                   APID
ΔG(t)                         0→2   2→1   1→4      0→2   2→1   1→4      0→2   2→1   1→4
Settling time (frames)        –     –     13       –     58    12       5     6     4
Overshoot (%)                 40    4     0        0     2     4        0     0     0
Stationary oscillation (%)    40    16    0        0     0     0        0     0     0
Table 2. Patterns for synthetic scene generation.

Scene   Pattern, Ip(x, y)                   DR (dB)
1       10^5 · POS(cos(x²) + cos(y²))       104
2       5·10^5 · POS(cos(x³ + y³))          116
3       5·10^5 · POS(sin(x² + y²))          116

$$\mathrm{POS}(f) = \begin{cases} f & f > 0 \\ 0 & \text{otherwise} \end{cases}$$
Table 3. Performance measurements illustrated in the graphs of Figure 15. F (fast) corresponds to the configuration (γ = 0.04, δ = 0.04) and P (precise) to the configuration (γ = 0.01, δ = 0.01).

                 Lin-Log                                                    Linear
Scene  Pattern   Time (frames)   Retained          Retained Entropy        Retained   Retained
       Entropy   F / P           Entropy F / P     (%) F / P               Entropy    Entropy (%)
1      7.908     8 / –           5.32 / 5.33       67.35 / 67.46           0.08       1.00
2      4.86      6 / 27          3.42 / 4.38       70.42 / 71.57           1.01       20.79
3      4.58      6 / 27          3.42 / 3.48       70.50 / 71.62           1.01       20.77
