
Fast Control for Backlight Power-Saving Algorithm Using Motion Vectors from the Decoded Video Stream

1 Department of Electronic Engineering, Chung Yuan Christian University, Chung Li City 32023, Taiwan
2 Department of Electronic Engineering, National Taipei University of Technology, Taipei 10608, Taiwan
3 Department of Electronic Engineering, Ming Chi University of Technology, New Taipei City 24301, Taiwan
4 Department of Computer Science and Information Engineering, National Ilan University, Yilan City 26047, Taiwan
5 National Synchrotron Radiation Research Center, Hsinchu City 30076, Taiwan
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2022, 22(19), 7170; https://doi.org/10.3390/s22197170
Submission received: 19 August 2022 / Revised: 11 September 2022 / Accepted: 14 September 2022 / Published: 21 September 2022
(This article belongs to the Section Sensor Networks)

Abstract

Backlight power-saving algorithms can reduce the power consumption of a display by adjusting the frame pixels with optimal clipping points under some tradeoff criteria. However, the computation of the selected clipping points can be complex. In this paper, a novel algorithm is proposed to reduce the computation time of state-of-the-art backlight power-saving algorithms. If the current frame is similar to the previous frame, it is unnecessary to execute the backlight power-saving algorithm for the optimal clipping point; the clipping point derived for the previous frame can be used for the current frame automatically. The motion vector information is used as the measurement of the similarity between adjacent frames, and obtaining this information requires no extra complexity, since it is already generated to reconstruct the decoded frame pixels before display. The experiments showed that the proposed work can reduce the running time of the state-of-the-art methods by 25.21% to 64.22% while maintaining their performances; the differences with the state-of-the-art methods are only −0.02~1.91 dB in PSNR and only −0.001~0.008 W in power.

1. Introduction

With the development of technology, portable consumer electronic products not only use a large number of liquid crystal displays (LCDs), but also combine this technology to develop more diversified products, such as wearable devices, smartphones, tablet PCs, and notebook computers. Moreover, in many application environments of the Internet of Things (IoT), the sensor's lifetime is critical, and one important factor is its power supply [1]. The resources available in a device are limited, so achieving lower power consumption on the device is also important [2]. For all kinds of products, battery life is obviously one of the critical factors in a purchase decision. Displays are important components of computers, tablets, and mobile phones; however, displays also consume a large portion of the power of those applications. The TFT LCD is a non-self-luminous panel that requires an LED backlight module for display [3]. Compared with traditional LCD and LED technologies, mini-LED backlight technology has become a research hotspot because of its size advantage (100–200 µm), which can be combined with quantum dot technology [4]. However, behind the significant improvements in display performance, such as contrast ratio, color gamut, and dark state, the halo effect and excessive power consumption are the main problems. Therefore, how to reduce the power consumption of displays while maintaining good image quality has become an important research issue. The technique to tackle this problem is called the backlight dimming algorithm, or backlight power-saving algorithm. Backlight dimming can be implemented by either a local dimming algorithm or a global dimming algorithm. In the local dimming category, a local dimming backlight system for mini-LEDs has been proposed [5]. A local dimming algorithm divides the frame into a number of small blocks and then controls the backlight brightness of each block separately. This reduces the brightness of the backlight locally to maintain the quality of the image and effectively reduce the power. However, it requires a large number of complex calculations, which in turn consumes additional power. On the other hand, a global dimming algorithm controls the brightness of the backlight unit for the whole frame only. It has lower complexity and is widely used in various LCD systems. Therefore, the global dimming algorithm is an effective way to reduce LCD backlight power consumption, and it is the base architecture of the proposed work.
The work in [6] proposed a backlight dimming algorithm that reduced the power by up to 50%. For hardware improvements, the work in [7] combined the characteristics of the switching TFT and a compensation-circuit mechanism to realize a driving circuit that reduces the power consumption of mini-LED displays. In [8], the authors realized a highly dynamic mini-LED backlight technology using point spread function (PSF) theory. In [9], a two-dimensional (2D) adaptive dimming technique for an RGB-LED backlight was proposed to achieve a high dynamic contrast ratio. In [10], relative detection ratios capture scene features and patterns, resulting in an extremely high dynamic range. In [11], a video transmission system with autonomously controlled throughput was proposed. The work in [12] utilized a current compensation technique for X-Y channels in an LED backlight system with I-V characteristics. In [13], I2GEC (Image Integrity-based Gray-level Error Control) defines a target PSNR, and the clipping point is set decreasingly until the resulting PSNR satisfies the target. In the work of MGEC [14], the frame is divided into 4 or 16 blocks for local selection of the clipping points. The SSIM (Structural Similarity Index) [15] has been used instead of the MSE to design the backlight power-saving algorithm in [16]. The work in [17] used a Gaussian distribution model to obtain the best clipping point for the LCD backlight module. For the analysis of different color spaces, the work in [18] takes RGB as the research object and automatically analyzes the color of urine test strips. A study on tooth color in the HSV color space was proposed in [19]; furthermore, a YEF color space transform was used to reduce the computational complexity. Based on human visual perception, the authors in [20] proposed a decomposition of the image intensity into an illumination layer and a reflectance layer, where the color saturation is also enhanced. A sub-band decomposition was proposed in [21] to preserve the luminance and contrast levels while preventing excessive power consumption. In [22], an image fusion method was proposed to detect the details of different images. For the common haze problem in dynamic images, a dehazing method with fast histogram fusion was proposed in [23]: a modified white balance algorithm recognizes and removes color veils (unbalanced color channels) that are ignored by the Atmospheric Illumination Prior (AIP), which renders balanced image contrast enhancement and inherent color preservation. In [24], an optimization problem was formed for video quality and power consumption, and a modified shuffled frog leaping algorithm was proposed to solve it.
For the problem of power consumption versus image quality, the above works process each considered frame independently. In [25], a disparity vector extrapolation technique was applied to multi-view video transmission in a frame loss concealment algorithm. Furthermore, the analysis of motion vectors in [26] provides a direction for thinking about this issue, focusing on the similarity of overlapping regions of adjacent frames and the priority of target blocks in missing frames. The proposed work exploits the fact that in a video, the pixel characteristics of adjacent frames are similar; therefore, the power-saving decisions (clipping points) should be sharable among consecutive frames with good performance. The most important benefit of this approach is saving the computation time for each frame. That is, if a certain criterion is met for the current frame, the power-saving algorithm is not performed, and the power-saving decision (clipping point) for the current frame is automatically the one decided in the previous frame, saving the execution time of the power-saving method for the current frame. The criterion is designed based on motion vector information that is already available during the decoding of the video at the receiver/display end, so no extra work is needed for the information extraction. Furthermore, the processing of the motion vector information is very simple, so the computational complexity of the whole proposed fast algorithm is low.
The novelties of the proposed method are as follows:
  • To the best of our knowledge, the proposed method is the first work to use motion vectors to design a fast control for backlight power-saving algorithms.
  • The motion vectors are used as the similarity measurement of adjacent frames; if the frames qualify as similar, the power-saving decision (the clipping point) of the current frame is automatically that of the previous frame, without activating the whole power-saving method, in order to save computation time.
  • The motion vectors are available at no further expense, since they are a by-product of decoding the video at the display/receiver end.
  • The proposed algorithm for processing the motion vectors is very simple, avoiding overhead in the proposed work.
  • The combination of the above points makes the proposed work a fast algorithm. The proposed work can be combined with any existing method that uses clipping points.
The proposed method can eliminate 52.01%, 48.10%, 49.84%, and 64.22% of the processing time of the works I2GEC [13], MGEC4 [14], MGEC16 [14], and Gaussian [17], respectively, while the performances of image quality and power reduction are maintained well (the differences are only −0.02~1.91 dB and −0.001~0.008 W, respectively).

2. Materials and Methods

2.1. Existing Clipping-Point-Based Backlight Power-Saving Algorithms

In this section, several state-of-the-art backlight power-saving methods are introduced: I2GEC [13], MGEC4 [14], MGEC16 [14], and Gaussian [17]. All of these works are based on clipping points, and their execution time is to be reduced by the proposed method.

2.1.1. I2GEC: Image Integrity-Based Gray-Level Error Control

I2GEC (Image Integrity-based Gray-level Error Control) [13] uses the best-known image quality index, PSNR (Peak Signal-to-Noise Ratio), to find the clipping point with the lowest power consumption at the target quality. The formulas for PSNR are as follows:
$$\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} (x_i - y_i)^2$$
$$\mathrm{PSNR} = 10 \log_{10} \left( \frac{255^2}{\mathrm{MSE}} \right)$$
where $x_i$ is a pixel in the original image, $y_i$ is the corresponding pixel in the processed image, and $N$ is the total number of pixels in the image. In this method, for a given target PSNR, the target mean-square error ($MSE_T$) and the target total square error ($TSE_T$) are calculated:
$$MSE_T = \frac{255^2}{10^{\mathrm{PSNR}/10}}$$
$$TSE_T = MSE_T \times \lambda, \quad \lambda = \lambda_{row} \times \lambda_{col} \times \lambda_{color}$$
where $\lambda_{row}$ and $\lambda_{col}$ are the height and width of the image. Since color images are composed of three primary colors (RGB), $\lambda_{color}$ equals 3. The algorithm sets the clipping point so that the processed image satisfies the target $TSE_T$. When the clipping point is set to $I_{ccp}$, $0 \le I_{ccp} \le I_{MAX}$, the resulting degradation of the image, $TSE_C$, is computed by:
$$TSE_{R\_C} = \sum_{i=I_{ccp}+1}^{I_{MAX}} H_R(i) \times (i - I_{ccp})^2$$
$$TSE_{G\_C} = \sum_{i=I_{ccp}+1}^{I_{MAX}} H_G(i) \times (i - I_{ccp})^2$$
$$TSE_{B\_C} = \sum_{i=I_{ccp}+1}^{I_{MAX}} H_B(i) \times (i - I_{ccp})^2$$
$$TSE_C = TSE_{R\_C} + TSE_{G\_C} + TSE_{B\_C}$$
where $H_R(i)$, $H_G(i)$, and $H_B(i)$ represent the numbers of R, G, and B pixels at gray level $i$, respectively. The clipping point $I_{ccp}$ is tested starting from $I_{MAX}$: as long as the resulting $TSE_C$ still satisfies $TSE_T$, $I_{ccp}$ is decreased by 1 and $TSE_C$ is checked again; the lowest clipping point whose $TSE_C$ does not exceed $TSE_T$ is selected. As can be seen, the complexity of the method is high.
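The search can be summarized in code. The following is a minimal Python sketch of the I2GEC loop, assuming an 8-bit RGB image stored as a NumPy array; the function name and array layout are our own illustration, not the authors' implementation.

```python
import numpy as np

def i2gec_clipping_point(image, target_psnr, i_max=255):
    rows, cols, _ = image.shape
    # Target MSE and total square error derived from the target PSNR.
    mse_t = 255.0 ** 2 / (10.0 ** (target_psnr / 10.0))
    tse_t = mse_t * rows * cols * 3  # lambda_row * lambda_col * lambda_color

    # Per-channel histograms H_R(i), H_G(i), H_B(i) over levels 0..I_MAX.
    hists = [np.bincount(image[:, :, c].ravel(), minlength=i_max + 1)
             for c in range(3)]
    levels = np.arange(i_max + 1)

    best = i_max
    for i_ccp in range(i_max - 1, -1, -1):
        # TSE_C: squared error from clipping all levels above I_ccp.
        tse_c = sum(float(np.sum(h[i_ccp + 1:]
                                 * (levels[i_ccp + 1:] - i_ccp) ** 2))
                    for h in hists)
        if tse_c > tse_t:
            break          # error budget exhausted; stop the search
        best = i_ccp       # lowest clipping point still within TSE_T
    return best
```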

2.1.2. MGEC: Multi-Histogram-Based Gray Level Error Control

MGEC (Multi-Histogram-Based Gray Level Error Control) [14] achieves better PSNR and power-saving performance because it is a block-based algorithm. Assuming the image is divided into M × N blocks, the target mean-square error ($MSE_T$) and the equivalent per-block target $TSE_T$ are computed as:
$$MSE_T = \frac{255^2}{10^{\mathrm{PSNR}/10}}$$
$$TSE_T = MSE_T \times \frac{\lambda_{row}}{M} \times \frac{\lambda_{col}}{N} \times \lambda_{color}$$
For each clipping point $I_{ccp}$, the image quality degradation $TSE_C^n$ is computed for each block $n = 1 \sim M \times N$:
$$TSE_{R\_C}^n = \sum_{i=I_{ccp}}^{I_{MAX}} H_R^n(i) \times (i - I_{ccp})^2$$
$$TSE_{G\_C}^n = \sum_{i=I_{ccp}}^{I_{MAX}} H_G^n(i) \times (i - I_{ccp})^2$$
$$TSE_{B\_C}^n = \sum_{i=I_{ccp}}^{I_{MAX}} H_B^n(i) \times (i - I_{ccp})^2$$
$$TSE_C^n = TSE_{R\_C}^n + TSE_{G\_C}^n + TSE_{B\_C}^n$$
where $H_R^n(i)$, $H_G^n(i)$, and $H_B^n(i)$ represent the numbers of R, G, and B pixels at gray level $i$ in block $n$. After obtaining $TSE_C^n$ for all blocks, the maximum $TSE_C^n$ is chosen as $TSE_M$:
$$TSE_M = \max_n \{ TSE_C^n \}$$
$TSE_M$ is then compared with the target $TSE_T$. Similar to I2GEC, as long as $TSE_M$ still satisfies $TSE_T$, the clipping point $I_{ccp}$ is decreased by 1 and the resulting $TSE_M$ is checked again; the lowest clipping point whose $TSE_M$ does not exceed $TSE_T$ is selected. Again, the complexity of the method is high. This paper works with the four-block version, denoted as MGEC4, and the sixteen-block version, denoted as MGEC16.
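A minimal sketch of the block-based variant follows, reusing the structure of the I2GEC sketch above; partitioning the image into m × n equal regions is our assumption for illustration (m = n = 2 gives MGEC4, m = n = 4 gives MGEC16).

```python
import numpy as np

def mgec_clipping_point(image, target_psnr, m=2, n=2, i_max=255):
    rows, cols, _ = image.shape
    mse_t = 255.0 ** 2 / (10.0 ** (target_psnr / 10.0))
    # Per-block target: TSE_T = MSE_T * (rows/m) * (cols/n) * 3.
    tse_t = mse_t * (rows / m) * (cols / n) * 3

    # Per-block, per-channel histograms H^n(i).
    hists = [[np.bincount(image[br * rows // m:(br + 1) * rows // m,
                                bc * cols // n:(bc + 1) * cols // n,
                                c].ravel(), minlength=i_max + 1)
              for c in range(3)]
             for br in range(m) for bc in range(n)]
    levels = np.arange(i_max + 1)

    best = i_max
    for i_ccp in range(i_max - 1, -1, -1):
        # TSE_M: worst-block degradation (max over blocks of R+G+B error).
        tse_m = max(sum(float(np.sum(h[i_ccp:]
                                     * (levels[i_ccp:] - i_ccp) ** 2))
                        for h in block)
                    for block in hists)
        if tse_m > tse_t:
            break
        best = i_ccp
    return best
```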

2.1.3. Gaussian: Adaptive Local Dimming Backlight Control Based on Gaussian Distribution

The state-of-the-art method of adaptive local dimming backlight control based on Gaussian distribution [17] is denoted as Gaussian. The backlight dimming algorithm follows the same clipping-point concept, where $x_i$ is a pixel in the original image, $N$ is the total number of pixels in the image, and $\mu$ is their mean. The Gaussian distribution of the image pixels is first formulated:
$$\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (x_i - \mu)^2}$$
$$f(x; \mu, \sigma) = \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{(x - \mu)^2}{2\sigma^2}}$$
To improve the power-saving rate while maintaining high image quality, and according to the normal distribution probability, the clipping point $C_p(m, n)$ for each block is found as follows, where $Z$ is the distance between the expected probability and the population mean of the luminance:
$$C_p(m, n) = Z \times \sigma + \mu$$
The pixel compensator and the PWM (pulse-width modulation) module are then applied to complete the algorithm.
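A minimal sketch of the per-block computation is given below; treating Z as a tunable constant (and its value here) is our assumption for illustration.

```python
import numpy as np

def gaussian_clipping_point(block, z=2.0):
    # mu and sigma of the block's pixel distribution.
    x = block.astype(np.float64).ravel()
    mu, sigma = x.mean(), x.std()
    # C_p(m, n) = Z * sigma + mu, capped to the valid gray-level range.
    return int(min(255, round(z * sigma + mu)))
```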

2.1.4. Power Model

The descriptions of the above algorithms focus on the aspects of the resulting image quality given a certain setting of the clipping point. The other aspect of the backlight power-saving algorithms is the power consumption for a given clipping point. All the above works (I2GEC [13], MGEC4 [14], MGEC16 [14] and Gaussian [17]) follow the same power model, which is derived in [27]:
$$P_{backlight}(\beta) = \begin{cases} A_{lin} \cdot \beta + C_{lin}, & 0 \le \beta \le C_s \\ A_{sat} \cdot \beta + C_{sat}, & C_s < \beta \le 1 \end{cases}$$
where $\beta = \text{clipping point} / 255$, $A_{lin} = 1.9600$, $A_{sat} = 6.9440$, $C_{lin} = -0.2372$, $C_{sat} = -4.3240$, and $C_s = 0.8234$.
In general, a lower clipping point produces a lower PSNR and lower power consumption. Table 1 tabulates the power of some example clipping points.
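The model is easy to evaluate; the following sketch reproduces the values of Table 1 (the function name is ours).

```python
def backlight_power(clipping_point):
    a_lin, c_lin = 1.9600, -0.2372   # linear region coefficients
    a_sat, c_sat = 6.9440, -4.3240   # saturation region coefficients
    c_s = 0.8234                     # boundary between the two regions
    beta = clipping_point / 255.0    # normalized clipping point
    if beta <= c_s:
        return a_lin * beta + c_lin
    return a_sat * beta + c_sat

# backlight_power(255) -> 2.6200 W and backlight_power(150) -> 0.9157 W,
# matching Table 1.
```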

2.2. Fast Algorithm for Power Control and Image-Quality Control Using Motion Vectors

In this section, we propose using the motion vectors decoded at the decoder as the key information to estimate the similarity between adjacent frames. An associated algorithm is designed to speed up the state-of-the-art methods with only a very slight performance degradation.

2.2.1. Motion Vector Estimation (Performed at the Encoder)

In the video encoder, the previous frame serves as the reference frame to reduce the bit rate for storing the information of the current frame: the encoder checks whether there are blocks in the reference frame whose pixels are similar to those of blocks in the current frame. The displacement from the current block to the matched reference block describes the "motion" of the object from the previous frame to the current frame. The displacement is defined by a "motion vector" (mvx, mvy). The process of finding the optimal motion vector in the encoder is called "motion estimation" [28].
Motion estimation proceeds as follows. The prediction error D of a block refers to the pixel differences between the current block and a candidate reference block in the reference frame. The number of bits needed to store the associated motion vector (mvx, mvy) is denoted R. To find the best motion vector, one of the most basic methods is to perform a search within a fixed range in the reference frame, finding the candidate block with the minimum cost L = D + λR, i.e., the minimal weighted sum of prediction error and bit rate. The search is illustrated in Figure 1. The best (optimal) motion vector is then recorded in the video bitstream to be stored on disk or transmitted over the network.
As can be seen, the motion estimation is a complex process. However, we have to note that this complex process is performed in the encoder, as opposed to the decoder at the display side, where the proposed backlight saving algorithm is performed. Therefore, the complexity of the motion estimation has nothing to do with the complexity of the proposed backlight saving algorithm.
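For concreteness, a sketch of full-search motion estimation for one block is given below; approximating the rate term R by the motion-vector magnitude is our simplification, not an actual codec's entropy coder.

```python
import numpy as np

def full_search(cur_block, ref_frame, bx, by, search_range=8, lam=1.0):
    """Find the motion vector (mvx, mvy) minimizing L = D + lambda * R
    within a fixed window around block position (bx, by)."""
    n = cur_block.shape[0]
    best_cost, best_mv = float("inf"), (0, 0)
    for mvy in range(-search_range, search_range + 1):
        for mvx in range(-search_range, search_range + 1):
            y, x = by + mvy, bx + mvx
            if (y < 0 or x < 0 or y + n > ref_frame.shape[0]
                    or x + n > ref_frame.shape[1]):
                continue  # candidate falls outside the reference frame
            ref = ref_frame[y:y + n, x:x + n].astype(np.int64)
            d = float(np.sum((cur_block.astype(np.int64) - ref) ** 2))
            cost = d + lam * (abs(mvx) + abs(mvy))  # L = D + lambda*R
            if cost < best_cost:
                best_cost, best_mv = cost, (mvx, mvy)
    return best_mv
```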

2.2.2. The Proposed Power-Saving Method Using Motion Vectors

The correlation between the current frame and the reference frame (the previous frame) can, in some sense, be characterized by the motion vectors. If the motion vectors are small, the differences between the current frame and the previous frame are small, and the correlation between the two frames is very high. On the other hand, if the motion vectors are large, the differences between the frames are large, and the correlation is very small. This is the design concept of the proposed power-saving method.
What is more, no extra effort is required to obtain the motion vectors on the display side. This is because, in order to decode the video at the receiver, the motion vectors in the bitstream must first be decoded to reconstruct each frame [28]. All we have to do is store them for the proposed fast backlight-saving algorithm after the reconstruction of the video, instead of throwing them away. Therefore, the motion vectors of each frame are available to our application at no cost and do not increase the complexity of our algorithm.
The proposed algorithm is designed as follows: the regular power-saving method is applied to the first frame of the video. For the following frames, the motion vectors are used to analyze the correlation between the current frame and the previous frame. For each motion vector (mvx_i, mvy_i) of a block $i$, the magnitude of the motion vector is computed as
$$\mathrm{Mag}_i = |mvx_i| + |mvy_i|$$
The Magi for all the blocks in a frame are summed to be SumMag:
SumMag = i Mag i        
If SumMag is smaller than a threshold, the current frame and the previous frame differ little; therefore, the proposed algorithm does not need to perform the whole regular method again. Instead, it can automatically reuse the clipping point decision of the previous frame (reference frame).
By doing this, the proposed algorithm saves the computational complexity of finding the clipping point for the current frame. If SumMag is greater than the threshold, the current frame and the previous frame differ greatly; therefore, the proposed algorithm performs the whole regular method for the current frame. The overall algorithm is illustrated in Figure 2.
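The complete per-frame decision then reduces to a few lines of code. The sketch below follows Figure 2; regular_method stands for any of the clipping-point algorithms of Section 2.1, and the names are ours.

```python
def fast_clipping_point(motion_vectors, prev_cp, frame, regular_method,
                        threshold=15000):
    # SumMag = sum of |mvx_i| + |mvy_i| over all blocks of the frame.
    sum_mag = sum(abs(mvx) + abs(mvy) for mvx, mvy in motion_vectors)
    if prev_cp is not None and sum_mag < threshold:
        return prev_cp            # frames similar: reuse previous decision
    return regular_method(frame)  # frames differ: run the full algorithm
```

The first frame has no previous decision (prev_cp is None), so the regular method always runs there, matching the algorithm description.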
Obviously, when the threshold is set to a small value, SumMag easily exceeds it, and the whole regular method is re-run frequently; this yields little time saving, but the image quality is maintained. On the other hand, when the threshold is set to a large value, SumMag is easily under it, and the proposed method frequently reuses the clipping point of the previous frame; this yields large time savings, but with possible drops in image quality. In the experiment section, we set various thresholds to reveal the tradeoffs.

3. Results

In this section, the experimental results are presented. As discussed, the proposed work aims to reduce the execution time of existing works while maintaining their performances by using the information of the decoded motion vectors. The proposed method follows the flow chart shown in Figure 2, with the methods discussed earlier, I2GEC [13], MGEC4 [14], MGEC16 [14], and Gaussian [17], as the regular power-saving algorithm in Figure 2. The test videos are NASASOF-WindTunnelTesting_100, NASASF-ISSLife_0324, NASASOF-WindTunnelTesting_0324, NASA-EPAQ_0325, and NASASF-FOT_0325.
The thresholds in Figure 2 are set to 0, 15,000, and 30,000 to reveal the performances under different settings. Note that a threshold of 0 corresponds to the original/regular power-saving algorithm, since the regular method is then executed for every frame. In Appendix A, several performances are evaluated: execution time (seconds) in Table A1, PSNR (Peak Signal-to-Noise Ratio, dB) in Table A2, power (watts) in Table A3, and selected clipping points in Table A4. The average performances over the different videos for each method and threshold are shown in Figure 3, Figure 4, Figure 5, and Figure 6, respectively. Note that all methods are realized in software and run on a personal computer with an Intel Core i7-7700K CPU, and the measured time (in seconds) is the interval from the beginning to the end of the method over all video frames.
The average time comparisons in Table A1 are discussed first. As can be seen, for I2GEC, the original work (threshold = 0) takes 28.88 s; when the proposed work participates with a threshold of 15,000, the time is reduced to 21.69 s, and the DT (decreased time, %) is 25.21%. When the threshold is set to the even higher value of 30,000, the time is further reduced to 13.86 s, with DT = 52.01%. For MGEC4, the threshold of 15,000 gives a DT of 26.48%, and the threshold of 30,000 gives 48.10%. For MGEC16, the threshold of 15,000 gives a DT of 28.28%, and the threshold of 30,000 gives 49.84%. For Gaussian, the threshold of 15,000 gives a DT of 37.78%, and the threshold of 30,000 gives 64.22%. Figure 3 provides bar comparisons for the above descriptions. As can be observed, a higher threshold induces a higher DT, since a higher threshold gives more chances to simply reuse the clipping point of the previous frame. Therefore, the proposed work can indeed reduce the execution time of the existing methods, and the reduction varies with different settings of the threshold.
For comprehensive comparisons, the average PSNR (Table A2, Figure 4), power consumption (Table A3, Figure 5; computed by the power model in Section 2.1.4), and the selected clipping points (Table A4, Figure 6) are discussed together. As discussed, the proposed work can help I2GEC with DT = 25.21% and 52.01%. What is more, the PSNR differences are only −0.01 dB, the power differences are only 0.000 W, and the clipping points are the same on average. These indicate that the proposed work not only reduces the execution time by good ratios, but also maintains the quality of the work. For MGEC4, the DTs of the proposed work are 26.48% and 48.10%. The performances are also maintained, since the PSNR differences are only −0.02 dB and 1.91 dB, the power differences are only 0.006 W and 0.007 W, and the clipping points are the same on average. Similarly, for MGEC16, the proposed work provides DT = 28.28% and 49.84%. These good reductions in time do not affect the performances: the PSNR differences are only 0.16 dB and 0.19 dB, the power differences are only 0.007 W and 0.008 W, and the selected clipping points are again the same. Finally, for Gaussian, when the DTs of the proposed work are 37.78% and 64.22%, the performances are not degraded, as the PSNR differences are 0.00 dB and 0.01 dB, the power differences are −0.001 W and 0.000 W, and the selected clipping points are the same on average.
We have to note that the proposed work does not aim to improve or degrade the PSNR and the power performance of the existing works; the proposed work aims to position the PSNR and the power performance close to those of the existing works. Therefore, it does not matter if the differences in the above-mentioned tables are positive or negative; we focus on the magnitudes of the differences being small.

4. Conclusions

This paper presents an innovative fast algorithm to reduce the computation time of existing display backlight power-saving methods. The proposed work is based on the analysis of the motion vectors decoded for each video frame before display. The motion vectors are used as the similarity measurement of adjacent frames. If the adjacent frames are similar, the selected clipping point (the backlight algorithm decision) of the current frame can simply duplicate that of the reference frame, without needing to be computed by the original algorithm; this removes the execution time spent selecting the clipping point for the current frame. Working with several state-of-the-art methods, under different similarity threshold settings, the execution time was reduced by 25.21~64.22%. In addition, the proposed work shows no significant differences from the state-of-the-art methods in PSNR (−0.02~1.91 dB) or power consumption (−0.001~0.008 W).

Author Contributions

Conceptualization, T.-L.L., Y.-L.C. and K.-H.T.; Data curation, Y.-L.C. and K.-H.T.; Formal analysis, T.-Y.C. and T.-L.L.; Funding acquisition, S.-L.C., T.-L.L., C.-A.C., S.-Y.L. and W.-Y.C.; Methodology, T.-Y.C., Y.-L.C. and K.-H.T.; Resources, S.-L.C., T.-L.L., C.-A.C., S.-Y.L. and W.-Y.C.; Software, Y.-L.C., K.-H.T. and T.-Y.C.; Supervision, T.-L.L., S.-Y.L., S.-L.C. and C.-A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the Ministry of Science and Technology, Taiwan, under Grant MOST 111-2221-E-033-041, 111-2823-8-033-001, 110-2223-8-033-002, 110-2221-E-027-044-MY3, 110-2218-E-035-007, 110-2622-E-131-002, 109-2622-E-131-001-CC3, and the National Chip Implementation Center, Taiwan.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Results of the Power-Saving Algorithm at Three Thresholds

This appendix evaluates the proposed method for the threshold settings 0, 15,000, and 30,000, looking at the various performances under the different settings. When the threshold is set to 0, the regular algorithm is executed for every frame; therefore, this setting corresponds to the original/traditional power-saving algorithm. Four performances are compared: execution time (seconds), PSNR (peak signal-to-noise ratio, dB), power (watts), and selected clipping points, in Table A1, Table A2, Table A3, and Table A4, respectively.
Table A1. Execution time (seconds; to the second decimal place) of different algorithms at different thresholds. Each cell gives the times at thresholds 0 / 15,000 / 30,000.

| Video | I2GEC [13] | MGEC4 [14] | MGEC16 [14] | Gaussian [17] |
|---|---|---|---|---|
| NASASOF-WindTunnelTesting_100 | 27.63 / 22.03 / 8.83 | 330.19 / 243.06 / 125.07 | 391.87 / 275.78 / 136.31 | 12,430.01 / 8540.83 / 1945.16 |
| NASASF-ISSLife_0324 | 27.58 / 8.83 / 7.84 | 319.13 / 127.79 / 113.60 | 343.16 / 123.45 / 118.20 | 11,980.38 / 3490.33 / 2751.46 |
| NASASOF-WindTunnelTesting_0324 | 32.52 / 34.69 / 15.16 | 413.58 / 418.85 / 206.54 | 433.74 / 434.84 / 212.30 | 12,436.08 / 13,156.24 / 4644.61 |
| NASA-EPAQ_0325 | 30.83 / 34.52 / 30.17 | 261.37 / 278.75 / 268.34 | 271.89 / 280.82 / 277.96 | 12,521.76 / 12,887.76 / 12,548.58 |
| NASASF-FOT_0325 | 25.83 / 8.38 / 7.32 | 270.73 / 104.26 / 114.28 | 272.17 / 113.52 / 114.38 | 14,898.99 / 1909.27 / 1105.39 |
| Average | 28.88 / 21.69 (−25.21%) / 13.86 (−52.01%) | 319.00 / 234.50 (−26.48%) / 165.56 (−48.10%) | 342.57 / 245.68 (−28.28%) / 171.80 (−49.84%) | 12,853.44 / 7996.89 (−37.78%) / 4599.04 (−64.22%) |
Table A2. PSNR (dB; to the second decimal place) of different algorithms at different thresholds. Each cell gives the PSNR at thresholds 0 / 15,000 / 30,000.

| Video | I2GEC [13] | MGEC4 [14] | MGEC16 [14] | Gaussian [17] |
|---|---|---|---|---|
| NASASOF-WindTunnelTesting_100 | 30.10 / 30.11 / 30.12 | 37.49 / 37.58 / 37.65 | 34.97 / 35.07 / 35.16 | 30.11 / 30.09 / 30.10 |
| NASASF-ISSLife_0324 | 30.12 / 30.11 / 30.11 | 36.92 / 37.21 / 39.87 | 35.55 / 35.83 / 35.83 | 30.77 / 30.81 / 30.81 |
| NASASOF-WindTunnelTesting_0324 | 30.13 / 30.13 / 30.10 | 39.72 / 39.73 / 39.87 | 38.80 / 38.80 / 38.87 | 40.90 / 40.90 / 40.92 |
| NASA-EPAQ_0325 | 30.09 / 30.09 / 30.09 | 38.89 / 38.89 / 38.89 | 38.80 / 38.80 / 38.80 | 36.03 / 36.03 / 36.03 |
| NASASF-FOT_0325 | 30.15 / 30.13 / 30.13 | 36.70 / 37.05 / 37.05 | 36.98 / 37.39 / 37.39 | 30.41 / 30.41 / 30.41 |
| Average (differences from original) | 30.12 / 30.11 (−0.01) / 30.11 (−0.01) | 36.74 / 36.72 (−0.02) / 38.65 (1.91) | 37.02 / 37.18 (0.16) / 37.21 (0.19) | 33.65 / 33.65 (0.00) / 33.66 (0.01) |
Table A3. Power (watts; to the third decimal place) of different algorithms at different thresholds. Each cell gives the power at thresholds 0 / 15,000 / 30,000.

| Video | I2GEC [13] | MGEC4 [14] | MGEC16 [14] | Gaussian [17] |
|---|---|---|---|---|
| NASASOF-WindTunnelTesting_100 | 0.775 / 0.775 / 0.775 | 1.153 / 1.157 / 1.159 | 1.062 / 1.066 / 1.069 | 0.839 / 0.834 / 0.836 |
| NASASF-ISSLife_0324 | 0.788 / 0.787 / 0.787 | 1.193 / 1.201 / 1.201 | 1.157 / 1.165 / 1.165 | 0.781 / 0.781 / 0.781 |
| NASASOF-WindTunnelTesting_0324 | 0.554 / 0.554 / 0.553 | 0.964 / 0.964 / 0.968 | 0.939 / 0.939 / 0.941 | 0.880 / 0.880 / 0.884 |
| NASA-EPAQ_0325 | 0.757 / 0.757 / 0.757 | 1.330 / 1.330 / 1.330 | 1.335 / 1.335 / 1.335 | 0.792 / 0.792 / 0.792 |
| NASASF-FOT_0325 | 0.952 / 0.951 / 0.951 | 1.628 / 1.647 / 1.647 | 1.640 / 1.663 / 1.663 | 0.727 / 0.727 / 0.727 |
| Average (differences from original) | 0.765 / 0.765 (0.000) / 0.765 (0.000) | 1.254 / 1.260 (0.006) / 1.261 (0.007) | 1.226 / 1.233 (0.007) / 1.234 (0.008) | 0.804 / 0.803 (−0.001) / 0.804 (0.000) |
Table A4. Selected clipping points (in units of gray-scale pixel value) of different algorithms at different thresholds. Each cell gives the clipping points at thresholds 0 / 15,000 / 30,000.

| Video | I2GEC [13] | MGEC4 [14] | MGEC16 [14] | Gaussian [17] |
|---|---|---|---|---|
| NASASOF-WindTunnelTesting_100 | 131 / 131 / 131 | 194 / 194 / 194 | 170 / 170 / 170 | 75 / 75 / 75 |
| NASASF-ISSLife_0324 | 133 / 133 / 133 | 196 / 196 / 196 | 182 / 182 / 182 | 90 / 90 / 90 |
| NASASOF-WindTunnelTesting_0324 | 103 / 103 / 102 | 166 / 166 / 166 | 154 / 154 / 153 | 70 / 70 / 70 |
| NASA-EPAQ_0325 | 129 / 129 / 129 | 216 / 216 / 216 | 204 / 204 / 204 | 107 / 107 / 107 |
| NASASF-FOT_0325 | 154 / 154 / 154 | 215 / 215 / 215 | 207 / 207 / 207 | 95 / 95 / 95 |
| Average | 130 / 130 / 130 | 197 / 197 / 197 | 183 / 183 / 183 | 87 / 87 / 87 |

References

  1. Luis, Á.; Casares, P.; Cuadrado-Gallego, J.J.; Patricio, M.A. PSON: A Serialization Format for IoT Sensor Networks. Sensors 2021, 21, 4559. [Google Scholar] [CrossRef] [PubMed]
  2. Kokert, J.; Reindl, L.M.; Rupitsch, S.J. Behavioral Modeling of DC/DC Converters in Self-Powered Sensor Systems with Modelica. Sensors 2021, 21, 4599. [Google Scholar] [CrossRef] [PubMed]
  3. Chen, S.-L.; Tsai, H.-J.; Lin, T.-L.; Lee, H.-Y. Block-Based Content Adaptive Backlight Controller VLSI Design for Local Dimming LCDs. In Proceedings of the 2016 23rd International Workshop on Active-Matrix Flatpanel Displays and Devices (AM-FPD), Kyoto, Japan, 6–8 July 2016; pp. 63–66. [Google Scholar]
  4. Gao, Z.; Ning, H.; Yao, R.; Xu, W.; Zou, W.; Guo, C.; Luo, D.; Xu, H.; Xiao, J. Mini-LED Backlight Technology Progress for Liquid Crystal Display. Crystals 2022, 12, 313. [Google Scholar] [CrossRef]
  5. Chen, C.-C.; Qiu, Y.-Y.; Zheng, W.-W.; Yu, G.; Chiu, C.-Y.; Zhao, B.; Zhang, X. 17-4: Evaluate and Upgrade Picture Quality of Local Dimming Mini-LED LCD. In SID Symposium Digest of Technical Papers; Wiley-Blackwell: Hoboken, NJ, USA, 2020; pp. 235–238. [Google Scholar]
  6. Lai, C.-C.; Tsai, C.-C. Backlight Power Reduction and Image Contrast Enhancement Using Adaptive Dimming for Global Backlight Applications. IEEE Trans. Consum. Electron. 2008, 54, 669–674. [Google Scholar] [CrossRef]
  7. Deng, M.-Y.; Hsiang, E.-L.; Yang, Q.; Tsai, C.-L.; Chen, B.-S.; Wu, C.-E.; Lee, M.-H.; Wu, S.-T.; Lin, C.-L. Reducing Power Consumption of Active-Matrix Mini-LED Backlit LCDs by Driving Circuit. IEEE Trans. Electron. Devices 2021, 68, 2347–2354. [Google Scholar] [CrossRef]
  8. Tan, G.; Huang, Y.; Li, M.-C.; Lee, S.-L.; Wu, S.-T. High dynamic range liquid crystal displays with a mini-LED backlight. Opt. Express 2018, 26, 16572–16584. [Google Scholar] [CrossRef]
  9. Oh, W.-S.; Cho, D.; Cho, K.-M.; Moon, G.-W.; Yang, B.; Jang, T. A Novel Two-Dimensional Adaptive Dimming Technique of X-Y Channel Drivers for LED Backlight System in LCD TVs. J. Disp. Technol. 2009, 5, 20–26. [Google Scholar] [CrossRef]
  10. Katic, N.; Popovic, V.; Cojbasic, R.; Schmid, A.; Leblebici, Y. A Relative Imaging CMOS Image Sensor for High Dynamic Range and High Frame-Rate Machine Vision Imaging Applications. IEEE Sens. J. 2015, 15, 4121–4129. [Google Scholar] [CrossRef]
  11. Yamagiwa, S.; Ichinomiya, Y. Stream-Based Visually Lossless Data Compression Applying Variable Bit-Length ADPCM Encoding. Sensors 2021, 21, 4602. [Google Scholar] [CrossRef]
  12. Cho, D.; Oh, W.-S.; Moon, G.W. A Novel Adaptive Dimming LED Backlight System With Current Compensated X-Y Channel Drivers for LCD TVs. J. Disp. Technol. 2011, 7, 29–35. [Google Scholar] [CrossRef]
  13. Kang, S.; Kim, Y.H. Image Integrity-Based Gray-Level Error Control for Low Power Liquid Crystal Displays. IEEE Trans. Consum. Electron. 2009, 55, 2401–2406. [Google Scholar] [CrossRef]
  14. Kang, S.-J.; Kim, Y.H. Multi-Histogram-Based Backlight Dimming for Low Power Liquid Crystal Displays. J. Disp. Technol. 2011, 7, 544–549. [Google Scholar] [CrossRef]
  15. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  16. Kang, S.-J. SSIM Preservation-Based Backlight Dimming. J. Disp. Technol. 2014, 10, 247–250. [Google Scholar] [CrossRef]
  17. Chen, S.-L.; Tsai, H.-J. A Novel Adaptive Local Dimming Backlight Control Chip Design Based on Gaussian Distribution for Liquid Crystal Displays. J. Disp. Technol. 2016, 12, 1494–1505. [Google Scholar] [CrossRef]
  18. Kim, S.-C.; Cho, Y.-S. Predictive System Implementation to Improve the Accuracy of Urine Self-Diagnosis with Smartphones: Application of a Confusion Matrix-Based Learning Model through RGB Semiquantitative Analysis. Sensors 2022, 22, 5445. [Google Scholar] [CrossRef]
  19. Chen, S.-L.; Zhou, H.-S.; Chen, T.-Y.; Lee, T.-H.; Chen, C.-A.; Lin, T.-L.; Lin, N.-H.; Wang, L.-H.; Lin, S.-Y.; Chiang, W.-Y.; et al. Dental Shade Matching Method Based on Hue, Saturation, Value Color Model with Machine Learning and Fuzzy Decision. Sens. Mater. 2020, 32, 3185–3207. [Google Scholar] [CrossRef]
  20. Pei, S.-C.; Shen, C.-T. Color Enhancement With Adaptive Illumination Estimation for Low-Backlighted Displays. IEEE Trans. Multimed. 2017, 19, 1956–1961. [Google Scholar] [CrossRef]
  21. Choi, D.Y.; Song, B.C. Power-Constrained Image Enhancement Using Multiband Processing for TFT LCD Devices with an Edge LED Backlight Unit. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 1445–1456. [Google Scholar] [CrossRef]
  22. Wang, Z.; Wang, F.; Wu, D.; Gao, G. Infrared and Visible Image Fusion Method Using Salience Detection and Convolutional Neural Network. Sensors 2022, 22, 5430. [Google Scholar] [CrossRef]
  23. Huo, F.; Zhu, X.; Zeng, H.; Liu, Q.; Qiu, J. Fast Fusion-Based Dehazing With Histogram Modification and Improved Atmospheric Illumination Prior. IEEE Sens. J. 2021, 21, 5259–5270. [Google Scholar] [CrossRef]
  24. Zhang, T.; Zhao, X.; Pan, X.; Li, X.; Lei, Z. Optimal Local Dimming Based on an Improved Shuffled Frog Leaping Algorithm. IEEE Access 2018, 6, 40472–40484. [Google Scholar] [CrossRef]
  25. Zhou, Y.; Xiang, W.; Wang, G. Frame Loss Concealment for Multiview Video Transmission Over Wireless Multimedia Sensor Networks. IEEE Sens. J. 2015, 15, 1892–1901. [Google Scholar] [CrossRef]
  26. Lin, T.-L.; Tseng, H.-W.; Wen, Y.; Lai, F.-W.; Lin, C.-H.; Wang, C.-J. Reconstruction Algorithm for Lost Frame of Multiview Videos in Wireless Multimedia Sensor Network Based on Deep Learning Multilayer Perceptron Regression. IEEE Sens. J. 2018, 18, 9792–9801. [Google Scholar] [CrossRef]
  27. Iranli, A.; Pedram, M. DTM: Dynamic Tone Mapping for Backlight Scaling. In Proceedings of the 42nd Design Automation Conference, Anaheim, CA, USA, 13–17 June 2005; pp. 612–616. [Google Scholar]
  28. Richardson, I.E. The H.264 Advanced Video Compression Standard, 2nd ed.; Wiley: Hoboken, NJ, USA, 2010. [Google Scholar]
Figure 1. The illustration of motion estimation and the motion vector (mvx, mvy).
Figure 2. Flow chart of the proposed work.
Figure 3. Bar comparisons of the DT (decreased time, %) of different algorithms at different thresholds [9,10,13].
Figure 4. Bar comparisons of PSNR (dB) of different algorithms at different thresholds [9,10,13].
Figure 5. Bar comparisons of power (watts) of different algorithms at different thresholds [9,10,13].
Figure 6. Bar comparisons of the selected clipping points (in units of gray-scale pixel value) of different algorithms at different thresholds [9,10,13].
Table 1. Examples of clipping points and the corresponding power consumption (computed by the power model in Section 2.1.4).

| Clipping Point | Power (W) |
|---|---|
| 255 | 2.6200 |
| 200 | 1.3001 |
| 150 | 0.9157 |
| 100 | 0.5314 |
| 50 | 0.1471 |

