Article

Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback

1 School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China
2 Astronaut Research and Training Center of China, Beijing 100094, China
* Author to whom correspondence should be addressed.
Sensors 2017, 17(2), 321; https://doi.org/10.3390/s17020321
Submission received: 7 January 2017 / Revised: 1 February 2017 / Accepted: 3 February 2017 / Published: 9 February 2017
(This article belongs to the Section Physical Sensors)

Abstract

An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as those based on photosensitive resistance sensors or infrared sensors, an imaging sensor can achieve a finer perception of the environmental light, and thus it can guide more precise lighting control. Before the system is put to work, a large set of typical imaging lighting data of the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed on these datasets, from which the cluster benchmarks of the objective LEEMs are obtained. Third, both a single LEEM-based control and a multiple LEEMs-based control are developed to realize optimal luminance tuning. When the system works, it first captures the lighting image with a wearable camera. It then computes the objective LEEMs of the captured image and compares them with the cluster benchmarks of the objective LEEMs. Finally, the single LEEM-based or the multiple LEEMs-based control is implemented to obtain an optimal lighting effect. Extensive experimental results show that the proposed system can tune the LED lamp automatically according to environment luminance changes.

1. Introduction

In recent years, intelligent Light Emitting Diode (LED) lighting systems have been widely applied in daily life, for example in intelligent street lighting systems [1,2], intelligent office lighting devices, and even tele-robotic arm equipment [3]. An intelligent lighting system uses a miniature sensor and its control circuit to perceive the environment luminance, estimates the lighting effect, and implements a precise output control of the LED lamp(s) [4], so that an optimal lighting vision effect can be obtained. Such a system is not restricted to nighttime use; it can also be employed during the daytime as a light compensation tool to cope with complex environment light sources or uneven illumination distributions. Improper lighting decreases working efficiency [5], affects task scheduling, and can even cause accidents. For example, our past research found that visual fatigue and the human error rate may increase when an astronaut performs aerospace flight tasks under improper environment lighting [6]. Figure 1 shows different lighting effects of a display: (a) uses one light source; (b) and (c) use two light sources, and (c) suffers from glare. As Figure 1 illustrates, different lighting effects seriously influence the display reading task [7]. It is therefore necessary to develop an intelligent lighting system and method to achieve optimal lighting effects.
Much research has been done to improve the output performance of intelligent LED lighting systems. For example, in [8] the authors developed a system that used a passive infrared sensor to control street lamps according to vehicle detection results. In [9], the authors used an infrared sensor to detect human movement, together with a photosensitive resistance to tune the lighting intensity. In [10], the authors used an ultrasonic array sensor to detect the indoor presence of people, realizing an energy-efficient occupancy-adaptive indoor lighting system. Clearly, traditional sensors, such as the infrared sensor, the photosensitive resistance sensor, or the ultrasonic sensor, have only a limited environment perception ability. Their outputs are too “coarse” for lighting effect analysis because they acquire only one-dimensional sensor data or merely provide a switching value. To obtain an elaborate description and understanding of the external environment, other sensors should therefore be considered.
The aim of this work is to use an imaging sensor to perceive environmental luminance changes and implement a kind of intelligent lighting control. Without loss of generality, an indoor desk application is taken as the research object. Before the system is employed, a large set of imaging lighting data of the desk application is first accumulated; a wearable camera can be used to collect the images. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and evaluated on these datasets, and the cluster benchmarks of the objective LEEMs are obtained. The LEEMs include: the Subjective and Objective Surface Luminance Degree of Display [11], SMSLDD and OMSLDD; the Subjective and Objective Color Deviation Degree of Display Content [12], SMCDDDC and OMCDDDC; the Subjective and Objective Region Contrast Degree of Display Content [13], SMRCDDC and OMRCDDC; the Subjective and Objective Edge Blur Degree of Display Content [14], SMEBDDC and OMEBDDC; and the Objective Glare Degree of Display [15], OMGDD. Third, both a single LEEM-based control and a multiple LEEMs-based control are developed to realize optimal luminance tuning. When the system works, it first captures the lighting image with a wearable camera. It then computes the objective LEEMs of the captured image and compares the results with the cluster benchmarks of the objective LEEMs. Finally, the single LEEM-based or the multiple LEEMs-based control is implemented to obtain an optimal lighting effect.
The main contributions of this paper are twofold. First, an imaging environment perception method for lighting effects is developed. A series of blind image quality evaluation metrics [16] are utilized as the perception features, so the lighting effect can be described and controlled quantitatively. Second, a practical application system of the proposed technique is designed and developed. A wearable camera sensor, a color card, and an LED lamp are combined in this system, and intelligent hardware is employed to realize the corresponding information processing functions. In the following sections, an overview of the proposed system is presented first, then the specific techniques are introduced, and finally experimental results and discussion are given.

2. Proposed Lighting System Design

2.1. Application Formulations

The prototype of the proposed intelligent LED lighting system for desk applications is shown in Figure 2. This system works for a single user. The prototype has many application instances, such as hermetic cockpit design for pilots or industrial worktable development for workers. In Figure 2, a user sits in front of a display to perform a reading or writing task; a white light LED source is fixed to the ceiling above the user’s head; the distance between the LED lighting source and the user’s head is from 0.25 m to 1.0 m. A surface light source is utilized here. The output intensity of the LED lamp can be adaptively tuned by its controller. The proposed lighting system can work for both daytime and nighttime applications. In a daytime application, the LED lamp serves as a lighting supplement apparatus: it can balance the uneven luminance for an ocular operation task. In a nighttime application, it works as the lighting source itself. In this paper, it is assumed that the size of the display is about 200 mm × 300 mm and that its luminance output is stable. After an extensive investigation of the application requirements, a “good lighting effect” here means normal color and a proper environment luminance (including the intensity, the contrast, and the distribution) without any glare.

2.2. Design of Intelligent LED Lighting System

Figure 3a shows the hardware design of the intelligent lighting system. A wearable visible light camera (or a glasses-mounted camera) and its imaging data processing circuit are used by the user; a standard color card is pasted at the margin of the display; and an LED lamp is fixed near the user. When the system works, the camera captures the image of the display in front of the user. The data processing circuit [17] implements the image capture, the image analysis, and the luminance control of the LED lamp, and the LED lamp then produces the optimal lighting output. Currently, the captured images are transmitted to the data processing circuit over a wireless network [18], while the data processing circuit and the LED lamp are connected by a cabled network. The color card provides the color evaluation benchmark for the imaging sensor, and its basic color selection is decided by the typical desk operation task. Visual flicker of the display should be removed by a proper setting of the display output frequency. It is assumed that the luminance output of the display is stable while the desk operation task is carried out.
Figure 3b shows the working flow chart of the proposed system. The system works in an offline processing mode and an online processing mode. In the offline processing mode, a lighting effect image dataset is first built. This dataset traverses most of the typical imaging lighting effects of the desk application. Second, a series of subjective evaluations are performed according to the different subjective LEEMs to classify the dataset into several sub-sorts. For example, if four subjective LEEMs are assessed and the classification degree of each LEEM is five, then 4 × 5 sub-datasets can be obtained. Third, the objective LEEMs are computed for these sub-datasets and their cluster centers are analyzed; the cluster benchmarks of the objective LEEMs are thereby obtained.
In the online processing mode, the imaging sensor first captures the lighting effect image. Then glare is eliminated from the proposed application. After that, the objective LEEMs are computed and the results are compared with the objective LEEM cluster benchmarks. Finally, the optimal lighting control, i.e., a single LEEM-based or a multiple LEEMs-based control, is used to obtain an optimal lighting effect output.

3. The Key Techniques of Intelligent Lighting System

3.1. The Subjective Lighting Effect Evaluation Method

The goal of the subjective lighting effect evaluation experiment is to accumulate the image sub-datasets of the subjective LEEMs with different evaluation degrees. These sub-datasets then serve as the computation benchmarks for the subsequent objective LEEM computation. To achieve this, the typical lighting effect image sub-datasets are built first; then the subjects are asked to carry out the subjective lighting effect evaluations.

3.1.1. The Typical Lighting Effect Image Dataset

A typical lighting effect image dataset is built. The images are captured by a wearable camera in a dark room whose walls are covered with light-absorbing materials, so the lighting environment can be assessed and controlled elaborately. The application prototype of the lighting system is shown in Figure 2. Without loss of generality, the display reading and monitoring task of the desk application is taken as an example. While the subject performs the related tasks, different lighting conditions are provided continuously by an LED lamp. Here, “different lighting conditions” means various settings of the LED output intensity or direction. The output intensity can be tuned by the control circuit; the output direction of the LED lamp is also changeable because the user can move his or her working position, for example by moving a mobile PC. To guarantee general applicability, neither the lighting intensity output nor the user’s working pose is restricted; the subjects can change these factors according to their habits.

3.1.2. The Subjective Lighting Effect Evaluation Experiment

The subjective lighting effect evaluation experiment can be carried out during the collection of the typical lighting effect image dataset. Experimental software was developed to guide the experiment and record the subjective evaluation results. Figure 4 shows two software interface photos: (a) is the initial interface of the subjective evaluation experiment software; (b) is the interface of the subjective lighting effect evaluation experiment. This software was developed by Dr. Heng Zhang at the Astronaut Research and Training Center of China in 2008 and was written in C++ and Matlab. Its basic functions are as follows: first, it can record and play the image dataset captured by a camera; second, it can select the subjective evaluation index and set the subject’s evaluation reaction time; third, the subjects can use it to score the subjective LEEMs, with a maximum score of 10. The subjective evaluation results are saved to a file in the local path of the experiment computer.
Four LEEMs are considered in the subjective evaluation experiment: SMSLDD, SMCDDDC, SMRCDDC, and SMEBDDC. In this paper, each index has five evaluation degrees: degree 1 means the worst evaluation effect, while degree 5 means the best. For example, for the index SMEBDDC, evaluation degree 5 means the blur of the display content can hardly be noticed, whereas evaluation degree 1 means the blur cannot be tolerated. As Figure 3b shows, 4 × 5 image sub-datasets can be built after the subjective evaluation experiment. These datasets are used for the objective LEEM computation in the following processing steps. Glare is intolerable in the proposed application: once glare is detected, the system notifies the user to change his or her working pose to eliminate it. It is therefore unnecessary to evaluate a subjective glare degree in this paper.

3.2. The Objective Lighting Effect Evaluation Method

Five objective LEEMs are computed to realize the lighting effect perception. Among them, the indices OMSLDD, OMCDDDC, OMRCDDC, and OMEBDDC are used to compute the lighting effect, while the index OMGDD is employed to identify glare.

3.2.1. The Surface Luminance Degree of Display

Like an imaging luminance meter, the visible light camera can be used to evaluate the surface luminance degree of the display after luminance calibration. The International Commission on Illumination has defined two colorimetry coordinates: the 1931 CIE-RGB coordinate and the 1931 CIE-XYZ coordinate. The image luminance can be obtained directly from the 1931 CIE-XYZ coordinate, in which the Y component stands for the luminance of a color; it can be estimated by the relationship Y = −1.7392R + 2.7671G − 0.0279B, where R, G, and B are the color components in the 1931 CIE-RGB coordinate. However, a commercial camera has its own RGB spectrum response curve, which differs from the standard response curves of the two color coordinates above. In this paper the color coordinate of the commercial camera is called the imaging RGB color coordinate. It is thus necessary to design a method to estimate the transformation between the 1931 CIE-XYZ coordinate and the imaging RGB color coordinate.
To realize luminance measurement with a camera, a color calibration method is designed. A standard Munsell color card is placed near a standard lamp (output power 500 W; spectrum scope about 350 nm~800 nm), and a camera shoots the color card. The color card then provides a color benchmark for the color measurement [19]. Table 1 shows part of the color components of the color card in the 1931 CIE-XYZ color coordinate; these components can be measured by a spectrophotometer or provided directly by the color card manufacturer. More than 20 color cards are used in this experiment. The spatial distance between the color card and the standard lamp is about 1.0 m, and all these experiments are carried out in a dark room. For each measurement result, the transform relationship between the imaging RGB color coordinate and the 1931 CIE-XYZ coordinate is defined by (1) or (2). The least squares method [20] in (3) can be used to compute M. Finally, for any target, the image luminance can be estimated by L, which is an estimate of the Y component in the 1931 CIE-XYZ coordinate:
$$\begin{bmatrix} L_1 \\ L_2 \\ \vdots \\ L_n \end{bmatrix} = \begin{bmatrix} R_1 & G_1 & B_1 \\ R_2 & G_2 & B_2 \\ \vdots & \vdots & \vdots \\ R_n & G_n & B_n \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \\ m_3 \end{bmatrix} \tag{1}$$
$$L = XM \tag{2}$$
$$M = (X^{T}X)^{-1}X^{T}L \tag{3}$$
where $L = [L_1\ L_2\ \cdots\ L_n]^{T}$ is the object luminance vector, which also stands for the Y component in the 1931 CIE-XYZ coordinate; $[R_i\ G_i\ B_i]$ (i = 1, 2, …, n) is a coordinate in the imaging RGB color coordinate; X is the color matrix, here $X = [R\ G\ B]$; to increase the computational precision, X can also be set to $[1\ R\ G\ B\ RG\ RB\ GB\ R^2\ G^2\ B^2\ RGB]$, where R, G, and B are the color components in the imaging RGB color coordinate; M is the unknown coefficient vector.
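To make the calibration step concrete, the following minimal numpy sketch implements the least-squares fit of Equations (1)–(3). The RGB readings and Y values below are illustrative placeholders, not the paper's measurements:

```python
import numpy as np

# Sketch of Equations (1)-(3): fit the transform M that maps camera RGB
# readings of the Munsell color patches to their known CIE-XYZ luminance (Y).
# The sample arrays are placeholders, not the paper's data.
rgb = np.array([[120.0,  95.0,  80.0],      # camera RGB of patch 1
                [200.0, 180.0, 170.0],      # camera RGB of patch 2
                [ 60.0,  70.0,  90.0]])     # camera RGB of patch 3
Y   = np.array([ 35.2,  78.9,  22.4])       # spectrophotometer Y values

X = rgb                                     # Eq. (1): X = [R G B]
# For higher precision, X may be augmented with cross terms, as in the text:
# X = [1 R G B RG RB GB R^2 G^2 B^2 RGB]

# Eq. (3): M = (X^T X)^(-1) X^T L, i.e., an ordinary least-squares solution.
M, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Eq. (2): estimate the image luminance of a new RGB measurement.
L_est = np.array([130.0, 100.0, 85.0]) @ M
```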
When calibrating the luminance of a camera, a luminance meter is utilized; in this paper the distance between the luminance meter and the target is also about 1.0 m. The relationship between the image luminance and the target luminance is given by Equation (4), which can be rewritten as Equations (5) and (6). The target luminance D is measured by the luminance meter, the image luminance L is calculated by Equation (2), and the variables F and T are known, so the variables v and w can be estimated easily: both the camera and the luminance meter observe the same target, yielding a series of paired measurements of the image luminance L and the target luminance D; the relationship between the measurements, i.e., the variables v and w, is then fitted by the least squares method again. In this paper the observed target is the standard Munsell color card. Finally, the target luminance degree can be estimated by Equation (7):
$$D = v \lg\!\left(\pi \tau T L / 4F^{2}\right) + m \tag{4}$$
$$D = v \lg\!\left(TL / F^{2}\right) + w \tag{5}$$
$$w = v \lg\!\left(\pi \tau / 4\right) + m \tag{6}$$
$$OM_{SLDD} = D \tag{7}$$
where v is the contrast ratio; D is the target luminance; L is the image luminance; τ is the optical transmittance of the camera lens; T is the camera exposure time; F is the camera light ring coefficient, F = f/d, where f is the camera focal length and d is the size of the light aperture; m is a coefficient.
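A similar least-squares fit recovers v and w in Equation (5). The sketch below assumes paired camera/luminance-meter readings (placeholder values); the F and T settings are taken from the calibration result reported in Section 4.1.3:

```python
import numpy as np

# Sketch of the Eq. (5) calibration: fit D = v * lg(T*L / F^2) + w from paired
# readings of the luminance meter (D) and the camera-derived image luminance
# (L). The measurement arrays below are placeholders.
F, T = 3.6, 2.083e-3                        # light ring coefficient, exposure time (s)
L = np.array([ 20.0,  60.0, 150.0, 300.0])  # image luminance from Eq. (2)
D = np.array([ 15.1,  29.8,  42.0,  51.3])  # luminance meter readings (cd/m^2)

x = np.log10(T * L / F**2)
v, w = np.polyfit(x, D, deg=1)              # least-squares line fit

# Eq. (7): the surface luminance degree of any new image luminance value.
om_sldd = lambda L_new: v * np.log10(T * L_new / F**2) + w
```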

3.2.2. The Color Deviation Degree of Display Content

Because an object absorbs and reflects spectrum components selectively, different objects have different color characters under the same light source; similarly, because light sources have different spectrum components, the same object presents different colors under different light sources. To assess the luminance effect, it is necessary to evaluate the color deviation degree of the display. A Standard White Color Card (SWCC) is pasted at the margin of the display. After a preliminary evaluation, it can be assumed that the SWCC is easily identified by the visible light camera under complex environmental light, thanks to its excellent color contrast against the display margin. To measure the color deviation between the SWCC image block and the standard white color, the camera first searches for and captures the imaging content of the SWCC; then the color of the SWCC block is transformed from the imaging RGB color space to the L*a*b* color space [21]; finally, the Euclidean distance between the SWCC image block and the standard white color in (8) is used to assess the color deviation degree. The L*a*b* color space is utilized here because of its good color uniformity:
$$OM_{CDDDC} = \sqrt{(L_1 - L_2)^2 + (a_1 - a_2)^2 + (b_1 - b_2)^2} \tag{8}$$
where $L_1$, $a_1$, and $b_1$ are the components of the SWCC image block in the L*a*b* color space; $L_2$, $a_2$, and $b_2$ are the components of the standard white color in the L*a*b* space.
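As a rough illustration of Equation (8), the sketch below converts an assumed SWCC crop to L*a*b* with scikit-image and measures its Euclidean distance to the standard white (L* = 100, a* = b* = 0). The crop coordinates and the averaging over the block are assumptions, not details given in the paper:

```python
import numpy as np
from skimage.color import rgb2lab

# Sketch of Eq. (8): color deviation between the imaged SWCC block and the
# standard white color, as a Euclidean distance in L*a*b*.
def om_cdddc(image_rgb, swcc_box):
    """image_rgb: HxWx3 float array in [0, 1]; swcc_box: (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = swcc_box
    block = image_rgb[y0:y1, x0:x1]
    lab1 = rgb2lab(block).reshape(-1, 3).mean(axis=0)  # mean L*a*b* of the block
    lab2 = np.array([100.0, 0.0, 0.0])                 # standard white in L*a*b*
    return float(np.linalg.norm(lab1 - lab2))          # Eq. (8)
```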

3.2.3. The Region Contrast Degree of Display Content

The contrast reflects the salience and the degree of difference between the image pixels of foreground and background. To improve the robustness of the contrast evaluation, both the color region contrast [12] and the gray region contrast [22] are considered. Equation (9) shows the computation of the gray region contrast; it is a kind of Michelson contrast [23], which measures periodic patterns. When estimating the gray region contrast, many points are sampled stochastically from the original image, and Equation (9) is then applied. Equation (10) shows the calculation of the color region contrast. This function measures the root mean enhancement contrast in the log domain and reflects the difference between the center pixel and its neighbors; all pixels in the image are used to compute the metric. Finally, the region contrast degree of the display content is defined in (11) as a linear combination of the gray region contrast and the color region contrast:
$$OM_{GC} = \frac{1}{N}\sum_{k=1}^{N}\frac{I_k^{\max} - I_k^{\min}}{I_k^{\max} + I_k^{\min}} \tag{9}$$
$$OM_{CC} = \frac{\beta}{k_1 k_2}\sum_{i=1}^{k_1}\sum_{j=1}^{k_2}\left[\frac{\lg\left|I_{i,j} - \sum_{c=1}^{3}\lambda_c (I_{c1}+I_{c2}+\cdots+I_{cn})/n\right|}{\lg\left|I_{i,j} + \sum_{c=1}^{3}\lambda_c (I_{c1}+I_{c2}+\cdots+I_{cn})/n\right|}\right]^{\alpha} \tag{10}$$
$$OM_{RCDDC} = w_1 \times OM_{GC} + w_2 \times OM_{CC} \tag{11}$$
where $I_k^{\max}$ and $I_k^{\min}$ are the maximum and minimum gray values of the kth image block; $k_1 \times k_2$ is the number of image blocks in the whole image; $I_{i,j}$ is the center image gray intensity; n is the number of pixels in an image block, so $(I_{c1}+I_{c2}+\cdots+I_{cn})/n$ is the average intensity of an image block; $\lambda_c$ denotes the weight of each color component, here $\lambda_c \in \{\lambda_r, \lambda_g, \lambda_b\}$ with $\lambda_r$ = 0.299, $\lambda_g$ = 0.587, $\lambda_b$ = 0.144; α and β are parameters, α = 0.8 and β = 1000; $w_1$ and $w_2$ are weights, $w_1$ = 0.2 and $w_2$ = 0.8.
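The following Python sketch approximates Equations (9)–(11). The block size, the number of randomly sampled blocks, and the use of the block-center pixel for $I_{i,j}$ are implementation assumptions; the weights follow the values given above:

```python
import numpy as np

# Sketch of Eqs. (9)-(11): gray Michelson contrast over random blocks plus a
# simplified log-domain color contrast term, combined linearly.
rng = np.random.default_rng(0)
LAM = np.array([0.299, 0.587, 0.144])   # lambda_r, lambda_g, lambda_b (as printed)
ALPHA, BETA, W1, W2 = 0.8, 1000.0, 0.2, 0.8

def om_gc(gray, n_blocks=100, size=16):             # Eq. (9)
    h, w = gray.shape
    vals = []
    for _ in range(n_blocks):                       # stochastic block sampling
        y = rng.integers(0, h - size); x = rng.integers(0, w - size)
        blk = gray[y:y+size, x:x+size]
        mx, mn = float(blk.max()), float(blk.min())
        if mx + mn > 0:
            vals.append((mx - mn) / (mx + mn))
    return float(np.mean(vals))

def om_cc(rgb, size=16):                            # Eq. (10), simplified
    h, w, _ = rgb.shape
    k1, k2 = h // size, w // size
    total = 0.0
    for i in range(k1):
        for j in range(k2):
            blk = rgb[i*size:(i+1)*size, j*size:(j+1)*size]
            mean = float((blk.mean(axis=(0, 1)) * LAM).sum())   # weighted block mean
            center = float((blk[size//2, size//2] * LAM).sum()) # weighted center pixel
            num = np.log10(abs(center - mean) + 1e-6)
            den = np.log10(abs(center + mean) + 1e-6)
            if den != 0.0:
                total += abs(num / den) ** ALPHA
    return BETA * total / (k1 * k2)

def om_rcddc(gray, rgb):                            # Eq. (11)
    return W1 * om_gc(gray) + W2 * om_cc(rgb)
```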

3.2.4. The Edge Blur Degree of Display Content

The edge blur degree represents the definition of image edges. In the proposed application, edge blur comes from the weak eyesight of subjects, the low luminance of the display, or even disturbances of the non-uniform environment lighting output. To evaluate the edge blur degree, the computation method in [14] is utilized; the computation is shown in (12). This metric evaluates edge blur by computing the edge spread degree of the edge points. When designing this lighting system, the moiré pattern [24] may appear. The moiré pattern is generated by the micro-grooved structure of the imaging sensor; it occurs when two or more images are combined nonlinearly to create a superposition image with a magnified period compared with the original images. The moiré pattern should therefore be identified and distinguished from the image blur phenomenon. In the proposed application, the moiré pattern can be identified by an image filtering technique and reduced by an image interpolation method [25]:
$$OM_{EBDDC} = \max_{i \in \Theta}\left\{\arctan\!\left[\frac{I(x_{i1}, y_{i1}) - I(x_{i2}, y_{i2})}{w_i}\right]\right\} \tag{12}$$
where Θ is the set of image blocks; $I(x_i, y_i)$ is the gray value of the ith image block at position (x, y); $w_i$ is the width between the edge-spread points $(x_{i1}, y_{i1})$ and $(x_{i2}, y_{i2})$.
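A heavily simplified sketch of Equation (12) follows. Instead of the edge-point pairing of [14], it takes the intensity extremes of each block's strongest row as the edge-spread point pair; this shortcut is an assumption made only for illustration:

```python
import numpy as np

# Rough sketch of Eq. (12): approximate the edge spread of each block as
# arctan(intensity jump / spread width) and keep the maximum over blocks.
def om_ebddc(gray, size=32):
    h, w = gray.shape
    best = 0.0
    for i in range(0, h - size, size):
        for j in range(0, w - size, size):
            blk = gray[i:i+size, j:j+size].astype(float)
            row = blk[np.argmax(np.ptp(blk, axis=1))]   # row with largest spread
            p1, p2 = np.argmin(row), np.argmax(row)     # edge-spread point pair
            width = abs(p2 - p1)
            if width > 0:
                best = max(best, float(np.arctan(abs(row[p2] - row[p1]) / width)))
    return best
```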

3.2.5. The Glare Degree of Display

Glare may appear on the screen when the display or the LED lamp is moved by the user, and it can seriously affect the visual output. For example, display contents, such as characters or content details, may be sheltered by the glare region; in addition, glare can evoke visual fatigue or a bad mood in the user; it is therefore necessary to avoid it. In general, glare can be classified into two types [15]: disability glare and discomfort glare. For the proposed desk application, only discomfort glare is considered. Many indices [26] have been devised to assess the degree of man-made glare, including the British Glare Index (BGI), the CIE Glare Index (CGI), and the Unified Glare Rating (UGR), etc. In this paper, only the existence or nonexistence of glare is considered: if glare is identified by the system, an alarm is reported to the user, who should then change his or her working pose to avoid it.
To detect and evaluate the glare degree, a fixed threshold-based image segmentation method is first used to search for and mark the glare region; the computation is shown in (13). The threshold can be obtained from practical experiment tests of the typical desk application. This yields the initial binary segmentation image. Then mathematical morphology is employed to expand the region and the edge of the segmented binary image. In this paper the segmented high-lighted glare regions are defined as the foreground region, while the other regions are defined as the background. Finally, the pixels inside the foreground region are counted. Equation (14) shows the identification of the glare region: if the ratio between the number of foreground pixels and the total number of pixels is larger than a threshold, the foreground is regarded as a glare region:
$$I_S = \begin{cases} 0 & I > T_S \\ 255 & \text{else} \end{cases} \tag{13}$$
$$OM_{GDD} = \begin{cases} 1 & C(I_S)/C(I) > T_G \\ 0 & \text{else} \end{cases} \tag{14}$$
where I is the gray intensity of the original image; $T_S$ is a threshold; $I_S$ is the binary segmentation result of the original image; the function C(·) counts the pixels of the image block “·”; $T_G$ is a threshold, here $T_G$ = 0.08.
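The sketch below illustrates the Equation (13)–(14) pipeline with scipy morphology, marking bright pixels as the foreground. The specific threshold and the dilation size are assumptions; the 210–230 threshold range is the one found experimentally in Section 4.1.4:

```python
import numpy as np
from scipy import ndimage

# Sketch of Eqs. (13)-(14): threshold the gray image to mark high-lighted
# (candidate glare) pixels, expand the mask morphologically, and flag glare
# when the foreground pixel ratio exceeds T_G.
def om_gdd(gray, t_s=220, t_g=0.08):
    mask = gray > t_s                                   # Eq. (13): bright-region segmentation
    mask = ndimage.binary_dilation(mask, iterations=2)  # morphological expansion
    ratio = mask.sum() / mask.size                      # foreground / total pixels
    return 1 if ratio > t_g else 0                      # Eq. (14)
```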

3.3. The Intelligent Lighting Control Methods

After the quantitative evaluation of the lighting environment, a luminance tuning of the LED lamp should be made. Currently only the luminance intensity of the LED lamp is tuned. The control rules of the lighting system are: first, any glare that appears on the display should be detected and avoided; second, a kind of optimal luminance control is implemented, where optimal means a lighting effect without over-illumination or over-darkness. Figure 5 shows the design flow chart of the intelligent luminance control methods. Both a single LEEM-based control and a multiple LEEMs-based control are developed. Here the optimal luminance is only a relative optimum. Because no prior knowledge of the environment luminance is available, the system first performs a traversal luminance control when searching for the optimal luminance. For example, the system implements ten different luminance controls of the LED lamp and records all the images captured under these luminance conditions; the LED output intensities are controlled by the corresponding current or voltage. The system then computes and evaluates all the LEEMs for each captured image and selects the single optimal luminance result as the final control.
The single LEEM-based method controls the lighting by assessing each single LEEM one by one. After the subjective evaluation of the typical lighting effect image dataset, 4 × 5 image sub-datasets are accumulated. The objective LEEMs are computed for each image sub-dataset, and the corresponding benchmark cluster centers are obtained; the cluster centers can be estimated by the K-means method [27]. When implementing the lighting control, the Euclidean distance [28] and a voting rule are utilized: four objective LEEMs are computed for each image captured during the traversal lighting; the Euclidean distance between each newly computed objective LEEM and the corresponding benchmark cluster center is then computed; if the Euclidean distance falls within a typical threshold scope, a quantitative score is assigned. The quantification method is shown in Table 2. Finally, the optimal lighting control is the one with the maximum score sum over the four LEEMs.
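A sketch of this single LEEM-based selection is given below, assuming per-sub-dataset K-means benchmark centers; the score thresholds are placeholders standing in for the actual ranges of Table 2:

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch of the single LEEM-based control: cluster one objective LEEM over a
# subjectively-labeled sub-dataset to get a benchmark center, score each
# traversal candidate per LEEM, and pick the candidate with the best score sum.
def benchmark_center(leem_values):
    """leem_values: 1-D array of one objective LEEM over one sub-dataset."""
    km = KMeans(n_clusters=1, n_init=10).fit(np.asarray(leem_values).reshape(-1, 1))
    return float(km.cluster_centers_[0, 0])

def vote_score(value, center, thresholds=(1.0, 2.0, 4.0, 8.0)):
    """Map |value - center| to a quantitative score (placeholder thresholds)."""
    dist = abs(value - center)
    for score, t in zip((5, 4, 3, 2), thresholds):
        if dist <= t:
            return score
    return 1

def best_candidate(candidates, centers):
    """candidates: one [OM_SLDD, OM_CDDDC, OM_RCDDC, OM_EBDDC] list per image."""
    sums = [sum(vote_score(v, c) for v, c in zip(cand, centers))
            for cand in candidates]
    return int(np.argmax(sums))       # index of the optimal traversal control
```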
The multiple LEEMs-based method controls the lighting by assessing the multiple LEEMs together. In this method, a high-dimension evaluation vector is built; for example, it can be written as [OMSLDD OMCDDDC OMRCDDC OMEBDDC]. The K-means cluster method is used to find the high-dimension benchmark cluster center of the typical lighting effect image sub-datasets. When carrying out the lighting control, the images with the traversal luminance are captured and their corresponding high-dimension LEEM vectors are computed as the candidate control methods; the Euclidean distance in (15) is then used to compute the distance between the benchmark cluster center and each candidate LEEM vector. The optimal control is the one with the minimum Euclidean distance offset:
$$\Delta D = \sqrt{\sum_{k=1}^{4}\left(LF_k^{C} - LF_k^{O}\right)^2} \tag{15}$$
where $LF_k^{C}$ is the kth component of the cluster center of the high-dimension vector, and $LF_k^{O}$ is the kth component of the observed high-dimension vector.
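Equation (15) thus reduces candidate selection to a nearest-center search. The short sketch below reuses the numeric example of Section 4.1.3 (the degree-5 benchmark center from Table 6 and two candidate LEEM vectors):

```python
import numpy as np

# Sketch of Eq. (15): choose the traversal lighting candidate whose
# four-dimensional LEEM vector lies closest to the benchmark cluster center.
center = np.array([211.6554, 9.1785, 7.2084, 2.7585])    # degree-5 benchmark (Table 6)
candidates = np.array([[211.0197, 19.6238, 5.9761, 2.1751],
                       [211.6442,  9.1758, 7.2765, 2.7980]])
best = int(np.argmin(np.linalg.norm(candidates - center, axis=1)))  # -> 1 (second image)
```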

4. Experiments and Discussion

4.1. Experiments & Evaluations

4.1.1. Experimental System and Experimental Environment

To evaluate the performance of the proposed technique, an experimental system was built. It works in a dark room. Figure 6 shows photos of this experimental system: a computer, a white light LED lamp, a luminance meter, an illuminance meter, and a CMOS camera are shown in (a); a dynamic bracket used to control the spatial position of the camera is given in (b); a photography tripod with a camera is shown in (c); a luminance meter is shown in (d); a head support used to estimate the height of the user’s head is given in (e); and a PC and a SWCC (see Section 3.2.2) are shown in (f). Because the imaging performance of the glasses camera is poor, a motion camera with a better imaging sensor is utilized in the experiments. The motion camera can be installed on the dynamic bracket or the photography tripod to imitate the observation state of human eyes. It is placed in front of a computer, and the LED lamp provides the lighting source. This experiment system can be used to carry out the subjective lighting effect evaluation experiment and build the corresponding image datasets; it can also be employed for other human-involved experiments to assess the lighting tuning effect of the proposed system.
Figure 7 shows the sketch maps of the lighting experiment system and its captured image samples: Figure 7a,b are the sketch maps of the lighting system; the black points in Figure 7b show the sample positions of the illuminance meter. The sample scope is from −90° to +90° with a sample interval of 5°, so 37 sample points are obtained. Figure 7c is the front view of the experiment computer, with the black points marking the sample positions. Figure 7d shows the captured image samples, and Table 3 gives examples of their lighting effects. In Table 3, the average illuminance is the average lighting intensity of the sample points, and the illuminance uniformity is the intensity variance. The illumination scope of the LED lamp is from 1.0 lx to 40,000.0 lx, and its color temperature ranges from 3500 K to 6000 K. Figure 8 shows the illuminance distribution curves of the lighting system of Figure 7a. Without loss of generality, the illuminance of the LED lamp is assumed to have 5 degrees: degree 5 has the strongest output and degree 1 the weakest. The sample distances d3 in Figure 7b are: 20 cm for degree 1, 25 cm for degree 2, and 30 cm for degree 3.

4.1.2. Subjects

Eight males (ages from 25 to 35; heights from 165 cm to 175 cm; weights from 60 kg to 75 kg) were selected to participate in the intelligent lighting effect evaluation experiments. A preliminary ocular examination confirmed that none of the subjects had any ophthalmopathy, such as myopia, hyperopia, or color blindness; their uncorrected eyesight had to be better than 0.8. Before the subjects participated in any human-involved experiments, the entire experiment procedure was described to them clearly, and the experiments were carried out only after authorization by the Ethics Committee of the School of Biological Science and Medical Engineering, Beihang University.

4.1.3. Realization Results of the Intelligent Lighting System

The human-involved experiments are used to create the subjective lighting effect evaluation dataset. The subjects are asked to evaluate the lighting effect according to their subjective cognition. Currently, the evaluation experiments only use the single factor evaluation method to accumulate the corresponding dataset; the single factor experiment asks the subjects to evaluate each single LEEM one by one for each lighting scene. The subjective evaluation degrees are divided into five levels: very good, good, fair, bad, and worst, with corresponding quantification degrees 5, 4, 3, 2, and 1. Each subjective LEEM has its own subjective evaluation criteria. For example, for SMSLDD and SMRCDDC, the larger their subjective sense diversities are, the larger their degrees should be; for SMCDDDC and SMEBDDC, the smaller their subjective sense diversities are, the larger their scores should be. After the subjective lighting effect evaluation, twenty image sub-datasets can be built. For example, Figure 7d shows the image dataset samples, and Table 4 shows the single LEEM subjective evaluation results of Figure 7d((d)-5–(d)-8).
After the subjective evaluation image sub-datasets have been built, the objective LEEMs can be computed. Equations (16) and (17) show the estimation results of the camera luminance calibration, where F = 3.6 and T = 2.083 ms. Figure 9 shows the computation results of the objective LEEMs: (a–d) are the results of OMSLDD, OMCDDDC, OMRCDDC, and OMEBDDC at the different subjective evaluation degrees, respectively. The K-means method is used to calculate the cluster centers of each objective LEEM under the different lighting effect degrees. Table 5 shows the statistics of these data; their means and variances are computed. When carrying out the optimal lighting control, the voting rule and the quantification method of Table 2 are used. For example, given two images, the objective quantitative scores of the first image are OMSLDD = 4, OMCDDDC = 4, OMRCDDC = 3, and OMEBDDC = 4, and those of the second image are OMSLDD = 2, OMCDDDC = 1, OMRCDDC = 2, and OMEBDDC = 3. The first image therefore has the better lighting effect because of its better voting result: the voting score sums of the two images are 4 + 4 + 3 + 4 = 15 and 2 + 1 + 2 + 3 = 8, respectively:
$$M = \begin{bmatrix} 0.0687 & 0.2377 & 0.0301 \end{bmatrix}^{T} \tag{16}$$
$$D = 30.537 \times \lg\!\left(TL/F^{2}\right) + 66.4003 \tag{17}$$
The high-dimension data samples and their cluster centers of the multiple LEEMs are shown in Table 6. The multiple LEEMs-based control assesses the lighting effect by considering all the objective LEEMs together. In Table 6, three data samples are illustrated for each subjective evaluation degree. The high-dimension K-means method is used to compute the benchmark cluster centers. When a new image is captured, its LEEM vector is first calculated; then the Euclidean distance between that vector and the corresponding benchmark cluster center is used to find the optimal lighting control.
For example, suppose the subjective lighting evaluation degree 5 is defined as the optimal lighting effect, and consider two images: the first has the LEEM vector [211.0197 19.6238 5.9761 2.1751], while the second has [211.6442 9.1758 7.2765 2.7980]; the objective LEEM benchmark cluster center of subjective evaluation degree 5 is [211.6554 9.1785 7.2084 2.7585]. The second image thus has the better lighting effect because of its smaller Euclidean distance offset.
Because it is difficult to completely recover the light field from one image, a kind of “local” optimal control is used: ten typical traversal lighting controls are implemented to obtain a series of lighting images. Then the single LEEM-based or the multiple LEEMs-based control is used to select as the final control the single method that has the minimum distance offset between the candidate and the predefined optimal lighting benchmark. The traversal control of the LED increases or decreases its luminance output little by little. Figure 10 shows a control result of the proposed system: (a) shows the original lighting effect; (b) is the photo after the implementation of the single LEEM-based control. Obviously, (b) has the better lighting output. Table 7 compares the LEEMs before and after the lighting control of Figure 10a; against the benchmarks in Table 6, the lighting effect is clearly improved. The multiple LEEMs-based control technique obtains a similar processing result.

4.1.4. Processing Results of Glare

Glare should be avoided in any desk application because of its negative influence on the visual operation task. Figure 11 shows two glare images and their processing results by the proposed system; it can be seen that the image details are sheltered by the high-lighted regions. If the glare region is larger than a threshold, an abnormality is reported; the user then changes his working pose or simply moves the spatial position of the computer to eliminate the glare. From Figure 11, the glare can be identified when the gray segmentation threshold is set between 210 and 230. In a desk application, glare mainly comes from the LED source reflection off the screen or from other interfering light sources in the environment. Clearly, the automatic recognition of glare has practical engineering value for the design of intelligent lighting systems.

4.1.5. System Application Evaluations

To evaluate the performance of the proposed intelligent lighting system, a human-involved experiment is performed. First, a subjective evaluation experiment of static lighting is performed; here static lighting means the traditional lighting method, i.e., no intelligent lighting control is executed. The procedure is as follows: the subjects stay in front of a display in the dark room while the LED lamp provides a static lighting environment; in that situation the subjects read the display content (see Figure 7d) and give their subjective assessments of the different lighting effects; no intelligent lighting tuning is made during this process. Second, the proposed intelligent lighting system is used to improve the lighting effect, using the single LEEM-based control. Once the intelligent system stops working, the subjects are required to give their subjective evaluations of the corresponding lighting effects again. A comparison of the lighting effect degree before and after the application of the proposed intelligent lighting control can then be made. During the above procedure, each display-reading task lasts 120 s and the reading content is a modern Chinese article; the subjects then perform the subjective evaluation mission.
Table 8 shows the comparison results of the lighting effect evaluation experiment. From the table it can be seen that every index of the subjective evaluation can be improved markedly by the proposed intelligent lighting control technique. The multiple LEEMs-based control method obtains similar experimental results.
Another Chinese article-reading task is used to evaluate the performance of the proposed system. In this experiment the visual fatigue degree and the reading error rate [29] are used as the evaluation indices. The visual fatigue degree represents the subject’s subjective ocular weariness; physiologically it manifests as eye itching, eye pain, dry eyes, tearing, photophobia, or an ocular foreign body sensation. A 5-degree scoring method is used, where 5 means the visual fatigue is serious and 1 means it is slight. The reading error rate reflects the visual cognition ability of the subject; its value increases as the visual function declines. It is computed as the number of improperly read characters divided by the total number of characters in the article. The experiment was as follows: a subject is asked to read a Chinese article twice; the subject knows nothing about the article before the experiment; each reading lasts 10 min; and the reading speed should be greater than 300 Chinese characters per minute. An old Chinese article is used so that the phrases are unfamiliar to the subject; as a result, the subject has to observe every Chinese character carefully before reading it out. The first time the experiment is carried out, the lighting state is static, whereas the second time the intelligent lighting control is applied. The experiment organizer records the corresponding evaluation results during the experiment. Table 9 shows the results: the proposed lighting system can effectively decrease both the visual fatigue and the reading error rate.

4.2. Discussion

The lighting system plays an important role in practical applications, especially in hermetic cockpit design or narrow working space applications. In our past research [7], an ergonomics experiment was designed and implemented to assess the importance of the lighting factor for the desk operation of a hermetic aerospace cockpit. Among the factors considered, including the observation distance, the observation angle, the Chinese character size, the Chinese character color or its background color, and the lighting degree, the lighting degree was the second most important in that application. Recently, other research works [30] have also disclosed a distinct relationship between ocular diseases and improper lighting effects: some ocular diseases, such as myopia, glaucoma, or abnormal intraocular pressure, can all be aggravated by improper lighting. Moreover, even when the lighting effect is proper, subjects’ different working habits still call for particular lighting output settings. Lighting has thus become a critical factor in the evaluation and design of desk system applications.
A subjective evaluation experiment is used to build the benchmark image datasets for the subsequent objective evaluation computation. When the accumulation of the benchmark datasets starts, the subject is asked to watch the display in a dark room; an article reading task is shown on the screen and the lighting effect is tuned constantly. During that process the subject is requested to give his or her subjective evaluation scores of the lighting effect; a software system guides the experiment procedures and records the corresponding results. Note that it would be wrong to build the benchmark datasets by showing the subject the camera-captured images on a screen and asking him or her to score the displayed images, because the camera imaging mechanism differs from the visual cognition mechanism of humans; the images captured by a camera are only a degraded processing result of the imaging sensor and the optics system. The subjective evaluation results are related to the subject’s eyesight, age, training state, etc.; thus the proposed intelligent lighting system can also encapsulate and record the ocular habits and physiological characteristics of its users.
Four objective LEEMs are employed to compute the lighting effect quantitatively: the display luminance, the image color difference, the image contrast, and the image blur. These metrics correspond to blind image quality evaluation metrics; that is, they are content-independent [31] image feature descriptors. The reason these metrics are selected as the evaluation indices is twofold. The first is practical: the desk experiment in this paper is a mesopic and photopic vision application [32]. The luminance scope of mesopic vision is about 0.001 cd/m² to 10.0 cd/m², while the luminance scope of photopic vision is larger than 10.0 cd/m². At those luminance levels, the image contrast and the image blur represent the imaging quality effectively. The second is that the luminance, the color difference, and the contrast also belong to the traditional ergonomic investigation indices used for ocular ability assessment [33]. The display luminance, the image color difference, and the image contrast should therefore also be selected as evaluation indices to measure the lighting effect from the ergonomics research point of view.
In this paper, the number of subjects was relatively small because of our limited financial resources, so we call the related experiments “human-involved” experiments rather than ergonomics experiments. From these experiments some primary laws of the vision lighting effect can be deduced. These results cannot yet be considered universal laws; however, they still illustrate the potential scientific application value of the proposed method and system. Figure 12 shows the primary fitted relationship curves between the subjective evaluation index and the cluster center of the objective evaluation index.
The horizontal axis is the subjective evaluation degree of the lighting effect; the vertical axis is the objective evaluation result. In Figure 12, (a) is the fitted curve between the index SMSLDD and the cluster center result of OMSLDD; (b) is that between SMCDDDC and OMCDDDC; (c) is that between SMRCDDC and OMRCDDC; and (d) is that between SMEBDDC and OMEBDDC. Exponential functions are used to fit the results in (a–c), while a Fourier series is employed to fit the result in (d). Using these curves, the cluster centers of the objective indices can be forecast to implement intelligent lighting system controls in the future.
The glare index is utilized in this paper to find and eliminate the glare-contaminated image blocks from the accumulated image datasets. In the proposed experiments, the glare mainly comes from the screen reflection of the strong lighting source; improper spatial positions of the display and the light source create this kind of phenomenon. However, other cases, for example outdoor lightning, external strong lights, or a lamp flash near the user’s visual field, will also create glare for the system user. Currently, the OMGDD computation is implemented only within the display image area. In the future, the glare will be computed over the entire image captured by the wearable camera, so that other glare cases can be recorded and processed. In some application cases, if the glare cannot be avoided, its degree should be evaluated; a degree evaluation standard for glare is proposed in CIE 112-1994 [34]. In the future our system can therefore perform quantitative evaluations of the glare degree.
Two kinds of control methods are developed to implement the optimal control of the lighting system. Experimental evaluations show that the single LEEM-based control can approach the optimal lighting when the accumulated image dataset is small, whereas the multiple LEEMs-based control is appropriate when the accumulated dataset is large. This can be explained by the fact that the former method uses more man-made control parameters than the latter. From the engineering application point of view, the single LEEM-based method is clearly more complex to use than the multiple LEEMs-based method; however, its small dataset requirement still makes it suitable for developing low-cost industrial lighting products. Other complex systems that demand elaborate lighting control can use the multiple LEEMs-based control. In the future, an integrated control method that uses both the single LEEM-based and the multiple LEEMs-based control techniques can be developed.
Figure 13 shows a simulation interface of an airplane cockpit instrument panel; the proposed lighting system can be used to assist the design of that panel. Figure 13a is the simulation interface; (b,c) are the parameter output interfaces with different character sizes; (d,e) are progress bars with different colors; (f,g) are circular dials with different character sizes and background colors. Using the proposed lighting system, it can be found that the luminance intensity requirement of the combined images (b,d,f) is larger than that of the combined images (c,e,g). This may be explained by the fact that the image details of the former combination mode have a weaker imaging effect on human eyes than those of the latter. This application shows that the proposed technique can be used in industrial system design: given a predefined lighting degree, the tuning of the LED lamp will always find a way to approach the optimal lighting effect. The LED lamp can be used as the main light source or only as a lighting supplement device, and it can improve task reliability.
To further show the scientific background and rationale of the proposed intelligent lighting system, Figure 14 shows its feedback control schematic diagram. From Figure 3 and Figure 14 it can be seen that the proposed intelligent lighting control derives from the classic adaptive control model [35]. The proposed control system has its own characteristics. First, it uses blind image quality evaluation metrics to perceive the environmental lighting. A well-designed blind image quality evaluation metric is independent of the image content; i.e., it represents only the essential attributes of an imaging scene. In contrast to other sensors, it can also obtain abundant detail information. Second, human factors are encapsulated in this system by mathematical tools. The human factors include at least the visual function of the human eyes and the working habits of the users; in many cases they can even reflect the user’s age, occupation, or gender. By encapsulating human factors, the proposed system can realize a truly human-centric design. Third, flexible luminance control methods are employed. The luminance control depends on the prior information: if the prior information is sufficient, for example when the lighting environment and the working space are known (they can be evaluated by the simultaneous localization and mapping technique [36], the 3D reconstruction technique [37], etc.), then the 3D lighting design can be carried out by precise computation and the globally optimal lighting control can be realized. If the prior information is unknown or limited, the proposed local optimal control method can be utilized.
With the rapid development of wearable devices and intelligent products, intelligent sensors have gradually changed the design concepts of objects used in daily life. As an important branch of this field, an intelligent lighting apparatus can realize friendly interactions with users, and it has many applications: some ocular diseases such as myopia or glaucoma may be alleviated by such a system, and building lighting, medical lighting, and teleoperation lighting all need intelligent lighting control techniques. The image analysis-based lighting evaluation and control technique therefore has good development prospects. In the past, the environmental perception abilities of intelligent sensors were limited: the traditional infrared sensor can only obtain one-dimensional perception information, whereas the imaging sensor can realize two-dimensional environmental perception by using blind image quality evaluation metrics. This innovative technique can expand the design concept of traditional intelligent lighting systems. In the future, human-centric design will receive great attention when developing this kind of system. For example, more image analysis techniques and ergonomics research results can be utilized to improve system performance; other LEEMs, such as the degree of image noise contamination or the color temperature degree of LED lamps, can also be applied; and the reaction time, the eye movement tracking state [38], or even physiological signal features [39] can be studied to improve the integrated system output effect.

5. Conclusions

An intelligent lighting evaluation and control technique is proposed in this paper. Both a human-involved experimental method and image analysis techniques are used to analyze the lighting effect. The subjective and objective LEEMs are utilized to realize the optimal lighting effect evaluation: the image datasets of the subjective LEEMs are built, and the distribution rules of the objective LEEMs are computed. Single LEEM-based and multiple LEEMs-based control techniques are developed. The surface luminance degree of the display, the color deviation degree of the display content, the region contrast degree of the display content, and the edge blur degree of the display content are all used for the computation of the objective lighting effect, and glare can be identified automatically. By using the proposed system, the lighting effect of a desk application can be effectively improved.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant No. 61501016.

Author Contributions

Haoting Liu was the inventor of the proposed system and method; he wrote a patent and this paper, and participated in the experimental design and parts of the data analysis tasks. Qianxiang Zhou participated in the design of the human-involved experiments. Jin Yang contributed the data analysis simulations. Ting Jiang, Zhizhen Liu, and Jie Li performed the subjective lighting effect evaluation experiments and participated in parts of the data analysis tasks.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nefedov, E.; Maksimainen, M.; Sierla, S.; Yang, C.-W.; Flikkema, P.; Kosonen, I.; Luttinen, T. Energy efficient traffic-based street lighting automation. In Proceedings of the IEEE 23rd International Symposium on Industrial Electronics, Istanbul, Turkey, 1–4 June 2014; pp. 1718–1723.
  2. Leccese, F. Remote-Control system of high efficiency and intelligent street lighting using a ZigBee network of devices and sensors. IEEE Trans. Power Del. 2013, 28, 21–28. [Google Scholar] [CrossRef]
  3. Pamungkas, D.S.; Ward, K. Tele-Operation of a robot arm with electro tactile feedback. In Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Wollongong, Australia, 12–15 July 2013; pp. 704–709.
  4. Kar, A.; Kar, A. New generation illumination engineering—An overview of recent trends in science & technology. In Proceedings of the International Conference on Automation, Control, Energy and Systems, West Bengal, India, 1–2 February 2014; pp. 1–6.
  5. Jincheol, P.; Heeseok, O.; Sanghoon, L.; Bovik, A.C. 3D visual discomfort predictor: Analysis of disparity and neural activity statistics. IEEE Trans. Image Process. 2015, 24, 1101–1114. [Google Scholar] [CrossRef] [PubMed]
  6. Liu, H.; Li, J.; Wang, Z.; Cheng, J.; Lu, H.; Zhao, Y. Image quality feedback based adaptive video definition enhancement technique for space manipulation task. Int. J. Image Graph. 2011, 11, 153–175. [Google Scholar] [CrossRef]
  7. Li, J.; Yang, W.; Zhou, S.; Zhou, Q.; Jiang, T. Research on effect factors to function of human-machine system as displaying with LCD. Space Med. Med. Eng. 2009, 22, 122–125. [Google Scholar]
  8. Lavric, A.; Popa, V. Hardware design of a street lighting control system with vehicle and malfunction detection. In Proceedings of the 8th International Symposium on Advanced Topics in Electrical Engineering, Bucharest, Romania, 23–25 May 2013; pp. 1–4.
  9. Feng, P.; Zhou, C. Intelligent lighting system of fuzzy control. J. Guangxi Univ. Technol. 2009, 20, 19–22. [Google Scholar]
  10. Caicedo, D.; Pandharipande, A. Ultrasonic array sensor for indoor presence detection. In Proceedings of the 20th European Signal Processing Conference, Bucharest, Romania, 27–31 August 2012; pp. 175–179.
  11. Li, M.; Jia, G.; Qu, X. Color CCD imaging method for measuring light pollution. Urban Environ. Urban Ecol. 2012, 25, 42–46. [Google Scholar]
  12. Bernardo, M.V.; Pinheiro, A.M.G.; Pereira, M.; Fiadeiro, P.T. Objective evaluation of chromatic quality assessment. In Proceedings of the 2013 IEEE International Conference on Multimedia and Expo, San Jose, CA, USA, 15–19 July 2013; pp. 1–6.
  13. Gao, C.; Panetta, K.; Agaian, S. No reference color image quality measures. In Proceedings of the IEEE International Conference on Cybernetics, Geneva, Switzerland, 13–15 April 2013; pp. 243–248.
  14. Wang, X.; Tian, B.; Liang, C.; Shi, D. Blind image quality assessment for measuring image blur. In Proceedings of the International Congress on Image and Signal Processing, Hainan, China, 27–28 May 2008; pp. 467–470.
  15. Arnal, E.; Anthierens, C.; Bideaux, E. Consideration of glare from daylight in the control of the luminous atmosphere in buildings. In Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Budapest, Hungary, 3–7 July 2011; pp. 1070–1075.
  16. Virtanen, T.; Nuutinen, M.; Vaahteranoksa, M.; Oittinen, P.; Hakkinen, J. CID2013: A database for evaluating no-reference image quality assessment algorithms. IEEE Trans. Image Process. 2015, 24, 390–402. [Google Scholar] [CrossRef] [PubMed]
  17. Leccese, F.; Cagnetti, M.; Trinca, D. A smart city application: A fully controlled street lighting Isle based on Raspherry-Pi card, a ZigBee sensor network and WiMax. Sensors 2014, 14, 24408–24424. [Google Scholar] [CrossRef] [PubMed]
  18. Leccese, F.; Leonowicz, Z. Intelligent wireless street lighting system. In Proceedings of the 11th International Conference on Environment and Electrical Engineering, Venice, Italy, 18–25 May 2012; pp. 958–961.
  19. Grana, C.; Pellacani, G.; Seidenari, S.; Cucchiara, R. Color calibration for a dermatological video camera system. In Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK, 23–26 August 2004; pp. 798–801.
  20. Cheong, H.; Chae, E.; Lee, E.; Jo, G.; Paik, J. Fast image restoration for spatially varying defocus blur of imaging sensor. Sensors 2015, 15, 880–898. [Google Scholar] [CrossRef] [PubMed]
  21. Sirisha, B.; Sandhya, B. Evaluation of color spaces for feature point detection in image matching application. In Proceedings of the 3rd International Conference on Advances in Computing and Communications, Kerala, India, 29–31 August 2013; pp. 216–219.
  22. Liu, H.; Li, F.; Lu, H. Imaging air quality evaluation using definition metrics and detrended fluctuation analysis. In Proceedings of the 10th International Conference on Signal Processing, Beijing, China, 24–28 October 2010; pp. 968–971.
  23. Panetta, K.A.; Wharton, E.J.; Agaian, S.S. Human visual system-based image enhancement and logarithmic contrast measure. IEEE Trans. Syst. Man Cybern. B Cybern. 2008, 38, 174–188. [Google Scholar] [CrossRef] [PubMed]
  24. Goto, H.; Aso, H. Screen pattern removal for character pattern extraction from high-resolution color document images. In Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK, 23–26 August 2004; pp. 490–493.
  25. Liu, W. Based on the color filter array fabric image moiré fringe elimination algorithm. Sci. Technol. Eng. 2013, 13, 3466–3471. [Google Scholar]
  26. Bellia, L.; Cesarano, A.; Iuliano, G.F.; Spada, G. HDR luminance mapping analysis system for visual comfort evaluation. In Proceedings of the International Instrumentation and Measurement Technology Conference, Singapore, 5–7 May 2009; pp. 957–961.
  27. Attal, F.; Mohammed, S.; Dedabrishvili, M.; Chamroukhi, F.; Oukhellou, L.; Amirat, Y. Physical human activity recognition using wearable sensors. Sensors 2015, 15, 31314–31338. [Google Scholar] [CrossRef] [PubMed]
  28. Jung, J.; Yoon, S.; Ju, S.; Heo, J. Development of kinematic 3D laser scanning system for indoor mapping and as-built BIM using constrained SLAM. Sensors 2015, 15, 26430–26456. [Google Scholar] [CrossRef] [PubMed]
  29. Lee, J.-S. On designing paired comparison experiments for subjective multimedia quality assessment. IEEE Trans. Multimed. 2014, 16, 564–571. [Google Scholar] [CrossRef]
  30. Turner, P.L.; Someren, E.J.W.W.; Mainster, M.A. The role of environment light in sleep and health: Effects of ocular aging and cataract surgery. Sleep Med. Rev. 2010, 14, 269–280. [Google Scholar] [CrossRef] [PubMed]
  31. Wang, S.; Deng, C.; Lin, W.; Zhao, B.; Chen, J. A novel SVD-based image quality assessment metric. In Proceedings of the IEEE International Conference on Image Processing, Melbourne, Australia, 15–18 September 2013; pp. 423–426.
  32. Shen, T.; Zhao, J.; Li, H. The development and application of a testing system of reaction time based on mesopic vision. In Proceedings of the International Conference on Mechatronic Science, Electric Engineering and Computer, Jilin, China, 19–22 August 2011; pp. 418–420.
  33. Yao, Q. Ergonomics Research on Application of LED in Civil Cockpit Lighting. Ph.D. Thesis, Dept. Elect. Eng., Fudan Univ., Shanghai, China, 2012. [Google Scholar]
  34. International Commission on Illumination. Glare Evaluation System for Use within Outdoor Sports and Area Lighting; CIE 112; International Commission on Illumination: Vienna, Austria, 1994. [Google Scholar]
  35. Penaloza, C.I.; Mae, Y.; Cuellar, F.F.; Kojima, M.; Arai, T. Brain machine interface system automation considering user preferences and error perception feedback. IEEE Trans. Autom. Sci. Eng. 2014, 11, 1275–1281. [Google Scholar] [CrossRef]
  36. Zou, D.; Tan, P. CoSLAM: Collaborative visual SLAM in dynamic environments. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 354–366. [Google Scholar] [CrossRef] [PubMed]
  37. Wang, K.; Zhang, G.; Bao, H. Robust 3D reconstruction with an RGB-D camera. IEEE Trans. Image Process. 2014, 23, 4893–4906. [Google Scholar] [CrossRef] [PubMed]
  38. Shimizu, S.; Kadogawa, T.; Kikuchi, S.; Hashizume, T. Quantitative analysis of tennis experts’ eye movement skill. In Proceedings of the IEEE 13th International Workshop on Advanced Motion Control, Yokohama, Japan, 14–16 March 2014; pp. 203–207.
  39. Khushaba, R.N.; Kodagoda, S.; Lal, S.; Dissanayake, G. Driver drowsiness classification using fuzzy wavelet-packet-based feature-extraction algorithm. IEEE Trans. Biomed. Eng. 2011, 58, 121–131. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The different lighting effects of a display: (a) the display lighting effect of a sole light source; (b) the display lighting effect of multiple light sources; (c) an improper display lighting effect with glare.
Figure 2. The application prototype of the proposed lighting system.
Figure 3. The design of the proposed intelligent LED lighting system: (a) the hardware design of the intelligent lighting system; (b) the information processing flow chart of the proposed system.
Figure 4. The interface photos of the subjective lighting effect evaluation experiment software: (a) the initial software interface; (b) the interface of the subjective lighting effect evaluation experiment.
Figure 5. The flow chart of the proposed intelligent luminance control methods.
Figure 6. The photos of the experiment apparatuses: (a) the basic experiment apparatuses; (b) the dynamic bracket; (c) the wearable camera and the photography tripod; (d) the luminance meter; (e) the head support; (f) the experiment computer and the standard white color card.
Figure 7. The experiment system sketch maps and the photos captured under different lighting conditions: (a–c) the sketch maps of the experiment system, where α ≈ 80°, d1 ≈ 20 cm, d2 ≈ 80 cm, l1 ≈ 90 cm, l2 ≈ 15 cm, l3 ≈ 1.5 cm, γ ≈ 5°; (d) the captured photo samples using the proposed experiment system.
Figure 8. The illuminance distribution curves of the LED lamp under different lighting outputs and measurement conditions: (a) the illuminance curves at different sample distances; (b) the illuminance curves of different LED lighting intensities; (c) the illuminance results at different sampling positions of the computer LCD display.
Figure 9. The computation results of the objective LEEMs: (a) the objective evaluation results of OMSLDD; (b) the objective evaluation results of OMCDDDC; (c) the objective evaluation results of OMRCDDC; (d) the objective evaluation results of OMEBDDC.
Figure 10. The lighting effect outputs before and after the single LEEM-based control: (a) the lighting effect output before the application of the single LEEM-based control; (b) the lighting effect output after the application of the single LEEM-based control.
Figure 11. The glare image samples and their segmentation results: (a,b) the glare cases of improper lighting; (c,d) the binary segmentation results of (a,b), respectively.
Figure 12. The fitting relationship curves between the subjective evaluation index and the cluster center of the objective evaluation index: (a) the fitting result curve between SMSLDD and the cluster center of OMSLDD; (b) the fitting result curve between SMCDDDC and the cluster center of OMCDDDC; (c) the fitting result curve between SMRCDDC and the cluster center of OMRCDDC; (d) the fitting result curve between SMEBDDC and the cluster center of OMEBDDC.
Figure 13. The airplane cockpit instrument panel application of the proposed system: (a) the simulation interface of the instrument panel; (b,c) the parameter output interfaces, which use different character sizes; (d,e) the progress bars, which use different colors; (f,g) the circle dial plates, which use different character sizes and background colors.
Figure 14. The schematic diagram of the feedback control of the proposed intelligent lighting system.
Table 1. The color component samples of the standard color card.

Color Coordinate   Cyan    Blue    Green   Red     Yellow   Orange   White
X                  14.77   7.78    14.22   19.84   54.59    37.99    84.37
Y                  19.55   6.16    22.88   12.41   58.45    30.44    89.25
Z                  40.08   26.89   10.08   5.46    9.17     6.72     93.71
Table 2. The quantization processing method of the single LEEM-based control technique.

Threshold Scope      [T1, T2]   [T3, T4]   [T5, T6]   [T7, T8]   [T9, T10] 1
Quantitative Score   5          4          3          2          1

1 Ci (i = 1, 2, …, 5) is the cluster center; Ti (i = 1, 2, …, 10) is the threshold; and T1 < C1 < T2, T3 < C2 < T4, T5 < C3 < T6, T7 < C4 < T8, T9 < C5 < T10.
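
Read as an algorithm, Table 2 is an interval lookup: an objective LEEM value falling in [T1, T2] scores 5, and so on down to [T9, T10], which scores 1. A minimal Python sketch of this quantization (the fallback rule for values outside every interval is our own assumption, not specified by the table) could look like:

```python
def quantize_leem(value, thresholds):
    """Map one objective LEEM value to a quantitative score (5..1).

    thresholds: five (T_low, T_high) pairs ordered as in Table 2,
    each pair enclosing its cluster center C1..C5.
    """
    for score, (t_low, t_high) in zip(range(5, 0, -1), thresholds):
        if t_low <= value <= t_high:
            return score
    # Value outside all intervals: take the interval whose midpoint
    # is nearest (an assumed fallback).
    mids = [(lo + hi) / 2.0 for lo, hi in thresholds]
    nearest = min(range(5), key=lambda i: abs(value - mids[i]))
    return 5 - nearest
```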
Table 3. The illuminance measurement results of Figure 7d((d)-5–(d)-8).

Image Name         Maximum Illuminance   Minimum Illuminance   Average Illuminance   Illuminance Uniformity
                   (×10² lx)             (×10² lx)             (×10² lx)             (×10² lx)
Figure 7d((d)-5)   6.03                  1.23                  4.94                  2.47
Figure 7d((d)-6)   7.01                  1.54                  5.73                  2.42
Figure 7d((d)-7)   12.74                 4.91                  9.38                  3.62
Figure 7d((d)-8)   14.75                 5.16                  11.71                 5.30
Table 4. The single factor subjective lighting degree evaluation results of Figure 7d((d)-5–(d)-8).

                   Single LEEM Subjective Evaluation Degree
Image Name         SMSLDD   SMCDDDC   SMRCDDC   SMEBDDC
Figure 7d((d)-5)   2        2         3         3
Figure 7d((d)-6)   2        3         3         3
Figure 7d((d)-7)   3        3         4         4
Figure 7d((d)-8)   4        4         5         4
Table 5. The statistical results of the objective LEEMs in Figure 9.

Metric: OMSLDD
Degree     1               2               3                4                5
Mean       211.0157        211.0360        211.3514         211.4528         211.6554
Variance   6.0053 × 10−6   7.3102 × 10−6   1022.72 × 10−6   763.109 × 10−6   515.628 × 10−6

Metric: OMCDDDC
Degree     1                 2                 3                  4                 5
Mean       19.5305           15.4934           11.8511            10.5660           9.1785
Variance   612.6363 × 10−4   709.9228 × 10−4   2959.6242 × 10−4   615.6026 × 10−4   1331.8018 × 10−4

Metric: OMRCDDC
Degree     1                2                3                4               5
Mean       5.9951           6.8110           6.9854           6.8544          7.2084
Variance   28.1307 × 10−4   49.3477 × 10−4   60.9873 × 10−4   8.5301 × 10−4   29.6785 × 10−4

Metric: OMEBDDC
Degree     1                2               3                4                5
Mean       2.1418           2.0492          1.8586           1.9126           2.7585
Variance   12.1872 × 10−4   4.5413 × 10−4   10.3775 × 10−4   10.8874 × 10−4   14.3658 × 10−4
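
One plausible way (our assumption; the actual thresholds come from the paper's clustering step, not from this rule) to relate the per-degree means in Table 5 to the interval bounds of Table 2 is to place each bound midway between adjacent cluster means, padding the outermost bounds by the same half-gap:

```python
def thresholds_from_centers(centers):
    """Build five (T_low, T_high) pairs around sorted cluster centers."""
    c = sorted(centers)
    mids = [(a + b) / 2.0 for a, b in zip(c, c[1:])]
    lows = [c[0] - (mids[0] - c[0])] + mids
    highs = mids + [c[-1] + (c[-1] - mids[-1])]
    return list(zip(lows, highs))

# Example with the OMCDDDC means of Table 5 (degrees 1..5):
print(thresholds_from_centers([19.5305, 15.4934, 11.8511, 10.5660, 9.1785]))
```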
Table 6. The data samples and their cluster centers of the objective multiple LEEMs.

Subjective Lighting Degree   Objective LEEMs Vector               Cluster Center
                             [OMSLDD OMCDDDC OMRCDDC OMEBDDC]     [OMSLDD OMCDDDC OMRCDDC OMEBDDC]
1                            [211.0122 19.4462 6.0243 2.1990]     [211.0157 19.5305 5.9951 2.1418]
1                            [211.0133 19.6238 5.9806 2.1883]
1                            [211.0134 19.6238 5.9962 2.1404]
2                            [211.0333 15.2866 6.8128 2.0490]     [211.0360 15.4934 6.8110 2.0492]
2                            [211.0342 15.2866 6.7730 2.0295]
2                            [211.0354 15.4643 6.7668 1.9936]
3                            [211.3667 12.0009 6.9177 1.8845]     [211.3514 11.8511 6.9854 1.8586]
3                            [211.3323 11.9121 6.9079 1.8619]
3                            [211.3542 12.0009 6.9371 1.8672]
4                            [211.4667 10.8630 6.8696 1.9042]     [211.4528 10.5660 6.8544 1.9126]
4                            [211.4064 10.8630 6.8498 1.8618]
4                            [211.4557 10.7742 6.8650 1.8713]
5                            [211.6667 8.5542 7.1380 2.8147]      [211.6554 9.1785 7.2084 2.7585]
5                            [211.6655 9.0870 7.1592 2.7632]
5                            [211.6264 8.5542 7.2227 2.7359]
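
The multiple LEEMs-based control can then be read as nearest-centroid matching against the cluster centers of Table 6. The sketch below is a hypothetical illustration (the per-metric scaling, the distance measure, and the PWM-style control step are our assumptions, and degree 5 is taken as the best lighting effect, consistent with Table 8):

```python
import numpy as np

# Cluster centers from Table 6, rows indexed by lighting degree 1..5;
# columns are [OMSLDD, OMCDDDC, OMRCDDC, OMEBDDC].
CENTERS = np.array([
    [211.0157, 19.5305, 5.9951, 2.1418],  # degree 1
    [211.0360, 15.4934, 6.8110, 2.0492],  # degree 2
    [211.3514, 11.8511, 6.9854, 1.8586],  # degree 3
    [211.4528, 10.5660, 6.8544, 1.9126],  # degree 4
    [211.6554,  9.1785, 7.2084, 2.7585],  # degree 5
])

def estimate_degree(leem_vec):
    """Return the lighting degree (1..5) of the nearest cluster center.

    Each column is scaled by its spread so that OMSLDD (around 211)
    does not dominate the Euclidean distance.
    """
    scale = CENTERS.max(axis=0) - CENTERS.min(axis=0) + 1e-9
    dist = np.linalg.norm((CENTERS - np.asarray(leem_vec)) / scale, axis=1)
    return int(np.argmin(dist)) + 1

def control_step(leem_vec, duty, target_degree=5, step=0.05):
    """One feedback iteration: nudge the LED duty cycle toward the target."""
    degree = estimate_degree(leem_vec)
    if degree < target_degree:
        duty = min(1.0, duty + step)   # brighten the LED lamp
    elif degree > target_degree:
        duty = max(0.0, duty - step)   # dim the LED lamp
    return duty, degree
```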
Table 7. The objective LEEMs comparison before and after the proposed lighting control.

Image Source   Lighting Control Method         OMSLDD     OMCDDDC   OMRCDDC   OMEBDDC
               Initial lighting (no control)   211.0342   15.1090   6.7809    2.0575
Figure 10a     Single LEEM control method      211.6236   9.3514    7.2295    2.7762
               Multiple LEEMs control method   211.6565   9.2626    7.1573    2.8662
Table 8. The subjective evaluation results of the lighting effect evaluation degree before and after the application of the intelligent lighting control.

             Before Intelligent Lighting Control     After Intelligent Lighting Control
Subject ID   SMSLDD  SMCDDDC  SMRCDDC  SMEBDDC       SMSLDD  SMCDDDC  SMRCDDC  SMEBDDC
1            1       2        1        1             5       4        5        5
2            1       1        1        2             5       4        5        4
3            1       2        1        2             4       5        5        5
4            2       1        1        1             5       4        5        5
5            2       1        2        1             4       4        5        4
6            1       1        1        2             4       5        4        4
7            1       2        2        1             5       4        4        4
8            2       2        2        1             4       4        4        4
Table 9. The comparisons of the visual fatigue degree and the reading error rate before and after the application of the proposed intelligent lighting system.

             Before Intelligent Lighting Control          After Intelligent Lighting Control
Subject ID   Visual Fatigue   Reading Error Rate          Visual Fatigue   Reading Error Rate
             Index            Index                       Index            Index
1            3                9.8%                        2                5.4%
2            3                8.3%                        1                4.5%
3            3                8.7%                        2                4.2%
4            3                7.8%                        2                4.9%
5            4                10.1%                       1                5.3%
6            4                9.8%                        1                6.2%
7            4                10.3%                       2                4.1%
8            3                8.7%                        1                4.3%
