Journal of Imaging 2018, 4(1), 19; doi:10.3390/jimaging4010019

Application of High-Dynamic Range Imaging Techniques in Architecture: A Step toward High-Quality Daylit Interiors?
Coralie Cauwerts 1,* and María Beatriz Piderit 2
1 Architecture & Climat, Faculté d'architecture, d'ingénierie architecturale, d'urbanisme (LOCI), Université catholique de Louvain (UCL), 1348 Louvain-la-Neuve, Belgium
2 Departamento de Diseño y Teoría de la Arquitectura, Facultad de Arquitectura, Construcción y Diseño, Universidad del Bío-Bío, 4030000 Concepción, Chile
Correspondence: Tel.: +32-10-472-142
Received: 29 November 2017 / Accepted: 10 January 2018 / Published: 12 January 2018


High dynamic range (HDR) imaging techniques are nowadays widely used in building research to capture luminances in the occupant's field of view and investigate visual discomfort. This photographic technique also makes it possible to map sky luminances. Such images can be used to illuminate virtual scenes, a technique called image-based lighting (IBL). This paper investigates IBL in a lighting quality research context, with the aim of accelerating the development of appearance-driven performance indicators. Simulations were carried out using the Radiance software. The ability of IBL to accurately predict indoor luminances is assessed by comparison with luminances from HDR photographs and with luminances predicted by simulations in which the sky is modeled in several other, more traditional ways. The present study confirms previous observations that IBL leads to luminance values similar to those of far less laborious simulations in which the sky is modeled from outdoor illuminance measurements. Both IBL and these measurement-based methods show smaller differences with HDR photographs than sky models not based on outdoor measurements.
Keywords: daylighting; high dynamic range imaging techniques; image-based lighting; Radiance; simulation; renderings; prediction; psychophysics

1. Introduction

In the current context of energy crises and climate change, the work of the architect has become highly complex. One current risk is focusing on building energy performance to the detriment of other important aspects of architectural quality. In the field of lighting, quality has been defined as conformance to occupants' needs, to architectural requirements, and to economic and environmental matters [1]. As Figure 1 illustrates by placing lighting quality within the Vitruvian triad, occupants' needs should not be understood as visual needs only. Lighting should obviously support occupants' activities by providing sufficient light and avoiding glare that causes visual discomfort. However, it must also help satisfy non-visual needs (among others, social interaction, performance, health, and safety) [2].
Thanks to its recognized benefits for occupants, its energy-efficiency potential, and its added value for buildings, daylight remains the preferred source of lighting, and in most places electric lighting is considered a complement to daylighting. In practice, the most common approach for predicting the lighting quality of daylit spaces is to calculate the daylight factor (DF). This static, illuminance-based indicator, developed in the 1970s, describes daylight provision (visual needs) without taking into account either the variation of daylight over the day and the year or the location of the building. Yet important advances have been made since then, leading to the development of three new types of lighting quality indicators:
  • Thanks to computer technology, computing power increased dramatically from the 1970s onward, and the first physically-based lighting simulation systems were developed in the 1980s. These advances favored the development of dynamic, climate-based performance indicators such as daylight autonomy (DA) [3] and useful daylight illuminance (UDI) [4]. These metrics are calculated from weather files describing typical conditions at the building's location. They take into account daylight variability, target illuminance level, and building occupancy periods, and they indicate the potential to reduce lighting consumption through daylighting. We observe that UDI and DA, developed more than 10 years ago, have had difficulty being adopted by practitioners. In our opinion, potential reasons are (1) calculation tools and software not adapted to the realities of architectural design and (2) the lack of a normative/regulatory context specifying targets.
  • In the past decade, HDR photography has been increasingly used by lighting researchers as a luminance data acquisition tool. Compared with spot luminance meters, this technique captures luminances in the human field of view more rapidly and at a higher resolution. It also enables statistical analysis of the luminances of specific surfaces/areas of interest. The accuracy of luminance measurement with HDR photography is strongly influenced by the care taken during acquisition and processing [5,6,7]; a measurement error of less than 10% can be expected [6]. HDR photography has certainly accelerated the development of luminance-based metrics predicting visual discomfort caused by glare (e.g., DGP [8]), and will probably facilitate their validation.
  • Last, in recent years, we have observed a growing interest among lighting researchers in circadian matters [9,10]. This interest follows the discovery in the 2000s of a third type of retinal photoreceptor [11,12]. Light is today recognized as the “major synchronizer of circadian rhythms to the 24-h solar day” [13]. To help designers address building daylight access that supports circadian regulation, circadian daylight metrics are under development [14,15].
Figure 1 highlights the fact that the research efforts presented above have mainly addressed two of the three aspects of the Vitruvian triad (i.e., utilitas and firmitas). By contrast, few works investigate the prediction of visual appearance, atmosphere, and aesthetics, which relate to the third dimension of architecture, venustas, and are probably a driving force for the designer. In our opinion, this reveals that lighting research takes insufficient account of the design process of the architect.
To accelerate the development of appearance-driven performance indicators such as those developed in [16], some methodological challenges should be addressed. The classical method for exploring the appearance of lit spaces is the psychophysical approach: studying the relationship between physical measurements of (visual) stimuli and the sensations/perceptions those stimuli evoke in observers. In the context of daylighting, one difficulty with such an approach is controlling the natural variation of lighting conditions. To overcome this issue, physically-based renderings are particularly suitable for psychophysics, as they provide both physical data and visual stimuli. The first few validation studies suggest that such images can serve as reasonable surrogates for the real world [17,18,19]. This kind of work, investigating the perceptual equivalence between actual and virtual daylit environments, should continue. In such a validation context, image-based lighting (IBL), a process of illuminating virtual scenes with HDR photographs as explained in the tutorial by Debevec [20], is of great interest, as it can minimize differences in light distribution between real and virtual scenes. To the best of our knowledge, the rare published works investigating IBL in lighting research are from Inanici [21,22,23]. Her main conclusions are that image-based lighting renderings accurately predict indoor luminous conditions and that the method is particularly interesting in urban contexts or for sites with vegetation.
In the present work, we sought to:
  • Investigate the ability of IBL renderings to accurately predict luminance distributions, in indoor spaces, in comparison to more traditional ways to describe the light source in Radiance [24];
  • Determine how similar our observations are to those reported by Inanici [21,22,23];
  • Quantify the error between actual and rendered luminances.

2. Materials and Methods

To evaluate the accuracy of IBL for predicting luminance distribution, a numerical comparison was done between luminance values extracted from HDR photographs of real rooms and simulated luminances. Four actual (and thus complex) rooms were studied (see Figure 2). They are located in Louvain-la-Neuve, Belgium (50°40′ N, 4°33′ E). They were photographed three times, on 9 March between 11:00 and 14:20.
Simultaneously to HDR luminance acquisition in the real rooms, HDR images of the sky were taken, and outdoor illuminances (horizontal global and horizontal diffuse illuminances) were recorded. Sky images and outdoor illuminances are used for describing the sky in simulations (see Section 2.3).
HDR image processing and renderings were carried out with Radiance, a physically-based rendering system developed in the 1980s by Greg Ward for predicting the light levels and appearance of yet unbuilt spaces in an architectural lighting design context [24]. This open-source software supports image-based lighting and is probably the software most widely used by lighting researchers.

2.1. Outdoor Illuminance Measurements

Table 1 summarizes outdoor illuminance levels recorded simultaneously with the HDR luminance acquisition in the real rooms. Outdoor global horizontal illuminance (E_glob_horiz) and outdoor diffuse horizontal illuminance (E_dif_horiz) were measured with a Hagner EC1-X illuminance meter. For simulations, direct normal illuminance (E_dir_norm) was calculated from the outdoor illuminance measurements and the altitude of the sun (θ_sun) as follows:
E_dir_norm = (E_glob_horiz − E_dif_horiz) / sin(θ_sun),   (1)
The altitude of the sun was determined for the given date, time, and location using the solar geometry algorithm given by Szokolay [25]. Table 1 shows that outdoor global horizontal illuminances (E_glob_horiz) varied between 15,300 and 71,700 lx. Sky types were intermediate or overcast.
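As a minimal sketch of Equation (1), the conversion from horizontal illuminance measurements to direct normal illuminance can be written as follows (the illuminance values and solar altitude below are hypothetical, not taken from Table 1):

```python
import math

def direct_normal_illuminance(e_glob_horiz, e_dif_horiz, sun_altitude_deg):
    """Direct normal illuminance (lx) from global and diffuse horizontal
    illuminances (lx) and the solar altitude (degrees), as in Equation (1)."""
    return (e_glob_horiz - e_dif_horiz) / math.sin(math.radians(sun_altitude_deg))

# Hypothetical reading: 35,000 lx global, 20,000 lx diffuse, sun at 30 deg altitude
e_dir_norm = direct_normal_illuminance(35_000, 20_000, 30.0)  # about 30,000 lx
```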

2.2. Real Rooms Luminance Acquisition

Pictures of the indoor spaces were taken with a Canon EF-S 17–85 mm IS lens mounted on a Canon 40D camera. In each room, a series of LDR pictures was taken three times, varying the exposure time while keeping the aperture constant. For easy and automatic bracketing, the camera was controlled from a computer over a USB cable using the DSLR Remote Pro software. The white balance of the camera was set to daylight and the lowest sensitivity (ISO 100) was chosen to reduce noise in the HDR picture, as recommended in [6]. A tripod was used to avoid camera shake and obtain sharp HDR pictures, and a double-axis bubble level placed on the camera ensured that the device was level. To create panoramic images, a series of pictures was taken for each exposure by rotating the camera around its entrance pupil; the pictures for each exposure were then stitched into an LDR panorama in PTGuiPro.
Merging multiple-exposure LDR images into an HDR image requires knowledge of the camera response function, which establishes the relation between RGB pixel values and relative radiance values. This response can be determined once for a given camera and reused for other sequences [26]. The response function of the Canon 40D camera used in the indoor rooms was recovered with the hdrgen program developed by Ward for Linux [27], based on a sequence of seven images of an interior daylit scene with large, smooth gradients. hdrgen uses Mitsunaga and Nayar's algorithm [28] to derive the camera's response, determining a specific function for each channel. The recovered response is given in Equation (2).
R = 2.3009x^4 − 2.28146x^3 + 0.869445x^2 + 0.109496x + 0.00161688
G = 2.39965x^4 − 2.40705x^3 + 0.902785x^2 + 0.103059x + 0.00154816
B = 2.70252x^4 − 2.84321x^3 + 1.0529x^2 + 0.0857588x + 0.00202927   (2)
All HDR images of indoor spaces were created by reusing the camera response of Equation (2) with the hdrgen program. Output HDR data were stored in the Radiance RGBE format. To retrieve real luminance values from the HDR images, a photometric calibration was performed: the luminance of several objects in the HDR image (extracted using the pvalue program of the Radiance software [29]) was compared with luminance measurements taken with a Minolta LS100 spot luminance meter. For each scene, a global calibration factor (CF) was determined as follows:
CF_indoor_scenes = (1/n) Σ_{i=1}^{n} (L_spot_lum_i / L_HDR_i),   (3)
where n is the number of objects, L_spot_lum_i is the luminance of object i measured with the spot luminance meter, and L_HDR_i is the luminance of the same object extracted from the HDR picture. In the present study, the resulting calibration factors for the 12 scenes (4 rooms × 3 times) vary between 1.12 and 1.47.
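As an illustration, the red-channel polynomial of Equation (2) and the calibration factor of Equation (3) can be evaluated directly; the spot-meter and HDR luminance readings below are hypothetical, not the paper's measurements:

```python
def camera_response_red(x):
    """Red-channel response of the indoor Canon 40D (Equation (2)):
    maps a normalized pixel value x in [0, 1] to relative radiance."""
    return 2.3009*x**4 - 2.28146*x**3 + 0.869445*x**2 + 0.109496*x + 0.00161688

def calibration_factor(spot_luminances, hdr_luminances):
    """Global calibration factor of Equation (3): mean ratio between spot
    luminance meter readings and luminances extracted from the HDR image."""
    ratios = [ls / lh for ls, lh in zip(spot_luminances, hdr_luminances)]
    return sum(ratios) / len(ratios)

# Hypothetical spot readings (cd/m2) vs. HDR-derived values for three objects
cf = calibration_factor([120.0, 45.0, 300.0], [100.0, 40.0, 250.0])  # 1.175
```

Note that the polynomial maps the maximum pixel value (x = 1) to a relative radiance of approximately 1, as expected for a normalized response curve.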

2.3. Sky Vault Luminance Acquisition

To capture the entire sky vault, a Sigma f/2.8 4.5 mm fisheye lens was mounted on a second Canon 40D camera. This device (camera + fisheye lens) produces circular pictures capturing a 180° hemisphere. With fisheye lenses, the vignetting effect (the decrease in brightness from the center of the picture to its periphery) is not negligible and should be corrected; with our device, at large apertures, luminance losses greater than 50% are observed at the periphery of the pictures [30]. A tripod and a double-axis bubble level were used to ensure the horizontality of the camera. The white balance was set to daylight and the ISO (sensor sensitivity) to 100.
Acquiring luminances of the sky vault with HDR photography is more challenging than for indoor spaces because of the high luminance of the sun. Nevertheless, as demonstrated by Stumpfel et al. [31], saturation of the camera's sensor can be avoided by (1) using a neutral density filter and applying a correction for its presence and (2) carefully selecting the settings (aperture and exposure time) for the capture of the LDR images. In the present study, a neutral density filter (Kodak ND 3.00) transmitting 0.1% of the incident light was placed between the lens and the camera. Then, following best practice [21], two sequences of multiple-exposure images were taken to capture the wide luminance range of sunny skies. Both varied the exposure time while keeping the aperture constant: a first series of LDR pictures was taken at a large aperture (f/4) to capture the low luminances of the cloud layer, and a second series at a smaller aperture (f/16) to capture the high luminances of the sun and its corona. For both apertures, the ND filter was present, and the shutter speed varied between 25 s and 1/2500 s in 2-stop increments.
As the camera response function can vary from one device to another, even between cameras of the same model [32], a specific response function was determined for this second Canon 40D camera (see Equation (4)). Again, hdrgen was used to derive the curves, based on a sequence of 11 images of an outdoor scene. Figure 3 illustrates the responses of the two Canon 40D cameras used in the present work.
R = 2.38624x^4 − 2.47617x^3 + 0.977738x^2 + 0.111055x + 0.00113715
G = 2.42246x^4 − 2.47665x^3 + 0.952092x^2 + 0.100829x + 0.00127057
B = 2.51788x^4 − 2.61582x^3 + 1.00131x^2 + 0.094999x + 0.00162661   (4)
Each sequence of LDR pictures of the sky was merged into an HDR picture using the camera response of Equation (4) and the hdrgen program. For intermediate skies, the two HDR sky images (from the f/16 and f/4 aperture series) were combined: luminances higher than 500,000 cd/m² were taken from the f/16 HDR picture, luminances lower than 30,000 cd/m² from the f/4 HDR sky image, and luminances between these values were linearly blended. For overcast skies, only the f/4 aperture series was used (no pixel is saturated in the absence of the sun).
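The combination rule can be sketched per pixel as follows. The paper specifies the two thresholds (30,000 and 500,000 cd/m²) but not the exact weighting, so the linear blend below, driven by the unsaturated f/16 value, is an assumption:

```python
def blend_sky_luminance(l_f4, l_f16, lo=30_000.0, hi=500_000.0):
    """Combine the two HDR sky captures for one pixel (cd/m2): below `lo`
    take the f/4 value, above `hi` take the f/16 value, and blend linearly
    in between. The f/16 pixel decides the regime, since the f/4 capture
    saturates near the sun."""
    if l_f16 >= hi:
        return l_f16
    if l_f16 <= lo:
        return l_f4
    w = (l_f16 - lo) / (hi - lo)      # 0 at `lo`, 1 at `hi`
    return (1.0 - w) * l_f4 + w * l_f16
```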
The main steps of the calibration process necessary to retrieve real luminance values from HDR sky image are, as illustrated in Figure 4:
  • A neutral density filter correction, determined as proposed by Stumpfel et al. [31] in photographing a Macbeth color chart with and without the ND filter;
  • A vignetting correction counteracting the luminance losses observed at the periphery of the sky image with our device (Canon 40D + Sigma 4.5 mm): about 50% at the f/4 aperture and 4% at f/16;
  • A calibration of the resulting (combined) HDR image, based on the measurement of outdoor illuminance. To determine the calibration factor (see Equation (5)), outdoor global horizontal illuminance was compared to illuminance from HDR picture calculated with evalglare (a Radiance-based tool [33]) after modifying the projection type of the image from equisolid to equidistant (using the Radiance file).
    CF_sky_image = E_glob_horiz / E_HDR,   (5)
    where E_glob_horiz is the outdoor global horizontal illuminance measured during the sky image capture and E_HDR is the illuminance calculated from the HDR image. Calibration factors vary between 0.95 and 1.41.
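The calibration chain above can be sketched for a single pixel. The scalar factors here are stand-ins for what are really image-wide correction maps (vignetting varies with eccentricity), and the numeric values are assumptions for illustration:

```python
def calibrate_sky_pixel(l_raw, nd_factor, vignetting_loss, cf):
    """Sketch of the per-pixel calibration chain for the HDR sky image:
    undo the ND filter attenuation, compensate the vignetting loss at this
    pixel's position, then apply the global calibration factor of Eq. (5)."""
    l = l_raw * nd_factor            # ND 3.00 transmits ~0.1%, so factor ~1000
    l = l / (1.0 - vignetting_loss)  # e.g. 0.04 loss near the periphery at f/16
    return l * cf                    # global factor from the illuminance ratio
```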

2.4. Renderings

The four actual rooms were modelled in Ecotect and rendered in Radiance. As for most rendering tools, describing a scene in Radiance requires describing the geometry, the materials, and the light source(s) (see Figure 5). The geometry was described based on building plans and in situ measurements. Materials were described based on in situ colorimetric measurements; some hypotheses were made regarding their specularity and roughness properties.
A first series of renderings was created using the HDR sky images (IBL renderings). The mapping onto the virtual hemispherical vault was done as described in the IBL tutorial by Debevec [20], but with an equisolid projection type. Beforehand, the HDR pictures were cropped into a square, and a black border was added around the image circle. For intermediate skies, mksource was used to extract a direct light source from the HDR pictures. After several tests, the radiance threshold was set to 5586 W·sr⁻¹·m⁻² and the source diameter to 1.153, as in [23].
In order to evaluate the interest of IBL renderings, more traditional ways to describe the light source (the sky vault), in Radiance, were also investigated. We tested two sky model generator programs included in the software: gensky and gendaylit. Gensky produces sky luminance distributions based either on the uniform luminance model, the CIE overcast sky model, the CIE clear sky model, or the Matsuura intermediate sky model [34]. Gendaylit generates the daylight source using Perez models. The four following ways to describe the daylight source were tested:
  • Gensky, specifying date, time, and location (gensky_def). This way of describing the light source is used by many novice users and practitioners unfamiliar with lighting simulations.
  • Gensky, specifying date, time, location, and sky type (gensky_sky). The sky type was determined by a subjective evaluation of the cloud layer.
  • Gensky, specifying date, time, location, sky type, and horizontal diffuse and direct irradiances (gensky_br). The irradiances were determined from outdoor measurements.
  • Gendaylit, specifying date, time, location, sky type, and direct normal and diffuse horizontal illuminances (gendaylit). The illuminances were determined from outdoor measurements.
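The paper does not reproduce the exact command lines; under the standard gensky/gendaylit interfaces they might look roughly as follows. Location flags use Radiance's west-positive longitude convention, and every measured value (sky type, irradiances, illuminances) is a placeholder, not the paper's data:

```python
# Sketch of the four sky-description methods as Radiance command lines, for
# 9 March at 11.0 h in Louvain-la-Neuve (50.67 N, 4.55 E -> -o -4.55;
# standard meridian for UTC+1 -> -m -15).
LOC = "-a 50.67 -o -4.55 -m -15"

sky_commands = {
    # (1) gensky_def: date, time, and location only
    "gensky_def": f"gensky 3 9 11.0 {LOC}",
    # (2) gensky_sky: subjectively chosen sky type added (-i intermediate,
    #     -c overcast, +s sunny)
    "gensky_sky": f"gensky 3 9 11.0 -i {LOC}",
    # (3) gensky_br: measured horizontal diffuse (-B) and direct (-R)
    #     irradiances in W/m2
    "gensky_br": f"gensky 3 9 11.0 -i -B 40.0 -R 120.0 {LOC}",
    # (4) gendaylit: measured direct-normal and diffuse-horizontal
    #     illuminances in lux (-L)
    "gendaylit": f"gendaylit 3 9 11.0 -L 25000 20000 {LOC}",
}

for name, cmd in sky_commands.items():
    print(f"{name:>10}: {cmd}")
```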
Rooms #1 and #2, which have more complex geometries, were simulated with slightly stricter rendering parameters (-ab 8 -aa 0.08 -ar 512 -ad 2048 -as 512) than Rooms #3 and #4 (-ab 7 -aa 0.15 -ar 512 -ad 2048 -as 512).

3. Results

The comparison between pictures and simulations is challenging: given small geometric misalignments between the HDR pictures taken in the real world and the renderings, a per-pixel comparison would show large differences. In the present study, we first visually compared the sky luminance distributions generated by the gensky and gendaylit programs with the HDR sky images used for IBL renderings. Then, luminance maps of real and rendered spaces were compared. Last, a surface-to-surface analysis was done, comparing the mean luminances of real surfaces (walls, ceiling, and floor) with those of rendered ones (see Figure 6 for an illustration of the studied surfaces in Room#3).
In order to quantify the difference between real and virtual luminances, three indicators were calculated:
  • The relative mean bias error (MBE) with respect to the mean luminance by surface in the real space. MBE is a measure of overall bias error and is defined as:
    MBE = (1/n) Σ_{i=1}^{n} (L_i − L_true) / L_true,
  • The mean absolute percentage error (MAPE) with respect to the mean luminance by surface in the real space. MAPE is defined as:
    MAPE = (100/n) Σ_{i=1}^{n} |(L_i − L_true) / L_true|  (%),
  • The relative root mean square error (RMSE), which gives a relatively high weight to large difference with real luminances, contrary to the other indicators. RMSE is calculated as:
    RMSE = √[(1/n) Σ_{i=1}^{n} ((L_i − L_true) / L_true)²],
In the three equations, L_i is the mean luminance of surface i in the rendering, L_true is the mean luminance of the corresponding real surface, and n is the number of studied surfaces.
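The three indicators can be implemented directly from their definitions; the surface luminances below are hypothetical:

```python
import math

def error_metrics(rendered, real):
    """Relative MBE, MAPE (%), and relative RMSE between the mean surface
    luminances of a rendering and of the corresponding real surfaces."""
    rel = [(li - lt) / lt for li, lt in zip(rendered, real)]
    n = len(rel)
    mbe = sum(rel) / n                                # signed overall bias
    mape = 100.0 * sum(abs(r) for r in rel) / n       # mean absolute error, %
    rmse = math.sqrt(sum(r * r for r in rel) / n)     # weights large errors more
    return mbe, mape, rmse

# Hypothetical mean luminances (cd/m2) for three surfaces
mbe, mape, rmse = error_metrics([90.0, 120.0, 60.0], [100.0, 100.0, 50.0])
```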

3.1. Visual Comparison of Sky Maps

As illustrated in Table 2, for most skies the luminance distribution is similar whatever the method used to produce it. However, the sky maps produced by gensky_def and gensky_sky have lower luminances than the real skies. Skies generated with the gensky and gendaylit programs using outdoor illuminance measurements (gensky_br and gendaylit) are closer to the IBL virtual sky vaults and to the real sky luminances.

3.2. Visual Comparison of Indoor Spaces

Two groups of renderings can be distinguished based on the indoor luminance maps they produced (see Table 3, Table 4, Table 5 and Table 6):
  • Gensky_def and gensky_sky produce similar luminance maps that underestimate real luminances;
  • Gensky_br, gendaylit, and IBL form a second group, producing luminance maps that appear closer to the real luminances than those of the first group. Nevertheless, Table 3 shows a slight underestimation of luminances in Room#1 in comparison with the real luminances, and this underestimation is slightly larger in Room#2 (see Table 4). In Room#3 at 11:25 (see Table 5), an overestimation by these three kinds of rendering is observed, and in Room#4 (see Table 6) the luminance of the ceiling seems overestimated by simulation.

3.3. Surface-to-Surface Comparison

Figure 7 highlights the underestimation of luminances predicted by gensky_def and gensky_sky. Gensky_br, gendaylit, and IBL profiles seem more similar to real luminances despite some large disparities.
Figure 8 illustrates the relative mean bias error calculated by room. It confirms the quasi-systematic underestimation of luminances predicted by gensky_def and gensky_sky. Also, whatever the rendering type, an underestimation is observed in Room#1 and Room#2. Figure 8 also shows that gensky_br, gendaylit, and IBL minimize the error with respect to luminances extracted from HDR photographs; the errors produced with gensky_def and gensky_sky are almost double (except in Room#4). To evaluate the impact of the misalignment between photographs of real scenes and renderings, a 50-by-50 px shift was introduced in the real image (the vertical size of the images is about 3000 pixels). Relative MBE by room were calculated between the shifted images and the corresponding original pictures: they vary between −5% and 5%, and are always negative in Room#1 and Room#2 and always positive in Room#3 and Room#4.
The mean relative error over all rooms was estimated with MAPE, which varies between 52% for gendaylit renderings and 72% for gensky_def renderings. Gendaylit renderings thus minimize the error with real luminances, while gensky_def maximizes it; in between, MAPE is 57%, 62%, and 68% for IBL, gensky_br, and gensky_sky, respectively. The 50-by-50 px shift alone introduces, depending on the scene, a MAPE between 2% and 42%, which is not negligible.
RMSE was also calculated. Gendaylit and gensky_sky minimize errors with an RMSE of 70%; errors are 77%, 79%, and 91% for IBL, gensky_def, and gensky_br, respectively. The 50-by-50 px shift leads to an RMSE between 3% and 46%, depending on the room.
In a second step, relative MBE were calculated with IBL renderings as the reference. Differences between gendaylit, gensky_br, and IBL renderings are small (see Figure 9).

4. Discussion

We observed in this work that IBL, gendaylit, and gensky_br are three ways of describing the light source in Radiance that lead to similar luminance distributions. Moreover, among the five types of simulation we tested, these three rendering types (all based on physical measurements) minimize the error with respect to luminances extracted from HDR photographs. We also observed that skies generated with gensky by specifying only date, time, and location (gensky_def), or additionally only the sky type (gensky_sky), almost systematically underestimate luminances in comparison to HDR luminance data. This highlights that a basic use of gensky without specifying outdoor irradiances can lead to an important underestimation of luminances.
The similarity between HDR photographs and IBL, as well as the underestimation of the daylight availability with gensky_def in comparison to IBL, gendaylit, and gensky_br, were also observed by Inanici, respectively, in [21] and [23] (note that in this second reference, renderings are not compared to real spaces). Contrary to Inanici’s observation that the skies generated with gensky using outdoor irradiance measurements (gensky_br) are closer to HDR sky pictures than the sky generated with gendaylit, we observed a greater similarity between the skies produced by gendaylit and the HDR sky images.
In the present study, we sought to quantify observed differences. We selected three indicators. Errors calculated between actual and rendered luminances are quite large but similar to those calculated by Karner and Prantl in a study discussing the difficulties of comparing photographs and rendered images [35]. We partially explain these large errors by the fact that, as highlighted by the 50-by-50px shift analysis, our indicators are largely influenced by small misalignments.
We share Inanici's view [21,23] on the value of IBL renderings when the building is in an urban context or surrounded by vegetation influencing daylight availability. Such neighbouring elements are difficult and/or time-consuming to model, and in these environments illuminating the virtual scene with a light probe image can be useful.
In the present work, we investigated IBL renderings as a way to reduce differences in luminance distribution between real and virtual scenes, as part of validating tools aimed at developing new appearance-oriented indicators. Developing such indicators is a way to reduce the existing gap between designers and lighting researchers and is essential for fostering high-quality daylit interiors; this is more important today than ever, as people in industrialized countries spend more than 80% of their time indoors. In the present study, the benefit of IBL for validation work investigating the perceptual equivalence between actual and virtual daylit environments, such as [18,36], was not demonstrated, since gendaylit and gensky_br give results similar to IBL. Moreover, generating skies with gendaylit or gensky is far less laborious than preparing HDR sky images for IBL renderings. As our study cases were not strongly affected by direct sun, the investigation of IBL renderings should continue, and further work should be done with sunnier skies and interior spaces receiving direct sun.
Last, in the context of seeking alternative environments for investigating the visual perception of ambiances, other HDR technologies have to be investigated. In the present study, we discussed the value of IBL renderings, which use HDR sky images to predict luminance maps of unbuilt spaces (HDR images). In a psychophysical approach, once these visual stimuli (HDR images) are created, they must be displayed to observers to collect perceptions, the aim being to better understand the human perceptual response to the visual environment. Which display device and which tone-mapping operator to use to ensure the perceptual accuracy of rendered lit scenes are other recurring issues with this type of approach [19,37,38]; they still have to be tackled to accelerate the development of visual-appearance metrics.


Acknowledgments

C.C. is a postdoctoral researcher at the Fund for Scientific Research (FNRS), Belgium. She thanks her colleagues for their help in the simultaneous acquisition of HDR sky images and indoor pictures. M.B.P. is a researcher in the Sustainable Architecture and Construction Research Group of the University of Bío-Bío, Chile. This work was partially supported by a travel grant (SUB/2015/217204) from Wallonie-Bruxelles International (WBI).

Author Contributions

C.C. and M.B.P. analyzed the data and wrote the paper. Part of the data was collected in the frame of the Ph.D. work of C.C.

Conflicts of Interest

The authors declare no conflict of interest.


References

  1. Rea, M.S. IESNA Lighting Handbook: Reference and Application, 9th ed.; Illuminating Engineering Society of North America: New York, NY, USA, 2000. [Google Scholar]
  2. Veitch, J.A.; Newsham, G.R. Determinants of Lighting Quality II: Research and Recommendations. In Proceedings of the 104th Annual Convention of the American Psychological Association, Toronto, ON, Canada, 9–12 August 1996. [Google Scholar]
  3. Reinhart, C.F.; Mardaljevic, J.; Rogers, Z. Dynamic daylight performance metrics for sustainable building design. Leukos 2006, 3, 7–31. [Google Scholar]
  4. Nabil, A.; Mardaljevic, J. Useful daylight illuminances: A replacement for daylight factors. Energy Build. 2006, 38, 905–913. [Google Scholar] [CrossRef]
  5. Cai, H.; Chung, T. Improving the quality of high dynamic range images. Light. Res. Technol. 2011, 43, 87–102. [Google Scholar] [CrossRef]
  6. Inanici, M. Evaluation of high dynamic range photography as a luminance data acquisition system. Light. Res. Technol. 2006, 38, 123–136. [Google Scholar] [CrossRef]
  7. Jakubiec, J.A.; Van Den Wymelenberg, K.; Inanici, M.; Mahic, A. Improving the accuracy of measurements in daylit interior scenes using high dynamic range photography. In Proceedings of the 32nd PLEA Conference, Los Angeles, CA, USA, 11–13 July 2016. [Google Scholar]
  8. Wienold, J.; Christoffersen, J. Evaluation methods and development of a new glare prediction model for daylight environments with the use of CCD cameras. Energy Build. 2006, 38, 743–757. [Google Scholar] [CrossRef]
  9. Pechacek, C.S.; Andersen, M.; Lockley, S.W. Preliminary method for prospective analysis of the circadian efficacy of (day) light with applications to healthcare architecture. Leukos 2008, 5, 1–26. [Google Scholar] [CrossRef]
  10. Webb, A.R. Considerations for lighting in the built environment: Non-visual effects of light. Energy Build. 2006, 38, 721–727. [Google Scholar] [CrossRef]
  11. Provencio, I.; Rodriguez, I.R.; Jiang, G.; Hayes, W.P.; Moreira, E.F.; Rollag, M.D. A novel human opsin in the inner retina. J. Neurosci. 2000, 20, 600–605. [Google Scholar] [PubMed]
  12. Berson, D.M.; Dunn, F.A.; Takao, M. Phototransduction by retinal ganglion cells that set the circadian clock. Science 2002, 295, 1070–1073. [Google Scholar] [CrossRef] [PubMed]
  13. Acosta, I.; Leslie, R.; Figueiro, M. Analysis of circadian stimulus allowed by daylighting in hospital rooms. Light. Res. Technol. 2017, 49, 49–61. [Google Scholar] [CrossRef]
  14. Konis, K. A novel circadian daylight metric for building design and evaluation. Build. Environ. 2017, 113, 22–38. [Google Scholar] [CrossRef]
  15. Amundadottir, M.L.; Rockcastle, S.; Sarey Khanie, M.; Andersen, M. A human-centric approach to assess daylight in buildings for non-visual health potential, visual interest and gaze behavior. Build. Environ. 2017, 113, 5–21. [Google Scholar] [CrossRef]
  16. Rockcastle, S.; Ámundadóttir, M.L.; Andersen, M. Contrast measures for predicting perceptual effects of daylight in architectural renderings. Light. Res. Technol. 2016. [Google Scholar] [CrossRef]
  17. Newsham, G.; Richardson, C.; Blanchet, C.; Veitch, J. Lighting quality research using rendered images of offices. Light. Res. Technol. 2005, 37, 93–112. [Google Scholar] [CrossRef]
  18. Cauwerts, C. Influence of Presentation Modes on Visual Perceptions of Daylit Spaces. Ph.D. Thesis, Université Catholique de Louvain, Louvain-la-Neuve, Belgium, 2013. [Google Scholar]
  19. Murdoch, M.J.; Stokkermans, M.G.; Lambooij, M. Towards perceptual accuracy in 3d visualizations of illuminated indoor environments. J. Solid State Light. 2015, 2, 12. [Google Scholar] [CrossRef]
  20. Debevec, P. Image-based lighting. IEEE Comput. Gr. Appl. 2002, 22, 26–34. [Google Scholar] [CrossRef]
  21. Inanici, M. Evaluation of high dynamic range image-based sky models in lighting simulation. Leukos 2010, 7, 69–84. [Google Scholar]
  22. Inanici, M. Applications of image based rendering in lighting simulation: Development and evaluation of image based sky models. In Proceedings of the International IBPSA Conference, Glasgow, UK, 27–30 July 2009. [Google Scholar]
  23. Inanici, M.; Hashemloo, A. An investigation of the daylighting simulation techniques and sky modeling practices for occupant centric evaluations. Build. Environ. 2017, 113, 220–231. [Google Scholar] [CrossRef]
  24. Ward, G.J. The radiance lighting simulation and rendering system. In Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, Orlando, FL, USA, 24–29 July 1994; ACM: New York, NY, USA, 1994; pp. 459–472. [Google Scholar]
  25. Szokolay, S.V. Solar Geometry; PLEA(Passive and Low Energy Architecture International in assoc. with Department of Architecture, University of Queensland): Brisbane, Australia, 1996. [Google Scholar]
  26. Reinhard, E.; Ward, G.; Pattanaik, S.; Debevec, P. High Dynamic Range Imaging: Acquisition, Display and Image-Based Lighting; Elsevier: San Francisco, CA, USA, 2006; Volume 4. [Google Scholar]
  27. Greg Ward Anyhere Software. Available online: (accessed on 20 December 2017).
  28. Mitsunaga, T.; Nayar, S.K. Radiometric self calibration. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Fort Collins, CO, USA, 23–25 June 1999; IEEE: Piscataway, NJ, USA, 1999; pp. 374–380. [Google Scholar]
  29. Radiance software. Available online: (accessed on 20 December 2017).
  30. Cauwerts, C.; Bodart, M.; Deneyer, A. Comparison of the vignetting effects of two identical fisheye lenses. Leukos 2012, 8, 181–203. [Google Scholar]
  31. Stumpfel, J.; Jones, A.; Wenger, A.; Tchou, C.; Hawkins, T.; Debevec, P. Direct HDR capture of the sun and sky. In Proceedings of the 3rd International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa, Stellenbosch, South Africa, 3–5 November 2004; pp. 145–149. [Google Scholar]
  32. Jacobs, A. High dynamic range imaging and its application in building research. Adv. Build. Energy Res. 2007, 1, 177–202. [Google Scholar] [CrossRef]
  33. Wienold, J. Evalglare 2.0—New features, faster and more robust HDR-image evaluation. In Proceedings of the 15th International Radiance Workshop, Padova, Italy, 29–31 August 2016. [Google Scholar]
  34. Mardaljevic, J. Daylight Simulation: Validation, Sky Models and Daylight Coefficients. Ph.D. Thesis, Loughborough University, Loughborough, UK, 2000. [Google Scholar]
  35. Karner, K.F.; Prantl, M. A Concept for Evaluating the Accuracy of Computer Generated Images. In Proceedings of the 12th Spring Conference on Computer Graphics (SCCG’96), Budmerice, Slovakia, 5–7 June 1996; Available online: (accessed on 29 November 2017). [Google Scholar]
  36. Chamilothori, K.; Wienold, J.; Andersen, M. Adequacy of immersive virtual reality for the perception of daylit spaces: Comparison of real and virtual environments. Leukos 2018, accepted. [Google Scholar]
  37. Villa, C.; Parent, E.; Labayrade, R. Calibrating a display device for subjective visual comfort tests: Selection of light simulation programs and post-production operations. In Proceedings of the International Commission on Illumination CIE 2010 Lighting Quality and Energy Efficiency, Vienna, Austria, 14–17 March 2010. [Google Scholar]
  38. Cauwerts, C.; Bodart, M.; Labayrade, R. Assessing perception of light level, glare and contrast in daylit rooms using pictures: Influence of tone-mapping parameters and lighting conditions in the visualization room. In Proceedings of the Cleantech for Smart Cities & Buildings: From Nano to Urban Scale (CISBAT), Lausanne, Switzerland, 4–6 September 2013. [Google Scholar]
Figure 1. The lighting quality definition in light of the Vitruvian triad.
Figure 2. The four studied rooms (Room#1, Room#2, Room#3, and Room#4).
Figure 3. Camera response curves for the two CANON 40D cameras used in the present study, as determined with hdrgen.
Figure 4. Creation of the light probe image. For overcast skies, only the f/4 aperture series is used.
Figure 5. The virtualization of a real scene with Radiance requires describing the geometry, the materials, and the light source.
Figure 6. Zones for the surface-to-surface comparison of Room#3.
Figure 7. Mean luminance by surface, in real (REAL) and virtual spaces (gensky_def, gensky_sky, gensky_br, gendaylit, IBL).
Figure 8. Relative mean bias error, by room. The reference is the real world.
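Figures 7 and 8 compare each sky model to the HDR photographs of the real rooms via per-surface mean luminances and a relative mean bias error. As a minimal sketch, assuming the common convention of normalizing the mean signed deviation by the mean of the reference values (the paper's exact definition may differ, and the luminance values below are hypothetical):

```python
def relative_mbe(simulated, reference):
    """Signed bias of `simulated` relative to `reference`, as a fraction.

    Uses the common convention: mean(simulated - reference) / mean(reference).
    """
    if len(simulated) != len(reference) or not reference:
        raise ValueError("inputs must be non-empty and the same length")
    bias = sum(s - r for s, r in zip(simulated, reference)) / len(reference)
    return bias / (sum(reference) / len(reference))

# Hypothetical per-surface mean luminances (cd/m^2) for one room:
real = [120.0, 95.0, 60.0, 340.0]   # from HDR photographs
ibl  = [110.0, 90.0, 66.0, 330.0]   # from an IBL simulation

print(f"rMBE = {relative_mbe(ibl, real):+.1%}")  # prints: rMBE = -3.1%
```

A negative rMBE indicates that, averaged over the compared surfaces, the simulation underestimates the measured luminances.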
Figure 9. Relative mean bias error, by room. The reference is IBL.
Table 1. Sky conditions during HDR photography of the indoor spaces.
Time | E_glob_horiz (lx) | E_dif_horiz (lx) | theta_sun (degrees) | E_dir_norm (lx) | Sky Type
Table 2. Sky luminance maps in false colors (Room#4).
Time (Sky) | REAL | Gensky_def | Gensky_sky | Gensky_br | Gendaylit | IBL
11:50 (i 1) | [sky luminance maps]
13:05 (i) | [sky luminance maps]
14:20 (o 2) | [sky luminance maps]
1 Intermediate sky, 2 overcast sky.
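The column names in Tables 2–6 correspond to different ways of defining the sky in a Radiance scene. As a hedged sketch of how such variants are typically generated (the date, time, and measured values below are illustrative placeholders, not the study's data, and the exact options used by the authors are not reproduced here):

```
# Sketch of Radiance sky definitions for the four simulated models.
# All numeric arguments to -B, -R, and -L are illustrative placeholders.

# gensky_def: CIE sky from date and time only, no outdoor measurements
!gensky 9 21 14:20 -c

# gensky_sky: zenith brightness derived from measured horizontal
# diffuse irradiance (W/m^2)
!gensky 9 21 14:20 -c -B 45

# gensky_br: additionally, solar radiance derived from measured
# horizontal direct irradiance (W/m^2)
!gensky 9 21 14:20 +i -B 45 -R 120

# gendaylit: Perez all-weather sky from measured direct normal and
# diffuse horizontal illuminances (lux)
!gendaylit 9 21 14:20 -L 25000 18000
```

In a complete scene description, the `skyfunc` produced by these commands is usually applied to a glow material on a source glowing hemisphere covering the sky dome; the IBL variant instead maps the HDR light probe image onto that dome.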
Table 3. Luminance maps in false colors, in the real space (Room#1) and by simulation.
Time (Sky) | REAL | Gensky_def | Gensky_sky | Gensky_br | Gendaylit | IBL
11:00 (i) | [luminance maps]
12:25 (i) | [luminance maps]
13:40 (o) | [luminance maps]
Table 4. Luminance maps in false colors, in the real space (Room#2) and by simulation.
Time (Sky) | REAL | Gensky_def | Gensky_sky | Gensky_br | Gendaylit | IBL
11:10 (i) | [luminance maps]
12:35 (i) | [luminance maps]
13:50 (o) | [luminance maps]
Table 5. Luminance maps in false colors, in the real space (Room#3) and by simulation.
Time (Sky) | REAL | Gensky_def | Gensky_sky | Gensky_br | Gendaylit | IBL
11:25 (i) | [luminance maps]
12:50 (i) | [luminance maps]
14:00 (o) | [luminance maps]
Table 6. Luminance maps in false colors, in the real space (Room#4) and by simulation.
Time (Sky) | REAL | Gensky_def | Gensky_sky | Gensky_br | Gendaylit | IBL
11:50 (i) | [luminance maps]
13:05 (i) | [luminance maps]
14:20 (o) | [luminance maps]

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
J. Imaging EISSN 2313-433X, published by MDPI AG, Basel, Switzerland.