
High-Dynamic-Range Spectral Imaging System for Omnidirectional Scene Capture

Department of Imaging Sciences, Graduate School of Engineering, Chiba University, Chiba 263-8522, Japan
* Author to whom correspondence should be addressed.
J. Imaging 2018, 4(4), 53; https://doi.org/10.3390/jimaging4040053
Submission received: 15 December 2017 / Revised: 20 February 2018 / Accepted: 21 March 2018 / Published: 23 March 2018
(This article belongs to the Special Issue Theory and Practice of High-Dynamic Range Imaging)

Abstract

Omnidirectional imaging technology has been widely used for scene archiving and has become crucial in many fields, including computer vision, image analysis, and virtual reality. The dynamic range of luminance values in a natural scene is quite large, and scenes containing various objects and light sources exhibit diverse spectral power distributions. This paper therefore proposes a system for acquiring high-dynamic-range (HDR) spectral images of omnidirectional scenes. The system is constructed from two programmable high-speed video cameras with specific lenses and a programmable rotating table. Two different types of color filters are mounted on the two color video cameras for six-band image acquisition. We present algorithms for HDR image synthesis, lens distortion correction, image registration, and omnidirectional image synthesis. Spectral power distributions of illuminants (color signals) are recovered from the captured six-band images based on the Wiener estimation algorithm. We present two types of applications based on our imaging system: time-lapse imaging and gigapixel imaging. The performance of the proposed system is discussed in detail in terms of the system configuration, acquisition time, artifacts, and spectral estimation accuracy. Experimental results in actual scenes demonstrate that the proposed system is feasible and powerful for acquiring HDR spectral scenes through time-lapse or gigapixel omnidirectional imaging. Finally, we apply the captured omnidirectional images to time-lapse spectral Computer Graphics (CG) renderings and spectral-based relighting of an indoor gigapixel image.


1. Introduction

Omnidirectional imaging is a useful technology that has been widely adopted in daily life. For instance, omnidirectional images are useful for landscape archiving applications such as Google Street View [1] and the “Aftermath of the 2011 Tohoku earthquake and tsunami” project by The University of Tokyo and Tohoku University [2]. Omnidirectional images are also currently applied to virtual reality (VR) technologies through head-mounted display systems [3,4,5]. With the popularization of these applications, high-quality omnidirectional imaging systems are required in terms of color information, dynamic range, and resolution.
The dynamic range of luminance values in natural scenes is quite large, and the scenes also include various spectral power distributions. Omnidirectional spectral imaging technology for such high-dynamic-range (HDR) natural scenes has become crucial in computer vision, computer graphics, and color imaging.
Until now, imaging systems have been developed separately for spectral imaging, omnidirectional imaging, and HDR imaging, with different applications. For instance, with regard to spectral imaging, various types of multiband camera systems have been proposed for digital archiving, medicine, and natural scene analysis [6,7,8,9,10,11,12]. Omnidirectional imaging systems have been proposed mainly for surveillance, intelligent transport systems, and scene archiving [13,14,15,16,17,18]. Recently, video streaming services such as YouTube have handled omnidirectional video streams, since consumer-grade omnidirectional cameras have been released [19,20,21]. The basic technology of HDR imaging has already been developed, and such systems are widely used in current digital cameras and computer graphics rendering [22,23].
Hybrid systems with two of these functions have also been proposed for specific purposes. The LadyBug camera system is available for capturing red-green-blue (RGB) images of omnidirectional HDR scenes [16]. Hirai et al. proposed an imaging method to recover spectral information from HDR scenes [24]. Martínez-Domingo et al. recently proposed an HDR spectral imaging system for material classification [25]. However, only one system has been based on the integration of spectral imaging, omnidirectional imaging, and HDR imaging technologies [26].
An integrated system for spectral, omnidirectional, and HDR imaging is useful not only for natural scene archiving and analysis [27], but also for multispectral rendering using environment mapping [28]. Tominaga et al. developed an HDR spectral imaging system for capturing an omnidirectional natural scene [26]. That system was constructed with an RGB digital camera, a fisheye lens, two color filters, and a rotating table. Given that the system used a three-band RGB camera and two additional color filters attached to the camera, it realized a multiband imaging system with six spectral bands in the visible wavelength range. The system works well for static scenes where objects and light sources are unchanged over time. However, its essential drawback is that it cannot handle time-lapse scenes where objects and light sources move with time. The conventional system of one color camera and two color filters is based on manual operation: operators have to exchange the color filters and rotate the camera table manually, which renders the image acquisition time-consuming. Therefore, if objects move during image acquisition, misregistration occurs between the spectral channels. As a result, the combined multiband image includes ghosting and color artifacts. Figure 1 demonstrates a color image combined from multiband images with misregistration, captured by that system when imaging a cloudy sky. The color artifacts and ghosts are caused by the random movement of clouds.
It is difficult to align or compensate the spectral images of randomly moving objects. Reliable image data of dynamic scenes are valuable for many fields, including natural scene analysis [27,29], image rendering [28], and time-lapse image editing [30]. One solution for reliably capturing dynamic natural scenes is to construct a one-shot multi-band imaging system. In practice, such a system can be constructed using a stereo system [31,32] or a multispectral filter array [33,34]. However, these multiband imaging systems are neither omnidirectional nor HDR.
In this paper, we propose a new system for acquiring HDR spectral images of omnidirectional scenes. To enable quick image acquisition without the color artifacts caused by filter exchanges, the system is constructed from two programmable high-speed video cameras with specific lenses and a programmable rotating table. Two different types of color filters are mounted on the two color video cameras for six-band image acquisition. We present algorithms for HDR image synthesis, lens distortion correction, image registration, and omnidirectional image synthesis. We reconstruct spectral power distributions (color signals) from the six-band images based on the Wiener estimation algorithm. Based on this imaging system, we present two types of applications: time-lapse imaging and gigapixel imaging. These imaging techniques can be realized by replacing the mounted lenses; in other words, our imaging system extends either temporal resolution or spatial resolution. The performance of the proposed system is evaluated in detail in terms of the system configuration, acquisition time, color artifacts, and spectral estimation accuracy. Finally, we present experimental results for acquiring natural scenes based on the time-lapse and gigapixel imaging approaches. Additionally, as applications of the proposed imaging system, we implement time-lapse spectral computer graphics renderings and spectral-based gigapixel image relighting of an indoor archived scene.
Compared with the previous manual system [26] for acquiring HDR omnidirectional spectral images, time-lapse imaging with the proposed system realizes short-time image acquisition without any color artifacts, thanks to the automatically controlled stereo-based omnidirectional imaging system. In addition, we apply the proposed system to gigapixel imaging. The conventional system was developed for scene color analysis and image-based lighting, and its resolution is too low for high-quality VR reproduction. In contrast, gigapixel imaging with the proposed system provides high-resolution 360° VR images with accurate color reproduction. In Section 5, we also compare the performance of the proposed and conventional systems.

2. Related Studies

Multi-band imaging is a useful technology that is now widespread in all fields related to visual information. Research covers a broad range of areas, including multispectral image acquisition, spectral reflectance estimation, illuminant estimation, accurate color reproduction, and computer graphics rendering. The application fields of multi-band imaging include medicine, digital archiving, wide-gamut technology, and natural scene analysis. Thus far, a variety of multi-band imaging systems and methods have been proposed for acquiring spectral information from a scene. A well-known technique for realizing multi-band imaging is to combine a monochrome camera and color filters with different spectral bands [7,8]. For acquiring accurate colors of art paintings, Haneishi et al. proposed an algorithm to select optimal color filters in a multi-band system that consisted of a monochrome camera and color filters [35]. Tominaga et al. reproduced accurate colors of art paintings in computer graphics by using such a six-band camera system [36]. These systems usually require a time-consuming process for manually changing the color filters. Another well-known technique is to employ a camera system with six or more bands by combining one or two RGB digital cameras with additional color filters. The conventional HDR omnidirectional multi-band imaging system used a three-band RGB camera and two additional color filters to realize six-band imaging [26]. Shrestha et al. introduced a stereo-based imaging system that consisted of two RGB cameras with two different color filters [31]. Tsuchida et al. developed a stereo-based eleven-band imaging system with one RGB camera and eight monochromatic cameras with eight different color filters [32]. Given that these multi-band imaging systems are typically based on commercial digital cameras and color filters, the equipment costs can be reduced. Multi-band imaging techniques with unique optical systems have also been proposed. Ohsawa et al. presented a six-band imaging system using two three-band image sensors and a beam splitter [37]. Yata et al. proposed twelve-band imaging by mounting four color filters on the optical axis between a lens and an RGB sensor [33]. Similarly, Manakov et al. developed a 27-band imaging technique using such an optical system and nine color filters [34]. In addition, several studies have modulated the color filter arrays on image sensors to develop multi-band imaging systems. Tanida et al. mounted nine color filters on an image sensor with a microlens array [38]. Monno et al. used an image sensor with a five-color filter instead of a conventional RGB Bayer filter [39]. Sajadi et al. proposed an RGB-CMY imaging technique using two switchable CMY filters [40]. Shrestha et al. simulated six-band imaging by placing chromagenic filters on top of conventional RGB filters [41]. Multi-band imaging has also often been realized using multi-color illumination. Park et al. proposed a multi-band imaging system based on an RGB camera and five different color LEDs [42]. Hirai et al. developed a system using a monochromatic camera and six types of color LEDs for archiving art paintings [43]. Nakahata et al. used a spectral light source for capturing multi-band images [44]. In general, spectral estimation methods can be applied to recover the spectra from multi-band images [24,35,45].
A variety of omnidirectional imaging systems have also been developed. As a simple approach, fisheye lenses have been used for taking 180° images. Recently, omnidirectional stereo imaging systems based on fisheye (or wide-angle) lenses have also been developed for scene depth analysis and stereo VR technologies [15,46,47]. In addition, camera systems with mirrors have been proposed. Nayar et al. presented a catadioptric omnidirectional camera using hemisphere mirrors [13]. Omnidirectional stereo imaging using multiple mirrors has also been proposed [48,49,50]. In the research field of computer graphics, a spherical mirror ball is often used for environment mapping [18]. Since only one image is sufficient for recovering a hemisphere image, these mirror-based imaging techniques provide short image acquisition times. However, the recovered omnidirectional image generally has low resolution with some mirror distortions. In addition, the scene region occluded by the camera is not recovered. As another approach to omnidirectional imaging, image stitching algorithms are well known [51]. The stitching technique is often applied to gigapixel panorama imaging. For example, an omnidirectional gigapixel imaging system has been developed based on image stitching techniques [17]. This gigapixel imaging system takes a huge number of images using a telescopic camera and an automatically controlled rotating table. As shown in such examples, stitching-based omnidirectional imaging provides a high-resolution omnidirectional image with little distortion. However, the image acquisition time of stitching-based imaging is longer than that of mirror-based imaging due to the number of captured images. To reduce the acquisition time of stitching-based techniques, imaging systems with multi-directional cameras have been developed. Akin et al. proposed a hemispherical multiple-camera system for high-resolution omnidirectional imaging [52]. Perazzi et al. presented a panorama imaging system using an unstructured camera array [53]. However, these imaging systems do not cover a 360° spherical field of view. For synthesizing a 360° spherical image, commercial-grade omnidirectional cameras with two or more image sensors have recently been released [19,20]. Owing to the popularization of these affordable commercial cameras, users can easily upload omnidirectional videos to video streaming services and develop VR scenes [4,21].
As described above, various multi-band and omnidirectional imaging systems have been proposed. However, there are few systems for capturing both multi-band and omnidirectional images simultaneously. Tominaga et al. developed a multi-band omnidirectional imaging system for archiving and analyzing natural scenes [26]. The main contribution of that system was to realize a simple and affordable system for 360° omnidirectional images with spectral information. In addition, the spatial resolution of the captured images is much higher than that of mirror-based systems, thanks to the use of multiple images. However, as described in the Introduction, the system had a problem with the time required to capture a scene. It also suffered from a loss and non-uniformity of spatial resolution because of the use of a fisheye lens.
Until now, various imaging systems have been developed for capturing omnidirectional, HDR, or spectral images. Some conventional systems satisfy two of the three functions. The imaging system in Ref. [26] is the only conventional system developed for capturing HDR omnidirectional spectral images. However, that system requires a long acquisition time due to manual control. There are no practical imaging systems that concurrently satisfy all three functions. In addition, although time-lapse and gigapixel imaging technologies will be required for next-generation omnidirectional scene archiving, these issues have not yet been addressed.

3. Proposed System for HDR Omnidirectional Spectral Imaging

Figure 2 shows the proposed imaging system for acquiring HDR spectral images of time-varying omnidirectional scenes. The system consists of two programmable high-speed RGB video cameras with two wide-angle (or telephoto) lenses, and a programmable rotating table. Two different types of color filters are mounted on the two video cameras, one on each camera.
The video camera is a Baumer HXG20 (Baumer Electric AG: Frauenfeld, Switzerland) with an image size of 2048 × 1088 pixels and a bit depth of 12 bits, which can capture images at over 100 frames per second (fps). The cameras also have linear response characteristics. These camera specifications are sufficient for reducing the acquisition time. The lens for time-lapse imaging is a Spacecom wide-angle lens, which covers a 95° horizontal angle and a 61° vertical angle. In addition, we employ a Kowa telephoto lens for gigapixel imaging, which covers a horizontal angle of 14.5° and a vertical angle of 10.8°. The rotating table is a CLAUSS RODEON VR station HD (Dr. Clauss: Zwönitz, Germany), which can be automatically controlled by computer. The table allows 360° movement in the horizontal plane and 180° movement in the vertical plane. The blue broken lines shown in Figure 2 indicate the rotational axes of the horizontal and vertical rotations. The two cameras are mounted on the table rotated by 90 degrees. The optical axis of the proposed imaging system is set to the optical axis of camera (A).
Optimal filter selection is important for recovering accurate spectra. The proposed six-band imaging system consists of two RGB cameras and two different color filters. A practical and effective filter selection for six-band imaging based on the combination of RGB cameras and two color filters is as follows [6,26,35]: one filter shifts the RGB spectral sensitivities toward the short and long wavelengths, and the other shifts the sensitivities toward the middle wavelengths. Based on these observations, we collected 36 commercially available color filters and measured their spectral transmittance. Figure 3 shows the transmittance of the collected color filters. Then, through simulations and actual experiments of spectral recovery, the two additional color filters KODAK No. 34A (Kodak: Rochester, NY, USA) and FUJIFILM SP-18 (Fujifilm: Tokyo, Japan) were selected for our image acquisition. By combining these color filters with the camera sensitivities, we obtain six effective channel spectral sensitivity functions. Figure 4 shows the overall spectral sensitivity functions of the proposed imaging system. The KODAK No. 34A filter is effective for shifting the spectral sensitivities toward the short and long wavelengths in the visible range, as shown by the solid curves in Figure 4. The FUJIFILM SP-18 filter is effective for shifting the spectral sensitivities toward the middle wavelengths, as shown by the broken curves. To generate six-band images from the stereo cameras shown in Figure 2, we align the captured images based on a Phase-Only Correlation algorithm [54] (see also Section 4.3).
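As a concrete illustration, the six-band sensitivity matrix can be assembled by multiplying each camera's RGB sensitivities with the transmittance of its mounted filter, wavelength by wavelength. The following Python sketch uses placeholder arrays in place of the measured data; the 10-nm sampling and variable names are our assumptions, not part of the actual calibration pipeline.

```python
import numpy as np

# Spectral axis: 10-nm sampling over the visible range (an assumption;
# the paper does not state its sampling interval).
wavelengths = np.arange(400, 701, 10)   # 31 samples
n = wavelengths.size

# Placeholder arrays standing in for measured data (replace with real
# calibration measurements): rows are the R, G, B sensitivities.
camera_rgb = np.random.rand(3, n)       # camera spectral sensitivities
t_kodak_34a = np.random.rand(n)         # KODAK No. 34A transmittance
t_fuji_sp18 = np.random.rand(n)         # FUJIFILM SP-18 transmittance

# Each filter scales the RGB sensitivities wavelength by wavelength,
# giving two filtered triplets, i.e., six effective channels in total.
six_band = np.vstack([camera_rgb * t_kodak_34a,    # camera (A) channels
                      camera_rgb * t_fuji_sp18])   # camera (B) channels
# six_band is the 6 x n sensitivity matrix R used in Section 4.6.
```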

4. Synthesizing HDR Omnidirectional Spectral Images

Figure 5 shows the flow of all of the computational algorithms for estimating HDR omnidirectional spectral images. The details of the algorithms are described below.

4.1. HDR Image Synthesis

A natural scene has a wide luminance range, from dark shadowed areas to a highly bright sky. The HDR imaging technique by Debevec et al. [55] was adopted for capturing natural scenes. In our system, an HDR image is created by combining fifteen low-dynamic-range (LDR) images captured at different exposure times from 252 ms to 0.015 ms. An HDR image is synthesized by:
$$ I_{ij}^{\mathrm{HDR}} = \frac{\sum_{k=1}^{N} w(I_{ij}^{k}) \, I_{ij}^{k} / t_{k}}{\sum_{k=1}^{N} w(I_{ij}^{k})}, \tag{1} $$
$$ w(I) = \begin{cases} I & \text{if } I \le 127, \\ 255 - I & \text{if } I > 127, \end{cases} \tag{2} $$
where $I_{ij}^{\mathrm{HDR}}$ is the pixel value at coordinate (i, j) of the synthesized HDR image, $t_k$ is the k-th exposure time, $I_{ij}^{k}$ is the pixel value at (i, j) of the LDR image at the k-th exposure time, and $w(I)$ is a weighting function for the pixel values of the LDR images.
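A minimal Python sketch of Equations (1) and (2) is given below. It assumes the LDR frames have been rescaled to the 0–255 range implied by the weighting function, and the exposure list reproduces a factor-of-two bracket from 252 ms; the spacing is our reading of the stated endpoints rather than the authors' exact settings.

```python
import numpy as np

def synthesize_hdr(ldr_stack, exposure_times):
    """Weighted multi-exposure fusion following Equations (1) and (2).

    ldr_stack: (N, H, W) array of LDR frames scaled to the 0-255 range
    implied by the weighting function; exposure_times: N exposure times.
    """
    ldr = ldr_stack.astype(np.float64)
    # Hat-shaped weight: w(I) = I for I <= 127, 255 - I otherwise.
    w = np.where(ldr <= 127, ldr, 255.0 - ldr)
    t = np.asarray(exposure_times, dtype=np.float64)[:, None, None]
    eps = 1e-8  # guards pixels whose weights are zero in every frame
    return (w * ldr / t).sum(axis=0) / (w.sum(axis=0) + eps)

# Fifteen exposures halving from 252 ms down to ~0.015 ms; the halving is
# an assumption, consistent with the endpoints and the quoted ~503 ms
# total, since 252 ms * (2 - 2**-14) ~= 504 ms.
times = 0.252 / 2.0 ** np.arange(15)
```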
Note that no ghost removal technique is applied in this study [22,51,56]. It is known that the above multi-exposure synthesis technique often causes motion artifacts called “ghosts”. However, in our experimental cases, the synthesized HDR images include no noticeable ghost artifacts, thanks to the short image acquisition time, i.e., approximately 503 ms in total. If a synthesized HDR image includes ghost artifacts, a ghost removal technique will be required.

4.2. Correction of Lens Distortion

For creating accurate omnidirectional images, it is crucial to correct the lens distortion of the captured images. We measured the lens distortion characteristics in advance using Zhang’s method [57] and then corrected the captured images based on the measured characteristics.
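Zhang's method is available, for example, through OpenCV's calibration API. The sketch below estimates the intrinsics and distortion coefficients from checkerboard views and undistorts a capture; the checkerboard size and file names are hypothetical, and this is an illustration rather than the authors' exact procedure.

```python
import cv2
import numpy as np

# Zhang's method as implemented in OpenCV: estimate the intrinsic matrix
# and distortion coefficients from checkerboard views, then undistort.
pattern = (9, 6)  # inner-corner count of the checkerboard (hypothetical)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in ["calib_01.png", "calib_02.png", "calib_03.png"]:  # hypothetical
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

_, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts,
                                       gray.shape[::-1], None, None)
undistorted = cv2.undistort(cv2.imread("capture.png"), K, dist)
```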

4.3. Image Registration

Given that the original images are captured by the stereo camera system, there is a disparity (displacement) between the image captured with the KODAK No. 34A filter and the image captured with the FUJIFILM SP-18 filter. We therefore have to align the two images precisely so that the combined six-band image contains no registration error. We implemented the Phase-Only Correlation (POC) technique [54] for the image registration. POC calculates the displacement using the correlation between Fourier-transformed images:
$$ R(u, v) = \frac{F(u, v) \, \overline{G}(u, v)}{\left| F(u, v) \, \overline{G}(u, v) \right|}, \tag{3} $$
where $F(u, v)$ and $G(u, v)$ are the Fourier transforms of the two input images, and $\overline{G}(u, v)$ is the complex conjugate of $G(u, v)$. The displacement is obtained from the spatial coordinates of the maximum value of the inverse Fourier transform of $R(u, v)$. As described in the previous section, the optical axis of the proposed imaging system is standardized to camera (A) in Figure 2; therefore, the images captured by camera (A) are used as the reference images for registration. Figure 6 shows an example of the image registration results in creating a six-band image from the original two color images.
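A compact NumPy sketch of the POC computation in Equation (3) follows; it recovers the integer displacement from the correlation peak and omits the subpixel refinement of Takita et al. [54].

```python
import numpy as np

def poc_displacement(f_img, g_img):
    """Integer translation estimate via Phase-Only Correlation, Equation (3).

    Returns the (dy, dx) shift that aligns g_img to f_img; the subpixel
    refinement of Takita et al. [54] is omitted for brevity.
    """
    F = np.fft.fft2(f_img)
    G = np.fft.fft2(g_img)
    cross = F * np.conj(G)
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))  # normalized spectrum
    peak = np.unravel_index(np.argmax(np.abs(r)), r.shape)
    # Peaks past the midpoint correspond to negative (wrapped) shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, r.shape))
```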

4.4. Polar Coordinate Transformation

We use the polar coordinate system for representing omnidirectional images. All captured images are transformed into the polar coordinate system by the following equation [58].
$$ \theta = \tan^{-1}\!\left(\frac{x}{f}\right), \qquad \phi = \tan^{-1}\!\left(\frac{y}{\sqrt{x^{2} + f^{2}}}\right), \tag{4} $$
where f is the focal length, (x, y, f) denotes the 3D coordinates, and (θ, φ) are the 2D spherical coordinates. Figure 7 displays an example of a transformed polar coordinate image. In the present system for time-lapse imaging, the polar coordinate image in Figure 7 covers 61° horizontal and 95° vertical angles.
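Equation (4) amounts to two arctangents per pixel, as in the sketch below. In practice the warp is usually evaluated as an inverse mapping, iterating over the output (θ, φ) grid and sampling the source image; this forward-direction sketch omits that step.

```python
import numpy as np

def to_spherical(x, y, f):
    """Forward mapping of Equation (4): image-plane coordinates (x, y) at
    focal length f (all in pixels) to spherical angles (theta, phi)."""
    theta = np.arctan2(x, f)                       # horizontal angle
    phi = np.arctan2(y, np.sqrt(x ** 2 + f ** 2))  # vertical angle
    return theta, phi
```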

4.5. Omnidirectional Image Synthesis

For synthesizing an omnidirectional image, we use a total of 36 images (twelve in the horizontal plane and three in the vertical plane) for time-lapse imaging, and 1260 images (sixty in the horizontal plane and twenty-one in the vertical plane) for gigapixel imaging. The number of images was decided based on the lens specifications described in Section 3: the wide-angle lens covers 61° horizontal and 95° vertical angles, and the telephoto lens covers 14.5° horizontal and 10.8° vertical angles. To avoid significant vignetting effects and to retain sufficient overlap for image stitching while maintaining the synthesized image quality, we empirically determined the number of images (directions). The use of the programmable rotating table allows us to control and record the horizontal and vertical directional angles of each captured image. In other words, we are able to identify the relative polar coordinates among the polar coordinate images. In addition, to achieve accurate spherical projections and image stitching results, we leveled the imaging system on the ground using a spirit level. We can then synthesize an omnidirectional image from the 36 or 1260 polar coordinate images based on the recorded directional angles and an image stitching algorithm [59]. Since each polar coordinate image covers wide angles, the overlapping regions between neighboring images are blended linearly, as illustrated by the sketch below. Figure 8 shows an example of the six-band omnidirectional images synthesized from the 36 images.
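The linear blending of overlapping regions can be illustrated with a single feathered seam between two horizontally adjacent tiles; the full system blends many tiles in spherical coordinates, so this is only a minimal stand-in under assumed (H, W, C) array shapes.

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Linearly blend two horizontally adjacent (H, W, C) tiles over
    `overlap` columns: weights ramp from the left tile to the right one."""
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]
    seam = alpha * left[:, -overlap:] + (1.0 - alpha) * right[:, :overlap]
    return np.concatenate([left[:, :-overlap], seam,
                           right[:, overlap:]], axis=1)
```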

4.6. Spectral Estimation

The input spectral power distributions to the camera consist of two types of color signals: (1) direct illumination from the sky and (2) indirect illumination reflected from object surfaces in the scene. The camera outputs for these color signals are described as noisy observations:
$$ \rho_{i}(\mathbf{x}) = \int E(\lambda) R_{i}(\lambda) \, d\lambda + \sigma_{i}(\mathbf{x}) = s_{i}(\mathbf{x}) + \sigma_{i}(\mathbf{x}), \qquad i = 1, \ldots, 6, \tag{5} $$
where $E(\lambda)$ denotes the incident color signal with a continuous spectrum in the visible range [400, 700 nm], $R_{i}(\lambda)$ are the spectral sensitivity functions, $s_{i}$ are the signal components, and $\sigma_{i}$ are the noise components of the i-th sensor; $\mathbf{x}$ denotes the spatial coordinates on the image. We can rewrite Equation (5) in matrix form:
$$ \boldsymbol{\rho} = \mathbf{R} \mathbf{e} + \boldsymbol{\sigma} = \mathbf{s} + \boldsymbol{\sigma}, \tag{6} $$
where $\boldsymbol{\rho}$ is the six-dimensional sensor output, $\mathbf{e}$ denotes the n-dimensional vector representing the spectrum $E(\lambda)$ sampled at n points in the visible wavelength range, $\mathbf{R}$ is the 6 × n matrix whose rows are the sampled spectral sensitivity functions, $\boldsymbol{\sigma}$ is the six-dimensional noise vector, and $\mathbf{s}$ is the six-dimensional signal.
Statistical estimation theory is adopted for recovering spectra from multi-band images [60]. Our spectral estimation method is based on the Wiener estimator [24,35], which has often been utilized for recovering spectral information from noisy observations. Note that this estimator requires prior statistical knowledge, such as the covariance matrix of the spectral distributions and the covariance matrix of the observation noise. Here, we present the Wiener estimation and our parameter settings. When e and σ are statistically uncorrelated, the estimated signal ê is given by:
$$ \hat{\mathbf{e}} = \bar{\mathbf{e}} + \mathbf{W} (\boldsymbol{\rho} - \mathbf{R} \bar{\mathbf{e}}), \qquad \mathbf{W} = \mathbf{C}_{ss} \mathbf{R}^{t} \left( \mathbf{R} \mathbf{C}_{ss} \mathbf{R}^{t} + \sigma^{2} \mathbf{I} \right)^{-1}, \tag{7} $$
where $\bar{\mathbf{e}}$ is the average spectrum of the database, $\sigma^{2}$ is the noise variance, $\mathbf{I}$ is a 6 × 6 identity matrix, and $\mathbf{C}_{ss}$ is the covariance matrix of a spectral database, determined statistically as $\mathbf{C}_{ss} = E[(\mathbf{e} - \bar{\mathbf{e}})(\mathbf{e} - \bar{\mathbf{e}})^{t}]$. In this study, we determined the noise variance in advance by minimizing the estimation error between the estimates and the measured spectral power distributions of the Macbeth Color Checker under various light sources. To determine $\mathbf{C}_{ss}$, we used a spectral database generated by multiplying 1378 surface spectral reflectances of various natural and artificial objects by nine spectral power distributions of different light sources.
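Equation (7) translates directly into a few lines of linear algebra. The sketch below builds ē and C_ss from a spectral database matrix and applies the estimator to one six-band observation; the array shapes are our assumptions, consistent with the text.

```python
import numpy as np

def wiener_estimate(rho, R, spectra_db, noise_var):
    """Wiener spectral recovery following Equation (7).

    rho: (6,) sensor response; R: (6, n) sensitivity matrix;
    spectra_db: (m, n) spectral database used to build the priors;
    noise_var: scalar noise variance determined in advance.
    """
    e_bar = spectra_db.mean(axis=0)            # average spectrum e-bar
    d = spectra_db - e_bar
    Css = d.T @ d / len(spectra_db)            # signal covariance C_ss
    W = Css @ R.T @ np.linalg.inv(R @ Css @ R.T
                                  + noise_var * np.eye(R.shape[0]))
    return e_bar + W @ (rho - R @ e_bar)       # estimated spectrum e-hat
```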

5. Experimental Results

5.1. Performances of the Proposed Imaging System

Table 1 and Table 2 summarize the configuration and performance of the proposed system. Six-band images were acquired using two color video cameras with two different types of color filters. In our image acquisition, no filter exchange is required, because multiband imaging is based on a stereo one-shot technique. For time-lapse imaging, an omnidirectional image is synthesized from thirty-six images (twelve in the horizontal plane and three in the vertical plane), and the pixel size is 7260 × 3630. For gigapixel imaging, an omnidirectional image is synthesized from 1260 images (sixty in the horizontal plane and twenty-one in the vertical plane), and the pixel size is 60,000 × 32,000. The acquisition time for each directional image is approximately 5 s, which includes both the capturing and rotating processes. In total, it takes approximately 180 s for time-lapse imaging and approximately 2 h for gigapixel imaging to capture the multiband HDR images of an omnidirectional scene.
As a performance comparison, we used the conventional integrated system proposed in Ref. [26]. We note that the conventional system requires a color filter exchange and is controlled manually. Consequently, the conventional system requires at least 10 min to acquire all images, with a pixel size of 6000 × 3000. The proposed system, in contrast, needs no filter exchange and is fully automatic. Hence, image acquisition in time-lapse imaging becomes much shorter than with the conventional system, while also being slightly superior in spatial resolution.

5.2. Color and Spectral Accuracy

To validate the accuracy of the recovered spectral power distributions, we compared the estimated results with the ground truth. In this experiment, we estimated the spectra of the X-Rite ColorChecker under a D65 illuminant. The ground truth data were measured by a spectrophotometer. Figure 9 displays the estimation results of the spectral power distributions. The average normalized RMSE over the 24 colors of the ColorChecker is 0.0148, and the average color difference ΔEab is 1.49. Compared with conventional multi-band imaging systems and their spectral estimation results [26], these results demonstrate that the proposed system can estimate spectral power distributions accurately. We also estimated the spectral power distributions using only the RGB camera described in Section 3, without the additional filters. The average normalized RMSE and the average color difference ΔEab over the 24 colors are 0.0530 and 6.12, respectively. As these results show, the proposed six-band imaging system recovers more accurate spectra than an RGB imaging system.
Figure 10 shows the comparative results of evaluating color artifacts around moving clouds. In Figure 10a, the multi-band images of the conventional system were synthesized from images captured at different times, where the time interval between one capture and the next was approximately 6 min. In this scene, the clouds in the sky moved quickly. As a result, misregistration and color artifacts appear around the clouds, as shown in Figure 10a. Conversely, the image captured by the proposed system has no color artifacts around the moving clouds.
Figure 11 shows the estimated color signal ê (solid line) from our six-band imaging and the direct measurement e_m (broken line) for a sky area and a green tree in Figure 8b. The measurements were obtained using a spectro-radiometer. In these figures, the spectral power distributions are normalized such that ‖e_m‖ = ‖ê‖ = 1. The estimation accuracy of the color signals was examined at five points in the scene of Figure 8b. The average RMSE of the proposed system was 0.0149; this accuracy is within an acceptable range for spectral estimation. In addition, we compared the estimation results from the proposed six-band imaging with those from the same RGB camera alone. The average RMSE of RGB imaging was 0.0319. Even in an actual scene, the proposed six-band imaging is thus superior to RGB imaging in terms of spectral recovery.
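For reference, the normalized comparison can be computed as below, scaling both spectra to unit L2 norm before taking the RMSE; this is our reading of the normalization ‖e_m‖ = ‖ê‖ = 1, not the authors' published code.

```python
import numpy as np

def normalized_rmse(estimated, measured):
    """RMSE between spectra scaled to unit L2 norm, matching the
    normalization ||e_m|| = ||e-hat|| = 1 used in the evaluation above."""
    e = estimated / np.linalg.norm(estimated)
    m = measured / np.linalg.norm(measured)
    return float(np.sqrt(np.mean((e - m) ** 2)))
```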

5.3. Spectral Image Characterization of a Time-Lapse Scene

Figure 12 shows the HDR spectral image obtained for a time-lapse omnidirectional scene of our university campus. Note that the sun regions in Figure 12 are saturated, so that the colors of the sun are incorrectly reproduced. The image sequence was captured successively from 6:00 a.m. to 5:00 p.m. at 5-min intervals. The entire capturing procedure could be automatically carried out. Thus, the present system is effective in obtaining a time-lapse omnidirectional spectral image sequence.
Figure 13 shows the daytime variations of the spectral power distributions observed at a fixed sky region (light source) and at the building wall (reflecting surface), indicated by red squares in Figure 12. The spectral power distributions of the sky vary dramatically throughout the day; in particular, the morning sky produces a reddish color. In addition, the spectral power distributions of the white wall, as a reflecting object, are influenced by those of the sky.

5.4. Spatial Resolution Characterization of Gigapixel Imaging

As a resolution comparison, we compare the proposed system based on gigapixel imaging with the one based on time-lapse imaging. Figure 14 displays the comparative results. The resolution of gigapixel imaging is approximately 1.9 gigapixels, whereas the resolution of time-lapse imaging is approximately 26 megapixels, since the time-lapse mode is designed to acquire time-lapse spectral images of omnidirectional natural scenes. As shown in Figure 14, gigapixel imaging allows us to observe the details of distant objects. When immersive virtual experiences through head-mounted display systems are required, omnidirectional images with such gigapixel resolutions will be one of the solutions.

5.5. Trade-Off between the Number of Captured Images, Acquisition Time and Image Quality

The proposed system requires a large number of directional images for synthesizing an omnidirectional image. In particular, the proposed gigapixel imaging requires approximately 2 h and 17 min for the image capturing step. As a practical way to reduce the number of captured images and the acquisition time, we consider reducing the pixels of the overlapping regions used in the image synthesis (image stitching). Figure 15a shows the relationship between overlapping pixels and the relative acquisition time (or, equivalently, the relative number of captured images or capturing directions). The red circle in Figure 15a represents the current setting of the gigapixel imaging shown in Table 2. The blue circle in Figure 15a shows the capturing setting with no overlapping regions; in this case, image acquisition takes approximately 46 min, but the absence of overlapping regions causes significant failures in the image stitching. We empirically found that at least 12% overlapping pixels were required for achieving the image synthesis (the green circle in Figure 15a); in this case, image acquisition takes approximately 1 h and 19 min. However, the 12% overlap case still causes some misalignments in the image stitching. Figure 15b,c show the image stitching results under the red and green circle settings of Figure 15a, respectively. Compared with the current setting (the red circle in Figure 15a and the synthesized result in Figure 15b), the 12% overlap case (the green circle in Figure 15a and the result in Figure 15c) often degrades the image quality around the stitched regions. As these results show, the capturing setting must be determined by considering the trade-off between the number of images, the acquisition time, and the image quality.
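A rough back-of-envelope makes the trade-off concrete: given the capture counts in Table 2 and the telephoto field of view, the implied angular step and per-axis overlap follow directly. This simplified calculation ignores projection distortion near the poles and is offered only as an illustration.

```python
# Implied angular step and per-axis overlap for the gigapixel setting
# (60 x 21 directions, 14.5 x 10.8 deg telephoto field of view).
def implied_overlap(n_dirs, span_deg, fov_deg):
    step = span_deg / n_dirs            # angular step between directions
    return step, 1.0 - step / fov_deg   # fraction of the FOV overlapped

print(implied_overlap(60, 360.0, 14.5))  # horizontal: 6.0 deg, ~59% overlap
print(implied_overlap(21, 180.0, 10.8))  # vertical: ~8.6 deg, ~21% overlap
```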

6. Applications

6.1. Time-Lapse Spectral Computer Graphics Rendering

Environment mapping is an important rendering technique in computer graphics. For realistic image rendering, HDR omnidirectional images are used as environmental lighting [18,61]. Recently, omnidirectional multispectral images have also been applied for accurate color reproduction [27,62]. However, time-lapse omnidirectional spectral images have not yet been applied to spectral CG rendering. We therefore implemented spectral CG rendering using our time-lapse omnidirectional spectral images.
Figure 16 shows the spectral CG rendering results using the time-lapse image sequence in Figure 12. In this scene, a mirror ball on a gray plate is used as the computer graphics object. As these results show, our imaging system enables time-lapse spectral CG rendering with accurate color information.

6.2. Gigapixel Image Relighting of an Indoor Archived Scene

Gigapixel images have been applied to scene archiving. In particular, gigapixel imaging technologies are important for archiving the interiors of cultural heritage sites and museums. We therefore tested our system for indoor scene archiving. In addition, we relit an omnidirectional indoor scene, since spectral imaging is useful for illumination simulation [42,44,63,64].
Figure 17 shows the experimental results of indoor scene capture and gigapixel image relighting. In this experiment, we measured the illumination spectra using a white reference located in the scene. We then relit the image under an incandescent illumination (CIE standard illuminant A). We also show close-up images of an art painting located on the left side of the scene; the close-up images have a resolution of approximately 2400 × 1600 pixels. If our system is applied to omnidirectional indoor scene archiving, such as in museums, high-quality digital museums under various illumination conditions can be provided through VR technologies.
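Conceptually, the relighting divides the estimated color signals by the measured illumination spectrum to approximate reflectance and then multiplies by the target illuminant. The sketch below states that idea under assumed array shapes; the authors' actual pipeline may differ in detail.

```python
import numpy as np

def relight(color_signals, illum_measured, illum_target):
    """Spectral relighting sketch: divide the estimated color signals by
    the illumination measured from the white reference to approximate
    reflectance, then multiply by the target illuminant (e.g., CIE A).

    color_signals: (H, W, n) spectral image; illuminants: (n,) spectra.
    """
    eps = 1e-8  # avoids division by zero where the illuminant vanishes
    reflectance = color_signals / (illum_measured + eps)
    return reflectance * illum_target
```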

7. Conclusions and Discussion

This paper has proposed an HDR spectral imaging system for omnidirectional natural scenes. The system was constructed using two programmable high-speed video cameras with wide-angle (or telephoto) lenses and a programmable rotating table. Two different color filters were mounted on the two video cameras to acquire six-band images. We presented algorithms for HDR image synthesis, lens distortion correction, image registration, omnidirectional image synthesis, and spectral recovery. The proposed system extends either temporal resolution or spatial resolution, for time-lapse imaging or gigapixel imaging, respectively. The performance of the proposed system was discussed in terms of the system configuration, acquisition time, spatial resolution, artifacts, and spectral estimation accuracy. Experimental results in outdoor natural scenes showed that the proposed system is feasible and powerful for acquiring time-lapse HDR spectral images or high-resolution omnidirectional scenes. Finally, we applied the captured omnidirectional images to time-lapse spectral CG renderings and spectral-based gigapixel image relighting.
As shown in the experimental results, the proposed system provides accurate spectra and colors compared with the conventional system. However, the proposed HDR omnidirectional six-band imaging system still has two problems in spectral and color recovery. One is ghost artifacts. As described in Section 4.1, we do not implement a ghost removal technique. Therefore, if a target scene includes fast-moving objects (such as cars, bicycles, or running people), the proposed system will produce ghost artifacts in the synthesized image. The other is occlusion artifacts. Stereo-based omnidirectional imaging causes color artifacts due to scene occlusions by the stereo cameras. As shown at the bottom of Figure 8b, color artifacts are visible around a computer located within one meter of the cameras. These color artifacts are caused by occlusions around close-range objects and the resulting failures of image registration. This is the disadvantage of stereo-based six-band imaging compared with a single-camera system. Similarly, recent stereo-based VR technologies using two omnidirectional RGB cameras have carefully handled such occlusion problems [46,47]. The occlusion problem in omnidirectional six-band imaging remains a challenge for improving spectral and color reproduction.
In addition, the acquisition time of the proposed system for gigapixel imaging should be reduced. Similar to conventional gigapixel imaging systems [65,66], we must carefully address the considerable acquisition time, since it allows the spectral power distribution of sunlight to change during capture. In this paper, we discussed the relationship between the acquisition time, capturing directions, overlapping regions, and the synthesized image quality in Section 5.5. If further reduction of the acquisition time is required, it will be necessary to develop a novel hardware configuration and image capturing algorithm to realize short-time, high-quality gigapixel imaging.

Acknowledgments

We would like to acknowledge the JSPS Grant-in-Aid for Young Scientists (B) No. 25730104 for partially supporting this work. The authors would like to thank Shingo Dozaki for help in capturing and creating omnidirectional images.

Author Contributions

Keita Hirai and Naoto Osawa conceived and designed the imaging system and performed the experiments; Motoki Hori designed and performed the experiments; Keita Hirai wrote the paper; Takahiko Horiuchi and Shoji Tominaga advised the research and refined the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Anguelov, D.; Dulong, C.; Filip, D.; Frueh, C.; Lafon, S.; Lyon, R.; Ogale, A.; Vincent, L.; Weaver, J. Google Street View: Capturing the World at Street Level. Computer 2010, 43, 32–38.
2. Sakurada, K.; Okatani, T.; Deguchi, K. Detecting Changes in 3D Structure of a Scene from Multi-view Images Captured by a Vehicle-mounted Camera. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 137–144.
3. Hale, K.S.; Stanney, K.M. Handbook of Virtual Environments: Design, Implementation, and Applications, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2015; ISBN-10: 1466511842.
4. Linowes, J. Unity Virtual Reality Projects; Packt Publishing: Birmingham, UK, 2015; ISBN-10: 178398855X.
5. Matsuda, N.; Fix, A.; Lanman, D. Focal Surface Displays. ACM Trans. Graph. 2017, 36, 86.
6. Tominaga, S. CIC@20: Multispectral Imaging. In Proceedings of the IS&T Twentieth Color and Imaging Conference, Los Angeles, CA, USA, 12–16 November 2012; pp. 177–184.
7. Imai, F.H.; Berns, R.S. High-resolution Multi-Spectral Image Archives: A Hybrid Approach. In Proceedings of the IS&T/SID's Sixth Color Imaging Conference, Scottsdale, AZ, USA, 17–20 November 1998; pp. 224–227.
8. Tominaga, S. Multichannel Vision System for Estimating Surface and Illuminant Functions. J. Opt. Soc. Am. A 1996, 13, 2163–2173.
9. Maitre, H.; Schmitt, F.; Crettez, J.P.; Wu, Y.; Hardeberg, J.Y. Spectrophotometric Image Analysis of Fine Art Paintings. In Proceedings of the IS&T/SID's Fourth Color Imaging Conference, Scottsdale, AZ, USA, 19–22 November 1996; pp. 50–54.
10. Miyake, Y.; Kouzu, T.; Takeuchi, S.; Yamataka, S.; Nakaguchi, T.; Tsumura, N. Development of New Electronic Endoscopes Using the Spectral Images of an Internal Organ. In Proceedings of the IS&T/SID's Thirteenth Color Imaging Conference, Scottsdale, AZ, USA, 7–11 November 2005; pp. 261–263.
11. Tominaga, S.; Kato, K.; Hirai, K.; Horiuchi, T. Spectral Image Analysis of Mutual Illumination between Fluorescent Objects. J. Opt. Soc. Am. A 2016, 33, 1476–1487.
12. Ozawa, K.; Yamaguchi, M.; Sato, I. Hyperspectral Photometric Stereo for Single Capture. J. Opt. Soc. Am. A 2017, 34, 384–394.
13. Nayar, S. Catadioptric Omnidirectional Camera. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, 17–19 June 1997; pp. 482–488.
14. Yagi, Y. Omnidirectional Sensing and Its Applications. IEICE Trans. Inf. Syst. 1999, 82, 568–579.
15. Couture, V.; Roy, S. The Omnipolar Camera: A New Approach to Stereo Immersive Capture. In Proceedings of the IEEE International Conference on Computational Photography, Cambridge, MA, USA, 19–21 April 2013; pp. 1–9.
16. The Ladybug5 Spherical Imaging System. Available online: https://www.ptgrey.com/ (accessed on 30 November 2017).
17. Gigapixel Panorama Photo. Available online: http://360gigapixels.com/ (accessed on 30 November 2017).
18. Debevec, P. Rendering Synthetic Objects into Real Scenes. In Proceedings of the ACM SIGGRAPH, Orlando, FL, USA, 19–24 July 1998; pp. 189–198.
19. 3D Cameras and Virtual Reality. Available online: http://www.giganti.co/3D-VR-Cameras (accessed on 20 January 2018).
20. Ultimate 360 Camera Comparison Tool by 360 Rumors. Available online: http://360rumors.com/360-camera-comparison-tool (accessed on 20 January 2018).
21. Gaddam, V.R.; Riegler, M.; Eg, R.; Griwodz, C.; Halvorsen, P. Tiling in Interactive Panoramic Video: Approaches and Evaluation. IEEE Trans. Multimed. 2016, 18, 1819–1831.
22. Ward, G.; Reinhard, E.; Pattanaik, S.; Debevec, P. High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting, 2nd ed.; Morgan Kaufmann: Burlington, MA, USA, 2010; ISBN-10: 012374914X.
23. Bandoh, Y.; Guoping, Q.; Okuda, M.; Daly, S.; Aach, T.; Au, O.C. Recent Advances in High Dynamic Range Imaging Technology. In Proceedings of the IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 3125–3128.
24. Hirai, K.; Tominaga, S. A LUT-based Method for Recovering Color Signals from High Dynamic Range Images. In Proceedings of the IS&T Twentieth Color and Imaging Conference, Los Angeles, CA, USA, 12–16 November 2012; pp. 88–93.
25. Martínez-Domingo, M.A.; Valero, E.M.; Hernández-Andrés, J.; Tominaga, S.; Horiuchi, T.; Hirai, K. Image Processing Pipeline for Segmentation and Material Classification based on Multispectral High Dynamic Range Polarimetric Images. Opt. Express 2017, 25, 30073–30090.
26. Tominaga, S.; Fukuda, T.; Kimachi, A. A High-resolution Imaging System for Omnidirectional Illuminant Estimation. J. Imaging Sci. Technol. 2008, 52, 040907-1–040907-9.
27. Tominaga, S.; Matsuura, A.; Horiuchi, T. Spectral Analysis of Omnidirectional Illumination in a Natural Scene. J. Imaging Sci. Technol. 2010, 54, 040502-1–040502-9.
28. Darling, B.A.; Ferwerda, J.A.; Berns, R.S.; Chen, T. Real-Time Multispectral Rendering with Complex Illumination. In Proceedings of the IS&T/SID's Nineteenth Color and Imaging Conference, San Jose, CA, USA, 7–11 November 2011; pp. 345–351.
29. Kawakami, R.; Zhao, H.; Tan, R.T.; Ikeuchi, K. Camera Spectral Sensitivity and White Balance Estimation from Sky Images. Int. J. Comput. Vis. 2013, 105, 187–204.
30. Shih, Y.; Paris, S.; Durand, F.; Freeman, W.T. Data-driven Hallucination for Different Times of Day from a Single Outdoor Photo. ACM Trans. Graph. 2013, 32, 200.
31. Shrestha, R.; Mansouri, A.; Hardeberg, J.Y. Multispectral Imaging using a Stereo Camera: Concept, Design and Assessment. EURASIP J. Adv. Signal Process. 2011, 2011, 57.
32. Tsuchida, M.; Kashino, K.; Yamato, J. An Eleven-band Stereoscopic Camera System for Accurate Color and Spectral Reproduction. In Proceedings of the IS&T Twenty First Color and Imaging Conference, Albuquerque, NM, USA, 4–8 November 2013; pp. 14–19.
33. Yata, N.; Miwa, R.; Manabe, Y. An Estimation Method of Spectral Reflectance from a Multi-band Image using Genetic Programming. In Proceedings of the 12th International AIC Color Conference, Newcastle, UK, 8–12 July 2013; pp. 1773–1776.
34. Manakov, A.; Restrepo, J.F.; Klehm, O.; Hegedus, R.; Eisemann, E.; Seidel, H.P.; Ihrke, I. A Reconfigurable Camera Add-On for High Dynamic Range, Multispectral, Polarization, and Light-Field Imaging. ACM Trans. Graph. 2013, 32, 47.
35. Haneishi, H.; Hasegawa, T.; Hosoi, A.; Yokoyama, Y.; Tsumura, N.; Miyake, Y. System Design for Accurately Estimating the Spectral Reflectance of Art Paintings. Appl. Opt. 2000, 39, 6621–6632.
36. Tominaga, S.; Tanaka, N. Spectral Image Acquisition, Analysis, and Rendering for Art Paintings. J. Electron. Imaging 2008, 17, 043022.
37. Ohsawa, K.; Ajito, T.; Fukuda, H.; Komiya, Y.; Haneishi, H.; Yamaguchi, M.; Ohyama, N. Six-band HDTV Camera System for Spectrum-based Color Reproduction. J. Imaging Sci. Technol. 2004, 48, 85–92.
38. Tanida, J.; Shogenji, R.; Kitamura, Y.; Yamada, K.; Miyamoto, M.; Miyatake, S. Color Imaging with an Integrated Compound Imaging System. Opt. Express 2003, 11, 2109–2117.
39. Monno, Y.; Kitao, T.; Tanaka, M.; Okutomi, M. A Practical One-Shot Multispectral Imaging System Using a Single Image Sensor. IEEE Trans. Image Process. 2015, 24, 3048–3059.
40. Sajadi, B.; Majumder, A.; Hiwada, K.; Maki, A.; Raskar, R. Switchable Primaries Using Shiftable Layers of Color Filter Arrays. ACM Trans. Graph. 2011, 30, 65.
41. Shrestha, R.; Hardeberg, J.Y. CFA based Simultaneous Multispectral Imaging and Illuminant Estimation. Lect. Notes Comput. Sci. 2013, 7786, 158–170.
42. Park, J.I.; Lee, M.H.; Grossberg, M.D.; Nayar, S.K. Multispectral Imaging using Multiplexed Illumination. In Proceedings of the IEEE Eleventh International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8.
43. Hirai, K.; Tanimoto, T.; Yamamoto, K.; Horiuchi, T.; Tominaga, S. An LED-Based Spectral Imaging System for Surface Reflectance and Normal Estimation. In Proceedings of the 9th International Conference on Signal-Image Technology & Internet-Based Systems, Kyoto, Japan, 2–5 December 2013; pp. 441–447.
44. Nakahata, R.; Hirai, K.; Horiuchi, T.; Tominaga, S. Development of a Dynamic Relighting System for Moving Planar Objects with Unknown Reflectance. Lect. Notes Comput. Sci. 2015, 9016, 81–90.
45. Murakami, Y.; Yamaguchi, M.; Ohyama, N. Piecewise Wiener Estimation for Reconstruction of Spectral Reflectance Image by Multipoint Spectral Measurement. Appl. Opt. 2009, 48, 2188–2202.
46. Anderson, R.; Gallup, D.; Barron, J.T.; Kontkanen, J.; Snavely, N.; Esteban, C.H.; Agarwal, S.; Seitz, S.M. Jump: Virtual Reality Video. ACM Trans. Graph. 2016, 35, 198.
47. Matzen, K.; Cohen, M.; Evans, B.; Kopf, J.; Szeliski, R. Low-Cost 360 Stereo Photography and Video Capture. ACM Trans. Graph. 2017, 36, 148.
48. Yi, S.; Ahuja, N. An Omnidirectional Stereo Vision System Using a Single Camera. In Proceedings of the 18th International Conference on Pattern Recognition, Hong Kong, China, 20–24 August 2006.
49. Taguchi, Y.; Agrawal, A.; Veeraraghavan, A.; Ramalingam, S.; Raskar, R. Axial-Cones: Modeling Spherical Catadioptric Cameras for Wide-angle Light Field Rendering. ACM Trans. Graph. 2010, 29, 172.
50. Li, W.; Li, Y.F. Single-camera Panoramic Stereo Imaging System with a Fisheye Lens and a Convex Mirror. Opt. Express 2011, 19, 5855–5867.
51. Szeliski, R. Computer Vision: Algorithms and Applications; Springer: Berlin/Heidelberg, Germany, 2010; ISBN-10: 1848829345.
52. Akin, A.; Cogal, O.; Seyid, K.; Afshari, H.; Schmid, A.; Leblebici, Y. Hemispherical Multiple Camera System for High Resolution Omni-Directional Light Field Imaging. IEEE J. Emerg. Sel. Top. Circuits Syst. 2013, 3, 137–144.
53. Perazzi, F.; Sorkine-Hornung, A.; Zimmer, H.; Kaufmann, P.; Wang, O.; Watson, S.; Gross, M. Panoramic Video from Unstructured Camera Arrays. Comput. Graph. Forum 2015, 34, 57–68.
54. Takita, K.; Aoki, T.; Sasaki, Y.; Higuchi, T.; Kobayashi, K. High-accuracy Subpixel Image Registration based on Phase-only Correlation. IEICE Trans. Fundam. 2003, 86, 1925–1934.
55. Debevec, P.E.; Malik, J. Recovering High Dynamic Range Radiance Maps from Photographs. In Proceedings of the ACM SIGGRAPH, Los Angeles, CA, USA, 3–8 August 1997; pp. 369–378.
56. Srikantha, A.; Sidibé, D. Ghost Detection and Removal for High Dynamic Range Images: Recent Advances. Signal Process. Image Commun. 2012, 27, 650–662.
57. Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
58. Szeliski, R.; Shum, H.Y. Creating Full View Panoramic Image Mosaics and Environment Maps. In Proceedings of the ACM SIGGRAPH, Los Angeles, CA, USA, 3–8 August 1997; pp. 251–258.
59. Brown, M.; Lowe, D. Recognizing Panoramas. In Proceedings of the IEEE Ninth International Conference on Computer Vision, Nice, France, 13–16 October 2003; pp. 1218–1225.
60. Kay, S. Linear Bayesian Estimators. In Fundamentals of Statistical Signal Processing: Estimation Theory; Prentice-Hall: Upper Saddle River, NJ, USA, 1993; ISBN-10: 0133457117.
61. Blinn, J.F.; Newell, M.E. Texture and Reflection in Computer Generated Images. Commun. ACM 1976, 19, 542–547.
62. LeGendre, C.; Yu, X.; Liu, D.; Busch, J.; Jones, A.; Pattanaik, S.; Debevec, P. Practical Multispectral Lighting Reproduction. ACM Trans. Graph. 2016, 35, 32.
63. Fu, Y.; Lam, A.; Sato, I.; Okabe, T.; Sato, Y. Separating Reflective and Fluorescent Components using High Frequency Illumination in the Spectral Domain. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 965–978.
64. Tominaga, S.; Hirai, K.; Horiuchi, T. Estimation of Fluorescent Donaldson Matrices using a Spectral Imaging System. Opt. Express 2018, 26, 2132–2148.
65. Kopf, J.; Uyttendaele, M.; Deussen, O.; Cohen, M.F. Capturing and Viewing Gigapixel Images. ACM Trans. Graph. 2007, 26, 93.
66. Brady, D.J.; Gehm, M.E.; Stack, R.A.; Marks, D.L.; Kittle, D.S.; Golish, D.R.; Vera, E.M.; Feller, S.D. Multiscale Gigapixel Photography. Nature 2012, 486, 386–389.
Figure 1. Color artifacts in the captured multispectral image of moving clouds.
Figure 2. Proposed HDR omnidirectional spectral imaging system.
Figure 3. Spectral transmittance of the collected color filters.
Figure 4. Spectral sensitivity functions of the proposed six-band imaging system.
Figure 5. Flowchart for estimating HDR omnidirectional spectral images.
Figure 6. Image registration results for a natural scene, where the right image represents an aligned six-band image. (a) Before image registration; (b) After image registration.
Figure 7. Example images of the polar coordinate transformation. (a) Before transformation; (b) After transformation.
Figure 8. Omnidirectional image generated from thirty-six directional images. (a) Thirty-six polar coordinate images; (b) Synthesized omnidirectional image.
Figure 9. Estimation results of the spectral power distributions of six selected colors. (a) ColorChecker No. 13: blue; (b) ColorChecker No. 14: green; (c) ColorChecker No. 15: red; (d) ColorChecker No. 16: yellow; (e) ColorChecker No. 17: magenta; (f) ColorChecker No. 19: white.
Figure 10. Comparison of color images with moving clouds produced by the two systems. (a) Conventional system; (b) Proposed system.
Figure 11. Spectral estimation results of an actual scene. (a) Sky; (b) Green tree.
Figure 12. Time-lapse omnidirectional image sequence.
Figure 13. Time-lapse spectral power distributions at the regions marked in Figure 12.
Figure 14. Resolution comparison between the two imaging techniques. (a) Time-lapse imaging (7260 × 3630 pixels); (b) Gigapixel imaging (60,000 × 32,000 pixels); (c) Close-up of the red square in (a); (d) Close-up of the red square in (b).
Figure 15. Trade-off between acquisition time, overlapping pixels, and image quality. (a) Relationship between acquisition time and overlapping pixels; (b) Synthesized image under the red circle setting of (a); (c) Synthesized image under the green circle setting of (a).
Figure 16. Time-lapse spectral computer graphics rendering using the time-lapse omnidirectional images shown in Figure 12. (a) 06:30 a.m.; (b) 10:00 a.m.; (c) 01:30 p.m.; (d) 04:30 p.m.
Figure 17. Experimental result of indoor scene capture and gigapixel image relighting using illuminant A (incandescent illumination). (a) Captured gigapixel indoor scene; (b) Close-up of an art painting in (a); (c) Relit gigapixel image of (a); (d) Close-up of an art painting in (c).
Table 1. Basic Hardware Configuration.

Camera: Two high-speed programmable video cameras
Color Filter: Two types
Lens: Wide-angle lens for time-lapse imaging; telephoto lens for gigapixel imaging
Rotating Table: Programmable

Table 2. Performance of the Proposed Imaging System.

Number of Channels: Six bands (both modes)
Filter Exchange: Not required (both modes)
Multi-Band Technique: Stereo one-shot (both modes)
Capturing Directions: Time-lapse: 36 (12 horizontal × 3 vertical); Gigapixel: 1260 (60 horizontal × 21 vertical)
Pixel Size: Time-lapse: 7260 × 3630; Gigapixel: approx. 1.9 gigapixels (60,000 × 32,000)
Total Acquisition Time: Time-lapse: approx. 180 s; Gigapixel: approx. 2 h 17 min
Acquisition Time per Directional Image: Approx. 5 s (including both capturing and rotating processes; both modes)
