Article

Near-Eye Holographic 3D Display and Advanced Amplitude-Modulating Encoding Scheme for Extended Reality

1 Hyper-Reality Metaverse Research Laboratory, Electronics and Telecommunications Research Institute, Daejeon 34129, Republic of Korea
2 Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
3 School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore 639798, Singapore
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(6), 3730; https://doi.org/10.3390/app13063730
Submission received: 14 February 2023 / Revised: 13 March 2023 / Accepted: 14 March 2023 / Published: 15 March 2023
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Electronic holographic displays can reconstruct the optical wavefront of object light, exhibiting the most realistic three-dimensional (3D) images, in contrast to conventional stereoscopic displays. In this paper, we propose a novel near-eye holographic 3D display (NEHD) applicable to AR/MR/holographic devices and experimentally demonstrate the proposed module’s performance with a 360° full-view holographic 3D movie at 30 fps. To realize high-quality reconstructed holographic 3D (H3D) images, we also propose an advanced amplitude-modulating (AM) encoding scheme suited to the proposed amplitude-modulating NEHD. We experimentally verify that the new hologram-encoding approach improves the image quality of H3D reconstructions through quantitative statistical analyses, using evaluation methods for H3D images suggested in this paper. Two holograms of the same 3D scene, computed from different viewing directions, are displayed on the proposed NEHD prototype, one for each eye of the observer. The presented techniques enable the observer to experience depth cues, a realistic accommodation effect, and high-quality H3D movies at each eye.

1. Introduction

Electronic holographic display (EHD) [1,2,3] is a light interference-based 3D technique that can provide glasses-free 3D scenes because it reconstructs the natural wavefront of light emitted from objects. A hologram, including a computer-generated hologram (CGH), is an interference fringe pattern in a semi-transparent medium that is generated by the interference between the object light and the reference light [3,4]. In contrast, stereoscopic 3D display techniques are based on multiple two-dimensional images, resulting in the accommodation–convergence conflict [2,5]. In general, the viewing angle depends on the pixel pitch of the spatial light modulator (SLM) used as a holographic 3D display [4,5]. To reconstruct a holographic 3D (H3D) image with a moderately wide viewing angle, current SLM technologies cannot satisfy the demand; the pixel pitch would have to go down to about 1 μm or smaller, beyond the current state of the art [2,3,4]. Alternatively, temporally or spatially multiplexed SLMs have been applied to increase the viewing angle of these conventional displays as well as the scalability of 3D image sizes [6,7,8,9], but such large, complicated EHDs require further major advances in display technology before they can become a single integrated hardware device. In addition, a single conventional SLM cannot easily modulate both the amplitude and the phase of an object’s light wave [8,9]. Recently, a complex-modulating SLM including eye-tracking, referred to as a complex modulator, achieved conceptual improvement, but it suffers from mechanically unstable alignment of the micro-pixels between two stacked SLMs and also requires bulky optical components [10]. In some cases, an optical filtering system must be applied to remove the unwanted diffraction orders as well as background noise, resulting in enlarged display systems [6,9,10,11,12]. Alternatively, complex-to-phase or complex-to-amplitude encoding methods can be applied to convert a complex-valued hologram for a phase-only modulator or an amplitude-only modulator [10,13,14,15,16,17]. With these existing approaches, the image quality of holographic 3D displays depends strongly on the selected encoding type, and, therefore, alternative encoding methods should be further studied.
Recently, near-eye displays for extended reality (XR), including augmented reality (AR), virtual reality (VR), and mixed reality (MR), have provided the potential to bring users into new worlds where they can experience instant access to stereoscopic and multi-view visual information [17,18,19]. For EHD devices, the key challenges, such as enlarging the viewing angle of a compact holographic device and representing the complex-field hologram of a 3D scene with high quality, can be addressed by merging a near-eye display device with digitally encoded holographic content [17,18,19]. We pursue exactly this hybrid approach for the amplitude-modulating SLM, embodied by a novel near-eye holographic display and an advanced encoding method presented in this study. In this paper, we propose an advanced, efficient encoding scheme, which we refer to as modified amplitude-only encoding (MAO), from which a pair of encoded holograms per viewpoint is prepared through the image data processing for holographic 3D content, as depicted in Figure 1a. In addition, we demonstrate a prototype of the near-eye holographic 3D display (NEHD) suited to the prepared 360° holographic content, as shown in Figure 1b. The module splits and uploads a pair of MAO-encoded holograms simultaneously and then optically reconstructs two H3D images that are observed by the left and right eyes of an observer, respectively. The proposed dual-view NEHD architecture and MAO encoding scheme are aimed at compact and lightweight applications, allowing direct integration into existing wearable mobile devices toward a holographic 3D display on which the user can experience immersive, realistic, and high-quality H3D movies.

2. Methods

2.1. Near-Eye Holographic 3D Display for Dual-View CGH Content

For the demonstration environment of the proposed near-eye holographic 3D display (NEHD), we used an amplitude-modulating SLM module that consists of two micro-display panels (MDPs). Each panel (Model: RDP551F) is an LCoS (Liquid Crystal on Silicon)-based SLM, as shown in Figure 1b; the MDP unit is composed of two LCoS-SLMs, one for the left eye and one for the right eye. Both LCoS panels support full-high-definition resolution (FHD, 1920 × 1080 pixels) with an effective diagonal size of 0.55 inches and a pixel pitch of 6.3 μm × 6.3 μm. In the holographic 3D display system, the CGH image data prepared by a master PC are delivered to the display module via a board of display control (BDC), which has HDMI and USB interfaces connected directly to the PC. The MDP unit is connected to the BDC via 32-pin FFC cables for data transfer. The BDC used in this study is a specifically designed control board that splits an input image of 2 × FHD resolution into two independent FHD images and then transfers the two split images to the independent LCoS-SLM panels, as depicted in Figure 1a. Figure 1b shows a photograph of the NEHD module for dual-view holographic content, in which a pair of amplitude-modulating hologram data is split and uploaded onto the LCoS-SLMs for the left and right eyes.
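The splitting stage performed by the BDC can be illustrated with the following minimal Python/NumPy sketch, which takes one side-by-side 2 × FHD frame and separates it into two FHD sub-holograms, one per LCoS panel. The function name, array shapes, and the file name in the usage comment are illustrative assumptions, not part of the device firmware.

```python
import numpy as np

def split_dual_view_frame(frame: np.ndarray):
    """Split a side-by-side 2xFHD CGH frame (1080 x 3840, 8-bit greyscale)
    into left-eye and right-eye FHD holograms (1080 x 1920 each),
    mimicking the role of the board of display control (BDC)."""
    assert frame.shape == (1080, 3840), "expected a 2xFHD side-by-side frame"
    left = frame[:, :1920]    # hologram uploaded to the left-eye LCoS-SLM
    right = frame[:, 1920:]   # hologram uploaded to the right-eye LCoS-SLM
    return left, right

# Example usage (hypothetical file name, loaded e.g. with PIL):
# frame = np.array(Image.open("cgh_frame_0001.bmp").convert("L"))
# left, right = split_dual_view_frame(frame)
```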

2.2. Optical Components for the NEHD System

We now discuss the optical components of the near-eye holographic display system, whose schematic architecture is illustrated in Figure 2a. Collimated light from a green laser with a 532 nm wavelength (CNI Optoelectronics, MGL-S-532) is generated by a first positive lens located just after the fiber-coupled laser source and serves as the coherent illumination. To upload holographic 3D content, a flat LCoS-SLM panel that modulates an 8-bit grey-scale amplitude is modified into a direct-view SLM type. First, the polarization direction of the illuminating light is set parallel to that of a transparent, film-type polarizing beam splitter (PBS), which is positioned at the input surface of the LCoS. Second, near the surface of the SLM, a second convex lens, called a field lens, is placed against the SLM to converge the output light toward the eye box of the observer. The active area of the LCoS-SLM is 12.1 mm (H) × 6.80 mm (V), with pixels arranged in a nearly rectangular matrix geometry. This pixel geometry produces an extended zeroth diffraction order in the Fourier plane, i.e., a virtual observation window (VW), which corresponds to the eye box of the observer [10,16,17]. The size of the VW along the i-axis (i = x, y) is approximately given by $W_i = \lambda f / p_i$, where $p_i$ is the pixel pitch of the SLM along the i-axis, $f$ is the focal length of the field lens (which also sets the viewing distance of the observer from the SLM), and $\lambda$ is the wavelength of the illumination light. The wavelength-dependent VW is designed to be larger than the pupil size of the human eye [18,19,20]. Thus, $f$ = 500 mm is chosen for $\lambda$ = 532 nm to obtain $W_x$ = 42.2 mm (along the horizontal direction) and $W_y$ = 42.2 mm (along the vertical direction). The optical wave field emerging from a 3D object or scene can be observed directly within the VW with the help of both the transparent PBS film and the field lens, leading to holographic augmented reality (AR) and enabling the viewer to experience genuine depth cues and an accommodation effect. Furthermore, optical observation and an experimental evaluation of the optically reconstructed holographic 3D (H3D) images are performed near the VW at the eye position (see Figure 2a) and demonstrated with camera-captured H3D scenes reconstructed from CGHs. As shown in Figure 2, the proposed near-eye holographic 3D display (NEHD) system consists of two main branches so that binocular observation can be achieved; after the collimated beam is split into two parts, the left-side part focuses its beam toward the left eye and the other part toward the right eye.
Passing through each SLM, the light wave fields are then guided to two independent focal points corresponding to the observation positions of the two eyes. The separation between these two positions is 65 mm in the eye plane of the observer, which corresponds to the average binocular distance of a human [21,22] and is indicated as the binocular distance (BD) in Figure 2a.
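To make the viewing-window relation concrete, the short sketch below evaluates $W_i = \lambda f / p_i$ with the parameters quoted above; the pupil-size figure mentioned in the comment is only an indicative reference value, not taken from this paper.

```python
def viewing_window_size(wavelength_m: float, focal_length_m: float, pixel_pitch_m: float) -> float:
    """Size of the virtual observation window (VW) along one axis: W_i = lambda * f / p_i."""
    return wavelength_m * focal_length_m / pixel_pitch_m

wavelength = 532e-9    # green laser wavelength (m)
focal_length = 0.5     # field-lens focal length, f = 500 mm
pixel_pitch = 6.3e-6   # SLM pixel pitch (m)

w = viewing_window_size(wavelength, focal_length, pixel_pitch)
print(f"VW size: {w * 1e3:.1f} mm")  # ~42.2 mm, comfortably larger than a typical eye pupil
```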

2.3. Acquisition of 360° Multi-Viewed Holographic Content from a 3D Model

A depth map image, which represents the distance between the surface of an object and the observation point as greyscale values, is useful for 3D content modeling for hologram synthesis. A set of RGB color and depth map images per view must be extracted from a 3D object so that the CGH can be calculated with the FFT-based cascaded Fresnel transform algorithm (see Equation (1)). In this study, instead of using depth-capturing camera sensors such as Kinect or Intel RealSense, we use MAYA 2018™ software [23] to design a 3D scene that provides depth cues for the holographic content. The software has the added advantage of rendering the 3D object over the full 360° range. To experimentally observe the accommodation effect from the holographic content, we design a 3D scene that consists of two green Rubik’s cubes, each positioned at a sufficiently different distance to generate dynamic occlusion and to differentiate between focal depths.
Figure 3 shows the layout, in which the two objects and a virtual camera are placed in the same plane, in (a) a perspective view and (b) an in-plane view observed near the position of the camera eye. The camera is programmed to orbit along a virtual circular track at the desired speed. While traveling along the circular orbit, the camera keeps shooting the 3D scene to obtain a total of 1024 frames, each taken at a constant angular increment of 0.352°, capturing a set of RGB images and depth maps simultaneously. Both the RGB color and depth map images are then stored in 8-bit greyscale BMP format. The distance from the center of mass of the two objects to the camera orbit is measured and varied through the Distance Tool menu in MAYA™, and the depth value from the camera eye can then be adjusted manually by the user to determine the optimal depth range.
After the process of rendering in MAYATM is completed, a set of RGB and depth map images per frame, each holding FHD (1920 × 1080) resolution, is extracted. Thus, after a circular loop of the camera, the database where images are saved is composed of 1024 views. Through this process, we obtain RGB images and depth images of the target scene for generating 360° moving, holographic content.

2.4. Synthesis of CGH

The CGH synthesis for near-eye holographic content is explained on the basis of wave optics and Fresnel’s holographic model in the system illustrated schematically in Figure 4 [10,11,12]. We assume that the distance $D_1$ between the holographic display and the eye lens is equal to the focal length $f_1$ of the field lens. With the eyeball depth and the focal length of the eye lens given as $D_2$ and $f_2$, we also assume that the hologram plane $(x_1, y_1)$ near the field lens and the retinal plane $(x_2, y_2)$ of the user are parallel to the eye-lens plane $(u, v)$. The optical field at the display panel of size $X \times Y$ and that in the retinal domain of the observer are denoted as $F(x_1, y_1)$ and $G(x_2, y_2)$, respectively. These fields are related by the following equation, called a cascaded Fresnel transform [24,25]:
$$G(x_2, y_2) = a(x_2, y_2) \iint e^{\frac{i\pi}{\lambda}\left(\frac{1}{D_1} + \frac{1}{D_2} - \frac{1}{f_2}\right)(u^2 + v^2)} \left[ \iint h(x_1, y_1, u, v)\, F(x_1, y_1)\, \mathrm{d}x_1\, \mathrm{d}y_1 \right] e^{-\frac{2 i \pi}{\lambda D_2}(x_2 u + y_2 v)}\, \mathrm{d}u\, \mathrm{d}v, \tag{1}$$
where $a(x_2, y_2) = \dfrac{e^{\frac{2 i \pi (D_1 + D_2)}{\lambda}}\, e^{\frac{i \pi (x_2^2 + y_2^2)}{\lambda D_2}}}{D_1 D_2 \lambda^2}$, and $h(x_1, y_1, u, v) = \mathrm{rect}\!\left(\dfrac{x_1}{X}, \dfrac{y_1}{Y}\right) e^{\frac{i\pi}{\lambda D_1}(x_1^2 + y_1^2)}\, e^{-\frac{2 i \pi}{\lambda D_1}(x_1 u + y_1 v)}$.
While the image formed at the retinal plane is analyzed using the direct cascaded Fresnel transform in Equation (1), the optical field input at the display plane $(x_1, y_1)$ that produces the desired image is calculated using the inverse cascaded Fresnel transform. On the basis of Equation (1), the desired hologram field $H(x_1, y_1)$ in the display domain is synthesized by superposing a reference wave field with the initially calculated optical field $F(x_1, y_1)$. The reference wave field is assumed to be the complex amplitude of the collimated illumination wave from the backlight unit, normally incident with a constant amplitude. In addition to the reference wave, we superimpose an off-axis carrier wave $C(x_1, y_1)$ on the calculated optical field $F(x_1, y_1)$ so that the desired signal function is shifted away from the noise terms, such as the DC term and the twin-conjugate image. Through direct observation tests of each eye on H3D reconstructions, we experimentally derived the optimized carrier-wave format $C(x_1, y_1) = e^{\pi i (a x_1 + b y_1)}$, with $a = 0.048$ and $b = 0.128$, for the optical geometry considered here (see Figure 4). We could therefore finally synthesize the signal function of the digital hologram $H(x, y)$; the signal function is complex-valued in general and can be expressed as $H(x, y) = |H(x, y)|\, e^{i \Phi(x, y)}$, where $|H(x, y)|$ is the amplitude of $H(x, y)$ and $\Phi(x, y)$ is its phase. Because we use an amplitude-modulating SLM in our system, the original CGH function must be converted into an appropriate representation, i.e., amplitude-modulating hologram data suitable for this study.
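The carrier-wave step can be sketched as follows. This is a minimal illustration that assumes the complex field $F(x_1, y_1)$ has already been obtained from the inverse cascaded Fresnel transform; applying the carrier as a multiplicative modulation and using pixel indices as the $(x_1, y_1)$ coordinates are our own assumptions for illustration, not details confirmed by the paper.

```python
import numpy as np

def add_offaxis_carrier(F: np.ndarray, a: float = 0.048, b: float = 0.128) -> np.ndarray:
    """Superimpose the off-axis carrier C(x1, y1) = exp(i*pi*(a*x1 + b*y1))
    on the calculated complex field F(x1, y1), shifting the signal away
    from the DC term and the twin-conjugate image."""
    ny, nx = F.shape
    y1, x1 = np.mgrid[0:ny, 0:nx]                 # pixel-index coordinates (assumption)
    carrier = np.exp(1j * np.pi * (a * x1 + b * y1))
    return F * carrier

# Amplitude/phase decomposition of the synthesized hologram H(x, y):
# H = add_offaxis_carrier(F)
# amplitude, phase = np.abs(H), np.angle(H)       # |H| and Phi used by the encodings below
```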

2.5. Encoding Methods for the Amplitude-Modulating Device

In this section, we first review established approaches for converting a complex-valued hologram into a representation suitable for the amplitude-modulating SLM (AM-SLM). We then propose our novel encoding scheme, the modified amplitude-only encoding (MAO) approach.
Among the methods that circumvent the non-availability of commercial complex-modulating SLMs, conventional amplitude-only encoding (CAO) [16] has been used. The CAO approach extracts only the real part of the complex-valued optical field at the pixel coordinates of the hologram plane and subsequently biases the real values into a non-negative range by level shifting, which can be written as
$$H_{\mathrm{CAO}}(x, y) = H_R(x, y) + H_{\mathrm{off}}, \tag{2}$$
where $H_R(x, y) = |H(x, y)| \cos \Phi(x, y)$ is the real part of the original complex field, and $H_{\mathrm{off}}$ is an offset that makes $H_{\mathrm{CAO}}(x, y)$ non-negative; for CAO it equals the absolute value of the minimum of $H_R(x, y)$, i.e., $H_{\mathrm{off}} = \left|\min H_R(x, y)\right|$.
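A minimal NumPy sketch of the CAO encoding in Equation (2) is given below; quantization to the SLM’s 8-bit greyscale is omitted, and the function name is our own.

```python
import numpy as np

def encode_cao(H: np.ndarray) -> np.ndarray:
    """Conventional amplitude-only (CAO) encoding, Eq. (2): keep the real part
    of the complex hologram and level-shift it so the result is non-negative."""
    H_R = np.real(H)             # H_R = |H| cos(Phi)
    H_off = np.abs(H_R.min())    # offset = |min H_R|
    return H_R + H_off           # non-negative amplitude hologram
```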
On the other hand, as an alternative representation format of computer-generated holograms (CGHs) available for the AM-SLM, there are two kinds of effective solutions from an analytic perspective: The first analytic representation scheme is Lee’s method, which decomposes a complex-valued field into four real and non-negative components [13]. The decomposition by Lee’s method is expressed as
$$H_L(x, y) = L_1(x, y)\, e^{i 0} + L_2(x, y)\, e^{i \pi/2} + L_3(x, y)\, e^{i \pi} + L_4(x, y)\, e^{i 3\pi/2}, \tag{3}$$
where at least two of the four coefficients $L_m(x, y)$ $(m = 1, \ldots, 4)$ are equal to zero. The relative phase values in Lee’s method are physically implemented by lateral displacement within a single macro-pixel that consists of four sub-pixels. Alternatively, Burckhardt’s method, a simplified version of Lee’s method, uses the fact that any complex-valued function can be analytically decomposed into three real and non-negative components. The decomposition by Burckhardt’s method can be expressed as
$$H_B(x, y) = B_1(x, y)\, e^{i 0} + B_2(x, y)\, e^{i 2\pi/3} + B_3(x, y)\, e^{i 4\pi/3}, \tag{4}$$
where at least one of the three coefficients $B_n(x, y)$ $(n = 1, \ldots, 3)$ is equal to zero [14,15]. The relative phase values in Burckhardt’s method are realized by lateral displacement within one macro-pixel that comprises three sub-pixels.
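One common way to compute the Burckhardt decomposition of Equation (4) is sketched below: each complex value is projected onto the two neighboring basis phasors enclosing its phase, leaving the third coefficient at zero. The sector bookkeeping and normalization are our own implementation choices, not taken from the paper.

```python
import numpy as np

def encode_burckhardt(H: np.ndarray):
    """Burckhardt decomposition, Eq. (4): express each complex value as
    B1*exp(i0) + B2*exp(i2pi/3) + B3*exp(i4pi/3) with non-negative coefficients,
    at least one of which is zero per pixel."""
    A = np.abs(H)
    phi = np.angle(H) % (2 * np.pi)                          # phase in [0, 2*pi)
    sector = (phi // (2 * np.pi / 3)).astype(int) % 3        # which pair of basis phasors encloses phi
    local = phi - sector * (2 * np.pi / 3)                   # phase relative to the lower basis phasor
    c_low = 2 * A / np.sqrt(3) * np.sin(2 * np.pi / 3 - local)
    c_high = 2 * A / np.sqrt(3) * np.sin(local)
    B = np.zeros((3,) + H.shape)
    for k in range(3):
        mask = sector == k
        B[k][mask] += c_low[mask]
        B[(k + 1) % 3][mask] += c_high[mask]
    return B[0], B[1], B[2]
```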
Finally, the modified amplitude-only encoding (MAO) method that we propose is described by the following formula, provided that the minimum of $H_R(x, y)$ is negative:
$$H_{\mathrm{MAO}}(x, y) = H_{\mathrm{int}}(x, y) - \min H_{\mathrm{int}}(x, y), \tag{5}$$
where $H_{\mathrm{int}}(x, y)$ is an intermediate function defined by
$$H_{\mathrm{int}}(x, y) = \begin{cases} H_R(x, y) & \text{for } H_R \geq 0, \\[4pt] \dfrac{\varepsilon\, H_R(x, y)}{\min H_R(x, y)} & \text{for } H_R < 0. \end{cases} \tag{6}$$
This MAO formula has two key features: (i) For pixels with $H_R(x, y) \geq 0$, the intermediate function $H_{\mathrm{int}}(x, y)$ is unchanged and retains the original form of $H_R(x, y)$; the result of MAO encoding is thus equivalent to that of conventional amplitude-only encoding, i.e., $H_{\mathrm{MAO}}(x, y) = H_{\mathrm{CAO}}(x, y)$. (ii) For pixels with $H_R(x, y) < 0$, a scale-down transform is applied: $H_R(x, y)$ is divided by the minimal value of $H_R(x, y)$ and reduced at a fixed rate determined by the parametric factor $\varepsilon$, where $0 < \varepsilon \leq 1$ is assumed. Once the intermediate function $H_{\mathrm{int}}(x, y)$ is prepared, we obtain $H_{\mathrm{MAO}}(x, y)$ by subtracting its minimum from $H_{\mathrm{int}}(x, y)$ [see Equation (5)]. Using the approaches explained above, we establish a framework of four different kinds of amplitude holograms, each converted from the original complex-valued hologram calculated by the FFT algorithm and prepared for experimental comparison. Figure 5 shows the result of each encoding applied to the same original CGH function, generated from the set of RGB and depth map images shown in Figure 3c,d, with the MAO encoding processed under the condition $\varepsilon = 1/50$.
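A minimal sketch of the MAO encoding, following our reading of Equations (5) and (6), is shown below. The treatment of the negative branch (normalization by $\min H_R$ followed by the $\varepsilon$ scale-down) reflects the description in the text and should be checked against the authors’ implementation; 8-bit quantization is again omitted.

```python
import numpy as np

def encode_mao(H: np.ndarray, eps: float = 1 / 50) -> np.ndarray:
    """Modified amplitude-only (MAO) encoding, Eqs. (5)-(6), as we read them:
    keep non-negative real parts unchanged; map negative real parts to small
    positive values via eps * H_R / min(H_R); then level-shift by the minimum
    of the intermediate function."""
    H_R = np.real(H)
    H_min = H_R.min()                                   # assumed negative (condition of Eq. (5))
    H_int = np.where(H_R >= 0, H_R, eps * H_R / H_min)  # Eq. (6)
    return H_int - H_int.min()                          # Eq. (5)
```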
For the observer to experience an enhanced three-dimensional effect, two different viewpoints of a given 3D scene are required. Hence, for the dual-view holographic display, we made a set of CGH frames by selecting two images per CGH frame from the 1024 prepared images to provide 360° full views of the scene. The viewpoint difference per CGH frame corresponds to a gap of 8 indices in the time-sequential series of 1024 views. For example, the first view image for the left eye and the ninth view image for the right eye can be chosen as the content of the first CGH frame. The reason is as follows: when the radius of the circular sector, i.e., the distance between the eye plane and the object plane, is 1.4 m, and the arc, i.e., the average distance (BD) between the eyes, is 65 mm [21,22], the central angle is 2.66°. Because the central-angle increment per image is 0.352°, the view-number difference between the left-eye view and the right-eye view is equal to 8. A brief recapitulation of preparing CGH content fit for the NEHD system is as follows: (i) We obtained two complex hologram matrices through the FFT algorithm from the first set, which consists of one RGB-depth image pair for the left eye and another for the right eye (see Figure 6). (ii) We then computed the four types of CGH content by using the CAO, MAO, Lee’s, and Burckhardt’s encoding methods. For each encoding, the pair of calculated CGHs is combined in a side-by-side arrangement and stored as an image file with a resolution of 3840 × 1080. Figure 6 shows an example of this process for MAO encoding when preparing the first CGH frame for our dual-view holographic display. Here, we chose two sets of RGB-depth images with different viewpoints, one for the first viewpoint and the other for the ninth viewpoint. (iii) We repeated the above process over the 360° full range to create the final holographic content fit for the binocular geometry. (iv) Finally, we converted the time-sequential 360° CGH set into an uncompressed video file, which can show a holographic 3D movie at 30 frames per second (fps) [see the Supplementary Videos (a)–(d)].
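The view-pair bookkeeping described above can be summarized in the short sketch below, which recovers the 8-index gap from the stated geometry and assembles the side-by-side frame. The 0-based indexing, wrap-around behavior, and function names are illustrative assumptions.

```python
import numpy as np

# Binocular view-pair bookkeeping for the 360-degree content (illustrative sketch).
N_VIEWS = 1024
STEP_DEG = 360.0 / N_VIEWS            # 0.352 degrees per rendered view
RADIUS_M = 1.4                        # distance between the eye plane and the object plane
BD_M = 0.065                          # average binocular distance

central_angle_deg = np.degrees(BD_M / RADIUS_M)   # ~2.66 degrees
view_gap = round(central_angle_deg / STEP_DEG)    # ~8 view indices between the two eyes

def dual_view_indices(frame_idx: int):
    """Left/right view indices for one CGH frame (0-based, wrapping around 360 degrees)."""
    left = frame_idx % N_VIEWS
    right = (frame_idx + view_gap) % N_VIEWS
    return left, right

def combine_side_by_side(cgh_left: np.ndarray, cgh_right: np.ndarray) -> np.ndarray:
    """Store a pair of FHD CGHs as one 3840 x 1080 frame for the BDC to split."""
    return np.hstack([cgh_left, cgh_right])
```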

3. Experimental Results

3.1. Experimental Conditions

Since simulation results can visualize a clear accommodation effect from the CGH image, we first tested observations derived from numerical computation by directly using the encoded CGH database prepared in the process described above. Figure 7 shows the numerically reconstructed images with focus on the fore-positioned object in the cases of CAO encoding (a), MAO encoding (b), Burckhardt’s encoding (c), and Lee’s encoding (d). Another example of numerical reconstruction under the same conditions, but focused on the back-positioned object, is shown in Figure 8. From these simulations, we could expect that all four kinds of encoded CGH content support both the depth cue and the accommodation effect. The experimental observations of optical reconstruction from these four kinds of encoded holograms were taken in a darkroom with a DSLR camera (Canon EOS 5D Mark III with a Canon EF 70–200 mm F2.8 IS USM lens). Photographs of the holographic reconstructions were captured with the camera lens positioned near the focal position of the field lens, i.e., the eye position of the observer at the VW (see Figure 2a). The camera settings for this experiment were as follows: shutter speed 1/30 s, aperture f/2.8, ISO 8000, and a focusing distance of the attached lens (Canon EF 70–200 mm F2.8) of about 1.45 m. Figure 9 and Figure 10 show photographs of optically reconstructed scenes obtained by the NEHD displaying the four kinds of encoded holograms; they manifest clear depth discrimination.
We observed that background light and speckle patterns appear prominently in the 3D scene reconstructed from the CAO-encoded hologram, as shown in Figure 9a. On the other hand, non-diffracted background light is efficiently suppressed in the holograms of the other three encoding methods (see Figure 9b–d). We noticed that the diffraction efficiency of Burckhardt’s hologram and Lee’s hologram is much lower than that of the MAO hologram. Furthermore, Figure 9b indicates that both background noise and speckle noise are most strongly attenuated in the case of MAO encoding in comparison with the other encodings. Figure 11 shows the comparison with a surface intensity map that plots the grey level $g(x_i, y_i)$ over the scanned bright area of a selected, focused object for each encoding. We suppose that an object with an ideally speckle-free region has a regularly uniform surface intensity map. Hence, as an alternative approach to speckle-noise analysis, plotting the surface intensity of the holographically reconstructed object indicates that better uniformity corresponds to less speckle noise. From this surface-intensity indicator (see Figure 11), we found that the surface quality of the H3D scene regenerated by MAO encoding is the best among the four different encodings.
For a quantitative evaluation of the image quality of the holographic 3D reconstruction, we select the same scanned measurement region on the photographs that capture the optically reconstructed H3D scenes for the four different encodings. The scanned test area chosen for each encoding consists of 27 × 38 pixels, as shown in Figure 9. By a statistical analysis within the scanned areas, we calculate the corresponding holographic 3D image brightness (HIB) and holographic contrast ratio (HCR). Here, we define the HIB as the peak value of the grey-level curve along each selected line of the scanned areas. The HCR is defined as
$$\mathrm{HCR}_{ij} = \frac{G_W(i) - G_O(i)}{G_B(j) - G_O(j)}, \tag{7}$$
where $G_W(i)$ and $G_O(i)$ represent the grey level and the dark-noise level (the compensation term of $G_W(i)$), respectively, measured at a target pixel point $(x_i, y_i)$ in the bright-state condition, and $G_B(j)$ and $G_O(j)$ represent the grey level and the dark-noise level (the compensation term of $G_B(j)$), respectively, measured at a target pixel point $(x_j, y_j)$ in the black-state condition. In general, the HCR value equals the representative grey value of the bright object area divided by that of the dark background area in the given holographic 3D reconstruction. To calculate the HCR, a representative value of the bright area and of the dark area is required. In this study, we take a 27 × 38 pixel (row × column) rectangular sample area within the bright/dark region for the HCR.
The sampling areas in Figure 9 are marked with white lines, but the values of all pixels in the rectangular area are taken for the statistical analysis. We use the mean grey level within the scanning area as the representative value because the measurement unit of the brightness (HIB) in the holographic 3D experiment is the 8-bit greyscale. Figure 12a,b show the brightness distributions of the bright and dark regions as boxplots. Here, the blue asterisk represents the mean (average) value of the samples and the red cross represents the median value of the samples.
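A compact sketch of how the HIB and the HCR of Equation (7) could be evaluated on the 27 × 38 pixel samples is given below. The per-line peak and the mean representative value both follow the definitions above; treating the dark-noise compensation terms as separately measured scalars is an assumption.

```python
import numpy as np

def hib_per_line(bright_patch: np.ndarray) -> np.ndarray:
    """HIB: peak grey level along each selected line (row) of the scanned bright area."""
    return bright_patch.max(axis=1)

def representative_grey(patch: np.ndarray) -> float:
    """Representative 8-bit grey level of a scanned 27 x 38 sample (mean value)."""
    return float(patch.mean())

def hcr(bright_patch: np.ndarray, dark_patch: np.ndarray,
        dark_noise_bright: float = 0.0, dark_noise_dark: float = 0.0) -> float:
    """Holographic contrast ratio, Eq. (7): (G_W - G_O) / (G_B - G_O), evaluated with
    the representative grey levels of the bright and dark samples."""
    g_w = representative_grey(bright_patch)
    g_b = representative_grey(dark_patch)
    return (g_w - dark_noise_bright) / (g_b - dark_noise_dark)
```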

3.2. Results and Analysis

We evaluated the performance of the hologram-encoding methods based on the three aforementioned image quality metrics: the average holographic 3D image brightness (HIB), the holographic contrast ratio (HCR), and the holographic uniformity of surface (HUS). For the performance evaluation, we set rectangular sample areas (unit size: 27 × 38 pixels) of a bright-state area and a dark-state area in the reconstructed H3D scene as the scanning regions (see Figure 9). Figure 11 shows a typical surface intensity map of the bright-state area of a reconstructed H3D scene for each encoding method. Figure 12 shows box-whisker plots of the surface intensity of the bright/dark area samples scanned on each H3D scene optically reconstructed from the four encoding types. From both the holographic surface uniformity (HUS) characteristic and the holographic brightness (HIB) measurement in the bright areas, we first found that the surface uniformity of the H3D reconstructions decreases in the order MAO, LEE/BUR, and CAO. Moreover, from the HIB test using samples of bright areas, we found that although the average brightness of MAO is about 10.4% lower than that of CAO, MAO’s average brightness is about 11.6% higher than that of BUR or LEE. Second, Figure 13 presents an HCR-HIB diagram of the optically reconstructed H3D scenes for the four encoding methods. The average brightness of Burckhardt’s and Lee’s encoding methods is approximately 19.6% lower than that of CAO encoding. However, the HCR values of these two encodings are at least two times higher than that of CAO encoding. Above all, the diagram shows that the HCR of the MAO method is three times greater than that of the CAO method, at the cost of only a 10% decrease in brightness relative to CAO encoding. Furthermore, the HCR of the MAO method is still 30% higher than that of Burckhardt’s or Lee’s encoding. Thus, the analysis of the HCR-HIB diagram and the brightness graph in Figure 12a indicates that the MAO approach proposed in this study not only eliminates DC background noise most effectively but also achieves a higher holographic contrast ratio (HCR) than Burckhardt’s and Lee’s methods.
Third, we also assessed the surface uniformity of the reconstructed H3D object from the brightness distribution using box-whisker plots, where the interquartile range (IQR) is defined as the range of the central 50% of the data around the median, equal to the length of each box in the boxplot; the smaller the IQR value, the more uniform the surface of a holographically reconstructed object. From each sample in the local bright area, we found that CAO’s IQR is 20.5, MAO’s IQR is 14, Burckhardt’s IQR is 9, and Lee’s IQR is 11. These results consider only the central 50% of the surface intensity data, which is why Burckhardt’s encoding has the smallest IQR. In addition, we checked the outliers in the box-whisker plots; all encoding methods except MAO have outliers, i.e., brightness data positioned beyond the boxplot whiskers (see Figure 12a). CAO encoding has 20 outliers, Burckhardt’s encoding has 23, and Lee’s encoding has 14, whereas MAO encoding has none. It should be noted that these outliers correlate with the extent of intensity irregularity in the measured local surface; a wider fluctuation of the intensity distribution means that the chosen surface in the given holographic 3D scene is less uniform. Thus, both the IQR and the outliers of the boxplot, which are convenient guides for measuring the quality of surface homogeneity in a given holographic 3D reconstruction, indicate that the intensity distribution of MAO encoding has the narrowest overall spread among the four encodings within the scanned bright area.
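For reference, the boxplot statistics used above can be computed as in the following sketch; whether the authors used exactly the Tukey 1.5 × IQR whisker rule is an assumption on our part.

```python
import numpy as np

def boxplot_stats(samples: np.ndarray) -> dict:
    """IQR and Tukey-style outliers (points beyond 1.5*IQR from the quartiles),
    as commonly used for box-whisker plots of intensity samples."""
    q1, median, q3 = np.percentile(samples, [25, 50, 75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = samples[(samples < lower) | (samples > upper)]
    return {"median": float(median), "IQR": float(iqr), "n_outliers": int(outliers.size)}
```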

4. Conclusions

In conclusion, a novel near-eye holographic 3D (H3D) display prototype, applicable to holographic AR/MR devices, has been proposed and experimentally demonstrated with 360° full-view holographic content optically reconstructed as an H3D movie at 30 fps. To achieve high-quality optical H3D reconstruction, we also proposed an advanced amplitude-modulating encoding scheme (MAO encoding) that is well suited to AM-SLMs, and experimentally verified, for the first time, that the proposed scheme clearly improves the image quality of monochromatic H3D reconstruction, through quantitative statistical analyses, in comparison with competing encodings such as Burckhardt’s and Lee’s. The presented techniques for the proposed NEHD enable an observer to experience depth cues, a realistic accommodation effect, and high-quality holographic 3D movies at each eye.
A restriction of the NEHD prototype demonstrated in this study is that it covers only monochromatic H3D reconstruction. For a full-color implementation of the NEHD, it is necessary to develop RGB color-matching technologies in real space, including optical-component modification for RGB waveguiding and RGB image processing of the CGHs. A field-sequential full-color implementation within the presented operation scheme will be accomplished as the next step in the near future. The scope of this study also did not include searching for the optimal value of the parametric factor (ε) for H3D images reconstructed with the proposed MAO encoding. For the parametric scaling factor (ε), we chose values between 1/50 and 1/100, which led to the same behavior within the sensitivity limit of the DSLR camera used for H3D image monitoring. Determining the optimal (ε) condition remains future research, which could be performed using highly precise apparatus and measurement techniques. The holographic content treated in this research consists only of a pair of cube-shaped 3D objects without a background image. Thus, to support a hyper-realistic impact in XR environments such as holographic AR/MR and metaverse applications, researchers are working to combine holographic 3D object content with a variety of background images.
We expect to improve the proposed NEHD system toward commercial XR realization through a series of further studies, including the optimization of the parametric factor (ε) for realistic H3D scenes reconstructed with the MAO encoding approach and the holographic measurement metrics that we define in this study.

Author Contributions

Conceptualization, M.Y.; methodology and validation, M.Y. and H.L.; formal analysis and visualization, M.Y., H.L. and M.K.; writing—original draft preparation, M.Y. and H.L.; writing—review and editing, H.L., M.K., M.Y., W.S. and Y.Y.; supervision, M.Y. and Y.Y.; project administration and funding acquisition, W.S., M.Y. and Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly supported by an Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korean government (MSIT) (No.2022-0-00137, XR User Interaction Evaluation and Employing Its Technology). This work was also partly supported by the Korea Research Institute for Defense Technology Planning and Advancement (KRIT) grant funded by the Defense Acquisition Program Administration (DAPA) and Daejeon Metropolitan City (Daejeon Defense Industry Innovation Cluster Project, No. DC2022RL).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gabor, D. A new microscopic principle. Nature 1948, 161, 777–778.
  2. Blanche, P. Holography, and the future of 3D display. Light Adv. Manuf. 2021, 2, 1–14.
  3. Slinger, C.W.; Cameron, C.D.; Coomber, S.D.; Miller, R.J.; Payne, D.A.; Smith, A.P.; Smith, M.G.; Stanley, M.; Watson, P.J. Recent developments in computer-generated holography: Toward a practical electroholography system for interactive 3D visualization. Pract. Hologr. XVIII Mater. Appl. 2004, 5290, 27–41.
  4. Zhang, J.; Pégard, N.; Zhong, J.; Adesnik, H.; Waller, L. 3D computer-generated holography by non-convex optimization. Optica 2017, 4, 1306–1313.
  5. Reichelt, S.; Häussler, R.; Fütterer, G.; Leister, N. Depth Cues in Human Visual Perception and Their Realization in 3D Displays. In Three-Dimensional Imaging, Visualization, and Display 2010 and Display Technologies and Applications for Defense, Security, and Avionics IV; SPIE: Bellingham, WA, USA, 2010; Volume 7690.
  6. Sasaki, H.; Yamamoto, K.; Wakunami, K.; Ichihashi, Y.; Oi, R.; Senoh, T. Large size three-dimensional video by electronic holography using multiple spatial light modulators. Sci. Rep. 2014, 4, 6177.
  7. Wakunami, K.; Hsieh, P.-Y.; Oi, R.; Senoh, T.; Sasaki, H.; Ichihashi, Y.; Okui, M.; Huang, Y.-P.; Yamamoto, K. Projection-type see-through holographic three-dimensional display. Nat. Commun. 2016, 7, 12954.
  8. Tsang, P.; Liu, J.-P.; Poon, T.-C. Compressive optical scanning holography. Optica 2015, 2, 476–483.
  9. Kakue, T.; Nishitsuji, T.; Kawashima, T.; Suzuki, K.; Shimobaba, T.; Ito, T. Aerial projection of three-dimensional motion pictures by electro-holography and parabolic mirrors. Sci. Rep. 2015, 5, 11750.
  10. Häussler, R.; Gritsai, Y.; Zschau, E.; Missbach, R.; Sahm, H.; Stock, M.; Stolle, H. Large real-time holographic 3D displays: Enabling components and results. Appl. Opt. 2017, 56, F45–F52.
  11. Reichelt, S.; Häussler, R.; Fütterer, G.; Leister, N.; Kato, H.; Usukura, N.; Kanbayashi, Y. Full-range, complex spatial light modulator for real-time holography. Opt. Lett. 2012, 37, 1955–1957.
  12. Reichelt, S.; Leister, N. Computational hologram synthesis and representation on spatial light modulators for real-time 3D holographic imaging. J. Phys. Conf. Ser. 2013, 415, 012038.
  13. Lee, W.H. Sampled Fourier transform hologram generated by computer. Appl. Opt. 1970, 9, 639–643.
  14. Burckhardt, C.B. A simplification of Lee’s method of generating holograms by computer. Appl. Opt. 1970, 9, 1949.
  15. Burckhardt, C.B. A simplification of Lee’s method of generating holograms. 2: Erratum. Appl. Opt. 1970, 9, 2813.
  16. Yoon, M.S.; Oh, K.-J.; Choo, H.-G.; Kim, J. A spatial light modulating LC device applicable to amplitude-modulated holographic mobile devices. Proc. IEEE Int. Conf. Ind. Inform. (INDIN) 2015, 677–681.
  17. Kim, H.; Lim, H.; Jee, M.; Lee, Y.; Yoon, M.; Kim, C. High-precision depth map estimation from missing viewpoints for 360-degree digital holography. Appl. Sci. 2022, 12, 9432.
  18. Maimone, A.; Georgiou, A.; Kollin, J.S. Holographic near-eye displays for virtual and augmented reality. ACM Trans. Graph. 2017, 36, 1–16.
  19. Chang, C.; Bang, K.; Wetzstein, G.; Lee, B.; Gao, L. Toward the next-generation VR/AR optics: A review of holographic near-eye displays from a human-centric perspective. Optica 2020, 7, 1563–1578.
  20. Franssen, L.; Tabernero, J.; Coppens, J.E.; van den Berg, T.J.T.P. Pupil size and retinal straylight in the normal eye. Invest. Ophthalmol. Vis. Sci. 2007, 48, 2375–2382.
  21. Gantz, L.; Shneor, E.; Doron, R. Accuracy and repeatability of self-measurement of interpupillary distance. J. Optom. 2021, 14, 299–314.
  22. Lee, H.; Kim, J.; Son, J.; Kim, I.; Noh, J.; Yoon, Y.; Yoon, M. Evaluation of eye response using a wearable display with automatic interpupillary distance adjustment. Opt. Express 2022, 30, 8151–8164.
  23. Maya; Autodesk: San Rafael, CA, USA, 2018. Available online: https://www.autodesk.com/products/maya/overview (accessed on 1 January 2021).
  24. Kim, H.; Yang, B.; Lee, B. Iterative Fourier transform algorithm with regularization for the optimal design of diffractive optical elements. J. Opt. Soc. Am. A 2004, 21, 2353–2365.
  25. Roh, J.; Kim, K.; Moon, E.; Kim, S.; Yang, B.; Hahn, J.; Kim, H. Full-color holographic projection display system featuring an achromatic Fourier filter. Opt. Express 2017, 25, 14774–14782.
Figure 1. (a) Block diagram that shows the image data processing for dual-view holographic content (CGH). (b) Mockup prepared to show the module of dual-view near-eye holographic 3D display (NEHD).
Figure 2. (a) Schematic of the dual-view near-eye holographic 3D display (NEHD). (b) Prototype setup to demonstrate the proposed NEHD system.
Figure 3. Geometry for extracting RGB and depth map information and examples of RGB and depth map images: geometry in a perspective view (a) and in a camera-lens view (b). (c,d) are a set for the left eye, and (e,f) are for the right eye. Each image of FHD resolution is extracted at a given view from a virtual camera set to capture the RGB color and depth map images.
Figure 4. Geometry of the optical system for each eye in the near-eye holographic 3D display. The eye lens of the observer, on the u–v coordinate plane, is located at the focal distance of the field lens, corresponding to the center of the Fourier plane of the holographic display.
Figure 5. Four kinds of encoded holograms with FHD resolution. Encoding methods to be used are CAO encoding (a), MAO encoding (b), Burckhardt’s encoding (c), and Lee’s encoding (d).
Figure 6. Process to prepare CGH content of a frame fit for the dual-view holographic display made up of two AM SLMs.
Figure 7. Numerical reconstructions using holograms in the case of CAO encoding (a), MAO encoding (b), Burckhardt’s encoding (c), and Lee’s encoding (d) where the left eye of the observer is focused on the front-positioned 3D cube. The object focused is indicated by a blue arrow mark.
Figure 8. Numerical observations using holograms in the case of CAO encoding (a), MAO encoding (b), Burckhardt’s encoding (c), and Lee’s encoding (d) where the left eye of the observer is focused on the back-positioned 3D cube. The object focused is indicated by a blue arrow mark.
Figure 9. Experimental optical observations reconstructed from each hologram in the case of CAO encoding (a), MAO encoding (b), Burckhardt’s encoding (c), and Lee’s encoding (d) where the eye of the observer is focused on the fore-positioned object. Each arrow indicates the focused object between a pair of 3D objects in a camera-captured image from optically reconstructed H3D scenes (716 × 450 pixels). Here, each scanned area (27 × 38 pixels) appears in the shape of multiple lines in white color near the front-positioned 3D cube [see also the Supplementary video (a)~(d)].
Figure 10. Experimental optical observations reconstructed from each hologram in the case of CAO encoding (a), MAO encoding (b), Burckhardt’s encoding (c), and Lee’s encoding (d) where the left eye of the observer is focused on the back-positioned 3D cube that each arrow indicates.
Figure 11. Surface intensity map of the bright area to be scanned on the optically reconstructed H3D object according to each encoding; (a) CAO encoding, (b) MAO encoding, (c) Burckhardt’s encoding (BUR), and (d) Lee’s encoding (LEE).
Figure 12. Intensity measurement of (a) bright area and of (b) dark area in H3D images optically reconstructed using four kinds of encodings.
Figure 13. Diagram to plot holographic contrast ratio (HCR) and brightness analyzed from each scanned bright area on images of optical H3D reconstructions with respect to four different encodings. Blue color of the left vertical axis corresponds to data for contrast ratio, and red color of the right vertical axis corresponds to data for brightness.

