Article

A Depth-Enhanced Holographic Super Multi-View Display Based on Depth Segmentation

1
National Engineering Laboratory of Special Display Technology, National Key Laboratory of Advanced Display Technology, Academy of Photoelectric Technology, Hefei University of Technology, Hefei 230009, China
2
School of Instrument Science and Opto-Electronics Engineering, Hefei University of Technology, Hefei 230009, China
*
Author to whom correspondence should be addressed.
Micromachines 2023, 14(9), 1720; https://doi.org/10.3390/mi14091720
Submission received: 7 July 2023 / Revised: 21 August 2023 / Accepted: 29 August 2023 / Published: 31 August 2023
(This article belongs to the Special Issue 3D Holographic Displays)

Abstract

A super multi-view (SMV) near-eye display (NED) effectively provides depth cues for three-dimensional (3D) display by projecting multiple viewpoint or parallax images onto the retina simultaneously. Previous SMV NEDs have suffered from a limited depth of field (DOF) due to a fixed image plane. In this paper, a holographic SMV Maxwellian display based on depth segmentation is proposed to enhance the DOF. The proposed approach involves capturing a set of parallax images and their corresponding depth maps. According to the depth maps, the parallax images are segmented into N sub-parallax images at different depth ranges. These sub-parallax images are then projected onto N image-recording planes (IRPs) of the corresponding depths for hologram computation. The wavefront at each IRP is calculated by multiplying the sub-parallax images with the corresponding spherical wave phases. The wavefronts are then propagated to the hologram plane and added together to form a DOF-enhanced hologram. Simulation and experimental results validate the effectiveness of the proposed method in extending the DOF of holographic SMV displays, while accurately preserving occlusion.

1. Introduction

Augmented reality (AR) technologies offer users an immersive three-dimensional (3D) experience, making them a crucial component in unlocking the full potential of the metaverse. Head-mounted displays (HMDs), also known as near-eye displays (NEDs), are popular visual interface devices for AR due to their compact and portable design [1,2,3,4]. However, conventional NEDs often suffer from the vergence-accommodation conflict (VAC) [5]. When viewing the left and right images through the corresponding eyes, the eyes rotate to observe the three-dimensional image, and the vergence function accurately perceives depth based on this rotation information. The accommodation function, however, is compromised because the eyes focus on the image plane of the HMD imaging system instead of the actual 3D images. This discrepancy between vergence and accommodation leads to visual fatigue.
To overcome the VAC problem, several methods have been proposed. The integral imaging method [6,7,8,9,10] reproduces 3D images with an array of microlenses. Holographic NEDs try to provide the most natural depth cues by reconstructing the accurate optical wave field [11,12,13,14,15,16]. The multi-plane display [17,18,19,20,21] generates 3D images by displaying the 2D slices of a 3D scene volume along the visual axis. The vari-focal plane method dynamically adjusts the distance of a single virtual image plane. While these methods provide good results, they are constrained by the limited range of available video sources and are not very efficient. The Maxwellian display [22,23,24,25,26,27,28,29,30,31] partially alleviates the VAC problem by increasing the depth of field. However, it is unable to provide accurate depth cues for monocular vision.
A super multi-view (SMV) display creates multiple viewpoints that are densely packed with an interval smaller than the eye's pupil. It allows the rays from multiple viewpoints to be projected onto the retina together, enabling the eye to focus on 3D images without the VAC problem. The video sources of an SMV NED are readily available because they are parallax images, for which generation techniques are mature and imagery is abundant. In addition, an SMV NED is more efficient because the light enters only the pupil, so no 3D information is displayed redundantly.
In SMV displays, the DOF is a crucial parameter: it refers to the range within which the 3D images can be perceived clearly by the viewer. It is influenced by various factors, including the optical properties of the display system and the characteristics of the light rays emitted by individual pixels. By ensuring a wide DOF, the display system can accommodate the varying depth cues present in the 3D content, which reduces visual discomfort and enhances the overall quality of the 3D images. The narrower the light ray is, the larger the DOF will be. Conventional SMV displays are based on geometric optics and have a limited DOF [32,33,34]. An effective way to increase the DOF is to limit the width of the light beam entering the pupil. Light-emitting diode (LED) light sources are usually collimated to illuminate the spatial light modulator (SLM) [35,36,37], but the finite size of the LED source limits the DOF of the system. In Ref. [38], an SMV Maxwellian display based on a collimated laser source was proposed, in which the narrowed light rays enhanced the DOF. The Maxwellian display converges the light rays into the pupil, which enlarges the DOF of the SMV display due to its small exit pupil. In Ref. [39], a holographic SMV Maxwellian display was used to improve the DOF of the SMV display. This display offered the advantage of being compact and free from lens aberrations; it required only an SLM without additional optical components. By leveraging the characteristic of Maxwellian displays that a sharp image is still formed on the retina even when the eye is not accommodated to the image plane, the DOF of the SMV display can be expanded. However, it is important to note that Maxwellian displays do not possess an infinite DOF. They have a fixed virtual image plane, and the DOF range is typically centered around this image plane. As the 3D image moves away from the image plane, the image quality begins to deteriorate. Although numerical aperture filtering can be employed to limit the width of the light beam and increase the DOF, this approach may result in a loss of high-frequency components and a subsequent decrease in image quality [40]. In other words, there exists a trade-off between image quality and DOF.
In this paper, a holographic SMV Maxwellian display based on depth maps is proposed to enhance the DOF. The proposed approach involves capturing a set of parallax images and their corresponding depth maps. According to the depth maps, the parallax images are partitioned into N sub-parallax images at different depth ranges. Each sub-parallax image is then projected onto an image-recording plane (IRP) of the corresponding depth for hologram computation. In the hologram calculation process, the depth values of each sub-parallax are transformed into distances for Fresnel diffraction. The wavefront at each IRP is computed by multiplying the sub-parallax image with the corresponding spherical wave phase. These wavefronts are then propagated to the hologram plane and superimposed to generate the hologram that enhances the DOF. The simulation and experimental results are obtained to validate the effectiveness of the proposed method in extending the DOF of holographic SMV displays, while accurately preserving occlusion.

2. Conventional Holographic SMV Maxwellian Display with Limited Depth of Field

Figure 1 presents the principle by which two or more viewpoints within the pupil provide depth cues [32,33,34]. Figure 1a,b show the human eye focusing on the display panel and on a point of the 3D image, respectively. When the eye focuses on the display panel, the rays do not converge on the retina. In contrast, when the eye focuses on a point of the 3D image, as shown in Figure 1b, the rays converge on the retina. Thus, the eyes can focus on the 3D images, thereby preventing the VAC.
As depicted in Figure 2a, the previous holographic SMV Maxwellian display [39] used multiple cameras to capture the three-dimensional scene, resulting in a set of parallax images. This method improved the DOF of the SMV display. In fact, a real scene could also be scanned with multiple cameras to obtain the depth information in the form of a point cloud. In this experiment, two or more cameras with spacing p were used to capture the parallax images of the 3D scene while maintaining an identical field of view (FOV). To ensure that each object was sharp, cameras with an infinite DOF were used. Following the capture process, Figure 2b illustrates the subsequent hologram generation. At the same image-recording plane, each parallax image I_m(x, y) is multiplied by a specific convergent spherical wave phase, and the results are added together to form the complex amplitude distribution U(x, y):
U(x, y) = \sum_{m=1}^{M} I_m(x, y) \exp\left\{ \frac{jk\left[ (x - x_m)^2 + (y - y_m)^2 \right]}{L} \right\}  (1)
where k = 2π/λ is the wave number, λ denotes the wavelength of the light, L is the distance from the IRP to the camera, and z_1 is the distance between the image-recording plane and the hologram. Additionally, the position of the m-th viewpoint or camera is denoted as (x_m, y_m). The complex distribution at the hologram plane is obtained by Fresnel diffraction based on a single fast Fourier transform:
H(x_1, y_1) = \exp\left[ \frac{jk\left( x_1^2 + y_1^2 \right)}{2 z_1} \right] \mathcal{F}\left\{ U(x, y) \exp\left[ \frac{jk\left( x^2 + y^2 \right)}{2 z_1} \right] \right\}  (2)

where \mathcal{F}\{\cdot\} denotes the fast Fourier transform.
Then, this complex distribution can be encoded into an amplitude-type hologram by introducing a carrier wave:
A(x_1, y_1) = 2\,\mathrm{Re}\left[ H(x_1, y_1) \exp\left( jk \sin\theta \, y_1 \right) \right] + C  (3)
where θ represents the inclined angle of the carrier wave and C denotes a constant real value used to ensure a non-negative intensity distribution. It is important to note that the parallax images must be suitable for the hologram calculation, and this requires that the FOV of the camera satisfies certain conditions:
\mathrm{FOV} = 2 \tan^{-1}\left( \frac{\lambda z_1}{2 d L} \right)  (4)
where d is the pixel pitch of the hologram.
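To make the pipeline concrete, the following minimal NumPy sketch strings Equations (1)-(3) together for a single IRP. It is a sketch under assumptions: all parameter values, array sizes, the carrier angle theta, and the random stand-in parallax images are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch of the conventional single-IRP hologram pipeline, Eqs. (1)-(3).
# Parameter values and image data are illustrative assumptions.
import numpy as np

wavelength = 532e-9                 # green laser (m)
k = 2 * np.pi / wavelength          # wave number
L = 0.4                             # viewpoint-to-IRP distance (m)
z1 = 0.13                           # IRP-to-hologram distance (m)
n = 1024                            # hologram resolution (assumed square)
d = 3.6e-6                          # SLM pixel pitch (m)

# IRP sample grid; for single-FFT Fresnel diffraction the IRP sample
# spacing is lambda * z1 / (n * d).
dx = wavelength * z1 / (n * d)
coords = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(coords, coords)

def record_plane(parallax_images, viewpoints):
    """Eq. (1): superpose parallax images with convergent spherical phases."""
    U = np.zeros((n, n), dtype=complex)
    for I_m, (xm, ym) in zip(parallax_images, viewpoints):
        U += I_m * np.exp(1j * k * ((X - xm) ** 2 + (Y - ym) ** 2) / L)
    return U

def fresnel_single_fft(U):
    """Eq. (2): single-FFT Fresnel propagation from the IRP to the hologram."""
    x1 = (np.arange(n) - n / 2) * d
    X1, Y1 = np.meshgrid(x1, x1)
    quad_in = np.exp(1j * k * (X ** 2 + Y ** 2) / (2 * z1))
    quad_out = np.exp(1j * k * (X1 ** 2 + Y1 ** 2) / (2 * z1))
    return quad_out * np.fft.fftshift(np.fft.fft2(np.fft.fftshift(U * quad_in)))

def encode_amplitude(H, theta=0.02):
    """Eq. (3): amplitude encoding with an inclined carrier wave."""
    y1 = ((np.arange(n) - n / 2) * d)[:, None]
    A = 2 * np.real(H * np.exp(1j * k * np.sin(theta) * y1))
    return A - A.min()   # the constant C keeps the hologram non-negative

# Usage: two viewpoints with pitch p = 1 mm, as in the experiment.
views = [(-0.5e-3, 0.0), (0.5e-3, 0.0)]
images = [np.random.rand(n, n) for _ in views]   # stand-ins for parallax images
hologram = encode_amplitude(fresnel_single_fft(record_plane(images, views)))
```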
Figure 3a,b show the simulation and experimental results of a conventional holographic SMV Maxwellian display. A 3D scene was constructed using 3ds Max 2012 modeling software, incorporating three objects positioned at different depths. Two cameras with an FOV of 6.49° were used to capture the parallax images. The distances of the three objects from the cameras were 400, 1000, and 1950 mm, respectively. We used an amplitude-type SLM (pixel pitch: 3.6 µm; resolution: 4096 × 2160; refresh rate: 180 Hz). Figure 3c presents the experimental system, where L is 400 mm, z_1 is 130 mm, and the pitch of the viewpoints p is 1 mm. In this system, a collimated green laser beam passed through a polarization beam splitter (PBS) before illuminating the SLM. To eliminate interference fringes, we proposed a time-division method that breaks the temporal coherence between viewpoints [39]. The SLM rapidly switched holograms to display the viewpoints, so the viewpoints entered the human eye at different times. The coherence length of the solid-state laser was approximately tens of meters; considering the speed of light, the corresponding coherence time was very short. As long as the time interval between two adjacent viewpoints is longer than the laser coherence time, they will not interfere with each other. However, due to visual persistence, the brain perceives the two viewpoints simultaneously and fuses them into a 3D vision. In this experiment, we used spherical waves instead of the traditional random phase. Speckle is usually caused by the added random phase, so this approach also better suppresses speckle and noise [28].
This reconstruction method effectively provided precise depth cues for monocular vision. Figure 3a,b present the results of the 3D-scene reconstruction at distances of 400, 1000, and 1950 mm. At each distance, only objects in focus were clear, while objects in other positions appeared out of focus and blurred. Notably, the simulation and experimental results exhibit the best image quality for object 1 when the focus is set at 400 mm. However, at reconstruction distances of 1000 and 1950 mm, the reconstruction quality of objects 2 and 3 noticeably deteriorated. This degradation can be attributed to the proximity of object 1 to the IRP compared to objects 2 and 3, which were situated at greater distances from the IRP. The limited DOF of the IRP significantly impacted the reconstructed image quality. In summary, the reconstruction of the holographic SMV display was accurate only around the IRP.
The underlying cause of the limited DOF can be observed in Figure 4. At the IRP, two identical pixels emit light rays towards each of the two viewpoints within the pupil. These light rays intersect in space, forming a real image point. The size of the image point p can be calculated as follows:
p = \frac{\Delta z}{\cos^2 \beta} \, \alpha  (5)
where Δz is the distance between the image point and the IRP, β is the inclined angle, and α is the divergence angle of the light ray. It can be seen that, as Δz increases, the image point size increases, resulting in a degraded image quality. This is the reason for the limited DOF.
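A quick numeric reading of Equation (5) makes the trend explicit. The angle values below are purely illustrative assumptions, chosen only to show that the image point size, and hence the blur, grows linearly with the distance from the IRP.

```python
# Illustrative evaluation of Eq. (5); alpha and beta are assumed values.
import numpy as np

alpha = 1e-4           # divergence angle of the light ray (rad), assumed
beta = np.deg2rad(5)   # inclined angle (rad), assumed

for dz in (0.0, 0.1, 0.5, 1.0):   # distance from the IRP (m)
    p = dz * alpha / np.cos(beta) ** 2
    print(f"dz = {dz:4.1f} m -> image point size p = {p * 1e6:6.2f} um")
```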

3. Depth-Enhanced Holographic SMV Maxwellian Display Based on Depth Segmentation

We propose a multi-IRP method based on depth segmentation to enhance the DOF. In this method, each IRP records a portion of the 3D scene. As depicted in Figure 5a, in addition to capturing the parallax images, the cameras also capture the corresponding depth maps. A depth map represents the depth variation of the 3D scene as a variation in gray scale, so the depth of each point in the 3D scene can be determined from its depth value. The continuous depth is quantized into N depth ranges [41,42,43]. Based on Equations (6) and (7), the parallax images are segmented into N sub-parallax images at different depth ranges, and an IRP is created for each depth range:
I_m^n(x, y) = I_m(x, y) \cdot \mathrm{Mask}_n(x, y)  (6)
where I_m denotes the parallax image of the m-th viewpoint, I_m^n denotes the n-th sub-parallax image of the m-th viewpoint, and Mask_n represents the segmentation of the parallax image, which satisfies the following condition:
\mathrm{Mask}_n(x, y) = \begin{cases} 1 & \text{if } \mathrm{Depth}(x, y) \in \left[ \frac{255}{N}(n-1), \; \frac{255}{N} n \right] \\ 0 & \text{else} \end{cases}  (7)
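A minimal sketch of this segmentation step, assuming 8-bit depth maps and NumPy arrays, follows. The function and variable names are our own, not from the paper.

```python
# Sketch of the depth segmentation of Eqs. (6)-(7): an 8-bit depth map is
# quantized into N ranges and used to cut one parallax image into N
# sub-parallax images.
import numpy as np

def segment_parallax(image, depth_map, N):
    """Return the N sub-parallax images of one viewpoint."""
    subs = []
    for nn in range(1, N + 1):
        lo, hi = 255.0 / N * (nn - 1), 255.0 / N * nn
        # Eq. (7): binary mask selecting pixels whose depth falls in range nn
        mask = (depth_map >= lo) & (depth_map <= hi)
        subs.append(image * mask)   # Eq. (6)
    return subs

# Usage: three depth ranges, as in the experiment.
img = np.random.rand(512, 512)                     # stand-in parallax image
depth = np.random.randint(0, 256, (512, 512))      # stand-in depth map
sub_images = segment_parallax(img, depth, N=3)
```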
Then, the N sub-parallax images are projected onto the N corresponding IRPs. As shown in Figure 5b, the overlapping FOV plane captured by the two cameras is defined as the image capture plane (ICP) for ease of calculation. The distance from the ICP to the camera is L_ICP. During the hologram calculation, the depth value of each sub-parallax image is converted to the corresponding Fresnel diffraction distance, and the wavefront at each IRP is obtained by multiplying the sub-parallax image by the corresponding spherical wave phase. As shown in Figure 5c, the complex hologram is obtained by propagating the wavefront of each IRP to the hologram plane and adding them together:
H(x_1, y_1) = \sum_{n=1}^{N} \mathrm{Prop}\left\{ \sum_{m=1}^{M} I_m^n(x, y) \exp\left\{ \frac{jk\left[ (x - x_m)^2 + (y - y_m)^2 \right]}{L_n} \right\} \right\}  (8)
where L_n is the depth value of the n-th sub-parallax image transformed into a Fresnel diffraction distance, and Prop{·} represents the Fresnel propagation calculation.
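The sketch below outlines Equation (8) under two assumptions of ours: Prop{·} is implemented as the same single-FFT Fresnel step as Equation (2), and each IRP is assigned its own propagation distance z_n to the hologram plane. All names and values are illustrative.

```python
# Sketch of the multi-IRP hologram of Eq. (8). Each depth range gets its own
# IRP distance L_n and hologram propagation distance z_n; the per-plane
# wavefronts are summed on the hologram plane.
import numpy as np

wavelength = 532e-9
k = 2 * np.pi / wavelength
d = 3.6e-6            # SLM pixel pitch (m)
n = 1024              # resolution (assumed square)

def propagate(U, z):
    """Single-FFT Fresnel propagation over distance z (as in Eq. (2))."""
    dx = wavelength * z / (n * d)
    ci = (np.arange(n) - n / 2) * dx
    Xi, Yi = np.meshgrid(ci, ci)
    co = (np.arange(n) - n / 2) * d
    Xo, Yo = np.meshgrid(co, co)
    quad_in = np.exp(1j * k * (Xi ** 2 + Yi ** 2) / (2 * z))
    quad_out = np.exp(1j * k * (Xo ** 2 + Yo ** 2) / (2 * z))
    return quad_out * np.fft.fftshift(np.fft.fft2(np.fft.fftshift(U * quad_in)))

def multi_irp_hologram(sub_images, viewpoints, L_list, z_list):
    """sub_images[i][m] is the i-th depth range's sub-parallax image of
    viewpoint m; L_list/z_list hold the per-IRP distances."""
    H = np.zeros((n, n), dtype=complex)
    for subs, L_n, z_n in zip(sub_images, L_list, z_list):
        dx = wavelength * z_n / (n * d)
        c = (np.arange(n) - n / 2) * dx
        X, Y = np.meshgrid(c, c)
        U = np.zeros((n, n), dtype=complex)
        for I_mn, (xm, ym) in zip(subs, viewpoints):
            U += I_mn * np.exp(1j * k * ((X - xm) ** 2 + (Y - ym) ** 2) / L_n)
        H += propagate(U, z_n)   # Eq. (8): sum over the N IRPs
    return H
```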
During the projection process, the sub-parallax images on each IRP still have some mismatch problems, which may lead to inaccurate reconstructions. There are two influencing factors, and their solutions are proposed separately. The first problem is the position offset of the parallax images on the IRP. As shown in Figure 6a, since the IRP and the ICP are no longer in the same plane, the sub-parallax images do not exactly overlap on the IRP, so each set of sub-parallax images must be adjusted and repositioned on its IRP. Taking the example of two viewpoints, as shown in Figure 6b, let the offset of the sub-parallax images on the IRP be v. Based on the triangle relationship, we can derive the following equation:
v = \frac{\left( L_{\mathrm{ICP}} - L_n \right) p}{2 L_{\mathrm{ICP}}}  (9)
To facilitate the calculations during the matching process, the resulting offset v is converted to an offset in pixels and corrected; the procedure is then extended to multiple viewpoints, as sketched below.
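The following short sketch performs this correction under our assumptions: the IRP sample spacing is about 5 µm, the shift is applied by a whole-pixel roll along x, and the sign convention for the two viewpoints is our own choice.

```python
# Sketch of the position-offset correction of Eq. (9): v is converted to a
# whole-pixel shift and the sub-parallax image is shifted accordingly.
import numpy as np

def offset_pixels(L_icp, L_n, p, sample_pitch):
    """Eq. (9): v = (L_ICP - L_n) * p / (2 * L_ICP), converted to pixels."""
    v = (L_icp - L_n) * p / (2 * L_icp)
    return int(round(v / sample_pitch))

def correct_offset(sub_image, shift_px, sign=+1):
    """Shift one viewpoint's sub-parallax image along x; the opposite
    viewpoint uses sign=-1, and more viewpoints scale the shift likewise."""
    return np.roll(sub_image, sign * shift_px, axis=1)

# Usage with the paper's numbers: L_ICP = 400 mm, p = 1 mm, L_n = 350 mm,
# and an assumed IRP sample spacing of ~5 um.
print(offset_pixels(0.4, 0.35, 1e-3, 5e-6))   # -> offset of about 12 pixels
```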
As shown in Figure 7, the second problem is that, on the IRP, the size of the Fresnel diffraction field does not match the size of the sub-parallax image obtained from the projection of the parallax image. During the Fresnel diffraction calculation using the FFT, the effective diffraction field size S_i is limited by the pixel pitch of the SLM and is calculated by:
S_i = N d_{x_1} = \frac{\lambda z_1}{d_{x_2}}  (10)
where N denotes the resolution of the SLM, d_{x_1} denotes the sample spacing on the IRP, z_1 is the distance from the SLM to the IRP, and d_{x_2} is the pixel pitch of the SLM. Let S_i' be the size of the sub-parallax image obtained from the projection of the parallax image according to the triangular relationship between the IRP and the ICP. The projection zone size is:
S_i' = \frac{L_n S_{\mathrm{ICP}}}{L_{\mathrm{ICP}}}  (11)
where S_ICP is the size of the parallax image on the ICP. To reconstruct clear images, it is necessary to adjust the projection zone until it completely overlaps with the Fresnel diffraction field. The fitting process has two cases. If S_i < S_i', the diffraction field size is used as the base, the overlapping part is retained, and the redundant part of the sub-parallax image is removed using the rectangular function. If S_i > S_i', the missing part is filled by zero padding. The fitting process is as follows:
I' = \begin{cases} \mathrm{rect}\left( \frac{x}{S_i}, \frac{y}{S_i} \right) \cdot I & S_i < S_i' \\ \mathrm{Zeropadding}(I) & S_i > S_i' \end{cases}  (12)
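In pixel terms, this fitting step reduces to a center-crop or a zero-pad of each sub-parallax image to the diffraction field size. The sketch below assumes square images and uses helper names of our own.

```python
# Sketch of the size-fitting step of Eqs. (10)-(12): center-crop when the
# projection zone exceeds the diffraction field (S_i < S_i'), zero-pad when
# it is smaller (S_i > S_i').
import numpy as np

def fit_to_field(sub_image, field_px):
    """Center-crop or zero-pad a square image to field_px x field_px."""
    src = sub_image.shape[0]
    if src > field_px:                 # S_i < S_i': rect-function crop
        start = (src - field_px) // 2
        return sub_image[start:start + field_px, start:start + field_px]
    if src < field_px:                 # S_i > S_i': zero padding
        out = np.zeros((field_px, field_px), dtype=sub_image.dtype)
        start = (field_px - src) // 2
        out[start:start + src, start:start + src] = sub_image
        return out
    return sub_image

# Usage: the diffraction field of Eq. (10) expressed in pixels.
print(fit_to_field(np.ones((600, 600)), 512).shape)   # -> (512, 512)
print(fit_to_field(np.ones((400, 400)), 512).shape)   # -> (512, 512)
```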
Figure 8 shows the simulation and experimental results. Two viewpoints were created and simultaneously entered the pupil to project images onto the retina. For simplicity, the parallax images were segmented into only three sub-parallax images in the simulation and experiment.
In this experiment, the maximum depth value d_max was 2000 mm and the minimum value d_min was 350 mm; the depth map was divided into three depth ranges. The simulation and experimental results show that the three objects have equal reconstruction quality. The reconstructions of objects 2 and 3 are better than those of the conventional method, confirming the enhanced DOF of the proposed method, and there is a clear occlusion relationship among the three objects in the front, middle, and back.
To quantitatively illustrate the multi-depth image quality improvement achieved by the proposed method, we performed PSNR calculations for the simulation results of this method and of the conventional method described in Section 2, as shown in Figure 9. The conventional method [39] improved the DOF of the SMV display using only the holographic SMV Maxwellian display; however, the DOF of its single IRP was still limited. We constructed multiple image-recording planes based on the depth segmentation method, each with its own DOF range, thus expanding the overall DOF. The image quality was significantly improved compared with the conventional method. This method can also be extended to color, which requires recording the information of all three RGB wavelengths into the same channel and simultaneously using a tri-color laser [24,44].
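For reference, a minimal PSNR routine of the kind behind Figure 9 is sketched below, assuming images normalized to [0, 1]; the reference image and noise model in the usage line are stand-ins, not the paper's data.

```python
# Minimal PSNR metric, as used to compare reconstructions in Figure 9.
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between two same-size images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# Usage: compare a reconstruction against its ideal rendering (stand-in data).
ref = np.random.rand(256, 256)
rec = np.clip(ref + 0.05 * np.random.randn(256, 256), 0, 1)
print(f"PSNR = {psnr(ref, rec):.2f} dB")
```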
The proposed method in this paper is similar to the holographic stereogram (HS) [45,46,47,48], which also uses multi-view images to calculate the hologram. However, they have the following differences. First, the proposed method usually has a higher spatial resolution than the HS. In the HS, the hologram is usually divided into many hogels, each hogel recording the specific perspective information of the 3D scene. Thus, it has a trade-off between the spatial and angular resolutions, while the proposed method is non-hogel based, and multiple view images with full resolution are projected onto the eye. Second, the proposed method requires less information than the HS. In the HS, abundant parallax images from different viewpoints are obtained first and converted to hogels, while in the proposed method, only a few parallax images are captured corresponding to the viewpoints existing in the eye’s pupil. That is to say, the proposed method only needs to recover the light field information within the eye’s pupil.

4. Conclusions

A holographic SMV Maxwellian display based on depth segmentation was proposed to enhance the DOF. Unlike the traditional method, we captured the depth maps of the scene along with the parallax images, quantized the continuous depth into N depth ranges, and partitioned the complete parallax images into N sub-parallax images at different depth ranges. These sub-parallax images were then projected onto the IRPs of the corresponding depths for the hologram computation. Through this projection process, the method created multiple IRPs to solve the problem of the limited DOF. During the hologram calculation, the wavefront at each IRP was calculated by multiplying the sub-parallax images with the corresponding spherical wave phases. The wavefronts were then propagated to the hologram plane and added together to form the DOF-enhanced hologram. The mismatch problems in the projection process were also corrected by calculation, restoring accurate reconstruction. Simulations and experiments were conducted to validate the effectiveness of the method, demonstrating significant improvements in the DOF while preserving occlusion. The hologram calculation is simple, and the video sources are easy to access and generate. The proposed method is promising for realizing holographic near-eye displays with a large DOF.

Author Contributions

Conceptualization, Z.W. and Y.S.; Methodology, Z.W., Y.S. and G.L.; Software, Z.W. and Y.S.; Validation, Y.S. and Y.P.; Formal analysis, Z.W. and G.L.; Investigation, Z.W., Y.S. and Y.P.; Resources, Z.W.; Data curation, Z.W., Y.S. and G.L.; Writing—original draft, Z.W. and Y.S.; Writing—review & editing, Z.W., Y.S. and G.L.; Visualization, Z.W. and Y.S.; Supervision, Q.F. and G.L.; Project administration, Q.F. and G.L.; Funding acquisition, Z.W., Q.F. and G.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China, grant numbers 61805065 and 62275071; the Major Science and Technology Projects in Anhui Province, grant number 202203a05020005; and the Fundamental Research Funds for the Central Universities, grant number JZ2021HGTB0077.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Cheng, D.; Wang, Q.; Liu, Y.; Chen, H.; Ni, D.; Wang, X.; Yao, C.; Hou, Q.; Hou, W.; Luo, G.; et al. Design and Manufacture AR Head-Mounted Displays: A Review and Outlook. Light Adv. Manuf. 2021, 2, 350–369.
2. Xiong, J.; Hsiang, E.-L.; He, Z.; Zhan, T.; Wu, S.-T. Augmented Reality and Virtual Reality Displays: Emerging Technologies and Future Perspectives. Light Sci. Appl. 2021, 10, 216.
3. Chang, C.; Bang, K.; Wetzstein, G.; Lee, B.; Gao, L. Toward the Next-Generation VR/AR Optics: A Review of Holographic Near-Eye Displays from a Human-Centric Perspective. Optica 2020, 7, 1563–1578.
4. Zhao, W.-X.; Wang, Q.-H.; Wang, A.-H.; Li, D.-H. Autostereoscopic Display Based on Two-Layer Lenticular Lenses. Opt. Lett. 2010, 35, 4127–4129.
5. Zhou, Y.; Zhang, J.; Fang, F. Vergence-Accommodation Conflict in Optical See-Through Display: Review and Prospect. Results Opt. 2021, 5, 100160.
6. Hua, H.; Javidi, B. A 3D Integral Imaging Optical See-Through Head-Mounted Display. Opt. Express 2014, 22, 13484–13491.
7. Wang, Q.-H.; Ji, C.-C.; Li, L.; Deng, H. Dual-View Integral Imaging 3D Display by Using Orthogonal Polarizer Array and Polarization Switcher. Opt. Express 2016, 24, 9–16.
8. He, M.-Y.; Zhang, H.-L.; Deng, H.; Li, X.-W.; Li, D.-H.; Wang, Q.-H. Dual-View-Zone Tabletop 3D Display System Based on Integral Imaging. Appl. Opt. 2018, 57, 952–958.
9. Xing, Y.; Wang, Q.-H.; Ren, H.; Luo, L.; Deng, H.; Li, D.-H. Optical Arbitrary-Depth Refocusing for Large-Depth Scene in Integral Imaging Display Based on Reprojected Parallax Image. Opt. Commun. 2019, 433, 209–214.
10. Song, W.; Wang, Y.; Cheng, D.; Liu, Y. Light Field Head-Mounted Display with Correct Focus Cue Using Micro Structure Array. Chin. Opt. Lett. 2014, 12, 060010.
11. He, Z.; Sui, X.; Jin, G.; Cao, L. Progress in Virtual Reality and Augmented Reality Based on Holographic Display. Appl. Opt. 2019, 58, A74–A81.
12. Peng, Y.; Choi, S.; Kim, J.; Wetzstein, G. Speckle-Free Holography with Partially Coherent Light Sources and Camera-in-the-Loop Calibration. Sci. Adv. 2021, 7, eabg5040.
13. Wu, J.; Liu, K.; Sui, X.; Cao, L. High-Speed Computer-Generated Holography Using an Autoencoder-Based Deep Neural Network. Opt. Lett. 2021, 46, 2908–2911.
14. Wang, D.; Liu, C.; Shen, C.; Xing, Y.; Wang, Q.-H. Holographic Capture and Projection System of Real Object Based on Tunable Zoom Lens. PhotoniX 2020, 1, 6.
15. Pi, D.; Liu, J.; Wang, Y. Review of Computer-Generated Hologram Algorithms for Color Dynamic Holographic Three-Dimensional Display. Light Sci. Appl. 2022, 11, 231.
16. Song, W.; Liang, X.; Li, S.; Moitra, P.; Xu, X.; Lassalle, E.; Zheng, Y.; Wang, Y.; Paniagua-Domínguez, R.; Kuznetsov, A.I. Retinal Projection Near-Eye Displays with Huygens' Metasurfaces. Adv. Opt. Mater. 2023, 11, 2202348.
17. Liu, S.; Li, Y.; Zhou, P.; Chen, Q.; Su, Y. Reverse-Mode PSLC Multi-Plane Optical See-Through Display for AR Applications. Opt. Express 2018, 26, 3394–3403.
18. Lee, Y.-H.; Peng, F.; Wu, S.-T. Fast-Response Switchable Lens for 3D and Wearable Displays. Opt. Express 2016, 24, 1668–1675.
19. Hu, X.; Hua, H. Design and Assessment of a Depth-Fused Multi-Focal-Plane Display Prototype. J. Disp. Technol. 2014, 10, 308–316.
20. Liu, S.; Li, Y.; Zhou, P.; Li, X.; Rong, N.; Huang, S.; Lu, W.; Su, Y. A Multi-Plane Optical See-Through Head Mounted Display Design for Augmented Reality Applications. J. Soc. Inf. Disp. 2016, 24, 246–251.
21. Chen, H.-S.; Wang, Y.-J.; Chen, P.-J.; Lin, Y.-H. Electrically Adjustable Location of a Projected Image in Augmented Reality via a Liquid-Crystal Lens. Opt. Express 2015, 23, 28154–28162.
22. Zhang, S.; Zhang, Z.; Liu, J. Adjustable and Continuous Eyebox Replication for a Holographic Maxwellian Near-Eye Display. Opt. Lett. 2022, 47, 445–448.
23. Wang, Z.; Zhang, X.; Lv, G.; Feng, Q.; Wang, A.; Ming, H. Conjugate Wavefront Encoding: An Efficient Eyebox Extension Approach for Holographic Maxwellian Near-Eye Display. Opt. Lett. 2021, 46, 5623–5626.
24. Wang, Z.; Tu, K.; Pang, Y.; Zhang, X.; Lv, G.; Feng, Q.; Wang, A.; Ming, H. Simultaneous Multi-Channel Near-Eye Display: A Holographic Retinal Projection Display with Large Information Content. Opt. Lett. 2022, 47, 3876–3879.
25. Takaki, Y.; Fujimoto, N. Flexible Retinal Image Formation by Holographic Maxwellian-View Display. Opt. Express 2018, 26, 22985–22999.
26. Mi, L.; Chen, C.P.; Lu, Y.; Zhang, W.; Chen, J.; Maitlo, N. Design of Lensless Retinal Scanning Display with Diffractive Optical Element. Opt. Express 2019, 27, 20493–20507.
27. Choi, M.-H.; Shin, K.-S.; Jang, J.; Han, W.; Park, J.-H. Waveguide-Type Maxwellian Near-Eye Display Using a Pin-Mirror Holographic Optical Element Array. Opt. Lett. 2022, 47, 405–408.
28. Wang, Z.; Tu, K.; Pang, Y.; Xu, M.; Lv, G.; Feng, Q.; Wang, A.; Ming, H. Lensless Phase-Only Holographic Retinal Projection Display Based on the Error Diffusion Algorithm. Opt. Express 2022, 30, 46450–46459.
29. Wang, Z.; Tu, K.; Pang, Y.; Lv, G.Q.; Feng, Q.B.; Wang, A.T.; Ming, H. Enlarging the FOV of Lensless Holographic Retinal Projection Display with Two-Step Fresnel Diffraction. Appl. Phys. Lett. 2022, 121, 081103.
30. Chang, C.; Cui, W.; Park, J.; Gao, L. Computational Holographic Maxwellian Near-Eye Display with an Expanded Eyebox. Sci. Rep. 2019, 9, 18749.
31. Zhang, W.; Wang, J.; Tan, C.; Wu, Y.; Zhang, Y.; Chen, N. Large Field-of-View Holographic Maxwellian Display Based on Spherical Crown Diffraction. Opt. Express 2023, 31, 22660–22670.
32. Takaki, Y.; Nago, N. Multi-Projection of Lenticular Displays to Construct a 256-View Super Multi-View Display. Opt. Express 2010, 18, 8824–8835.
33. Liu, L.; Ye, Q.; Pang, Z.; Huang, H.; Lai, C.; Teng, D. Polarization Enlargement of FOV in Super Multi-View Display Based on Near-Eye Timing-Apertures. Opt. Express 2022, 30, 1841–1859.
34. Wan, W.; Qiao, W.; Pu, D.; Chen, L. Super Multi-View Display Based on Pixelated Nanogratings under an Illumination of a Point Light Source. Opt. Lasers Eng. 2020, 134, 106258.
35. Teng, D.; Lai, C.; Song, Q.; Yang, X.; Liu, L. Super Multi-View Near-Eye Virtual Reality with Directional Backlights from Wave-Guides. Opt. Express 2023, 31, 1721–1736.
36. Han, W.; Han, J.; Ju, Y.-G.; Jang, J.; Park, J.-H. Super Multi-View Near-Eye Display with a Lightguide Combiner. Opt. Express 2022, 30, 46383–46403.
37. Ueno, T.; Takaki, Y. Super Multi-View Near-Eye Display to Solve Vergence-Accommodation Conflict. Opt. Express 2018, 26, 30703–30715.
38. Wang, L.; Li, Y.; Liu, S.; Su, Y.; Teng, D. Large Depth of Range Maxwellian-Viewing SMV Near-Eye Display Based on a Pancharatnam-Berry Optical Element. IEEE Photonics J. 2022, 14, 1–7.
39. Zhang, X.; Pang, Y.; Chen, T.; Tu, K.; Feng, Q.; Lv, G.; Wang, Z. Holographic Super Multi-View Maxwellian Near-Eye Display with Eyebox Expansion. Opt. Lett. 2022, 47, 2530–2533.
40. Wang, Z.; Zhang, X.; Lv, G.; Feng, Q.; Ming, H.; Wang, A. Hybrid Holographic Maxwellian Near-Eye Display Based on Spherical Wave and Plane Wave Reconstruction for Augmented Reality Display. Opt. Express 2021, 29, 4927–4935.
41. Cao, X.; Sang, X.; Chen, Z.; Zhang, Y.; Leng, J.; Guo, N.; Yan, B.; Yuan, J.; Wang, K.; Yu, C. Fresnel Hologram Reconstruction of Complex Three-Dimensional Object Based on Compressive Sensing. Chin. Opt. Lett. 2014, 12, 080901–080904.
42. Gilles, A.; Gioia, P.; Cozot, R.; Morin, L. Hybrid Approach for Fast Occlusion Processing in Computer-Generated Hologram Calculation. Appl. Opt. 2016, 55, 5459–5470.
43. Gilles, A.; Gioia, P.; Cozot, R.; Morin, L. Computer Generated Hologram from Multiview-plus-Depth Data Considering Specular Reflections. In Proceedings of the 2016 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Seattle, WA, USA, 11–15 July 2016; pp. 1–6.
44. Wang, Z.; Zhang, X.; Tu, K.; Lv, G.; Feng, Q.; Wang, A.; Ming, H. Lensless Full-Color Holographic Maxwellian Near-Eye Display with a Horizontal Eyebox Expansion. Opt. Lett. 2021, 46, 4112–4115.
45. Wang, Z.; Lv, G.; Feng, Q.; Wang, A.; Ming, H. Enhanced Resolution of Holographic Stereograms by Moving or Diffusing a Virtual Pinhole Array. Opt. Express 2020, 28, 22755–22766.
46. Wang, Z.; Lv, G.Q.; Feng, Q.B.; Wang, A.T.; Ming, H. Resolution Priority Holographic Stereogram Based on Integral Imaging with Enhanced Depth Range. Opt. Express 2019, 27, 2689–2702.
47. Wang, Z.; Chen, R.S.; Zhang, X.; Lv, G.Q.; Feng, Q.B.; Hu, Z.A.; Ming, H.; Wang, A.T. Resolution-Enhanced Holographic Stereogram Based on Integral Imaging Using Moving Array Lenslet Technique. Appl. Phys. Lett. 2018, 113, 221109.
48. Wang, Z.; Lv, G.; Feng, Q.; Wang, A.; Ming, H. Simple and Fast Calculation Algorithm for Computer-Generated Hologram Based on Integral Imaging Using Look-up Table. Opt. Express 2018, 26, 13322–13330.
Figure 1. (a) Human eye focusing on the display panel. (b) Human eye focusing on a point on the 3D objects.
Figure 2. (a) Conventional parallax-capturing experiment (infinite DOF cameras are used to ensure that every object is clear); (b) conventional hologram generation with one IRP.
Figure 3. (a) Simulation results of the conventional holographic SMV display; (b) experimental results of the conventional holographic SMV Maxwellian display; (c) experimental system.
Figure 4. Principle of the limited DOF.
Figure 5. (a) The proposed process of capturing parallax and depth maps; (b) parallax image segmentation and projection process; (c) proposed hologram generation with multiple IRPs.
Figure 6. (a) Principle of non-overlapping sub-parallax images on the IRP. (b) Position offset of the parallax images on the IRP.
Figure 7. (a) When the diffraction field size is smaller than the projection zone size: S_i < S_i'. (b) When the diffraction field size is larger than the projection zone size: S_i > S_i'.
Figure 8. Simulation and experimental results of the proposed method.
Figure 9. Comparison of PSNR of the proposed method with the conventional method.