Article

A Stray Light Detection Model for VR Head-Mounted Display Based on Visual Perception

1 Department of Cosmetic Science, Chang Gung University of Science and Technology, Taoyuan 33303, Taiwan
2 Graduate Institute of Color and Illumination Technology, National Taiwan University of Science and Technology, Taipei 10607, Taiwan
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(13), 6311; https://doi.org/10.3390/app12136311
Submission received: 9 May 2022 / Revised: 15 June 2022 / Accepted: 19 June 2022 / Published: 21 June 2022
(This article belongs to the Collection Virtual and Augmented Reality Systems)

Abstract

In recent years, the general public and the technology industry have favored the stereoscopic vision, immersive experience, and real-time visual information reception of virtual reality (VR) and augmented reality (AR). Their device carrier, the Head-Mounted Display (HMD), is recognized as one of the next generation's most promising computing and communication platforms. The HMD is a virtual-image optical display device that combines an optical lens module with binocular displays. The visual impact it brings is much more complicated than that of a traditional display and also influences image quality performance. This research investigated the visual threshold of stray light for three VR HMD devices and proposes a qualitative model derived from psychophysical experiments and image measurements on the VR devices. The threshold data recorded in the psychophysical stray light perception experiment were used as the training target. The VR display image captured by a wide-angle camera was processed through a series of image processing procedures to extract variables in the region of interest. Machine learning algorithms were then used to establish an evaluation method for stray light perceived by the human eye. Four supervised learning algorithms, including K-Nearest Neighbor (KNN), Logistic Regression (LR), Support Vector Machine (SVM), and Random Forest (RF), were compared. The accuracy of the established model was about 90% for all four algorithms. The results also showed that data can be labeled with different percentages of the threshold on demand, which supports more finely subdivided inspection specifications in the future. This research aimed to provide a fast and effective qualitative evaluation method for stray light to serve as a basis for future HMD optical system design and quality control. Stray light evaluation can thus become one of the critical indicators of image quality and be applicable to VR and AR content design.

1. Introduction

Given recent requirements for consumer electronics in multimedia communications, applications of display technologies and communication devices have rapidly become universal. This has led to various digital products, such as personal computers, notebook computers, and smartphones, and to the rise of the associated software and hardware industries and supply chains. In recent years, the stereo vision, immersive experience, and real-time reception of visual information of virtual reality (VR) and augmented reality (AR) have gained popularity among the general public and in manufacturing industries. The HMD is considered to have the most potential as the computing and communication platform of the next generation [1,2,3]. Compared with the most popular computing platforms on the market, such as personal computers and smartphones, VR/AR HMD products depend far more heavily on the optoelectronics industry because of its strengths in optical design architectures [4]. The VR/AR HMD, an optical device combining optical lenses and binocular displays, can offer a wide visual field and present information as stereoscopic images. However, it is highly reliant on human visual perception and customized design.
Although both VR and AR HMDs are virtual imaging devices, there are still differences in application and design. VR mainly focuses on the immersive experience. Therefore, to immerse the user in the virtual world as completely as possible, a large field of view (FOV) is applied to cover the user's entire visual field [5,6,7,8,9]. Soft black materials, such as sponges, silicone gaskets, and soft plastics, are placed in the non-display area to avoid ambient light interference. Commercially available VR HMDs generally cover most of the user's field of view with the display range, and the display panel is placed directly in front of the user's eyes. Many recent studies have shown that VR can be applied in various fields to improve the quality of human life. In medicine [10], VR training was applied for medical staff to improve techniques and training in the correct use of medical equipment, with safety and health in mind. In education, it has been used for sketching and modeling in an immersive environment to introduce modeling concepts and increase understanding of the spatial connection between an object and its 2D projections [11]. VR has been adopted in inclusive teaching to simulate the difficulties experienced by dyslexic students, to deliver materials more engagingly, and to gather required information more easily [12]. In cultural heritage preservation, VR combined with gesture-based interaction enables touchless, distant interaction of users with reconstructed artifacts [13]. Moreover, most applications pertain to human–computer interaction technology, with VR as the interface between computer technology and people as users [14,15,16]. AR displays, in contrast, superimpose auxiliary information on real-world environmental information as the core of the application. Thus, the display panel, or light engine module, is placed above, or to the left or right of, the user's eyes, according to the application, cost, volume, counterweight and position, and other inherent constraints, to prevent obscuration of the field of view. Commercially available AR display architectures are usually presented as transparent displays (with the transmittance of ambient light adjusted according to optical efficiency and image contrast) with a small FOV of less than 40 degrees [17].
From the viewpoint of vision and optics, regardless of the differences in visual field and light transmittance between VR and AR, VR/AR HMDs still face many common problems, such as vergence-accommodation conflict (VAC), the screen door effect (SDE) [18], image distortion [19], pupil swim, a small eye-box [20], stray light interference, chromatic dispersion, non-uniform color and brightness [21], latency [22], cybersickness, and motion sickness [23,24]. Because of their larger field of view, VR users may encounter these phenomena more easily. Hence, the variance in image quality of VR HMDs and its influence on, and acceptance by, human visual perception are worthy of attention, research, and discussion, and the findings are expected to extend to AR display applications.
The VR viewing optical module is the critical core of the VR HMD system, and control of its optical quality is essential to ensure visual quality. Extensive research and well-established standard documents already exist in the consumer electronics industry for controlling the optical quality of camera modules. The signal receiver is the most fundamental difference between camera modules and VR. Digital camera modules have a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) sensor, and the captured signal can be further processed by software and firmware, based on the requirements and image location, to perform noise suppression and other adjustments. For the visual optics of VR, the signal receiver is human vision. The eyeball passively receives all the light output by the VR viewing optical module and involves a series of physiological reflexes: it adapts to the amount of incident light, accommodates to focus on image surfaces from roughly ten centimeters to infinity, trembles unconsciously, and can quickly scan the entire visual field [25,26,27,28,29]. These differences make it difficult to detect and suppress stray light when producing VR display modules.
Stray light generally refers to all rays generated along unexpected paths in an optical system, or to light noise that the system does not require, and is usually caused by multiple interface reflections, refractions, or material scattering. Most of today's HMDs use a Fresnel lens acting as a convex imaging lens, which increases refractive power, reduces weight and volume, and saves considerable material cost. However, in the mass production of the Fresnel sawtooth structure, in addition to the expected loss of optical efficiency caused by the ineffective surfaces, there is also the problem of connection-surface inclination, which increases the light-receiving area of the ineffective region and thus generates more unwanted stray light. The farther the sharp corners of the Fresnel sawtooth deviate from the ideal sharp point, the more stray light is generated. The stray light produced by the Fresnel lens is very easy to notice, especially when the brightness and contrast of the displayed pattern are relatively high; VR developers and consumers therefore dub it "god rays" or the "ring effect".
Observing the current development trends of many electronic products, it is not difficult to infer that expectations of optical system performance in visual optical modules will inevitably increase. According to the Fresnel equations [30], the qualitative and quantitative aspects of stray light can be expected to become too important to ignore as the complexity of the lenses, films, and other components of visual optical modules and their optical transmission interfaces increases [31].
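As a minimal, illustrative sketch (not taken from the paper), the Fresnel reflectance at normal incidence shows why every additional uncoated optical interface contributes a few percent of reflected light that can end up as stray light; the refractive index of PMMA used here is an assumption for a typical Fresnel lens material.

```python
# Illustrative only: Fresnel reflectance of an uncoated interface at normal
# incidence, R = ((n1 - n2) / (n1 + n2))^2.
def fresnel_reflectance_normal(n1: float, n2: float) -> float:
    """Reflectance of an n1 -> n2 interface at normal incidence."""
    return ((n1 - n2) / (n1 + n2)) ** 2

if __name__ == "__main__":
    r = fresnel_reflectance_normal(1.0, 1.49)  # air to PMMA (assumed lens material)
    print(f"One uncoated surface reflects about {r:.1%} of the incident light")
    # Four such surfaces (e.g., two extra lens elements) divert roughly:
    print(f"Four surfaces divert roughly {1 - (1 - r) ** 4:.1%} into potential stray paths")
```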
The performance of an optical imaging system is reduced by the intrusion of stray light from inside and outside the system's field of view. Most research results have been contributed in the context of cameras [32,33,34,35,36]. The ISO 12231 standard clearly describes the sources of stray light in electronic imaging: foreign material between the light source and the lens, such as fog or dirt on a windshield or on the lens surface, and inter-reflection and non-uniformity at the surfaces of optical components, which can be reduced by an anti-reflection (AR) coating. Stray light may also be induced by the lens, aperture, shutter, and parts of the exposure mechanism in the camera, by spherical and coma aberrations of the lens, and by light leakage from the system.
Previous studies have designated various types of stray light generated by different mechanisms, such as lens flare, aperture ghosting, and veiling glare, together with quantitative standards such as ISO 9358 [37], ISO 18844 [38], and IEC 62676-5 [39]. The following articles revealed the adverse effects of stray light in different optical systems and proposed corresponding improvements. The stray light of the optical viewing system of a helmet-mounted display may distract visual attention; a far-field model in optical simulation was therefore adopted to locate the position where the transmission paths of the image light and the glare separate, so that the glare could be shielded [40]. For astronomical or surveillance camera systems, which are often placed outdoors, glare from water droplets attached to the lens surface is likely to blur the image; the entrance position of the light affected by the droplets can be detected and closed by placing an LCD shutter array at the position of the camera's light barrier [41]. Using a light-field camera to capture the 4D light field, instead of a general 2D photograph, can effectively separate stray light signals originating from different positions, and image synthesis can then remove the stray light from the original 2D image [42].
Compared with camera stray light, which can be suppressed by physical shielding, absorption, light attenuation, polarization control, and image signal post-processing, the research methods available for VR/AR stray light are more limited. Some researchers designed a VR optical module based on a polarized catadioptric architecture. In the simulation software, an ideal aberration-free lens was used in place of the human eye, and the image surface served as a simplified retina. The analysis was carried out on the three primary ghost paths caused by interface reflections and imperfections in the degree of polarization. The spot size of the ghost image on the image surface was included in the merit function for surface optimization, so that the stray light energy was evenly distributed as low-level noise during imaging on the retina, reducing the chance of its being noticed [43].
At present, the primary sources of stray light in VR are all generated within the effective diameter of the lenses of the optical system. Since the image light and the stray light originate from different positions, there is still no way to remove it through image processing.
Therefore, designing a fast and efficient stray light detection method based on human perception was the primary purpose of this research. The study combined psychophysical experiments and image-based measurements of three VR devices to establish a set of stray light detection methods based on visual perception. The stray light threshold can be clearly defined by introducing human factors and applying artificial intelligence (AI). It can also be used to design measurement equipment and to provide optimization targets for the VR viewing optical designer when setting the merit function. The study makes the quantitative evaluation of stray light a critical indicator of image quality and of VR/AR content design.

2. Methods

A total of 40 observers with corrected hyperopia or near-sightedness participated in the study. Their binocular refractive difference was limited to no more than 200 degrees to avoid the image-size differences induced by a significant difference in spectacle lens magnification. The recruited observers, 22 males and 18 females, had experience with more than two VR HMDs. They were aged between 23 and 35 years and had normal color vision. To ensure the accuracy of the experiment, on the day before the task the observers were required to avoid alcohol, excessive caffeine intake, staying up late, etc. Any glasses worn could not have a blue-light filtering function, and contact lenses could not dilate the pupils or be embedded with any style of pattern.
In the psychophysical experiment, three VR HMDs were selected, as shown in Figure 1: Oculus Quest, Vive Focus Plus, and Vive Focus. The VR viewing optical modules of all three devices were designed with Fresnel lenses, and the monocular display module was an OLED display with a resolution of 1440 × 1600 pixels. The advantage of a VR HMD with an OLED display as the light source is that a partially black background image does not suffer from the light leakage of an LCD backlight module, so the experiment could exclude stray light generated by the display element itself. The measurement was carried out by capturing the image output of the VR viewing optical module. The FOV of current mainstream VR HMD viewing optical modules is about 110°, so the FOV range acquired in the experiment had to be at least as large. Therefore, a wide-angle lens (Theia ML-183-M) with a horizontal field of view (HFOV) of up to 115° was used with a C-mount industrial camera (IDS UI-3590CP-C-HQ). The power and operation of the camera were controlled through the USB port of a personal computer. The experiment and measurements were conducted in a low-light dark environment at a temperature of 25 °C and a humidity of 60%.
Three independent variables containing stimuli were investigated in the research to better understand how they affect the threshold of stray light perceived by the observer: visual field, grayscale, and VR HMD device. The visual field stimulus was the radius of the ring-shaped light source displayed on the image; the size of the stimulus was defined by adding the inside and outside diameters in pixels and dividing by 4. For example, the size of the largest visual field stimulus was 750 ((1550 pixels of outside diameter + 1450 pixels of inside diameter)/4), as shown in Figure 2. Sixteen ring patterns were used, with sizes from 0 to 750 in steps of 50, and the radial width of each ring (half the difference between the outside and inside diameters) was 50 pixels. Figure 2 illustrates the images with different visual field stimuli. For the grayscale stimuli, the color depth of all the VR HMDs was 8-bit RGB; thus, 20 grayscale levels ranging from 10 to 200 in steps of ten were designed, as shown in Figure 3. The test image background was set to 0 to exclude stray light from the device itself. Both the visual field and grayscale stimuli were controlled by the images presented on the VR HMD. The dependent variable was the threshold of the stray light perceived by the human eye: according to the observer's subjective evaluation, the grayscale value at which stray light became visible was recorded for each combination of independent variables.
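For reference, the following is a minimal sketch (not the authors' released code) of how such a ring stimulus could be rendered with NumPy and Pillow; the 1600 × 1600 px canvas and the 50 px radial width follow the description above, while the function name and the handling of the size-0 case are our assumptions.

```python
# Sketch: render one black-background ring stimulus whose mean radius equals
# the "visual field" size and whose radial width is 50 px, at a given 8-bit
# grayscale level.
import numpy as np
from PIL import Image

def make_ring_stimulus(size: int, gray: int, resolution: int = 1600,
                       ring_width: int = 50) -> Image.Image:
    """size = (outside radius + inside radius) / 2 in pixels; gray in 0-255."""
    y, x = np.ogrid[:resolution, :resolution]
    c = (resolution - 1) / 2.0
    r = np.sqrt((x - c) ** 2 + (y - c) ** 2)          # radial distance from the centre
    inner, outer = size - ring_width / 2, size + ring_width / 2
    img = np.zeros((resolution, resolution), dtype=np.uint8)
    img[(r >= inner) & (r <= outer)] = gray           # paint the ring
    return Image.fromarray(img)

# Example: the largest visual field (size 750) at grayscale level 100
make_ring_stimulus(750, 100).save("ring_750_gray100.png")
```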
In total, 960 images (16 visual field stimuli × 20 grayscale stimuli × 3 VR HMD devices) with a resolution of 1600 × 1600 pixels were tested in the experiment. The parameters are listed in Table 1. The VR image content for the three devices was identical and was built with the Oculus and Vive Wave software development kits (SDKs) and the Unity 3D engine. Before starting the experiment, the observers were brought into a completely dark room and adapted to the environment for 10 min. After adaptation, an experimental assistant helped the observers wear the VR HMD accurately and explained the testing procedure and the points requiring attention, such as switching the images, recognizing the stray light, and responding.
As shown in Figure 4, the observer's HFOV was determined by assessing the range of the visible scale on the rendering pattern shown on the VR HMD. To ensure consistency, each observer's HFOV was recorded before the judgments and measured again after completing all evaluations, and the difference had to be less than 1 degree; this comparison verified the stability of wearing the VR HMD. For the stray light assessment, the grayscale stimulus values were shown from high to low, and the observers were asked to respond whether stray light was present in the image. The staircase method, which has been widely applied in psychophysics, was adopted to obtain the just noticeable difference (JND) as the threshold. It is an adjustment and adaptation method: the signal strength is increased or decreased by a fixed step, and the observer repeatedly evaluates it until the reported JND level converges and stabilizes as the observer's perceptual threshold; a simplified sketch of this procedure is given below. A full-screen image with R, G, and B values equal to 15 was displayed for one minute between two test images to eliminate visual afterimages.
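The following is an illustrative sketch of a descending staircase of this kind; the step size, bounds, and stopping rule are assumptions rather than the authors' exact protocol, and `ask_observer` stands in for the observer's yes/no response to a displayed grayscale level.

```python
# Sketch of a simple staircase: lower the grayscale while stray light is still
# seen, raise it after a "not seen" response, and average the last reversal
# levels as the just noticeable difference (JND).
def staircase_threshold(ask_observer, start=200, step=10, max_reversals=6):
    """ask_observer(gray) -> True if stray light is perceived at this level."""
    level, direction, reversals = start, -1, []
    while len(reversals) < max_reversals:
        seen = ask_observer(level)
        new_direction = -1 if seen else +1            # lower if seen, raise if not
        if new_direction != direction:                # a change of direction is a reversal
            reversals.append(level)
            direction = new_direction
        level = max(10, min(200, level + direction * step))
    return sum(reversals[-4:]) / 4                    # mean of the last reversals = JND
```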

3. Results

The 40 observers completed the experiment according to the above procedure. This study separately recorded the HFOV data and the threshold of just noticeable stray light when using the three VR HMDs. The coefficient of variation (CV) was adopted to estimate the inter-observer variability, i.e., the agreement between two data sets; the larger the CV value, the poorer the agreement. The CV was formulated as Equation (1). In the equation, ΔV is the estimate of a stimulus from an individual observer, ΔE is the geometric mean of ΔV over all observers, ΔĒ is the arithmetic mean of ΔE over all N evaluated stimuli, and the scaling factor f adjusts the data between ΔV and ΔE. Fleiss' kappa was also used to analyze the level of agreement in judgments between observers. Table 2 shows the mean observer variability for each VR HMD. The Fleiss' kappa values were 0.671, 0.851, and 0.784 for Oculus Quest, Vive Focus Plus, and Vive Focus, respectively, indicating good levels of inter-rater agreement. The visual responses were analyzed after discarding the data of observers whose CV value was larger than 20%; hence, the valid data sets were 93%, 95%, and 85% for Oculus Quest, Vive Focus Plus, and Vive Focus, respectively.
CV = 100 × √[ (1/N) Σ(ΔE − f × ΔV)² / (ΔĒ)² ],  f = Σ(ΔE × ΔV) / Σ(ΔV)²    (1)
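As a small worked example (a sketch using the reconstructed form of Equation (1); the sample numbers are hypothetical), the CV between one observer's thresholds and the mean-observer thresholds can be computed as follows.

```python
# Sketch: scaling factor f and coefficient of variation between an observer's
# thresholds (dV) and the mean-observer thresholds (dE), per Equation (1).
import numpy as np

def coefficient_of_variation(dE: np.ndarray, dV: np.ndarray) -> float:
    f = np.sum(dE * dV) / np.sum(dV ** 2)             # scaling factor f
    return 100 * np.sqrt(np.mean((dE - f * dV) ** 2) / np.mean(dE) ** 2)

# Hypothetical thresholds for four stimuli: a close match gives a small CV.
mean_thresholds = np.array([40.0, 35.0, 30.0, 28.0])
observer_thresholds = np.array([42.0, 33.0, 31.0, 27.0])
print(f"CV = {coefficient_of_variation(mean_thresholds, observer_thresholds):.1f}%")
```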
The maximum rendering HFOV of the three VR HMDs was found using the free development tool Vysor: 92° for Oculus Quest, 80° for Vive Focus Plus, and 79° for Vive Focus, as shown in Figure 5. Even though the three VR HMDs utilized OLED displays of the same size and resolution, the scales and proportions on the screen were different, indicating apparent differences in the design and adjustment of the lens and display. This phenomenon was also reflected in the subjects' actual HFOV, as shown in Figure 6. Based on the magnitude reported by most observers, the average HFOVs practically experienced with Oculus Quest, Vive Focus Plus, and Vive Focus were 74°, 73°, and 72°, respectively. Oculus Quest was the furthest from its theoretical value, with a difference of up to 18°, while the difference between the actual and theoretical values for Vive Focus was 7°. This phenomenon may be caused by the distortion of the lens optical design, the matching between the eye-box and the user's eye relief, the size of the black area at the edge of the display, and different adjustments of the viewing frustum.
The mean stray light threshold for the three VR HMD devices at different visual field sizes was analyzed using the visual data that passed the observer variability examination, as shown in Figure 7. The results indicated that Oculus Quest reached the maximum number of visual field stimuli, whereas the numbers for Vive Focus Plus and Vive Focus were equivalent. In the experiments on Vive Focus Plus and Vive Focus, some image scenes (650, 700, and 750) directly exceeded the display range that most subjects could observe; therefore, only the data from 0 to 600 were retained. Comparing the overall image output performance, a higher stray light threshold means that the observer found it more difficult to detect the stray light, even though the three VR HMDs had essential differences in display luminance conditions. The differences might be caused by the lens design, correction method, pre-set luminance, frame rate, etc. From Figure 7, it can be seen that Vive Focus Plus had a higher threshold on average, and Oculus Quest had higher values in parts of the visual field. However, Vive Focus showed a low threshold in every visual field. This performance seems to accord with the trend of product innovation over time. Additionally, the comparison of Vive Focus Plus and Vive Focus, which have similar HFOVs, showed that their stray light distribution characteristics were almost opposite: the visual field at which Vive Focus had a higher value was around 250, while the same position represented a lower area for Vive Focus Plus. Hence, when considering the correlation between HFOV and stray light threshold, comparing different models of VR HMDs has no apparent physical meaning.

4. Modeling

The stray light evaluation model is a calculation mechanism that further connects the stray light threshold obtained from psychophysical experiments with the image data taken by the camera to generate a corresponding relationship. It is the final critical step in establishing a stray light detection method based on human perception. This section describes the feature selection and extraction of the image, training, and performance in detail to establish the stray light evaluation model. The approach of feature extraction in image processing is shown in Figure 8.
By applying an image processing algorithm, the outside radius and gray level of the circular stimuli and the gray level and standard deviation of the stray light image were extracted as the inputs of the machine learning model. The outside radius and gray level of the stimuli were selected because they are the independent variables of the psychophysical experiment. The gray level and standard deviation of the stray light were selected because stray light may exist outside the area occupied by the image pattern even when the human eye cannot detect it. In addition, the standard deviation of the stray light relates to the concept of stray light contrast.
The features of the stray light image were obtained through the following steps. First, we resized the input images to 419 × 368 pixels with cubic interpolation and selected a region of interest (ROI) whose half-width and half-height were 184 pixels from the central point, to avoid the impact of lens distortion. After selecting the ROI, the following procedures focused on acquiring the masks of the stimulus and stray light regions, which are complementary, and then multiplying the masks with the original image to separate the stimulus image and the stray light image. Second, we converted the original images to the YCbCr color space and obtained the grayscale image (Y channel). After applying histogram equalization to increase the global contrast, the image was processed by edge detection and binary thresholding. A Canny edge detector found the edges, with thresholds of 250 and 100 for the upper and lower limits, to exclude non-edge pixels. Since the edges detected by the Canny algorithm are discontinuous, they would obviously affect the subsequent contour retrieval; therefore, morphological image processing was adopted to extend and connect them. Dilation with a small elliptical structuring element (3 × 3) was beneficial for repairing the edges of the stimulus image. Third, we adopted contour retrieval to find the contours in the dilated images, binarized the pixels enclosed by each contour to acquire the masks of the stimulus and stray light parts, and then calculated the outside radius of the circular stimuli. The masks could then be multiplied with the captured image to extract the stimulus and stray light information. Finally, the gray level of the stimuli and the gray level and standard deviation of the stray light were calculated from these valid data and input into the classification algorithm for training the stray light evaluation model. The obtained features could be visualized using the t-Distributed Stochastic Neighbor Embedding (t-SNE) technique, which explores the feature distribution of the dataset and visualizes the feature representation of the samples in two dimensions. Figure 9 illustrates the result of the t-SNE projection.
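For readers who want to reproduce the pipeline, the following is a condensed sketch using OpenCV under stated assumptions: it follows the listed steps (resize, central ROI, Y channel, histogram equalization, Canny, dilation, contour retrieval, masking), but the way the stimulus contour is distinguished from the stray light region (largest contour) and the function name are our simplifications, not the authors' released implementation.

```python
# Sketch: extract (outside radius, stimulus gray level, stray light gray level,
# stray light standard deviation) from one captured image.
import cv2
import numpy as np

def extract_features(path: str):
    img = cv2.imread(path)
    img = cv2.resize(img, (419, 368), interpolation=cv2.INTER_CUBIC)   # width x height
    h, w = img.shape[:2]
    cx, cy = w // 2, h // 2
    roi = img[cy - 184:cy + 184, cx - 184:cx + 184]                    # central 368 x 368 ROI
    y = cv2.cvtColor(roi, cv2.COLOR_BGR2YCrCb)[:, :, 0]                # luma (Y) channel
    y_eq = cv2.equalizeHist(y)                                         # boost global contrast
    edges = cv2.Canny(y_eq, 100, 250)                                  # lower / upper thresholds
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    edges = cv2.dilate(edges, kernel)                                  # reconnect broken edges
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    stim_mask = np.zeros_like(y)
    radius = 0.0
    if contours:
        largest = max(contours, key=cv2.contourArea)                   # assume stimulus = largest contour
        cv2.drawContours(stim_mask, [largest], -1, 255, thickness=-1)  # filled stimulus mask
        (_, _), radius = cv2.minEnclosingCircle(largest)               # outside radius of the ring
    stray_mask = cv2.bitwise_not(stim_mask)                            # complementary region
    stim_gray = cv2.mean(y, mask=stim_mask)[0]
    stray_mean, stray_std = cv2.meanStdDev(y, mask=stray_mask)
    return radius, stim_gray, float(stray_mean[0][0]), float(stray_std[0][0])
```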
The initial step in building a machine learning model was to determine the inputs. The outside radius and gray level of the circular stimuli and the gray level and standard deviation of the stray light image were utilized to describe the relationship between the input parameters and the visual response; these four features were selected as the inputs for the training procedure. The outputs were the records of whether the observer perceived the stray light: a sample was labelled 0 when no stray light was perceived and 1 when stray light was present. Consequently, the task was treated as a binary classification problem. Four supervised learning algorithms, including K-Nearest Neighbor (KNN), Logistic Regression (LR), Support Vector Machine (SVM), and Random Forest (RF), were tested in the study.
In the field of machine learning, k-fold cross-validation, especially 10-fold cross-validation, is most commonly used to verify the performance of a model. The data are randomly split into k subsets; one subset is chosen for testing, and the remaining k − 1 subsets are used for training. The procedure is repeated until each subset has been used as the testing set. The total number of original data samples used for training the model was 440; with k = 10, the training data accounted for 396 samples and the validation data for 44. Confusion matrices, including the true positive count (TP), false positive count (FP), false negative count (FN), and true negative count (TN), are mainly used to evaluate a classification model's performance. To describe the model's performance, TP, TN, FP, and FN are frequently combined into several indices, such as accuracy, recall, true negative rate, and precision, formulated as Equations (2)–(5); a sketch of this evaluation procedure follows the equations.
Accuracy = (TP + TN) / (TP + TN + FP + FN)    (2)
Recall = TP / (TP + FN)    (3)
True Negative Rate = TN / (TN + FP)    (4)
Precision = TP / (TP + FP)    (5)
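The following is a minimal sketch of such a cross-validated comparison with scikit-learn; the hyperparameters, feature scaling step, and random seeds are our assumptions, since the paper does not specify them, and X and y stand for the four extracted features and the binary visibility labels.

```python
# Sketch: compare the four classifiers with stratified 10-fold cross-validation;
# the scorers correspond to Equations (2)-(5).
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import make_scorer, recall_score

def compare_models(X: np.ndarray, y: np.ndarray) -> None:
    models = {
        "KNN": KNeighborsClassifier(),
        "Logistic Regression": LogisticRegression(max_iter=1000),
        "SVM": SVC(),
        "Random Forest": RandomForestClassifier(random_state=0),
    }
    scoring = {
        "accuracy": "accuracy",                                # Equation (2)
        "recall": "recall",                                    # Equation (3)
        "tnr": make_scorer(recall_score, pos_label=0),         # Equation (4): recall of class 0
        "precision": "precision",                              # Equation (5)
    }
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    for name, model in models.items():
        pipe = make_pipeline(StandardScaler(), model)          # scale the 4 features before fitting
        scores = cross_validate(pipe, X, y, cv=cv, scoring=scoring)
        print(name, {k: f"{scores['test_' + k].mean():.3f}" for k in scoring})
```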
The model performance obtained with the four algorithms is shown in Table 3. All models could predict the visible stray light of the three types of VR HMDs with high accuracy and an acceptable true negative rate, whereas the recall still needs improvement. The accuracy of the four approaches was about 90%, and the logistic regression algorithm performed best. Nevertheless, more experiments need to be conducted and more VR HMDs tested to build a comprehensive model.

5. Conclusions

In summary, this research aimed to provide a qualitative evaluation method for effectively identifying stray light in VR display modules, with more complex factors to be added gradually for quantitative grading and standardization in the future, as a reference for VR HMD optical system design and quality control. A psychophysical experiment was carried out to record the subjects' thresholds for perceiving stray light, and the observer responses were treated as the target for training the models. A wide-angle camera was used to capture the image of the VR display module, exploiting the common characteristics of the human eye and the wide-angle camera. We applied a series of image processing procedures to extract features from the region of interest (ROI) and used the acquired data as input variables for training. Four machine learning models were applied to classify whether the stray light could be perceived. Regarding the predictive models' performance, the logistic regression algorithm predicted the ground truth more precisely than the other models. Based on these results, more variables can be added to future experimental designs, and the stray light detection method can be upgraded to estimate the image quality of VR HMD devices with higher reliability.

Author Contributions

H.-C.L., data analysis, modeling, and writing—manuscript preparation. M.-C.T., conducting experiments, data analysis, concept provider, and writing—manuscript preparation. T.-X.L., supervision and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Technology (MOST), grant number MOST 110-2222-E-255-001.

Institutional Review Board Statement

Ethical review and approval were waived for this study because they were not required for the project, as the psychophysical experiment was conducted off-campus.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The image dataset presented in this study is available on request from the authors. The code that supports the research is openly available at https://github.com/HungChungLi/Stray-Light-Detection-Model.git (accessed on 8 June 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Brooks, F.P. What’s real about virtual reality? IEEE Comput. Graph. Appl. 1999, 19, 16–27. [Google Scholar] [CrossRef]
  2. Meehan, M.; Insko, B.; Whitton, M.; Brooks, F.P., Jr. Physiological measures of presence in stressful virtual environments. ACM Trans. Graph. 2002, 21, 645–652. [Google Scholar] [CrossRef]
  3. Bowman, D.A.; McMahan, R.P. Virtual reality: How much immersion is enough? Computer 2007, 40, 36–43. [Google Scholar] [CrossRef]
  4. Rolland, J.; Cakmakci, O. Head-worn displays: The future through new eyes. Opt. Photonics News 2009, 20, 20–27. [Google Scholar] [CrossRef]
  5. Swaminathan, K.; Sato, S. Interaction design for large displays. Interactions 1997, 4, 15–24. [Google Scholar] [CrossRef]
  6. Simmons, T. What’s the optimum computer display size? Ergon. Des. 2001, 9, 19–25. [Google Scholar] [CrossRef]
  7. Toet, A. Effects of Field-of-View Restrictions on Speed and Accuracy of Manoeuvring. Percept. Mot. Ski. 2007, 105, 1245–1256. [Google Scholar] [CrossRef]
  8. Czerwinski, M.; Smith, G.; Regan, T.; Meyers, B.; Robertson, G.G.; Starkweather, G.K. Toward characterizing the productivity benefits of very large displays. In Interact; IOS Press: Amsterdam, The Netherlands, 2003; Volume 3, pp. 9–16. [Google Scholar]
  9. Sabri, A.J.; Ball, R.G.; Fabian, A.; Bhatia, S.; North, C. High-resolution gaming: Interfaces, notifications, and the user experience. Interact. Comput. 2007, 19, 151–166. [Google Scholar] [CrossRef]
  10. Boros, M.; Sventekova, E.; Cidlinova, A.; Bardy, M.; Batrlova, K. Application of VR Technology to the Training of Paramedics. Appl. Sci. 2022, 12, 1172. [Google Scholar] [CrossRef]
  11. Conesa-Pastor, J.; Contero, M. EVM: An Educational Virtual Reality Modeling Tool; Evaluation Study with Freshman Engineering Students. Appl. Sci. 2021, 12, 390. [Google Scholar] [CrossRef]
  12. Zingoni, A.; Taborri, J.; Panetti, V.; Bonechi, S.; Aparicio-Martínez, P.; Pinzi, S.; Calabrò, G. Investigating Issues and Needs of Dyslexic Students at University: Proof of Concept of an Artificial Intelligence and Virtual Reality-Based Supporting Platform and Preliminary Results. Appl. Sci. 2021, 11, 4624. [Google Scholar] [CrossRef]
  13. Popovici, D.-M.; Iordache, D.; Comes, R.; Neamțu, C.G.D.; Băutu, E. Interactive Exploration of Virtual Heritage by Means of Natural Gestures. Appl. Sci. 2022, 12, 4452. [Google Scholar] [CrossRef]
  14. Katona, J. A Review of Human–Computer Interaction and Virtual Reality Research Fields in Cognitive InfoCommunications. Appl. Sci. 2021, 11, 2646. [Google Scholar] [CrossRef]
  15. Jiang, S.; Wang, L.; Dong, Y.; Zhou, M. Application of Virtual Reality Human-Computer Interaction Technology Based on the Sensor in English Teaching. J. Sens. 2021, 2021, 2505119. [Google Scholar] [CrossRef]
  16. Adnan, M.; Sardaraz, M.; Tahir, M.; Dar, M.N.; Alduailij, M.; Alduailij, M. A Robust Framework for Real-Time Iris Landmarks Detection Using Deep Learning. Appl. Sci. 2022, 12, 5700. [Google Scholar] [CrossRef]
  17. Ens, B.M.; Finnegan, R.; Irani, P.P. The personal cockpit: A spatial interface for effective task switching on head-worn displays. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Toronto, ON, Canada, 26 April–1 May 2014; pp. 3171–3180. [Google Scholar]
  18. Buker, T.J.; Vincenzi, D.A.; Deaton, J.E. The effect of apparent latency on simulator sickness while using a see-through helmet-mounted display: Reducing apparent latency with predictive compensation. Hum. Factors 2012, 54, 235–249. [Google Scholar] [CrossRef]
  19. Lin, J.W.; Duh, H.B.L.; Parker, D.E.; Abi-Rached, H.; Furness, T.A. Effects of field of view on presence, enjoyment, memory, and simulator sickness in a virtual environment. In Proceedings of the IEEE Virtual Reality, Orlando, FL, USA, 24–28 March 2002; IEEE: New York City, NY, USA; pp. 164–171. [Google Scholar]
  20. Patterson, R.; Winterbottom, M.D.; Pierce, B.J. Perceptual issues in the use of head-mounted visual displays. Hum. Factors 2006, 48, 555–573. [Google Scholar] [CrossRef]
  21. Pölönen, M.; Hakala, J.; Bilcu, R.; Järvenpää, T.; Häkkinen, J.; Salmimaa, M. Color asymmetry in 3D imaging: Influence on the viewing experience. 3D Res. 2012, 3, 5. [Google Scholar] [CrossRef]
  22. Cho, J.M.; Kim, Y.D.; Jung, S.H.; Shin, H.; Kim, T. 78-4: Screen door effect mitigation and its quantitative evaluation in VR display. In SID Symposium Digest of Technical Papers; SID: Campbell, CA, USA, 2017; Volume 48, pp. 1154–1156. [Google Scholar]
  23. Cakmakci, O.; Hoffman, D.M.; Balram, N. 31-4: Invited Paper: 3D Eyebox in Augmented and Virtual Reality Optics. In SID Symposium Digest of Technical Papers; SID: Campbell, CA, USA, 2019; Volume 50, pp. 438–441. [Google Scholar]
  24. Zhan, T.; Yin, K.; Xiong, J.; He, Z.; Wu, S.T. Augmented Reality and Virtual Reality Displays: Perspectives and Challenges. iScience 2020, 23, 101397. [Google Scholar] [CrossRef]
  25. Carpenter, R.H. Movements of the Eyes, 2nd ed.; Pion Limited: London, UK, 1988. [Google Scholar]
  26. Ogle, K.N.; Schwartz, J.T. Depth of focus of the human eye. JOSA 1959, 49, 273–280. [Google Scholar] [CrossRef]
  27. Ovenseri-Ogbomo, G.O.; Oduntan, O.A. Mechanism of accommodation: A review of theoretical propositions. Afr. Vis. Eye Health 2015, 74, 6. [Google Scholar] [CrossRef]
  28. Järvenpää, T.; Pölönen, M. Optical characterization and ergonomical factors of near-to-eye displays. J. Soc. Inf. Disp. 2010, 18, 285–292. [Google Scholar] [CrossRef]
  29. Tung, K.; Miller, M.; Colombi, J.; Uribe, D.; Smith, S. Effect of vibration on eye, head and helmet movements while wearing a helmet-mounted display. J. Soc. Inf. Disp. 2014, 22, 535–544. [Google Scholar] [CrossRef]
  30. Querry, M.R. Direct solution of the generalized Fresnel reflectance equations. JOSA 1969, 59, 876–877. [Google Scholar] [CrossRef]
  31. Geng, Y.; Gollier, J.; Wheelwright, B.; Peng, F.; Sulai, Y.; Lewis, B.; McEldowney, S. Viewing optics for immersive near-eye displays: Pupil swim/size and weight/stray light. In Digital Optics for Immersive Displays; International Society for Optics and Photonics: Bellingham, WA, USA, 2018; Volume 10676, p. 1067606. [Google Scholar]
  32. Matsuda, S.; Nitoh, T. Flare as applied to photographic lenses. Appl. Opt. 1972, 11, 1850–1856. [Google Scholar] [CrossRef]
  33. Talvala, E.-V.; Adams, A.; Horowitz, M.; Levoy, M. Veiling glare in high dynamic range imaging. ACM Trans. Graph. 2007, 26, 37-es. [Google Scholar] [CrossRef]
  34. Rizzi, A.; McCann, J.J. Glare-limited appearances in HDR images. J. Soc. Inf. Disp. 2009, 17, 3–12. [Google Scholar] [CrossRef]
  35. Dietmar, W. Image Flare measurement according to ISO 18844. Electron. Imaging 2016, 28, art00013. [Google Scholar] [CrossRef]
  36. Koren, N. Measuring the impact of flare light on Dynamic Range. Electron. Imaging 2018, 30, art00004. [Google Scholar] [CrossRef]
  37. ISO 9358; Optics and Optical Instruments—Veiling Glare of Image Forming Systems—Definitions and Methods of Measurement. International Organization for Standardization: Geneva, Switzerland, 1994.
  38. ISO 18844; Photography—Digital Cameras—Image Flare Measurement. International Organization for Standardization: Geneva, Switzerland, 2017.
  39. IEC 62676-5; Video Surveillance Systems for Use in Security Applications—Part 5: Data Specifications and Image Quality Performance for Camera Devices. International Electrotechnical Commission: Geneva, Switzerland, 2018.
  40. Hasenauer, D.M.; Kunick, J.M. Full-field mapping and analysis of veiling glare sources for helmet-mounted display systems. In Current Developments in Optical Design and Optical Engineering VIII; International Society for Optics and Photonics: Bellingham, WA, USA, 1999; Volume 3779, pp. 382–389. [Google Scholar]
  41. Hara, T.; Saito, H.; Kanade, T. Removal of Glare Caused by Water Droplets. In Proceedings of the 2009 Conference for Visual Media Production, London, UK, 12–13 November 2009; pp. 144–151. [Google Scholar]
  42. Raskar, R.; Agrawal, A.; Wilson, C.A.; Veeraraghavan, A. Glare aware photography. ACM Trans. Graph. 2008, 27, 1–10. [Google Scholar] [CrossRef]
  43. Peng, X.; Gao, Z.; Ding, Y.; Zhao, D.; Chi, X. Study of ghost image suppression in polarized catadioptric virtual reality optical systems. Virtual Real. Intell. Hardw. 2020, 2, 70–78. [Google Scholar] [CrossRef]
Figure 1. The VR HMDs for the experiment. (a) Oculus Quest; (b) Vive Focus Plus; (c) Vive Focus.
Figure 2. Different stimuli visual fields.
Figure 3. Different stimuli grayscales.
Figure 4. Rendering HFOV of VR HMD (Oculus Quest).
Figure 5. Rendering HFOV of different VR HMDs.
Figure 6. Observer's HFOV of three VR HMDs.
Figure 7. Stray light threshold of three VR HMDs.
Figure 8. Flow chart of image processing algorithm.
Figure 9. t-SNE visualization of the obtained features.
Table 1. Parameters of the experiment.

Factors | Number | Variable
Stimuli visual field | 16 | 0, 50, 100, 150, 200, 250, …, 750
Stimuli grayscale | 20 | 10, 20, 30, 40, 50, 60, 70, 80, …, 200
VR HMD device | 3 | Oculus Quest, Vive Focus Plus, Vive Focus
Table 2. Mean value of inter-observer variability for VR HMDs.

VR HMD | Oculus Quest | Vive Focus Plus | Vive Focus
CV (Mean) | 15.22% | 10.95% | 16.46%
Fleiss' kappa | 0.671 | 0.851 | 0.784
Table 3. Test results of machine learning models.

Model | | Accuracy | Recall | True Negative Rate | Precision
KNN | Average ± SD | 88.6% ± 2.7% | 84.7% ± 5.1% | 90.8% ± 2.6% | 80.3% ± 4.8%
KNN | Min ± SD | 89.1% ± 2.0% | 73.4% ± 7.0% | 92.2% ± 2.4% | 65.3% ± 7.8%
KNN | Max ± SD | 91.6% ± 1.4% | 88.8% ± 3.9% | 93.6% ± 2.3% | 91.4% ± 2.8%
Logistic Regression | Average ± SD | 91.1% ± 1.6% | 85.1% ± 3.4% | 94.2% ± 2.0% | 88.1% ± 3.5%
Logistic Regression | Min ± SD | 91.4% ± 1.9% | 79.3% ± 7.4% | 93.8% ± 2.0% | 71.6% ± 7.8%
Logistic Regression | Max ± SD | 93.2% ± 1.7% | 92.4% ± 4.0% | 93.8% ± 3.3% | 91.8% ± 3.6%
SVM | Average ± SD | 90.5% ± 1.3% | 88.7% ± 3.2% | 91.4% ± 1.7% | 80.5% ± 3.5%
SVM | Min ± SD | 91.8% ± 2.2% | 80.3% ± 7.4% | 93.8% ± 2.2% | 71.6% ± 8.3%
SVM | Max ± SD | 93.0% ± 1.6% | 92.1% ± 4.2% | 93.4% ± 2.7% | 91.0% ± 2.8%
Random Forest | Average ± SD | 90.0% ± 2.7% | 85.5% ± 4.7% | 92.6% ± 2.6% | 84.6% ± 4.5%
Random Forest | Min ± SD | 91.2% ± 1.7% | 80.2% ± 7.5% | 93.5% ± 2.2% | 76.1% ± 8.1%
Random Forest | Max ± SD | 92.3% ± 1.7% | 90.8% ± 4.0% | 93.4% ± 3.2% | 91.3% ± 3.6%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
